I needed to search Cloud Logging for specific events from the command line. The CLI is much faster than clicking through the Console when you know what you’re looking for.
## Basic query

```bash
gcloud logging read 'resource.type="cloud_function"' --project=my-project
```
## Filter by time

```bash
gcloud logging read 'timestamp>="2025-10-11T10:00:00Z" timestamp<="2025-10-11T11:00:00Z"' \
  --project=my-project
```
## Filter by resource

Cloud Functions:

```bash
gcloud logging read 'resource.type="cloud_function" resource.labels.function_name="my-function"' \
  --project=my-project
```

GKE containers:

```bash
gcloud logging read 'resource.type="k8s_container" resource.labels.container_name="my-app"' \
  --project=my-project
```
## Output formats

JSON (useful for piping to `jq`):

```bash
gcloud logging read 'resource.type="cloud_function"' \
  --project=my-project \
  --format=json | jq '.'
```
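In practice you rarely want the whole entry. Here's a minimal sketch that flattens errors to one line each, assuming the standard LogEntry JSON shape (each element carries `timestamp`, `severity`, and either `textPayload` or `jsonPayload`):

```bash
# One line per error: timestamp, severity, and whichever payload field is present
# (field names assume the standard LogEntry JSON shape; adjust for your logs)
gcloud logging read 'severity>=ERROR' \
  --project=my-project \
  --format=json \
  | jq -r '.[] | "\(.timestamp) \(.severity) \(.textPayload // .jsonPayload.message // "(structured payload)")"'
```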
Just the message:

```bash
gcloud logging read 'severity>=ERROR' \
  --project=my-project \
  --format='value(textPayload)'
```
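If a service writes structured logs, the message usually lives under `jsonPayload` rather than `textPayload`. A variant for that case (the exact field name depends on what your service emits):

```bash
# Structured logs: pull the message out of jsonPayload instead of textPayload
gcloud logging read 'severity>=ERROR' \
  --project=my-project \
  --format='value(jsonPayload.message)'
```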
## Limit results

```bash
gcloud logging read 'resource.type="cloud_function"' \
  --project=my-project \
  --limit=100
```
Tip: The filter syntax is the same as in the Cloud Console, so you can prototype queries in the UI first.
## Filters I use most often

These are the queries I reach for constantly:

Errors in the last hour:

```bash
gcloud logging read 'severity>=ERROR timestamp>"-PT1H"' \
  --project=my-project \
  --limit=50
```
Specific user actions (audit logs):

```bash
gcloud logging read 'protoPayload.authenticationInfo.principalEmail="user@example.com"' \
  --project=my-project
```
Cloud Run service logs:

```bash
gcloud logging read 'resource.type="cloud_run_revision" resource.labels.service_name="my-service"' \
  --project=my-project \
  --limit=100
```
GKE pods with specific labels:

```bash
gcloud logging read 'resource.type="k8s_container" labels.k8s-pod/app="my-app" severity>=WARNING' \
  --project=my-project
```
## CLI vs Console: when to use which
The CLI works well when:
- You need to script log analysis (piping to `jq`, grepping for patterns; see the sketch after this list)
- You want to combine logs with other CLI tools
- You’re already in the terminal debugging something
- You need to share exact queries with teammates (much easier to share a command than click instructions)
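A minimal sketch of that scripting case: counting recent error entries per Cloud Run service (the resource type and limit are just examples):

```bash
# Tally recent error entries by Cloud Run service name
gcloud logging read 'resource.type="cloud_run_revision" severity>=ERROR' \
  --project=my-project \
  --limit=1000 \
  --format='value(resource.labels.service_name)' \
  | sort | uniq -c | sort -rn
```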
The Console is better when:
- You’re exploring logs and don’t know exactly what you’re looking for
- You need the query builder UI to figure out field names
- You want to see logs in real time (the streaming view is quite good)
- You need to create log-based metrics or alerts
The Console is better for discovery; the CLI is better for precision.
## Opinion on the filter syntax
Honestly, the logging query language is a bit awkward. It’s not quite SQL, not quite grep, not quite anything familiar. The combination of AND being implicit but OR requiring parentheses trips me up every time.
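A quick illustration, reusing resource types from earlier (whitespace between clauses is an implicit AND, while OR has to be spelled out and grouped):

```bash
# The space before the parenthesized group is an implicit AND;
# OR must be written explicitly and parenthesized when mixed with AND
gcloud logging read 'severity>=ERROR (resource.type="cloud_run_revision" OR resource.type="cloud_function")' \
  --project=my-project
```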
Examples that confused me:
- `timestamp>="2025-01-01"` works but `timestamp > "2025-01-01"` doesn’t (no spaces around operators)
- Matching text requires `=~`, not `=` or `:` like you might expect
- The duration format `-PT1H` (ISO 8601) is great once you know it, but who writes "past 1 hour" as `-PT1H`?
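For instance, matching on message text uses the regex operator (the pattern here is only an illustration):

```bash
# =~ does a regular-expression match against the message text
gcloud logging read 'resource.type="cloud_run_revision" textPayload=~"timeout"' \
  --project=my-project
```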
The saving grace is that you can prototype in the Console, copy the filter, and paste it into gcloud. Without that, I’d be reading the docs every time.
Still, it’s powerful once you learn it. Being able to filter on nested JSON fields (like `protoPayload.methodName`) is genuinely useful.
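For example, to pull audit-log entries for a particular API call (the method name is illustrative; substitute whichever call you're chasing):

```bash
# Filter audit logs on the nested method name, e.g. IAM policy changes
gcloud logging read 'protoPayload.methodName="SetIamPolicy"' \
  --project=my-project \
  --limit=20
```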