Cloud Logging via Pub/Sub
Ingesting GCP Cloud Logging with KubeSense
KubeSense supports ingesting logs from Google Cloud Logging by setting up a Pub/Sub subscription. This integration enables you to centralize your GCP logs alongside your Kubernetes and application observability data.
Note: This feature requires a public ingress endpoint to be enabled for the KubeSense aggregator.
Prerequisites
Before you begin, ensure you have:
- A GCP project with Cloud Logging enabled
- Pub/Sub API enabled in your GCP project
- KubeSense aggregator deployed with public ingress enabled
- The KubeSense aggregator Pub/Sub endpoint URL and API token
- Appropriate GCP IAM permissions to create Pub/Sub topics, subscriptions, and log sinks
Architecture
Cloud Logging logs flow through the following path:
GCP Services → Cloud Logging → Log Sink → Pub/Sub Topic → Pub/Sub Subscription → KubeSense Aggregator
Step 1: Create Pub/Sub Topic
Create a Pub/Sub topic to receive logs from Cloud Logging:
Using gcloud CLI
gcloud pubsub topics create kubesense-cloud-logs \
  --project=YOUR_PROJECT_ID
Using GCP Console
- Go to Pub/Sub Topics
- Click Create Topic
- Enter topic name: kubesense-cloud-logs
- Click Create
Step 2: Create Log Sink
Create a log sink to export Cloud Logging logs to Pub/Sub:
Using gcloud CLI
gcloud logging sinks create kubesense-sink \
  pubsub.googleapis.com/projects/YOUR_PROJECT_ID/topics/kubesense-cloud-logs \
  --log-filter='resource.type="gce_instance" OR resource.type="gke_cluster" OR resource.type="cloud_function"' \
  --project=YOUR_PROJECT_ID
Using GCP Console
- Go to Cloud Logging
- Click Logs Router in the left menu
- Click Create Sink
- Configure:
  - Sink name: kubesense-sink
  - Sink destination: Select Cloud Pub/Sub topic
  - Select Cloud Pub/Sub topic: Choose kubesense-cloud-logs
  - Choose logs to include in sink: Configure filters as needed
- Click Create Sink
Log Filter Examples
Filter logs by resource type:
# GKE cluster logs
resource.type="gke_cluster"
# GCE instance logs
resource.type="gce_instance"
# Cloud Functions logs
resource.type="cloud_function"
# All compute resources
resource.type=~"gce_|gke_"
# Specific log severity
severity>=ERROR
Step 3: Grant Pub/Sub Permissions
Creating the log sink automatically creates a writer service account. Grant it permission to publish to the topic:
# Get the sink's writer identity (usually in the format service-PROJECT_NUMBER@gcp-sa-logging.iam.gserviceaccount.com)
gcloud logging sinks describe kubesense-sink \
  --project=YOUR_PROJECT_ID \
  --format="value(writerIdentity)"
# Grant the Pub/Sub Publisher role
gcloud pubsub topics add-iam-policy-binding kubesense-cloud-logs \
  --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --role="roles/pubsub.publisher" \
  --project=YOUR_PROJECT_ID
Step 4: Create Pub/Sub Subscription
Create a subscription that pushes messages to the KubeSense aggregator:
Using gcloud CLI
gcloud pubsub subscriptions create kubesense-subscription \
  --topic=kubesense-cloud-logs \
  --push-endpoint=https://<KUBESENSE_AGGREGATOR_HOST>:<PORT>/pubsub \
  --push-auth-token=<KUBESENSE_API_TOKEN> \
  --project=YOUR_PROJECT_ID
Using GCP Console
- Go to Pub/Sub Subscriptions
- Click Create Subscription
- Configure:
  - Subscription ID: kubesense-subscription
  - Topic: Select kubesense-cloud-logs
  - Delivery type: Select Push
  - Endpoint URL: https://<KUBESENSE_AGGREGATOR_HOST>:<PORT>/pubsub
  - Authentication: Configure an authentication header
    - Header name: Authorization
    - Header value: Bearer <KUBESENSE_API_TOKEN>
- Click Create
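Pub/Sub push delivery wraps each exported log entry in a JSON envelope whose `data` field is base64-encoded. As a rough sketch of what the receiving endpoint does with that payload (the function and variable names here are illustrative assumptions, not KubeSense's actual implementation):

```python
import base64
import json

def decode_push_envelope(body: bytes) -> dict:
    """Decode a Pub/Sub push envelope back into the exported LogEntry dict.

    Pub/Sub POSTs JSON shaped like:
      {"message": {"data": "<base64>", "messageId": "...", "attributes": {...}},
       "subscription": "projects/.../subscriptions/..."}
    """
    envelope = json.loads(body)
    # The data field carries the LogEntry written by the log sink, base64-encoded.
    return json.loads(base64.b64decode(envelope["message"]["data"]))

# Simulate what Pub/Sub would POST for one exported LogEntry.
log_entry = {
    "severity": "ERROR",
    "logName": "projects/demo/logs/syslog",
    "resource": {"type": "gce_instance", "labels": {"zone": "us-central1-a"}},
}
body = json.dumps({
    "message": {
        "data": base64.b64encode(json.dumps(log_entry).encode()).decode(),
        "messageId": "123",
    },
    "subscription": "projects/demo/subscriptions/kubesense-subscription",
}).encode()

decoded = decode_push_envelope(body)
```

A real endpoint would also validate the `Authorization: Bearer <KUBESENSE_API_TOKEN>` header before accepting the message.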
Step 5: Configure KubeSense Aggregator
Configure the aggregator to accept logs from Pub/Sub:
aggregator:
  customSources:
    enabled: true
    sources:
      pubsub_logs:
        type: gcp_pubsub
        project: YOUR_PROJECT_ID
        subscription: projects/YOUR_PROJECT_ID/subscriptions/kubesense-subscription
        credentials_path: /etc/kubesense/gcs-key.json
Update Helm Values
If deploying via Helm:
global:
  cluster_name: "gcp-cluster"
aggregator:
  customSources:
    enabled: true
    sources:
      pubsub_logs:
        type: gcp_pubsub
        project: YOUR_PROJECT_ID
        subscription: projects/YOUR_PROJECT_ID/subscriptions/kubesense-subscription
        credentials_path: /etc/kubesense/gcs-key.json
Alternative: Pull Subscription
Instead of a push subscription, you can use a pull subscription, with the aggregator polling Pub/Sub:
Configure Pull Subscription
gcloud pubsub subscriptions create kubesense-pull-subscription \
  --topic=kubesense-cloud-logs \
  --project=YOUR_PROJECT_ID
Configure Aggregator for Pull
aggregator:
  customSources:
    enabled: true
    sources:
      pubsub_pull_logs:
        type: gcp_pubsub
        project: YOUR_PROJECT_ID
        subscription: projects/YOUR_PROJECT_ID/subscriptions/kubesense-pull-subscription
        credentials_path: /etc/kubesense/gcs-key.json
Log Enrichment
The KubeSense aggregator automatically enriches GCP Cloud Logging entries with:
- Resource metadata: Resource type, labels, location
- GCP project information: Project ID, project number
- Log metadata: Severity, timestamp, log name
- Source information: Service name, method name (for API logs)
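The enrichment above amounts to lifting a handful of well-known LogEntry fields into flat metadata. The field names below follow the Cloud Logging LogEntry schema, but the output shape is an illustrative assumption, not KubeSense's actual record format:

```python
def enrich(log_entry: dict, project_id: str) -> dict:
    """Flatten the LogEntry fields that enrichment draws on (illustrative)."""
    resource = log_entry.get("resource", {})
    return {
        "resource_type": resource.get("type"),          # e.g. gke_cluster
        "resource_labels": resource.get("labels", {}),  # location, cluster name, ...
        "project_id": project_id,
        "severity": log_entry.get("severity", "DEFAULT"),
        "timestamp": log_entry.get("timestamp"),
        "log_name": log_entry.get("logName"),
    }

entry = {
    "severity": "WARNING",
    "timestamp": "2024-05-01T12:00:00Z",
    "logName": "projects/demo/logs/cloudaudit.googleapis.com%2Factivity",
    "resource": {"type": "gke_cluster", "labels": {"location": "us-central1"}},
}
meta = enrich(entry, "demo")
```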
Monitoring and Verification
- Check log sink: Verify logs are being exported to Pub/Sub
- Monitor Pub/Sub metrics: Check message count and delivery metrics
- Verify subscription: Ensure subscription is active and delivering messages
- Check KubeSense dashboard: Verify logs appear in KubeSense with GCP metadata
- Review aggregator logs: Check for any ingestion errors
Troubleshooting
Logs Not Appearing
- Verify log sink: Check that the sink is active and exporting logs
- Check Pub/Sub topic: Verify messages are being published to the topic
- Verify subscription: Ensure subscription is active and configured correctly
- Check IAM permissions: Verify service account has Pub/Sub publisher role
- Review network connectivity: Ensure GCP can reach KubeSense aggregator endpoint
- Check aggregator logs: Review aggregator logs for connection or parsing errors
Authentication Issues
- Verify API token: Ensure the API token is valid and has correct permissions
- Check authentication header: Verify header name and value are correct
- Review aggregator auth config: Ensure authentication is properly configured
Performance Issues
- Adjust batch size: Configure Pub/Sub batch settings
- Monitor message backlog: Check for message accumulation in subscription
- Scale aggregator: Increase aggregator resources if needed
- Use pull mode: Consider pull mode for better control
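Batch sizing is usually the first lever. As a generic illustration (not a KubeSense configuration option), grouping decoded entries into fixed-size batches before forwarding amortizes per-request overhead:

```python
from typing import Iterable, Iterator, List

def batch(entries: Iterable[dict], size: int) -> Iterator[List[dict]]:
    """Group decoded log entries into fixed-size batches for forwarding."""
    buf: List[dict] = []
    for entry in entries:
        buf.append(entry)
        if len(buf) == size:
            yield buf
            buf = []
    if buf:
        # Flush the final partial batch so no entries are dropped.
        yield buf

batches = list(batch([{"id": i} for i in range(7)], size=3))
```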
Best Practices
- Use specific log filters: Filter logs at the sink level to reduce volume and costs
- Organize by topic: Create separate topics for different log types or environments
- Monitor Pub/Sub quotas: Be aware of Pub/Sub quotas and limits
- Set up alerts: Configure alerts for subscription delivery failures
- Use structured logging: Ensure applications use structured logging for better parsing
- Tag resources: Use GCP resource labels for better log organization
- Monitor costs: Track Pub/Sub message and storage costs
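For the structured-logging recommendation: when an application emits one JSON object per line, Cloud Logging agents parse it into the LogEntry's `jsonPayload` and recognize a `severity` field, so the entries arriving via Pub/Sub are already structured. A minimal sketch (the field set is an assumption):

```python
import datetime
import json
import sys

def log_structured(severity: str, message: str, **fields) -> str:
    """Emit one JSON log line; logging agents parse it into jsonPayload."""
    record = {
        "severity": severity,
        "message": message,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        **fields,  # arbitrary structured context, e.g. request IDs
    }
    line = json.dumps(record)
    print(line, file=sys.stderr)
    return line

line = log_structured("ERROR", "payment failed", order_id="A-1001", retries=3)
```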
Cost Considerations
- Pub/Sub throughput: charged based on message data volume published and delivered
- Pub/Sub storage: Charged for message retention
- Cloud Logging export: No additional charge for log exports
- Data transfer: Consider data transfer costs between GCP and KubeSense
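A back-of-the-envelope estimate helps when sizing the sink filter. The rate below is a placeholder, not actual GCP pricing, and the model is deliberately simplified (publish plus one push delivery, ignoring free tiers and retries); check the current Pub/Sub pricing page:

```python
def monthly_pubsub_cost_estimate(msgs_per_sec: float, avg_msg_bytes: int,
                                 usd_per_tib: float) -> float:
    """Rough monthly Pub/Sub throughput cost for publish + one delivery."""
    seconds_per_month = 30 * 24 * 3600
    tib = msgs_per_sec * avg_msg_bytes * seconds_per_month / 2**40
    # Each message's bytes are billed once on publish and once on delivery.
    return 2 * tib * usd_per_tib

# 200 log entries/s at ~1 KiB each, with a placeholder $/TiB rate.
cost = monthly_pubsub_cost_estimate(msgs_per_sec=200, avg_msg_bytes=1024,
                                    usd_per_tib=40.0)
```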
Advanced Configuration
Multiple Log Sinks
Create separate sinks for different log types:
# GKE logs
gcloud logging sinks create kubesense-gke-sink \
  pubsub.googleapis.com/projects/YOUR_PROJECT_ID/topics/kubesense-gke-logs \
  --log-filter='resource.type="gke_cluster"' \
  --project=YOUR_PROJECT_ID
# Cloud Functions logs
gcloud logging sinks create kubesense-functions-sink \
  pubsub.googleapis.com/projects/YOUR_PROJECT_ID/topics/kubesense-functions-logs \
  --log-filter='resource.type="cloud_function"' \
  --project=YOUR_PROJECT_ID
Custom Log Processing
Configure the aggregator for custom log processing using transforms (configured separately):
aggregator:
  customSources:
    enabled: true
    sources:
      pubsub_logs:
        type: gcp_pubsub
        project: YOUR_PROJECT_ID
        subscription: projects/YOUR_PROJECT_ID/subscriptions/kubesense-subscription
        credentials_path: /etc/kubesense/gcs-key.json
Conclusion
Cloud Logging via Pub/Sub integration provides real-time log streaming from GCP services, enabling comprehensive observability across your entire GCP infrastructure alongside your Kubernetes and application data.