Google Cloud Storage Integration
Overview
KubeSense supports configuring cold storage integration with Google Cloud Storage (GCS) for long-term storage of historical observability data. This enables you to move data to GCS after a specified duration while keeping your active storage optimized.
info: GCS integration provides cost-effective, scalable cold storage for traces and metrics in GCP environments.
Prerequisites
Before setting up GCS integration, ensure you have:
- KubeSense deployed in a GCP or Kubernetes environment
- A Google Cloud Project with billing enabled
- A GCS bucket created for cold storage
- Access to create HMAC keys for authentication
- Access to modify the KubeSense Helm values
Step 1: Create GCS Bucket
- Go to Google Cloud Console → Cloud Storage → Buckets
- Click Create bucket
- Enter a bucket name (e.g., kubesense-cold-storage)
- Select location type and location
- Choose storage class (Standard recommended for cold storage)
- Configure access control (Uniform recommended)
- Create the bucket
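If you prefer the CLI, the bucket can also be created with gcloud. A minimal sketch, assuming hypothetical project, bucket, and location names; the create command is commented out because it requires an authenticated gcloud session:

```shell
PROJECT_ID="my-gcp-project"        # hypothetical; replace with your project ID
BUCKET="kubesense-cold-storage"    # hypothetical; replace with your bucket name
LOCATION="us-central1"             # hypothetical; pick your region

# Sanity-check the bucket name against a common subset of GCS naming rules:
# 3-63 characters, lowercase letters, digits, and dashes.
if ! echo "$BUCKET" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'; then
  echo "invalid bucket name: $BUCKET" >&2
  exit 1
fi
echo "bucket name ok: gs://${BUCKET}"

# Create the bucket with Standard storage class and uniform access
# (requires gcloud and an authenticated session):
# gcloud storage buckets create "gs://${BUCKET}" \
#   --project="$PROJECT_ID" \
#   --location="$LOCATION" \
#   --default-storage-class=STANDARD \
#   --uniform-bucket-level-access
```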
note: Note down the bucket name and project ID for later use.
Step 2: Enable S3 Interoperability for GCS
Google Cloud Storage doesn't use AWS-style access/secret keys by default. You need to enable "interoperability" access to get HMAC keys (which mimic AWS-style S3 credentials).
- Go to Cloud Console → Cloud Storage → Settings → Interoperability
- Ensure interoperability access is enabled for your project
- Click Create a key for a service account or Create a new key
- You'll receive:
  - Access Key → this becomes your accessKeyID
  - Secret → this becomes your secretAccessKey
info: HMAC keys provide programmatic access to GCS buckets using S3-compatible API calls.
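HMAC keys can also be created from the CLI, and it is worth sanity-checking the values you receive before putting them into configuration. A sketch with hypothetical service account and key values; the gcloud command is commented out because it requires an authenticated session:

```shell
SA_EMAIL="kubesense-cold@my-gcp-project.iam.gserviceaccount.com"  # hypothetical

# Create an HMAC key pair for the service account (requires gcloud auth).
# The output includes an accessId (your accessKeyID) and a secret
# (your secretAccessKey):
# gcloud storage hmac create "$SA_EMAIL"

# Sanity-check a key you received: GCS HMAC access IDs start with "GOOG"
# and secrets are 40 characters long.
ACCESS_KEY_ID="GOOG1EXAMPLEKEY"                            # hypothetical value
SECRET="0123456789abcdefghijklmnopqrstuvwxyzABCD"          # hypothetical value

case "$ACCESS_KEY_ID" in
  GOOG*) echo "access key format ok" ;;
  *)     echo "unexpected access key format" >&2; exit 1 ;;
esac
[ "${#SECRET}" -eq 40 ] && echo "secret length ok"
```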
Step 3: Grant Permissions to the Service Account
The HMAC keys belong to a service account or user. You need to ensure that identity has the right permissions on the GCS bucket.
Recommended IAM Roles
Attach the following GCS IAM roles to the service account:
| Role | Purpose |
|---|---|
| Storage Object Viewer | Read access to objects (required for cold reads) |
| Storage Object Creator | Write access to create and upload objects |
| Storage Object Admin | Full control (optional, use only during testing) |
Minimum Required Permissions
For read and write access to cold storage, assign:
- roles/storage.objectViewer - Read access
- roles/storage.objectCreator - Write access
To assign roles via gcloud:
```shell
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:YOUR_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```
Step 4: Update KubeSense Configuration
Add the cold storage configuration to your KubeSense Helm values:
```yaml
coldStorageConfig:
  enabled: true
  endpoint: https://storage.googleapis.com/kubesense-cold-storage/kubesense/
  accessKeyID: GOOG************
  secretAccessKey: YOUR_SECRET_ACCESS_KEY
  cloudProvider: "gcp"
```
Replace:
- kubesense-cold-storage with your actual bucket name
- GOOG************ with your HMAC access key
- YOUR_SECRET_ACCESS_KEY with your HMAC secret key
- Add an optional folder path in the endpoint if needed
note: The endpoint format is https://storage.googleapis.com/&lt;bucket-name&gt;/&lt;folder&gt;/. The folder path is optional; omit it to write data at the bucket root.
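The endpoint value can be assembled from your bucket name and optional folder. A minimal sketch, using placeholder names:

```shell
BUCKET="kubesense-cold-storage"   # replace with your bucket name
FOLDER="kubesense"                # optional folder; set to "" for bucket-root access

# Build the endpoint in the format https://storage.googleapis.com/<bucket>/<folder>/
ENDPOINT="https://storage.googleapis.com/${BUCKET}/"
if [ -n "$FOLDER" ]; then
  ENDPOINT="${ENDPOINT}${FOLDER}/"
fi
echo "$ENDPOINT"
```

With the placeholder values above, this prints the same endpoint used in the example configuration.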
Step 5: Upgrade KubeSense
Apply the configuration by upgrading your KubeSense Helm deployment:
```shell
helm upgrade kubesense ./kubesense-chart \
  -f values.yaml \
  --namespace kubesense
```
Configuration Parameters
Here's a detailed breakdown of the GCS cold storage configuration:
| Parameter | Type | Required | Description |
|---|---|---|---|
| enabled | boolean | Yes | Enable or disable cold storage |
| endpoint | string | Yes | GCS endpoint URL with bucket and optional folder |
| accessKeyID | string | Yes | HMAC access key (starts with GOOG) |
| secretAccessKey | string | Yes | HMAC secret key |
| cloudProvider | string | Yes | Must be set to "gcp" |
Verifying Integration
After configuring GCS integration:
- Check KubeSense logs to ensure the GCS connection is successful:

```shell
kubectl logs -n kubesense sts/kubesense-datastore-shard-0 -f
```

- Verify data in GCS:
  - Go to Google Cloud Console → Cloud Storage
  - Navigate to your bucket
  - Check for backup files and data being written
  - Monitor data growth over time
- Test cold storage functionality:
  - Query historical data that should be in cold storage
  - Verify data is accessible through the KubeSense UI
  - Check data retrieval performance
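The GCS-side checks above can also be run from the command line. A sketch using hypothetical bucket and prefix names; the gcloud commands are commented out because they require an authenticated session:

```shell
BUCKET="kubesense-cold-storage"   # hypothetical bucket name
PREFIX="kubesense"                # hypothetical folder/prefix

# List objects KubeSense has written under the cold-storage prefix
# (requires an authenticated gcloud session):
# gcloud storage ls "gs://${BUCKET}/${PREFIX}/"

# A recursive object count, sampled over time, gives a rough view of data growth:
# gcloud storage ls "gs://${BUCKET}/${PREFIX}/**" | wc -l

echo "checking gs://${BUCKET}/${PREFIX}/"
```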
Best Practices
Security
- Use service account keys: Prefer service account HMAC keys over user credentials
- Rotate keys regularly: Implement key rotation policies for HMAC keys
- Limit permissions: Use the least privilege principle for IAM roles
- Monitor access: Set up Cloud Audit Logs to track bucket access
Performance
- Choose appropriate storage class: Use Standard for better query performance
- Consider regional buckets: For lower latency in specific regions
Troubleshooting
Issue: HMAC key errors
Symptoms: 403 Forbidden or InvalidAccessKeyId errors
Solution:
- Verify the HMAC key format (should start with "GOOG")
- Check that the key is active and not expired
- Ensure the service account has proper permissions
- Regenerate keys if needed
Issue: Bucket not found errors
Symptoms: Bucket does not exist errors
Solution:
- Verify the bucket name is correct in the configuration
- Check that the bucket exists in the correct project
- Ensure the bucket is in the specified region
- Verify project billing is enabled
Issue: Permission denied errors
Symptoms: Access denied or permission errors
Solution:
- Verify IAM roles are correctly assigned to the service account
- Check bucket-level IAM permissions
- Ensure the HMAC key belongs to a user/account with proper permissions
- Review Cloud Audit Logs for permission issues
Issue: Cold storage not working
Symptoms: Data not being written to GCS
Solution:
- Check Datastore logs for errors
- Verify the configuration parameters are correct
- Ensure HMAC keys are valid and active
- Check network connectivity to storage.googleapis.com
- Verify the bucket exists and is accessible
Conclusion
Google Cloud Storage integration provides a reliable, cost-effective solution for storing historical observability data in GCP environments. By properly configuring HMAC authentication, IAM permissions, and lifecycle policies, you can optimize both performance and costs for long-term data storage in GCS.