CloudWatch Logs
Ingesting CloudWatch Logs with KubeSense
KubeSense allows you to ingest logs from Amazon CloudWatch by setting up an Amazon Firehose stream. This integration enables you to centralize your AWS logs alongside your Kubernetes and application observability data.
Note: This feature requires a public ingress endpoint to be enabled on the KubeSense aggregator.
Prerequisites
Before you begin, ensure you have:
- Access to AWS Console with permissions to create Firehose streams, IAM roles, and CloudWatch subscription filters
- KubeSense aggregator deployed with public ingress enabled
- The KubeSense aggregator Firehose endpoint URL and API token
Setup Process
The integration involves three main steps:
- Set up a Firehose stream - Configure Amazon Data Firehose to deliver logs to KubeSense
- Create an IAM role and policy - Set up permissions for CloudWatch to send logs to Firehose
- Create a subscription filter - Configure CloudWatch log groups to stream to Firehose
Step 1: Set Up a Firehose Stream
- Go to Amazon Data Firehose
- Click on Create Firehose stream
- Configure the stream:
  - Source: Select Direct PUT
  - Destination: Select HTTP Endpoint
  - Delivery stream name: Choose a name for your stream, for example kubesense-cloudwatch-logs
- Configure destination settings:
  - HTTP endpoint URL: Enter the KubeSense aggregator Firehose endpoint URL
    - This can be obtained from your KubeSense deployment configuration
    - Format: https://<KUBESENSE_AGGREGATOR_HOST>:<PORT>/firehose
  - Access key: Enter your KubeSense API token
    - This can be retrieved from your KubeSense dashboard or deployment configuration
  - Content encoding: Select GZIP
  - Parameters: Add the following parameter:
    - env_name: Specify your environment name. Logs will appear in the KubeSense application under this environment
- Configure backup settings:
- Choose a backup S3 bucket, or create a new one
- This ensures logs are backed up in case of delivery failures
- Click Create Firehose stream
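The console steps above can also be scripted. The following sketch builds the HttpEndpointDestinationConfiguration structure that the boto3 create_delivery_stream call expects; the endpoint URL, token, bucket ARN, and role ARN passed in are placeholders you would substitute from your own deployment.

```python
def firehose_http_config(endpoint_url, access_key, env_name,
                         backup_bucket_arn, s3_role_arn):
    """Build HTTP-endpoint destination settings mirroring the console steps."""
    return {
        "EndpointConfiguration": {
            "Url": endpoint_url,      # KubeSense aggregator Firehose endpoint
            "Name": "kubesense",
            "AccessKey": access_key,  # KubeSense API token
        },
        "RequestConfiguration": {
            "ContentEncoding": "GZIP",
            "CommonAttributes": [
                {"AttributeName": "env_name", "AttributeValue": env_name},
            ],
        },
        "S3BackupMode": "FailedDataOnly",  # back up only failed deliveries
        "S3Configuration": {
            "RoleARN": s3_role_arn,
            "BucketARN": backup_bucket_arn,
        },
    }

# The resulting dict would then be passed to boto3, e.g.:
# import boto3
# boto3.client("firehose").create_delivery_stream(
#     DeliveryStreamName="kubesense-cloudwatch-logs",
#     DeliveryStreamType="DirectPut",
#     HttpEndpointDestinationConfiguration=firehose_http_config(...),
# )
```

The helper only assembles the configuration; the commented boto3 call shows where it plugs in.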
Step 2: Create an IAM Role and Policy
- Go to Amazon IAM
- Click on Roles in the sidebar
- Click on Create Role
- Select Custom trust policy
- Paste the following trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

- Click Next twice (we'll attach permissions later)
- Provide a name for the role, for example KubeSenseCloudWatchFirehoseRole
- Click Create Role
Attach Permissions Policy
- Go to your newly created role
- In the Permissions section, click on Add permissions and then Create inline policy
- Click on JSON and paste the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "<YOUR_FIREHOSE_STREAM_ARN>"
    }
  ]
}
```

Important: Replace <YOUR_FIREHOSE_STREAM_ARN> with the actual ARN of your Firehose stream. You can find this on the Firehose stream details page in the AWS Console.

- Click Next
- Give the policy a name, for example KubeSenseFirehosePutPolicy
- Click Create Policy
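If you prefer to script Step 2, the two JSON documents can be assembled as plain dictionaries, as in the sketch below. The helper only builds the policy documents; the commented boto3 calls show how they might be applied, using the role and policy names from the examples above.

```python
import json

# Trust policy: lets CloudWatch Logs assume the role.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Statement1",
        "Effect": "Allow",
        "Principal": {"Service": "logs.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def firehose_put_policy(stream_arn):
    """Inline policy allowing writes to a single Firehose stream."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
            "Resource": stream_arn,
        }],
    }

# With boto3 the role could then be created like:
# iam = boto3.client("iam")
# iam.create_role(RoleName="KubeSenseCloudWatchFirehoseRole",
#                 AssumeRolePolicyDocument=json.dumps(TRUST_POLICY))
# iam.put_role_policy(RoleName="KubeSenseCloudWatchFirehoseRole",
#                     PolicyName="KubeSenseFirehosePutPolicy",
#                     PolicyDocument=json.dumps(firehose_put_policy(stream_arn)))
```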
Step 3: Create a Subscription Filter
Now that the Firehose stream and IAM role are configured, you can add a subscription filter to the desired CloudWatch log group.
Using AWS CLI
The following is an example of how to create a subscription filter through the AWS CLI:
```shell
aws logs put-subscription-filter \
  --log-group-name "<LOG_GROUP_NAME>" \
  --filter-name "<FILTER_NAME>" \
  --filter-pattern "" \
  --destination-arn "<FIREHOSE_STREAM_ARN>" \
  --role-arn "<IAM_ROLE_ARN>"
```

Replace the placeholders:
- <LOG_GROUP_NAME> - The name of your CloudWatch log group
- <FILTER_NAME> - A name for your subscription filter
- <FIREHOSE_STREAM_ARN> - The ARN of your Firehose stream
- <IAM_ROLE_ARN> - The ARN of the IAM role you created in Step 2
Using AWS Console
- Go to Amazon CloudWatch Logs
- Navigate to the specific log group you want to stream
- Click on the Subscription filters tab
- Click on Create
- Select Create Amazon Data Firehose subscription filter
- Select the Firehose delivery stream created in Step 1
- Select the IAM role created in Step 2
- Configure log format and filters as needed:
- Filter pattern: Leave empty to stream all logs, or specify a pattern to filter specific log entries
- Log format: Select the appropriate format for your logs
- Choose a name for the subscription filter
- Click Start streaming
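Once the subscription filter is streaming, each Firehose record carries a base64-encoded, gzip-compressed JSON document with a logEvents array. A receiving endpoint, or a debugging script reading the S3 backup, can unpack one record as sketched below; the sample payload is synthetic but shaped like a real subscription delivery.

```python
import base64
import gzip
import json

def decode_cloudwatch_record(b64_data):
    """Decode one Firehose record: base64 -> gzip -> CloudWatch Logs JSON."""
    payload = json.loads(gzip.decompress(base64.b64decode(b64_data)))
    # CONTROL_MESSAGE records are Firehose health checks; DATA_MESSAGE carries logs.
    if payload.get("messageType") != "DATA_MESSAGE":
        return []
    return [event["message"] for event in payload["logEvents"]]

# Round-trip with a synthetic payload:
sample = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "/aws/lambda/example",
    "logStream": "2024/01/01/[$LATEST]abc",
    "logEvents": [{"id": "1", "timestamp": 1700000000000, "message": "hello"}],
}
encoded = base64.b64encode(gzip.compress(json.dumps(sample).encode()))
print(decode_cloudwatch_record(encoded))  # ['hello']
```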
Step 4: Configure KubeSense Aggregator Custom Sources
After setting up the Firehose stream, you need to configure the KubeSense aggregator to accept logs from the Firehose endpoint.
Update KubeSense Aggregator Configuration
Update your KubeSense aggregator configuration to include the Firehose endpoint as a custom source. This can be done through Helm values or configuration files:
Using Helm Values:
```yaml
aggregator:
  customSources:
    enabled: true
    sources:
      firehose_logs:
        type: aws_kinesis_firehose
        address: 0.0.0.0:443
        access_keys:
          - "<KUBESENSE_API_TOKEN>"
```

Or update the aggregator deployment directly:
- Locate your KubeSense aggregator deployment configuration
- Ensure the aggregator service is exposed on the appropriate port (typically 30052 for logs)
- Verify the Firehose endpoint is accessible from AWS
Note: The exact configuration method may vary depending on your KubeSense deployment. Refer to your deployment documentation or contact your KubeSense administrator for specific configuration details.
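Whatever receives these requests on the aggregator side must follow the Firehose HTTP endpoint contract: check the X-Amz-Firehose-Access-Key header against the configured token and echo the request's requestId back in a JSON response body. A minimal, framework-agnostic sketch of that handshake (function and variable names are illustrative, not KubeSense internals):

```python
import json
import time

def handle_firehose_request(headers, body, expected_token):
    """Return (status, response_body) per the Firehose HTTP endpoint contract."""
    request = json.loads(body)
    reply = {
        "requestId": request["requestId"],  # must be echoed back verbatim
        "timestamp": int(time.time() * 1000),
    }
    # Firehose sends the configured access key in this header on every request.
    if headers.get("X-Amz-Firehose-Access-Key") != expected_token:
        reply["errorMessage"] = "invalid access key"
        return 401, json.dumps(reply)
    # request["records"] each carry base64 data; a real handler decodes them here.
    return 200, json.dumps(reply)

status, resp = handle_firehose_request(
    {"X-Amz-Firehose-Access-Key": "secret"},
    json.dumps({"requestId": "req-1", "timestamp": 0, "records": []}),
    "secret",
)
print(status)  # 200
```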
Verifying the Integration
After completing all steps, you can verify the integration:
- Check Firehose metrics: In the AWS Console, navigate to your Firehose stream and check the delivery metrics
- Check CloudWatch subscription filter: Verify the subscription filter is active and processing logs
- Check KubeSense logs: In your KubeSense dashboard, navigate to the logs section and verify logs are being ingested from CloudWatch
- Monitor aggregator logs: Check the KubeSense aggregator logs to ensure it's receiving data from Firehose
Troubleshooting
If logs are not appearing in KubeSense:
- Verify Firehose stream status: Check that the Firehose stream is active and not throttled
- Check IAM permissions: Verify the IAM role has the correct permissions to write to Firehose
- Verify subscription filter: Ensure the subscription filter is active and attached to the correct log group
- Check network connectivity: Verify that AWS can reach your KubeSense aggregator endpoint
- Review aggregator logs: Check the KubeSense aggregator logs for any errors or connection issues
- Verify API token: Ensure the API token configured in Firehose is valid and has the necessary permissions
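To check connectivity and the API token without waiting for CloudWatch traffic, you can POST a Firehose-shaped test record to the endpoint yourself. The helper below only builds the headers and body; the host and token are placeholders, and actually sending the request (with urllib or curl) is left commented.

```python
import base64
import gzip
import json
import time
import uuid

def build_test_request(api_token, message=b'{"test": "kubesense connectivity"}'):
    """Build headers and body shaped like a real Firehose HTTP endpoint delivery."""
    record = base64.b64encode(gzip.compress(message)).decode()
    body = json.dumps({
        "requestId": str(uuid.uuid4()),
        "timestamp": int(time.time() * 1000),
        "records": [{"data": record}],
    })
    headers = {
        "Content-Type": "application/json",
        "X-Amz-Firehose-Access-Key": api_token,  # the KubeSense API token
    }
    return headers, body

# headers, body = build_test_request("<KUBESENSE_API_TOKEN>")
# Then POST body with those headers to https://<KUBESENSE_AGGREGATOR_HOST>:<PORT>/firehose
# and confirm a 200 response that echoes the requestId.
```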
Best Practices
- Use specific filter patterns: Instead of streaming all logs, use filter patterns to only send relevant logs to reduce costs and improve performance
- Monitor Firehose delivery: Set up CloudWatch alarms to monitor Firehose delivery failures
- Backup configuration: Always configure an S3 backup bucket for Firehose to prevent log loss
- Environment naming: Use descriptive environment names in the env_name parameter to easily identify logs in KubeSense
- Cost optimization: Consider using log group filters to only stream logs that require analysis in KubeSense
Conclusion
With CloudWatch logs integration configured, KubeSense provides a unified platform for analyzing AWS logs alongside your Kubernetes infrastructure and application data. This enables comprehensive observability across your entire technology stack.