ArgoCD Metrics
Monitoring ArgoCD with KubeSense
KubeSense can ingest ArgoCD Prometheus metrics through the OpenTelemetry (OTEL) Collector. Once configured, you can visualize GitOps health, controller performance, and repository activity alongside the rest of your observability data.
Prerequisites
Before you begin, ensure you have:
- An ArgoCD deployment running in your Kubernetes environment
- KubeSense installed with the OTEL collector enabled in the same cluster
- Network access from the collector to the ArgoCD metrics endpoints (`/metrics` on the API, repo, and application servers)
- Access to modify the KubeSense Helm values file and redeploy the chart
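If the collector is not yet enabled, the KubeSense chart typically exposes a toggle in its values file. The exact key depends on your chart version; the fragment below assumes an `otel-collector.enabled` flag, so verify the key name against your chart's default values before applying:

```yaml
# Assumed values.yaml fragment -- confirm the key name against your
# KubeSense chart version (e.g. via: helm show values kubesense/kubesense)
otel-collector:
  enabled: true
```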
Collecting ArgoCD Metrics
Step 1: Update KubeSense Helm Values File
Add a Prometheus receiver to scrape the ArgoCD metrics endpoints. Update your KubeSense Helm values (under the `kubesensor` section) with the following configuration:
```yaml
otel-collector:
  config:
    receivers:
      prometheus/argocd:
        config:
          global:
            scrape_interval: 60s
          scrape_configs:
            - job_name: argocd-metrics
              metrics_path: /metrics
              static_configs:
                - targets:
                    - kubesense-argocd-metrics.argocd.svc.cluster.local:8082
                    - kubesense-argocd-api.argocd.svc.cluster.local:8083
                    - kubesense-argocd-repo.argocd.svc.cluster.local:8084
  appendToPipelines:
    metrics:
      receivers:
        - prometheus/argocd
```

Note: Replace the targets with the correct service names or endpoints for your cluster. If ArgoCD runs in a different namespace or with custom service names, update the hostnames accordingly.
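If your ArgoCD installation uses the upstream defaults rather than KubeSense-prefixed services, the targets would instead point at ArgoCD's standard service names. The names and ports below are assumptions based on a stock ArgoCD install; confirm them in your cluster with `kubectl get svc -n argocd`:

```yaml
# Assumed upstream ArgoCD service names and metrics ports -- verify
# against your cluster before use (kubectl get svc -n argocd)
- targets:
    - argocd-metrics.argocd.svc.cluster.local:8082         # application controller
    - argocd-server-metrics.argocd.svc.cluster.local:8083  # API server
    - argocd-repo-server.argocd.svc.cluster.local:8084     # repo server
```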
Step 2: Apply the Configuration
Redeploy the KubeSense Helm chart to apply the changes:
```bash
helm upgrade kubesense kubesense/kubesense \
  -f values.yaml \
  --namespace kubesense
```

Key Metrics
The ArgoCD metrics endpoint exposes a rich set of Prometheus metrics, including:
- Application and repository server health (number of healthy, processing, suspended, or degraded applications)
- Synchronization and reconciliation counters
- Kubernetes API interaction metrics
- Git repository operation latency and status
- Cluster cache age and managed resource counts
Refer to the ArgoCD Prometheus metrics reference for the full list of available series.
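If you only need a subset of these series, the collector's filter processor can restrict what reaches the metrics pipeline. A minimal sketch, assuming the standard OpenTelemetry filter processor; the metric names shown are illustrative examples of common ArgoCD series, so adjust them to the series you actually need:

```yaml
processors:
  filter/argocd:
    metrics:
      include:
        match_type: regexp
        metric_names:
          # Illustrative selection: application info/sync counters and
          # Git request latency; everything else from this scrape is dropped.
          - argocd_app_info
          - argocd_app_sync_total
          - argocd_git_request_duration_seconds.*
```

Remember to add `filter/argocd` to the processors list of the metrics pipeline, alongside the receiver configured above.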
Troubleshooting
If metrics are missing:
- Verify connectivity – Ensure the OTEL collector pod can resolve and reach the ArgoCD services.
- Check service names – Confirm the targets match your ArgoCD service names and ports.
- Inspect collector logs – Run `kubectl logs -n kubesense <otel-collector-pod>` to look for scrape errors.
- Validate metrics endpoint – Port-forward the ArgoCD service and `curl` the `/metrics` endpoint to confirm it returns Prometheus-formatted data:

```bash
kubectl port-forward -n argocd svc/argocd-metrics 8082:8082
curl localhost:8082/metrics
```

- Check ArgoCD version – Some metrics are version-dependent; ensure your ArgoCD release exposes the desired metrics.
Best Practices
- Use Kubernetes service DNS or headless services to load-balance the scrape targets.
- Keep the `scrape_interval` aligned with your operational needs (default 60s).
- Label metrics with `kubesense.cluster` or other resource attributes if you aggregate multiple ArgoCD installations.
- Monitor collector logs regularly to catch target changes or authentication errors.