Monitoring ArgoCD with KubeSense

KubeSense can ingest ArgoCD Prometheus metrics through the OpenTelemetry (OTEL) Collector. Once configured, you can visualize GitOps health, controller performance, and repository activity alongside the rest of your observability data.

Prerequisites

Before you begin, ensure you have:

  1. An ArgoCD deployment running in your Kubernetes environment
  2. KubeSense installed with the OTEL collector enabled in the same cluster
  3. Network access from the collector to the ArgoCD metrics endpoints (/metrics on the application controller, API server, and repo server)
  4. Access to modify the KubeSense Helm values file and redeploy the chart

Collecting ArgoCD Metrics

Step 1: Update KubeSense Helm Values File

Add a Prometheus receiver to scrape the ArgoCD metrics endpoints. Update your KubeSense Helm values (under the kubesensor section) with the following configuration:

otel-collector:
  config:
    receivers:
      prometheus/argocd:
        config:
          global:
            scrape_interval: 60s
          scrape_configs:
            - job_name: argocd-metrics
              metrics_path: /metrics
              static_configs:
                - targets:
                  - kubesense-argocd-metrics.argocd.svc.cluster.local:8082
                  - kubesense-argocd-api.argocd.svc.cluster.local:8083
                  - kubesense-argocd-repo.argocd.svc.cluster.local:8084
    appendToPipelines:
      metrics:
        receivers:
        - prometheus/argocd
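For orientation, the appendToPipelines stanza above is assumed to merge the new receiver into the collector's existing metrics pipeline. The effective collector configuration would then look roughly like the following sketch; the processor and exporter names are placeholders, not KubeSense's actual ones:

```yaml
# Illustrative effective pipeline after the merge (names are examples only).
service:
  pipelines:
    metrics:
      receivers:
        - prometheus/argocd   # appended by appendToPipelines
        # ...existing KubeSense receivers...
      processors:
        - batch               # placeholder processor
      exporters:
        - otlp                # placeholder exporter
```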

Note: Replace the targets with the correct service names or endpoints for your cluster. If ArgoCD runs in a different namespace or with custom service names, update the hostnames accordingly.

Step 2: Apply the Configuration

Redeploy the KubeSense Helm chart to apply the changes:

helm upgrade kubesense kubesense/kubesense \
  -f values.yaml \
  --namespace kubesense

Key Metrics

The ArgoCD metrics endpoints expose a rich set of Prometheus metrics, including:

  • Application and repo server health (number of Healthy, Progressing, Suspended, or Degraded applications)
  • Synchronization and reconciliation counters
  • Kubernetes API interaction metrics
  • Git repository operation latency and status
  • Cluster cache age and managed resource counts

Refer to the ArgoCD Prometheus metrics reference for the full list of available series.
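As a quick illustration of what these series look like, the following sketch parses a few sample argocd_app_info lines (the metric ArgoCD uses to report per-application status) and tallies applications by their health_status label. The sample text is fabricated for illustration:

```python
from collections import Counter
import re

# Fabricated sample of Prometheus exposition text, shaped like the
# argocd_app_info series exposed by the ArgoCD application controller.
SAMPLE = """\
# HELP argocd_app_info Information about application.
# TYPE argocd_app_info gauge
argocd_app_info{name="guestbook",health_status="Healthy",sync_status="Synced"} 1
argocd_app_info{name="payments",health_status="Degraded",sync_status="OutOfSync"} 1
argocd_app_info{name="billing",health_status="Healthy",sync_status="Synced"} 1
"""

def health_counts(metrics_text: str) -> Counter:
    """Count applications per health_status label in argocd_app_info lines."""
    counts = Counter()
    for line in metrics_text.splitlines():
        if not line.startswith("argocd_app_info{"):
            continue
        match = re.search(r'health_status="([^"]+)"', line)
        if match:
            counts[match.group(1)] += 1
    return counts

print(health_counts(SAMPLE))  # Counter({'Healthy': 2, 'Degraded': 1})
```

In practice you would feed this the body of a /metrics response rather than a hard-coded string; a Prometheus-aware parser would be more robust for production use.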

Troubleshooting

If metrics are missing:

  1. Verify connectivity – Ensure the OTEL collector pod can resolve and reach the ArgoCD services.
  2. Check service names – Confirm the targets match your ArgoCD service names and ports.
  3. Inspect collector logs – Run kubectl logs -n kubesense <otel-collector-pod> to look for scrape errors.
  4. Validate metrics endpoint – Port-forward the ArgoCD service and curl the /metrics endpoint to confirm it returns Prometheus-formatted data:
    kubectl port-forward -n argocd svc/argocd-metrics 8082:8082
    curl localhost:8082/metrics
  5. Check ArgoCD version – Some metrics are version-dependent; ensure your ArgoCD release exposes the desired metrics.
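The format check in step 4 can also be done programmatically. Below is a minimal heuristic sketch, assuming the response body has already been fetched (for example with the curl command above): it looks for at least one non-comment line of the form `metric_name{labels} value`:

```python
import re

# Matches a metric name, optional {label} block, then a numeric value.
_SERIES = re.compile(
    r'^[a-zA-Z_:][a-zA-Z0-9_:]*(\{[^}]*\})?\s+[-+]?[0-9.eE+NnaIif]+'
)

def looks_like_prometheus(body: str) -> bool:
    """Heuristic: does this text contain at least one Prometheus series line?"""
    for line in body.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blank lines
        if _SERIES.match(line):
            return True
    return False

print(looks_like_prometheus('argocd_app_info{name="a"} 1\n'))   # True
print(looks_like_prometheus("<html>404 not found</html>"))      # False
```

A False result for a reachable endpoint usually means you hit the wrong port or path (for example an HTML error page rather than the metrics handler).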

Best Practices

  • Use Kubernetes service DNS for stable scrape targets; a ClusterIP service load-balances scrapes across pods, so use a headless service instead when each replica must be scraped individually.
  • Keep the scrape_interval aligned with your operational needs (60s in the configuration above).
  • Label metrics with kubesense.cluster or other resource attributes if you aggregate multiple ArgoCD installations.
  • Monitor collector logs regularly to catch target changes or authentication errors.
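For the multi-installation labeling point above, one option is an OTEL resource processor added to the same values file. The sketch below is an assumption-laden illustration: the attribute value is a placeholder you would set per installation, and it presumes appendToPipelines accepts processors the same way it accepts receivers, which you should verify against your KubeSense chart version:

```yaml
otel-collector:
  config:
    processors:
      resource/argocd:
        attributes:
          - key: kubesense.cluster    # attribute key from the tip above
            value: prod-us-east-1     # placeholder; set per installation
            action: upsert
    appendToPipelines:
      metrics:
        processors:
        - resource/argocd             # assumes processors can be appended here
```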