Kube-Prometheus Stack Secure Metrics Scraping

Kubernetes Series (Article 6): Kube-Prometheus Stack Setup with Secure Metrics Scraping

This article covers how to securely deploy and configure the kube-prometheus-stack with TLS authentication, node-level metrics scraping, and service monitoring in a Kubernetes cluster.

Components Deployed

  • Prometheus: Cluster monitoring and alerting
  • Grafana: Visualisation dashboard
  • Alertmanager: Alerting system
  • Node Exporter: Node-level system metrics
  • Kube State Metrics: Kubernetes resource state
  • cAdvisor: Container-level metrics (exposed by Kubelet)
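
If the stack is not installed yet, a typical Helm installation looks like the sketch below. The release name is a placeholder and determines the generated resource names (for example, the Prometheus StatefulSet and Grafana Service referenced later), so keep it consistent with what is already in your cluster.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install <release-name> prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
Bash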

TLS Certificate Setup for Prometheus

Prometheus scrapes metrics from Kubernetes control plane components (e.g., the scheduler and controller manager) over HTTPS. We generate a client certificate signed by a trusted CA to authenticate and authorise these requests securely.

Certificate Creation Steps

# Create a CA key and a self-signed CA certificate (valid for 10 years)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=Prometheus CA"
# Create the Prometheus client key and a certificate signing request
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -out tls.csr -subj "/CN=prometheus-client"
# Restrict the certificate to client authentication and sign it with the CA (valid for 1 year)
cat <<EOF > tls.ext
extendedKeyUsage = clientAuth
EOF
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt -days 365 -sha256 -extfile tls.ext
Bash

Breakdown of Commands

  • Generate CA and client keys
  • Sign a TLS client certificate
  • Mark the certificate for client authentication
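
Before creating the secret, it is worth a quick sanity check that the certificate chains back to the CA and is marked for client authentication:

openssl verify -CAfile ca.crt tls.crt
openssl x509 -in tls.crt -noout -text | grep -A1 "Extended Key Usage"
Bash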

Create Kubernetes Secret

kubectl -n monitoring create secret generic prometheus-client-tls \
  --from-file=tls.crt --from-file=tls.key --from-file=ca.crt
Bash

This secret is required because, if TLS authentication is not configured correctly when Prometheus scrapes secure endpoints (such as the Kubernetes scheduler, controller manager, or etcd), those targets will show as DOWN on the Status > Targets page in the Prometheus UI.

As a result:

  • Metrics won’t be scraped
  • Dashboards will lack critical data
  • Alerts will not trigger
  • Troubleshooting will be painful and incomplete
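
Creating the secret alone is not enough: the Prometheus Operator only mounts secrets that are explicitly listed in the Prometheus spec. A minimal sketch of the corresponding Helm values (the release name is a placeholder; secrets listed under prometheusSpec.secrets are mounted at /etc/prometheus/secrets/<secret-name>/ inside the Prometheus pod):

cat <<'EOF' > tls-values.yaml
prometheus:
  prometheusSpec:
    # mounted at /etc/prometheus/secrets/prometheus-client-tls/
    secrets:
      - prometheus-client-tls
EOF
helm upgrade <release-name> prometheus-community/kube-prometheus-stack \
  -n monitoring -f tls-values.yaml --reuse-values
Bash

The mounted tls.crt, tls.key, and ca.crt files can then be referenced from ServiceMonitor tlsConfig settings or from additionalScrapeConfigs for the scheduler, controller manager, and etcd jobs.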

Enable Metrics Access on Kubernetes Components

Update the bind addresses in the control plane static pod manifests and in the kube-proxy configuration:

  • Controller Manager: /etc/kubernetes/manifests/kube-controller-manager.yaml, set --bind-address=0.0.0.0
  • Scheduler: /etc/kubernetes/manifests/kube-scheduler.yaml, set --bind-address=0.0.0.0
  • etcd: /etc/kubernetes/manifests/etcd.yaml, set --listen-metrics-urls=http://0.0.0.0:2381
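
The kubelet restarts these static pods automatically once the manifests change. A quick check on a control plane node confirms the new bind addresses (ports assume a default kubeadm layout: 10257 for the controller manager, 10259 for the scheduler, 2381 for etcd metrics):

sudo ss -ltnp | grep -E '10257|10259|2381'
Bash
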
Kube Proxy

Edit ConfigMap:

kubectl -n kube-system edit configmap kube-proxy
Bash

Change:

metricsBindAddress: 127.0.0.1:10249   # before
metricsBindAddress: 0.0.0.0:10249     # after
YAML

Restart kube-proxy:
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
Bash
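
To confirm the change took effect (the k8s-app=kube-proxy label matches the kubeadm-managed DaemonSet used above):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep metricsBindAddress
kubectl -n kube-system get pods -l k8s-app=kube-proxy
Bash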

Renewing TLS Certificates in the Future

To check expiry:
openssl x509 -in tls.crt -noout -enddate
Bash

To renew:

  • Re-sign the CSR
  • Replace the secret
  • Restart Prometheus
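
The first two steps can look like this sketch, which reuses the ca.key, tls.csr, and tls.ext files from the initial setup; piping --dry-run=client output into kubectl apply updates the existing secret in place:

openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out tls.crt -days 365 -sha256 -extfile tls.ext
kubectl -n monitoring create secret generic prometheus-client-tls \
  --from-file=tls.crt --from-file=tls.key --from-file=ca.crt \
  --dry-run=client -o yaml | kubectl apply -f -
Bash

Then restart Prometheus so it picks up the renewed certificate: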
kubectl -n monitoring rollout restart statefulset prometheus-kube-prometheus-prometheus
Bash

Troubleshooting kube-proxy Metrics

If you see:

Error scraping target: dial tcp ...:10249: connect: connection refused

Ensure kube-proxy is bound to 0.0.0.0:10249 and restarted correctly.

Test endpoint from inside the cluster:

curl http://<node-ip>:10249/metrics
Bash
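
You can also check target health through the Prometheus API instead of the UI (a sketch; prometheus-operated is the headless Service the Prometheus Operator creates for the Prometheus pods):

# In one terminal, forward the Prometheus API port
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090
# In another terminal, summarise the health of all scrape targets
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"' | sort | uniq -c
Bash
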
Why Default to 127.0.0.1?

Kubernetes components expose metrics locally by default for security. Only switch to 0.0.0.0 if:

  • Prometheus runs inside the cluster
  • You secure the endpoints (TLS, NetworkPolicy)

Grafana Access

Default credentials:

  • Username: admin
  • Password: prom-operator (or the value set in values.yaml)
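
To retrieve the password and reach the UI without exposing Grafana externally (a sketch; the secret and Service are typically named <release-name>-grafana, so substitute your Helm release name):

kubectl -n monitoring get secret <release-name>-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo
kubectl -n monitoring port-forward svc/<release-name>-grafana 3000:80
Bash

Grafana is then available at http://localhost:3000.
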
Dashboard Summary

Prebuilt Grafana dashboards include:

  • API Server, Scheduler, Controller Manager, etcd
  • Node Exporter, Kubelet, cAdvisor
  • Workloads, Namespaces, Alerts

If You Don’t Fix These:
  • Metrics won’t be scraped
  • Dashboards will lack critical data
  • Alerts will not trigger
  • Troubleshooting will be painful