If you’re building a cloud-native application, chances are you expose application-level metrics through an HTTP endpoint like /metrics, typically for a system such as Prometheus. This has become standard practice in modern systems—especially in Kubernetes environments.
A metrics endpoint is an invaluable source of insight into how your application behaves in production. With tools like Prometheus collecting data in near real time and platforms like Grafana visualizing it, you gain deep visibility into performance, reliability, and usage patterns.
To make this work, Prometheus periodically sends HTTP requests to your /metrics endpoint and scrapes the response, storing it for use across your observability pipeline.
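For context, what Prometheus receives from /metrics is plain text in the Prometheus exposition format; the metric names below are illustrative:

```
# HELP http_requests_total Total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="get",code="500"} 3
```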
However, one critical detail is often overlooked: security.
In many deployments, the /metrics endpoint is left completely unprotected. If your application is accessible over the public internet, that endpoint is too—along with all the information it exposes. This can include request rates, error counts, infrastructure details, and other signals that reveal how your system behaves under load.
For an attacker, this is reconnaissance data. It can be used to identify bottlenecks, target weak points, and craft more effective attacks.
Leaving /metrics unsecured is an operational security risk.
Securing the endpoint using a secret token
When exposing a /metrics endpoint, you should protect it with middleware that verifies whether the client is authorized to access it. A simple and effective approach is to require a secret token, configured as part of your application, that is shared only with your infrastructure.
Each request to /metrics must include this token, allowing your application to verify that the request is coming from a trusted source—such as your Prometheus instance—rather than an arbitrary external client.
Here is how to set it up. We'll use Go as the example backend, but the concepts apply to any backend stack. First, configure the secret as an environment variable:
export METRICS_VERIFICATION_KEY="some-super-secret-key"
In Go, you don’t need anything fancy for this—just a small piece of middleware that wraps your /metrics handler and checks a header (or query param) against METRICS_VERIFICATION_KEY.
package main

import (
    "crypto/subtle"
    "log"
    "net/http"
    "os"
)

// NewMetricsAuthMiddleware creates a middleware that verifies a secret token.
func NewMetricsAuthMiddleware(expectedKey string) func(http.Handler) http.Handler {
    return func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Only protect /metrics. If you use a router, you might apply
            // this middleware only to the specific route instead of checking the path.
            if r.URL.Path == "/metrics" {
                // Fail open when no key is configured; consider failing
                // closed (rejecting every request) in production instead.
                if expectedKey == "" {
                    next.ServeHTTP(w, r)
                    return
                }
                key := r.Header.Get("X-Metrics-Key")
                // Optionally allow a query-parameter fallback.
                if key == "" {
                    key = r.URL.Query().Get("key")
                }
                // Use a constant-time comparison to prevent timing attacks.
                if subtle.ConstantTimeCompare([]byte(key), []byte(expectedKey)) != 1 {
                    http.Error(w, "Unauthorized", http.StatusUnauthorized)
                    return
                }
            }
            next.ServeHTTP(w, r)
        })
    }
}

func main() {
    mux := http.NewServeMux()
    // Placeholder handler; in a real application this would be your
    // metrics exporter (e.g. promhttp.Handler()).
    mux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("up 1\n"))
    })

    middleware := NewMetricsAuthMiddleware(os.Getenv("METRICS_VERIFICATION_KEY"))
    log.Fatal(http.ListenAndServe(":8080", middleware(mux)))
}
Now whenever Prometheus requests metrics, it has to send the equivalent of one of these curl requests:
$ curl "http://localhost:8080/metrics?key=some-super-secret-key"
# Or using a Header
$ curl -H "X-Metrics-Key: some-super-secret-key" http://localhost:8080/metrics
Configuring Prometheus
To make Prometheus send your verification key, you configure it in prometheus.yml under scrape_configs.
You can configure Prometheus to pass the key via query parameters:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "my-app"
    static_configs:
      - targets: ["localhost:8080"]
    metrics_path: /metrics
    params:
      key: ["some-super-secret-key"]
Sending the key via headers
You can also configure Prometheus to send the key via a request header, which can be more secure: query strings often end up in server and proxy logs, while headers typically do not. Note, however, that custom scrape headers (the http_headers field) are only available in newer Prometheus versions; on older versions this configuration won't work.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "my-app"
    static_configs:
      - targets: ["localhost:8080"]
    metrics_path: /metrics
    http_headers:
      X-Metrics-Key:
        secrets: ["some-super-secret-key"]
🐳 If using Docker
Note that METRICS_VERIFICATION_KEY is consumed by your application container, not by Prometheus; the Prometheus container only needs the scrape configuration mounted in:
services:
  app:
    build: .
    environment:
      - METRICS_VERIFICATION_KEY=some-super-secret-key
    ports:
      - "8080:8080"

  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
If you are using the Prometheus Operator in Kubernetes, you'd use additionalScrapeConfigs or the authorization field, depending on your setup; the structure is slightly different.
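As a sketch of the Operator route: additionalScrapeConfigs references a Secret holding extra scrape configuration. All names below (the Secret, the job, the service address) are placeholders, and this assumes the query-parameter approach from above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
stringData:
  prometheus-additional.yaml: |
    - job_name: "my-app"
      static_configs:
        - targets: ["my-app.default.svc:8080"]
      metrics_path: /metrics
      params:
        key: ["some-super-secret-key"]
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: main
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
```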
Network-level security
While a secret token adds a necessary layer of protection, it should ideally be combined with network-level security:
- Mutual TLS (mTLS): the gold standard for service-to-service communication.
- IP whitelisting: configure your application or firewall to allow traffic to /metrics only from the known IP addresses of your Prometheus scrapers.
- Internal networking: in VPC or Kubernetes environments, ensure the metrics endpoint is only accessible on an internal interface.
Conclusion
To secure your /metrics endpoint, you can introduce a simple shared-secret mechanism between your application and Prometheus:
- Configure a secret token via environment variables.
- Add middleware that protects the /metrics endpoint by validating incoming requests using constant-time comparison.
- Update Prometheus to include the secret in its scrape configuration.
This approach ensures that only trusted internal systems can access your performance data while keeping the implementation lightweight and easy to integrate.