YuKitAs/prometheus-custom-metrics-in-k8s

A lightweight Python Flask application demonstrating how to expose Prometheus custom metrics in Kubernetes (Minikube) for horizontal auto-scaling.


This demo shows how to set up the infrastructure to expose custom metrics from a Flask application to Kubernetes' Custom Metrics API, which can then be used by the Horizontal Pod Autoscaler (HPA) to perform auto-scaling.

Environment

  • Minikube: v1.35.0
  • Docker (optional, can be replaced by Minikube's built-in Docker daemon)
  • Kubectl (optional, can be replaced by minikube kubectl)
  • Helm (optional, used to install Prometheus and Prometheus Adapter in this demo)
  • Python 3.12

Walkthrough

  1. Create a Flask app and collect the custom metric http_requests_total, which can be found at localhost:5000/metrics.
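
A minimal sketch of such an app, assuming the prometheus_client library is used; the endpoint and help text are illustrative, and the actual implementation lives in this repo:

from flask import Flask
from prometheus_client import Counter, generate_latest, CONTENT_TYPE_LATEST

app = Flask(__name__)

# prometheus_client exposes this counter as http_requests_total
# (the _total suffix is appended to counters automatically).
REQUEST_COUNT = Counter('http_requests', 'Total number of HTTP requests')

@app.route('/hello')
def hello():
    REQUEST_COUNT.inc()  # count every request to this endpoint
    return 'Hello!'

@app.route('/metrics')
def metrics():
    # Serve all collected metrics in the Prometheus text format.
    return generate_latest(), 200, {'Content-Type': CONTENT_TYPE_LATEST}

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)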

  2. Start Minikube with

$ minikube start [--driver=docker]

If Docker is installed, it should be used as the default driver.

  3. Build a Docker image with Minikube's internal Docker daemon:

$ eval $(minikube docker-env)
$ docker build -t localhost/prometheus-custom-metrics-in-k8s .

The localhost prefix is added because the default image namespace would otherwise be docker.io/library.
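
To confirm the image is visible to Minikube's Docker daemon, a quick check (using the tag from above):

$ docker images localhost/prometheus-custom-metrics-in-k8s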

Note: if Minikube can't find the built image, try this workaround:

$ docker image save -o prometheus-custom-metrics-in-k8s.tar localhost/prometheus-custom-metrics-in-k8s
$ minikube image load prometheus-custom-metrics-in-k8s.tar
  4. Deploy the app to K8s with
$ kubectl apply -f k8s/

deployment.yaml and service.yaml will be applied to create a Deployment and a Service in the default namespace.
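
For orientation, here is a minimal sketch of what service.yaml plausibly contains, inferred from the ports used elsewhere in this walkthrough (the Service listens on port 80, the Flask app on 5000); the selector label is assumed, and the actual manifest is in k8s/:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-custom-metrics-in-k8s
spec:
  selector:
    app: prometheus-custom-metrics-in-k8s  # assumed label, see k8s/deployment.yaml
  ports:
    - port: 80          # Service port used by port-forward and the scrape target
      targetPort: 5000  # the Flask app's listening port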

  5. Verify the app is running:

$ kubectl port-forward svc/prometheus-custom-metrics-in-k8s 5000:80
$ curl localhost:5000/hello
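
Optionally, generate some traffic so that http_requests_total increases (handy later when rate() is computed):

$ for i in $(seq 1 100); do curl -s localhost:5000/hello > /dev/null; done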
  6. Add prometheus-community to the Helm repo:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
  7. Install Prometheus into the monitoring namespace with Helm:
$ helm install prometheus prometheus-community/prometheus -n monitoring --create-namespace

Alternatively, prometheus-community/kube-prometheus-stack can be installed, but not all of its components are needed for this example.

  8. After Prometheus is installed, we can find the Service prometheus-server, which exposes port 80 by default:
$ kubectl -nmonitoring get svc prometheus-server
  9. Install Prometheus Adapter into the monitoring namespace with Helm:
$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring --set prometheus.url=http://prometheus-server.monitoring.svc --set prometheus.port=80

It's important to set the correct URL and port here. If you chose to install kube-prometheus-stack, the Prometheus server will be managed by the Prometheus Operator and these values will differ; check the Service prometheus-kube-prometheus-prometheus instead.
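
If the adapter can't reach Prometheus, its logs will show connection errors. Assuming the Helm release created a Deployment named prometheus-adapter:

$ kubectl -nmonitoring logs deploy/prometheus-adapter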

  10. Add a job to the ConfigMap of Prometheus Server:
$ kubectl -nmonitoring get cm prometheus-server -o yaml > k8s/prometheus-config.yaml

Add a new job under scrape_configs:

- job_name: prometheus-custom-metrics-in-k8s
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_namespace]
      target_label: namespace
    - source_labels: [__meta_kubernetes_pod_name]
      target_label: pod
  metrics_path: '/metrics'
  static_configs:
    - targets: ['prometheus-custom-metrics-in-k8s.default.svc.cluster.local:80']

The relabel_configs are needed to add the namespace and pod labels to the custom metric, so that it looks like this:

http_requests_total{instance="<pod-ip>:5000", job="prometheus-custom-metrics-in-k8s", namespace="default", pod="<pod-name>"} 

Apply the config:

$ kubectl -nmonitoring apply -f k8s/prometheus-config.yaml

You could also edit the ConfigMap directly with

$ kubectl -nmonitoring edit cm prometheus-server 
  11. Restart the prometheus-server pod so it picks up the new config.
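
One way to do this, assuming the chart created a Deployment named prometheus-server:

$ kubectl -nmonitoring rollout restart deployment prometheus-server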

  12. Add a rule to the ConfigMap of Prometheus Adapter:

$ kubectl -nmonitoring get cm prometheus-adapter -o yaml > k8s/prometheus-adapter-config.yaml

Add a new rule under rules to derive the http_requests_per_second metric from http_requests_total:

- seriesQuery: 'http_requests_total'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total"
    as: "${1}_per_second"
  metricsQuery: sum(rate(http_requests_total[5m])) by (namespace, pod)

The metricsQuery can be validated in the Prometheus UI at localhost:9090 via port forwarding:

$ kubectl -nmonitoring port-forward svc/prometheus-server 9090:80 

If it shows the correct result, apply the config:

$ kubectl -nmonitoring apply -f k8s/prometheus-adapter-config.yaml

You could also edit the ConfigMap directly with

$ kubectl -nmonitoring edit cm prometheus-adapter 
  13. Restart the prometheus-adapter pod so it picks up the new config.
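
Again, assuming a Deployment named prometheus-adapter:

$ kubectl -nmonitoring rollout restart deployment prometheus-adapter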

  14. Verify the metric can be retrieved via the Custom Metrics API:

$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

In the resources section, we should find the metric namespaces/http_requests_per_second (and, since the rule also overrides the pod resource, pods/http_requests_per_second).
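
The per-pod values can then be queried directly; for example, assuming the app runs in the default namespace:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests_per_second"

With the metric available, an HPA could consume it to scale the Deployment. The manifest below is not part of this repo; it's a hypothetical sketch with illustrative replica counts and target value:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: prometheus-custom-metrics-in-k8s
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: prometheus-custom-metrics-in-k8s
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "10"  # illustrative: scale out above ~10 req/s per pod on average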
