Prometheus: Alertmanager with SendGrid and Slack API

I've been working with Prometheus a lot; it's been about three years since I switched from Nagios / Zabbix monitoring to metrics-based monitoring. Before that I was writing lots of different Perl, Python and Shell scripts to monitor infrastructure and applications via the SNMP, NRPE or NSCA "protocols".

I discovered Kubernetes in its second year of life, and then Prometheus. You can read about a lot of different Prometheus setups, like adding Grafana, or building a bigger Prometheus with Cortex or Thanos, but what is really useful to me on my personal projects? Setting up Alertmanager.

Here I will show you how to manage Alertmanager. At first I installed it with Helm and configured a custom values.yaml file, but let's keep it simple: in this post I will show you how to edit the YAMLs, and later how to customize the Helm charts :D

AlertManager

Configure AlertManager

Alertmanager is the tool that sends all notifications, via mail or API. The Prometheus server holds all the alert rules; when an alert is triggered by a rule, Alertmanager sends the notification.

The first step is to configure the Alertmanager config file (alertmanager.yml).

Here we can add customization parameters such as:

  • Global: where we set the notification senders, either by SMTP (SendGrid, Gmail...) or API (Slack, WeChat...), as well as the global timeouts.

  • Route: where we define the routing logic: how notifications are grouped and who they are sent to.

  • Inhibit Rules: rules that mute notifications for some alerts while other, related alerts are already firing (see the sketch after this list).

  • Receivers: where we declare the different receivers. Maybe we need to send notifications to Slack, to mail, or to both; here we add as many receivers as we need.
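The main example below does not use inhibit rules, so here is a minimal sketch of what one could look like; the severity values and the labels in equal are assumptions chosen to match the alerts defined later in this post:

# Sketch: mute "High" notifications while the same alert/instance
# is already firing at "Critical" severity.
inhibit_rules:
  - source_match:
      severity: Critical
    target_match:
      severity: High
    equal: ['alertname', 'instance']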

In this sample I will set up both an API receiver (via Slack) and a mail receiver (via SendGrid).

global:
  # SendGrid SMTP properties.
  smtp_smarthost: 'smtp.sendgrid.net:587'
  smtp_from: 'Alertmanager <alertmanager@cosckoya.io>'
  smtp_auth_username: 'apikey'
  smtp_auth_password: '<API KEY HERE>'
  # Slack API properties
  slack_api_url: 'https://hooks.slack.com/services/<API TOKEN HERE>'

receivers:
  - name: mail
    email_configs:
      - to: "admins@mail.me"
        headers:
          Subject: "Alert ({{ .Status }}): {{ .CommonLabels.severity }} {{ .CommonAnnotations.message }} ({{ .CommonLabels.alertname }})"
        html: |
          Greetings,
          <p>
          You have the following firing alerts:
          <ul>
          {{ range .Alerts }}
          <li>{{.Labels.alertname}} on {{.Labels.instance}}</li>
          <li>Labels:</li>
          <li>{{ range .Labels.SortedPairs }} - {{ .Name }} = {{ .Value }}</li>
          <li>{{ end }}Annotations:</li>
          <li>{{ range .Annotations.SortedPairs }} - {{ .Name }} = {{ .Value }}</li>
          <li>{{ end }}---</li>
          {{ end }}
          </ul>
          </p>
  - name: slack
    slack_configs:
      - channel: '#alerting'
        send_resolved: true
        icon_url: https://avatars3.githubusercontent.com/u/3380462
        title: |-
          [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
          {{- if gt (len .CommonLabels) (len .GroupLabels) -}}
            {{" "}}(
            {{- with .CommonLabels.Remove .GroupLabels.Names }}
              {{- range $index, $label := .SortedPairs -}}
                {{ if $index }}, {{ end }}
                {{- $label.Name }}="{{ $label.Value -}}"
              {{- end }}
            {{- end -}}
            )
          {{- end }}
        text: >-
          {{ range .Alerts -}}
          *Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}

          *Description:* {{ .Annotations.description }}

          *Details:*
            {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
            {{ end }}
          {{ end }}

route:
  group_wait: 10s
  group_interval: 5m
  receiver: mail
  repeat_interval: 10s
  routes:
    - match:
        severity: High
      repeat_interval: 1m
      receiver: slack

These parameters set up Alertmanager to send notifications to the declared "routes": mail as the default, and the Slack API when the match case applies.

Now we have to set up some alerts. These are just for demonstration, to test that the alert triggering works.

groups:
  - name: deadman.rules
    rules:
      - alert: deadman
        expr: vector(1)
        for: 10s
        labels:
          severity: Critical
        annotations:
          message: Dummy Check
          summary: This is a dummy check meant to ensure that the entire alerting pipeline is functional.
  - name: pod.rules
    rules:
      - alert: PodLimit
        expr: kubelet_running_pods > 0
        for: 15m
        labels:
          severity: High
        annotations:
          message: 'Kubelet {{$labels.instance}} is running {{$value}} pods.'
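For Prometheus to evaluate these rules, the rule file also has to be loaded from prometheus.yml. This is a minimal sketch that assumes the rules above were saved as alerting-rules.yml next to the Prometheus config (the Helm chart wires this up through its values instead):

# prometheus.yml: point Prometheus at the alert rules defined above
rule_files:
  - "alerting-rules.yml"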

The last step is to set up the "alerting" section in the Prometheus server config file (prometheus.yml):

[...]
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - prometheus-alertmanager:80
[...]
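The target prometheus-alertmanager:80 is the Kubernetes Service that the Helm chart creates for Alertmanager. If you run Alertmanager on its own instead, a sketch of the same block pointing at Alertmanager's default port would be:

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093  # Alertmanager's default listen port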

If you managed to set it up correctly, you will see a "gatling gun" of Alertmanager notifications in your mail and in your Slack channel.

Enjoy!
