Yevhen Tienkaiev

Grafana Cloud PDC agent and Istio

When you need to set up Grafana Private Data source Connect (PDC) in Kubernetes, you need to apply a few tricks to make it work.

Here I will describe what I did to get it running.


I created a custom Helm chart that contains the following deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Release.Name }}
    name: {{ .Release.Name }}
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.minReplicas }}
  selector:
    matchLabels:
      name: {{ .Release.Name }}
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: {{ .Release.Name }}
      annotations:
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
    spec:
      containers:
        - name: {{ .Release.Name }}
          env:
            - name: CLUSTER
              valueFrom:
                secretKeyRef:
                  key: cluster
                  name: {{ .Release.Name }}
            - name: HOSTED_GRAFANA_ID
              valueFrom:
                secretKeyRef:
                  key: hostedGrafanaId
                  name: {{ .Release.Name }}
            - name: TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: {{ .Release.Name }}
          args:
            - -cluster
            - "$(CLUSTER)"
            - -gcloud-hosted-grafana-id
            - "$(HOSTED_GRAFANA_ID)"
            - -token
            - "$(TOKEN)"
            - -ssh-key-file
            - "/home/pdc/.ssh/grafana_pdc_v3"
          image: grafana/pdc-agent:{{ .Values.version }}
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 1024m
              memory: 1Gi
            requests:
              cpu: 1024m
              memory: 1Gi
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
            runAsNonRoot: true
            capabilities:
              drop:
                - all
      securityContext:
        runAsUser: 30000
        runAsGroup: 30000
        fsGroup: 30000
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              app: {{ .Release.Name }}
          maxSkew: 1
          minDomains: {{ .Values.minReplicas }}
          topologyKey: "kubernetes.io/hostname"
          whenUnsatisfiable: DoNotSchedule
          matchLabelKeys:
            - pod-template-hash
          nodeAffinityPolicy: Honor
          nodeTaintsPolicy: Honor
        - labelSelector:
            matchLabels:
              app: {{ .Release.Name }}
          maxSkew: 1
          minDomains: {{ .Values.minReplicas }}
          topologyKey: "topology.kubernetes.io/zone"
          whenUnsatisfiable: DoNotSchedule
          matchLabelKeys:
            - pod-template-hash
          nodeAffinityPolicy: Honor
          nodeTaintsPolicy: Honor
```
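The env entries above read everything from a Kubernetes Secret named after the Helm release (the templates themselves only reference `.Values.minReplicas` and `.Values.version`). A minimal sketch of the Secret the chart expects — the name and all values below are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-pdc-agent   # placeholder: must match the Helm release name
type: Opaque
stringData:
  cluster: <pdc-cluster>            # the PDC cluster shown on the Grafana Cloud PDC setup page
  hostedGrafanaId: "<grafana-id>"   # your hosted Grafana instance ID
  token: <pdc-token>                # the PDC token issued by Grafana Cloud
```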

Nuances

Istio

Sidecar

Set `holdApplicationUntilProxyStarts: true` for the pods, so they will not start until the Istio sidecar has started.
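The deployment above does this per pod via the `proxy.istio.io/config` annotation. The same behaviour can also be enabled mesh-wide; a sketch, assuming your mesh config lives in the usual `istio` ConfigMap in `istio-system`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    defaultConfig:
      holdApplicationUntilProxyStarts: true
```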

Access (optional)

If you do not allow outbound traffic, create ServiceEntry resources that allow the required hosts.
What I have for API access:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: {{ .Values.name }}-api
spec:
  hosts:
    - private-datasource-connect-api-<cluster>.grafana.net
  location: MESH_EXTERNAL
  ports:
    - name: https
      number: 443
      protocol: HTTPS
  resolution: DNS
```

What I have for SSH access:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: {{ .Values.name }}-ssh
spec:
  hosts:
    - private-datasource-connect-<cluster>.grafana.net
  location: MESH_EXTERNAL
  ports:
    - name: tcp
      number: 22
      protocol: TCP
  resolution: DNS
```
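These ServiceEntry resources only matter if egress is actually restricted. For reference, a sketch of the kind of configuration that creates such a restriction — a namespace-wide Sidecar with outbound traffic locked to the mesh registry (the namespace name is an assumption):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: pdc   # assumption: the namespace the PDC agent runs in
spec:
  egress:
    - hosts:
        - "./*"            # services in the same namespace
        - "istio-system/*" # the control plane
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY    # only hosts known to the mesh, including ServiceEntries, are reachable
```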

Grafana PDC config

Key Pair force regeneration

I set `-ssh-key-file` to `/home/pdc/.ssh/grafana_pdc_v3` because if the host is already in the allowed list (for SSH access), the agent does not start and falls into constant restarts; pointing it at a new key file path forces it to generate a fresh key pair.
This should be addressed in a GitHub issue.
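To avoid editing the template on the next regeneration, the version suffix could be driven from values.yaml instead of being hard-coded; `sshKeyVersion` below is a hypothetical value I am introducing for illustration:

```yaml
# values.yaml (hypothetical key): bump this to force a fresh key pair
sshKeyVersion: v3
```

The deployment args would then use `- "/home/pdc/.ssh/grafana_pdc_{{ .Values.sshKeyVersion }}"` in place of the hard-coded path.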

Log level

Currently, the PDC agent log level is set to debug.
Unfortunately, as of today, when you use the `-ssh-key-file` parameter you cannot change it.
This should be addressed in a GitHub issue.
