Kubernetes Quickstart

Learn how to deploy a self-hosted router in Kubernetes using Helm charts


note
Apollo recommends using the Apollo GraphOS Operator for production deployments and when managing multiple routers or complex architectures. The Operator provides declarative Kubernetes resources to manage routers, supergraphs, graph schemas, and subgraphs, making it easier to maintain consistency and automate deployments across your infrastructure.

Use the Operator when you need:
  • Production-grade deployments with declarative configuration management
  • Simplified management of multiple routers, supergraphs, and subgraphs
  • Support for complex architectures including single-cluster, multi-cluster, and hybrid configurations
  • Integration with existing CI/CD workflows through deploy-only patterns
For more details, see the Operator workflow patterns.
note
The Apollo Router Core source code and all its distributions are made available under the Elastic License v2.0 (ELv2) license.

This guide uses Helm charts to deploy a self-hosted router in Kubernetes. Using Helm is suitable for quick deployments, testing, or when you prefer direct Helm chart management.

This guide shows how to:

  • Get the router Helm chart from the Apollo container repository.

  • Deploy a router with a basic Helm chart.

Prerequisites

note
This guide assumes you are familiar with Kubernetes and Helm. If you are not familiar with either, you can find a Kubernetes tutorial and a Helm tutorial to get started.
  • A GraphOS graph set up in your Apollo account. If you don't have a graph, you can create one in the GraphOS Studio.

  • Helm version 3.x or higher installed on your local machine.

  • A Kubernetes cluster with access to the internet.

GraphOS graph

Set up your self-hosted graph and get its graph ref and API key.

If you need a guide to set up your graph, you can follow the self-hosted router quickstart and complete step 1 (Set up Apollo tools), step 4 (Obtain your subgraph schemas), and step 5 (Publish your subgraph schemas).
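Once you have them, you can optionally keep the graph ref and API key in shell variables so the later helm commands in this guide can reference them. The values below are placeholders that you must replace with your own:

Bash
# Placeholders; substitute the values for your own graph from GraphOS Studio
export GRAPH_API_KEY="<graph-api-key>"
export GRAPH_REF="<graph-ref>"

You can then pass them to Helm as --set managedFederation.apiKey="$GRAPH_API_KEY" and --set managedFederation.graphRef="$GRAPH_REF".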

Kubernetes cluster

If you don't have a Kubernetes cluster, you can set one up using kind or minikube locally, or by referring to your cloud provider's documentation.
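For example, a minimal local setup with kind might look like the following; the cluster and namespace names are only illustrative, and you need kind and kubectl installed:

Bash
# Create a local Kubernetes cluster (illustrative name)
kind create cluster --name apollo-router-demo

# Create the namespace used by the install commands in this guide
kubectl create namespace apollo-router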

Quickstart

To deploy the router, run the helm install command with an argument for the OCI URL of the router Helm chart. Optionally, you can add an argument for a values.yaml configuration file and/or additional arguments to override specific configuration values.

Bash
helm install <name_for_install> --namespace apollo-router \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router

The necessary arguments for specific configuration values:

  • --set managedFederation.apiKey="<graph-api-key>". The graph API key for your GraphOS graph.

  • --set managedFederation.graphRef="<graph-ref>". The graph ref identifying your GraphOS graph and variant.

Some optional but recommended arguments:

  • --namespace <router-namespace>. The namespace scope for this deployment.

  • --version <router-version>. The version of the router to deploy. If not specified, helm install deploys the latest version.

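Putting these together, a complete install that pins the namespace and chart version might look like the following; the release name, namespace, and version are illustrative, and --create-namespace simply creates the namespace if it doesn't already exist:

Bash
helm install my-router \
  --namespace apollo-router --create-namespace \
  --version 2.3.0 \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router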
Verify deployment

Verify that your router is one of the deployed releases with the helm list command. If you deployed with the --namespace <router-namespace> option, you can list only the releases within your namespace:

Bash
helm list --namespace <router-namespace>
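To confirm the router itself is running, you can also check the pods and send a test query through a port-forward. The namespace is illustrative, and the Service name depends on your release name (the chart typically names it <release-name>-router):

Bash
# Check that the router pod is ready
kubectl get pods --namespace apollo-router

# Forward the router Service (port 80 by default) to localhost and send a test query
kubectl port-forward --namespace apollo-router svc/<release-name>-router 4000:80 &
curl http://localhost:4000/ \
  -H 'content-type: application/json' \
  --data '{"query":"{ __typename }"}'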

Deployed architecture

By default, the chart deploys a single-replica router Deployment fronted by a ClusterIP Service on port 80, along with a dedicated ServiceAccount. Ingress, autoscaling, and the Prometheus ServiceMonitor are disabled (see the chart's default values below).
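You can inspect the created resources directly; the namespace is whatever you passed to helm install:

Bash
kubectl get deployments,services,serviceaccounts --namespace <router-namespace>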

Router Helm chart configuration

Apollo provides an application Helm chart with each release of Apollo Router Core in GitHub. Since router version v0.14.0, Apollo has released the router Helm chart as an Open Container Initiative (OCI) image in the GitHub container registry.

note
The path to the OCI router chart is oci://ghcr.io/apollographql/helm-charts/router, tagged with the applicable router release version. For example, the Helm chart for router version v2.3.0 is oci://ghcr.io/apollographql/helm-charts/router:2.3.0.
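If you want to inspect a chart before installing it, you can pull a specific version locally; the version below is only an example:

Bash
# Download the chart archive for a specific router release
helm pull oci://ghcr.io/apollographql/helm-charts/router --version 2.3.0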

You customize a deployed router with the same command-line options and YAML configuration options as any other router, but you set them through Helm CLI options and YAML keys in a values file.
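For example, a small values file might raise the replica count and enable introspection through the same router.configuration keys used in the router's YAML config file; the settings here are illustrative, not recommendations:

YAML
# my-values.yaml (illustrative)
replicaCount: 2

router:
  configuration:
    supergraph:
      introspection: true

Apply it with helm install --values my-values.yaml (or helm upgrade for an existing release) alongside the managedFederation settings shown earlier.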

Each router chart has a default values.yaml file with router and deployment settings. The released, unedited file has a few explicit settings, including a single replica, the supergraph endpoint listening on 0.0.0.0:4000, the health check endpoint listening on 0.0.0.0:8088, and hot reloading enabled via the --hot-reload argument.

Click to expand values.yaml for router v2.3.0
The values of the Helm chart for Apollo Router Core v2.3.0 in the GitHub container repository, as output by the helm show command:
Bash
helm show values oci://ghcr.io/apollographql/helm-charts/router
YAML
# Default values for router.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

# -- See https://www.apollographql.com/docs/graphos/reference/router/configuration#yaml-config-file for yaml structure
router:
  configuration:
    supergraph:
      listen: 0.0.0.0:4000
    health_check:
      listen: 0.0.0.0:8088

  args:
    - --hot-reload

managedFederation:
  # -- If using managed federation, the graph API key to identify router to Studio
  apiKey:
  # -- If using managed federation, use existing Secret which stores the graph API key instead of creating a new one.
  # If set along `managedFederation.apiKey`, a secret with the graph API key will be created using this parameter as name
  existingSecret:
  # -- If using managed federation, the name of the key within the existing Secret which stores the graph API key.
  # If set along `managedFederation.apiKey`, a secret with the graph API key will be created using this parameter as key, defaults to using a key of `managedFederationApiKey`
  existingSecretKeyRefKey:
  # -- If using managed federation, the variant of which graph to use
  graphRef: ""

# This should not be specified in values.yaml. It's much simpler to use --set-file from helm command line.
# e.g.: helm ... --set-file supergraphFile="location of your supergraph file"
supergraphFile:

# An array of extra environmental variables
# Example:
# extraEnvVars:
#   - name: APOLLO_ROUTER_SUPERGRAPH_PATH
#     value: /etc/apollo/supergraph.yaml
#   - name: APOLLO_ROUTER_LOG
#     value: debug
#
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""

# An array of extra VolumeMounts
# Example:
# extraVolumeMounts:
#   - name: rhai-volume
#     mountPath: /dist/rhai
#     readonly: true
extraVolumeMounts: []

# An array of extra Volumes
# Example:
# extraVolumes:
#   - name: rhai-volume
#     configMap:
#       name: rhai-config
#
extraVolumes: []

image:
  repository: ghcr.io/apollographql/router
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

containerPorts:
  # -- If you override the port in `router.configuration.server.listen` then make sure to match the listen port here
  http: 4000
  # -- For exposing the metrics port when running a serviceMonitor for example
  metrics: 9090
  # -- For exposing the health check endpoint
  health: 8088

# -- An array of extra containers to include in the router pod
# Example:
# extraContainers:
#   - name: coprocessor
#     image: acme/coprocessor:1.0
#     ports:
#       - containerPort: 4001
extraContainers: []

# -- An array of init containers to include in the router pod
# Example:
# initContainers:
#   - name: init-myservice
#     image: busybox:1.28
#     command: ["sh"]
initContainers: []

# -- A map of extra labels to apply to the resources created by this chart
# Example:
# extraLabels:
#   label_one_name: "label_one_value"
#   label_two_name: "label_two_value"
extraLabels: {}

lifecycle: {}
# preStop:
#   exec:
#     command:
#       - /bin/bash
#       - -c
#       - sleep 10

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}
deploymentAnnotations: {}

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80
  annotations: {}
  targetport: http

serviceMonitor:
  enabled: false

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

# set to true to enable istio's virtualservice
virtualservice:
  enabled: false
  # namespace: ""
  # gatewayName: "" # Deprecated in favor of gatewayNames
  # gatewayNames: []
  #   - "gateway-1"
  #   - "gateway-2"
  # Hosts: "" # configurable but will default to '*'
  #   - somehost.domain.com
  # http:
  #   main:
  #     # set enabled to true to add
  #     # the default matcher of `exact: "/" or prefix: "/graphql"`
  #     # with the <$fullName>.<.Release.Namespace>.svc.cluster.local destination
  #     enabled: true
  #     # use additionals to provide your custom virtualservice rules
  #     additionals: []
  #       - name: "default-nginx-routes"
  #         match:
  #           - uri:
  #               prefix: "/foo"
  #         rewrite:
  #           uri: /
  #         route:
  #           - destination:
  #               host: my.custom.backend.svc.cluster.local
  #               port:
  #                 number: 80

# set to true and provide configuration details if you want to make external https calls through istio's virtualservice
serviceentry:
  enabled: false
  # hosts:
  # a list of external hosts you want to be able to make https calls to
  #   - api.example.com

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
  #
  # Specify container-specific HPA scaling targets
  # Only available in 1.27+ (https://kubernetes.io/blog/2023/05/02/hpa-container-resource-metric/)
  # containerBased:
  #   - name: <container name>
  #     type: cpu
  #     targetUtilizationPercentage: 75

# -- Sets the [rolling update strategy parameters](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment). Can take absolute values or % values.
rollingUpdate:
  {}
# Defaults if not set are:
#   maxUnavailable: 25%
#   maxSurge: 25%

nodeSelector: {}

tolerations: []

affinity: {}

# -- Sets the [pod disruption budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) for Deployment pods
podDisruptionBudget: {}

# -- Set to existing PriorityClass name to control pod preemption by the scheduler
priorityClassName: ""

# -- Sets the [termination grace period](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution) for Deployment pods
terminationGracePeriodSeconds: 30

probes:
  # -- Configure readiness probe
  readiness:
    initialDelaySeconds: 0
  # -- Configure liveness probe
  liveness:
    initialDelaySeconds: 0

# -- Sets the [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) for Deployment pods
topologySpreadConstraints: []

# -- Sets the restart policy of pods
restartPolicy: Always

Separate configurations per environment

To support different deployment configurations for different environments (development, staging, production, etc.), Apollo recommends splitting your configuration values into separate files:

  • A common file, which contains values that apply across all environments.

  • An environment-specific file for each environment, which overrides values from the common file and adds environment-specific values.

The helm install command applies each --values <values-file> option in the order you set them within the command. Therefore, a common file must be set before an environment file so that the environment file's values are applied last and override the common file's values.
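For example, the files might look like the following; the file names and contents are illustrative:

YAML
# common_values.yaml -- values shared by every environment
router:
  configuration:
    health_check:
      listen: 0.0.0.0:8088

YAML
# prod_values.yaml -- production-only overrides and additions
replicaCount: 3
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10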

For example, this command deploys with a common_values.yaml file applied first and then a prod_values.yaml file:

Bash
helm install <name_for_install> --namespace <router-namespace> \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router --version <router-version> \
  --values router/values.yaml --values common_values.yaml --values prod_values.yaml
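To roll out configuration changes to an existing release later, you can run helm upgrade with the same ordered value files; the release name and namespace below are illustrative:

Bash
helm upgrade my-router oci://ghcr.io/apollographql/helm-charts/router \
  --namespace apollo-router \
  --version <router-version> \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  --values common_values.yaml --values prod_values.yaml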