
Deploy an Elasticsearch cluster

ECK

To deploy a simple Elasticsearch cluster with one Elasticsearch node, apply the following specification:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.16.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

The operator automatically creates and manages Kubernetes resources to achieve the desired state of the Elasticsearch cluster. It may take up to a few minutes until all the resources are created and the cluster is ready for use.
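While you wait, one way to follow progress is to watch the resources the operator creates. The commands below are a sketch using standard kubectl watch and label-selector flags; the label selector is the same one used later in this guide:

kubectl get elasticsearch quickstart -w
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart' -w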

Warning

Setting node.store.allow_mmap: false has performance implications and should be tuned for production workloads as described in the Virtual memory section.
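If you prefer to keep mmap enabled instead, the alternative described in the Virtual memory section is to raise vm.max_map_count on every Kubernetes node that may run Elasticsearch pods. A minimal host-level sketch is shown below; the same setting can also be applied through an init container or a DaemonSet:

sudo sysctl -w vm.max_map_count=262144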

Note

If your Kubernetes cluster does not have any Kubernetes nodes with at least 2GiB of free memory, the pod will be stuck in Pending state. Check Manage compute resources for more information about resource requirements and how to configure them.
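As a sketch of what such tuning can look like (the values below are illustrative, not sizing recommendations), memory and CPU requests can be set on the nodeSet's pod template:

spec:
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          # Illustrative values only; size these for your workload.
          resources:
            requests:
              memory: 2Gi
              cpu: 1
            limits:
              memory: 2Gi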

Note

The cluster that you deployed in this quickstart guide only allocates a persistent volume of 1GiB for storage using the default storage class defined for the Kubernetes cluster. You will most likely want to have more control over this for production workloads. Refer to Volume claim templates for more information.
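For illustration only (the 100Gi size and the standard storage class below are placeholders, not recommendations), a larger volume can be requested through a volume claim template on the nodeSet:

spec:
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        # The data volume claim is expected to be named elasticsearch-data.
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: standard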

For a full description of each CustomResourceDefinition (CRD), refer to the API Reference or view the CRD files in the project repository. You can also retrieve information about a CRD from the cluster. For example, describe the Elasticsearch CRD specification with describe:

kubectl describe crd elasticsearch 

Get an overview of the current Elasticsearch clusters in the Kubernetes cluster with get, including health, version and number of nodes:

kubectl get elasticsearch 

When you first create the cluster, there is no HEALTH status and the PHASE is empty. After the pod and service start up, the PHASE turns into Ready, and HEALTH becomes green. The HEALTH status comes from Elasticsearch's cluster health API.

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart            1       8.16.1            1s

While the Elasticsearch pod is starting up, it reports a Pending status, which you can check with get:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart' 

The output is similar to:

NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   0/1     Pending   0          9s

During and after start-up, that pod's logs can be accessed:

kubectl logs -f quickstart-es-default-0 

Once the pod has finished starting up, the original get request reports:

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    1       8.16.1    Ready   1m

A ClusterIP Service is automatically created for your cluster. Check it with get:

kubectl get service quickstart-es-http 

The output is similar to:

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.15.251.145   <none>        9200/TCP   34m

To make requests to the Elasticsearch API:

  1. Get the credentials.

    By default, a user named elastic is created with the password stored inside a Kubernetes secret. This default user can be disabled if desired; refer to Users and roles for more information.

    PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}') 
  2. Request the Elasticsearch root API. You can do so from inside the Kubernetes cluster or from your local workstation. For demonstration purposes, certificate verification is disabled using the -k curl flag; however, this is not recommended outside of testing. Refer to Setup your own certificate for more information. A follow-up example request appears after these steps.

    • From inside the Kubernetes cluster:

      curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200" 
    • From your local workstation:

      1. Use the following command in a separate terminal:

        kubectl port-forward service/quickstart-es-http 9200 
      2. Request localhost:

        curl -u "elastic:$PASSWORD" -k "https://localhost:9200" 
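As a follow-up sketch, the same credentials and port-forward can be used to call the cluster health API, which is the source of the HEALTH column shown earlier (still using -k here for demonstration only):

curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cluster/health?pretty"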

This completes the quickstart of deploying an Elasticsearch cluster. We recommend continuing to: