Create a JinaAI inference endpoint

Generally available; Added in 8.18.0

PUT /_inference/{task_type}/{jinaai_inference_id}

Create an inference endpoint to perform an inference task with the jinaai service.

To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are rerank or text_embedding.

  • jinaai_inference_id string Required

    The unique identifier of the inference endpoint.

Body (application/json)

  • chunking_settings object

    The chunking configuration object.

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.
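
The bounds above interact: the lower limit on max_chunk_size depends on the strategy, and overlap is capped at half of max_chunk_size. A minimal client-side sketch of those documented constraints (a hypothetical helper, not part of any Elastic client):

```python
def validate_chunking_settings(settings: dict) -> list[str]:
    """Check a chunking_settings object against the documented bounds."""
    errors = []
    strategy = settings.get("strategy", "sentence")
    max_chunk_size = settings.get("max_chunk_size", 250)
    # Lower bound is 20 for the sentence strategy, 10 for the word strategy.
    lower = 20 if strategy == "sentence" else 10
    if not lower <= max_chunk_size <= 300:
        errors.append(f"max_chunk_size must be between {lower} and 300")
    # overlap applies only to the word strategy and is capped at half the chunk size.
    if strategy == "word" and settings.get("overlap", 100) > max_chunk_size // 2:
        errors.append("overlap cannot exceed half of max_chunk_size")
    # sentence_overlap applies only to the sentence strategy and must be 0 or 1.
    if strategy == "sentence" and settings.get("sentence_overlap", 1) not in (0, 1):
        errors.append("sentence_overlap must be 0 or 1")
    return errors
```

For example, `{"strategy": "word", "max_chunk_size": 100, "overlap": 60}` fails the overlap check, while an empty object (all defaults) passes.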

  • service string Required

    The type of service supported for the specified task type. In this case, jinaai.

    Value is jinaai.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the jinaai service.

    • api_key string Required

      A valid API key of your JinaAI account.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

    • model_id string

      The name of the model to use for the inference task. For a rerank task, it is required. For a text_embedding task, it is optional.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from JinaAI. By default, the jinaai service sets the number of requests allowed per minute to 2000 for all task types.

      • requests_per_minute number

        The number of requests allowed per minute.

    • similarity string

For a text_embedding task, the similarity measure. One of cosine, dot_product, or l2_norm. The default value varies with the embedding type. For example, a float embedding type uses a dot_product similarity measure by default.

      Values are cosine, dot_product, or l2_norm.
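
Because the stored API key can be neither retrieved nor updated, rotating it means deleting the endpoint and recreating it under the same name. A sketch of that flow using the Python client (this assumes the `elasticsearch-py` `client.inference.delete`/`client.inference.put` methods; verify against your client version, and note the helper name is made up):

```python
def rotate_jinaai_api_key(client, task_type, inference_id, model_id, new_api_key):
    """Replace the API key of a JinaAI inference endpoint.

    The get-inference API never returns the stored key, and the key on an
    existing endpoint cannot be changed, so the endpoint is deleted and
    recreated under the same inference_id with the updated key.
    """
    client.inference.delete(inference_id=inference_id)
    client.inference.put(
        task_type=task_type,
        inference_id=inference_id,
        inference_config={
            "service": "jinaai",
            "service_settings": {
                "model_id": model_id,
                "api_key": new_api_key,
            },
        },
    )
```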

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    • return_documents boolean

For a rerank task, return the document text within the results.

    • task string

      For a text_embedding task, the task passed to the model. Valid values are:

      • classification: Use it for embeddings passed through a text classifier.
      • clustering: Use it for embeddings run through a clustering algorithm.
      • ingest: Use it for storing document embeddings in a vector database.
      • search: Use it for storing embeddings of search queries run against a vector database to find relevant documents.

      Values are classification, clustering, ingest, or search.

    • top_n number

      For a rerank task, the number of most relevant documents to return. It defaults to the number of documents. If this inference endpoint is used in a text_similarity_reranker retriever query and top_n is set, it must be greater than or equal to rank_window_size in the query.
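
The top_n versus rank_window_size constraint can be illustrated with a text_similarity_reranker retriever body. This is a sketch only; the index field name and helper are made up, and the retriever field names should be verified against your Elasticsearch version:

```python
def reranked_search_body(inference_id, query_text, rank_window_size=10, field="content"):
    """Build a text_similarity_reranker retriever body.

    If the inference endpoint referenced by inference_id was created with
    task_settings.top_n, that top_n must be >= rank_window_size here, or
    Elasticsearch rejects the query.
    """
    return {
        "retriever": {
            "text_similarity_reranker": {
                # Inner retriever supplies the candidates to rerank.
                "retriever": {
                    "standard": {"query": {"match": {field: query_text}}}
                },
                "field": field,
                "inference_id": inference_id,
                "inference_text": query_text,
                "rank_window_size": rank_window_size,
            }
        }
    }
```

With the rerank example later on this page (top_n of 10), a rank_window_size of 10 or less is valid; a rank_window_size of 20 would be rejected.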

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference ID

    • task_type string Required

      The task type

      Values are text_embedding or rerank.

PUT /_inference/{task_type}/{jinaai_inference_id}

Console:

PUT _inference/text_embedding/jinaai-embeddings
{
  "service": "jinaai",
  "service_settings": {
    "model_id": "jina-embeddings-v3",
    "api_key": "JinaAi-Api-key"
  }
}

Python:

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="jinaai-embeddings",
    inference_config={
        "service": "jinaai",
        "service_settings": {
            "model_id": "jina-embeddings-v3",
            "api_key": "JinaAi-Api-key"
        }
    },
)

JavaScript:

const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "jinaai-embeddings",
  inference_config: {
    service: "jinaai",
    service_settings: {
      model_id: "jina-embeddings-v3",
      api_key: "JinaAi-Api-key",
    },
  },
});

Ruby:

response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "jinaai-embeddings",
  body: {
    "service": "jinaai",
    "service_settings": {
      "model_id": "jina-embeddings-v3",
      "api_key": "JinaAi-Api-key"
    }
  }
)

PHP:

$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "jinaai-embeddings",
    "body" => [
        "service" => "jinaai",
        "service_settings" => [
            "model_id" => "jina-embeddings-v3",
            "api_key" => "JinaAi-Api-key",
        ],
    ],
]);

curl:

curl -X PUT \
  -H "Authorization: ApiKey $ELASTIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"service":"jinaai","service_settings":{"model_id":"jina-embeddings-v3","api_key":"JinaAi-Api-key"}}' \
  "$ELASTICSEARCH_URL/_inference/text_embedding/jinaai-embeddings"

Java:

client.inference().put(p -> p
    .inferenceId("jinaai-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("jinaai")
        .serviceSettings(JsonData.fromJson(
            "{\"model_id\":\"jina-embeddings-v3\",\"api_key\":\"JinaAi-Api-key\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/jinaai-embeddings` to create an inference endpoint for text embedding tasks using the JinaAI service.
{
  "service": "jinaai",
  "service_settings": {
    "model_id": "jina-embeddings-v3",
    "api_key": "JinaAi-Api-key"
  }
}
Run `PUT _inference/rerank/jinaai-rerank` to create an inference endpoint for rerank tasks using the JinaAI service.
{
  "service": "jinaai",
  "service_settings": {
    "api_key": "JinaAI-Api-key",
    "model_id": "jina-reranker-v2-base-multilingual"
  },
  "task_settings": {
    "top_n": 10,
    "return_documents": true
  }
}
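
Once created, a rerank endpoint like the one above is invoked with `POST _inference/rerank/jinaai-rerank`. A sketch of how that request body is shaped (the helper is hypothetical; the `query`/`input`/`task_settings` fields follow the perform-inference API, which you should verify against your stack version):

```python
def rerank_request_body(query, documents, top_n=None):
    """Build the body for POST _inference/rerank/<inference_id>.

    query is the search text, input the candidate documents to reorder.
    task_settings set at creation time (top_n, return_documents) can be
    overridden per request by including them here.
    """
    body = {"query": query, "input": documents}
    if top_n is not None:
        body["task_settings"] = {"top_n": top_n}
    return body
```

For example, `rerank_request_body("best pizza in town", ["doc a", "doc b"], top_n=1)` asks the endpoint to score both documents but return only the single most relevant one.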