Perform sparse embedding inference on the service

Added in 8.11.0
Query parameters

- `timeout` — Specifies the amount of time to wait for the inference request to complete. Values are `-1` or `0`.
POST /_inference/sparse_embedding/{inference_id}
Console
POST _inference/sparse_embedding/my-elser-model
{
  "input": "The sky above the port was the color of television tuned to a dead channel."
}
curl \
  --request POST 'http://api.example.com/_inference/sparse_embedding/{inference_id}' \
  --header "Authorization: $API_KEY" \
  --header "Content-Type: application/json" \
  --data '{"input": "The sky above the port was the color of television tuned to a dead channel."}'
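The same request can be assembled from Python before being sent with any HTTP client. This is a minimal sketch: `build_sparse_embedding_request` is a hypothetical helper name, and only the path and body shape come from the examples above; authentication and transport are left to the caller.

```python
import json


def build_sparse_embedding_request(inference_id: str, text: str) -> tuple[str, str]:
    """Return the request path and JSON body for a sparse embedding call.

    The path format and the {"input": ...} body shape follow the
    console example above.
    """
    path = f"/_inference/sparse_embedding/{inference_id}"
    body = json.dumps({"input": text})
    return path, body


path, body = build_sparse_embedding_request(
    "my-elser-model",
    "The sky above the port was the color of television tuned to a dead channel.",
)
print(path)
print(body)
```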
Request example
Run `POST _inference/sparse_embedding/my-elser-model` to perform sparse embedding on the example sentence.
{ "input": "The sky above the port was the color of television tuned to a dead channel." }
Response examples (200)
An abbreviated response from `POST _inference/sparse_embedding/my-elser-model`.
{ "sparse_embedding": [ { "port": 2.1259406, "sky": 1.7073475, "color": 1.6922266, "dead": 1.6247464, "television": 1.3525393, "above": 1.2425821, "tuned": 1.1440028, "colors": 1.1218185, "tv": 1.0111054, "ports": 1.0067928, "poem": 1.0042328, "channel": 0.99471164, "tune": 0.96235967, "scene": 0.9020516 } ] }