Description
Elasticsearch Version
any
Installed Plugins
any
Java Version
bundled
OS Version
any
Problem Description
When making an inference request, it is possible that the request is cancelled because the inference process is stopping. That likely shouldn't bubble up as an internal server error.
This can happen in a _search request. If the process is stopping, the stop was triggered by the user rather than caused by a crash, which indicates a user configuration problem, so we should return an error that says as much.
It seems to me that PyTorchResult should indicate the exception type (or status code), and that needs to be bubbled up to the user.
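As a rough sketch of the idea (the ErrorResult class, its processStopping flag, and the toException mapping below are illustrative assumptions, not the existing PyTorchResult / AbstractPyTorchAction API): if the error result carried enough information to distinguish a user-initiated stop from a crash, the failure handler could choose a 4xx status instead of always surfacing a 500.

```java
// Hypothetical sketch only: this is not the real PyTorchResult API, just an
// illustration of carrying a status hint alongside the error message.
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.rest.RestStatus;

final class ErrorResult {
    private final String message;
    // Assumed flag: set by the result processor when the failure is due to a
    // user-initiated stop of the deployment rather than a process crash.
    private final boolean processStopping;

    ErrorResult(String message, boolean processStopping) {
        this.message = message;
        this.processStopping = processStopping;
    }

    // Map the error onto a REST status instead of defaulting to 500.
    // A user-initiated stop is not a server fault, so something like 409
    // CONFLICT ("deployment is stopping") seems more appropriate.
    ElasticsearchStatusException toException() {
        RestStatus status = processStopping ? RestStatus.CONFLICT : RestStatus.INTERNAL_SERVER_ERROR;
        return new ElasticsearchStatusException("Error in inference process: [{}]", status, message);
    }
}
```

The exact status is debatable; the point is that the result object, not the action's generic onFailure path, should decide whether the failure is a server error.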
Steps to Reproduce
N/A
Logs (if relevant)
org.elasticsearch.ElasticsearchStatusException: Error in inference process: [inference canceled as process is stopping]
    at org.elasticsearch.xpack.ml.inference.deployment.AbstractPyTorchAction.onFailure(AbstractPyTorchAction.java:120)
    at org.elasticsearch.xpack.ml.inference.deployment.InferencePyTorchAction.processResult(InferencePyTorchAction.java:181)
    at org.elasticsearch.xpack.ml.inference.deployment.InferencePyTorchAction.lambda$doRun$3(InferencePyTorchAction.java:150)
    at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:257)
    at org.elasticsearch.xpack.ml.inference.pytorch.process.PyTorchResultProcessor.lambda$notifyAndClearPendingResults$3(PyTorchResultProcessor.java:148)
    at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1608)
    at org.elasticsearch.xpack.ml.inference.pytorch.process.PyTorchResultProcessor.notifyAndClearPendingResults(PyTorchResultProcessor.java:147)
    at org.elasticsearch.xpack.ml.inference.pytorch.process.PyTorchResultProcessor.process(PyTorchResultProcessor.java:135)
    at org.elasticsearch.xpack.ml.inference.deployment.DeploymentManager.lambda$startDeployment$2(DeploymentManager.java:180)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:977)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    at java.lang.Thread.run(Thread.java:1575)