Is the LLM forced to generate valid JSON? #92
Unanswered
michelonfrancoisSUMMIT asked this question in Q&A
Hello,

I am working on an agent-based project. I am using an instance of the LlamaCppAgent class (library version: llama-cpp-agent==0.2.35), calling its get_chat_response method with a message and a structured_output_settings parameter that contains tools. My question is: is the LLM forced to generate valid JSON, i.e. with curly brackets, the correct argument names as keys, and argument values of the correct type? I have not been able to confirm this from the documentation. I use the grammar argument for other use cases, and there the LLM is forced to follow a strict output format (through token filtering); is that also the case when using tools as described above? I have tested the LLM extensively and so far it seems to work fine, but I want to be sure.

Best
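For context, a minimal sketch of the setup the question describes. The model path and the tool function are placeholders, and LlamaCppPythonProvider and LlmStructuredOutputSettings.from_functions reflect the 0.2.x API as I recall it from the project's examples, so treat those exact names as assumptions:

```python
from llama_cpp import Llama
from llama_cpp_agent import LlamaCppAgent
from llama_cpp_agent.providers import LlamaCppPythonProvider
from llama_cpp_agent.llm_output_settings import LlmStructuredOutputSettings

def get_current_weather(city: str) -> str:
    """Return the current weather for a city (placeholder tool)."""
    return f"Sunny in {city}"

# Hypothetical model path; any local GGUF model works here.
llama = Llama(model_path="models/model.gguf", n_ctx=4096)
provider = LlamaCppPythonProvider(llama)
agent = LlamaCppAgent(provider)

# Build structured output settings from the tool functions; the question is
# whether the agent turns these into a grammar that constrains generation.
settings = LlmStructuredOutputSettings.from_functions([get_current_weather])

response = agent.get_chat_response(
    "What is the weather in Paris?",
    structured_output_settings=settings,
)
print(response)
```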
Replies: 1 comment

It turns out it is. I checked the code and, from what I understood, the agent class returns a grammar object which forces the LLM to follow it.
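To make the mechanism concrete: llama-cpp-agent builds on llama-cpp-python, where a GBNF grammar masks the logits of any token that would break the grammar at each decoding step, so sampling can only produce grammar-legal text. A minimal sketch of that lower-level token filtering, with an illustrative hand-written grammar rather than the one the agent generates (model path hypothetical):

```python
from llama_cpp import Llama, LlamaGrammar

# Tiny GBNF grammar that only admits a flat JSON object with string values.
gbnf = r'''
root   ::= "{" ws pair (ws "," ws pair)* ws "}"
pair   ::= string ws ":" ws string
string ::= "\"" [a-zA-Z0-9 _-]* "\""
ws     ::= [ \t\n]*
'''

llm = Llama(model_path="models/model.gguf")  # hypothetical path
grammar = LlamaGrammar.from_string(gbnf)

# At each decoding step, tokens that would violate the grammar are filtered
# out, so the sampler can only pick grammar-legal continuations.
out = llm("Return the user as JSON:", grammar=grammar, max_tokens=128)
print(out["choices"][0]["text"])
```

Note that a grammar guarantees the syntactic shape of the output; the content the model puts inside the allowed positions is still up to the model.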