Hello @Anil_G,
First of all, thank you for your suggestion.
Just to clarify, in my initial tests the temperature was set to 0.2. I later reduced it to 0 specifically to ensure the LLM would follow the rules defined in the system prompt more strictly.
If you review the attached prompt, you’ll see that it clearly instructs the model to return a JSON-formatted response. Given this, I don’t believe the LLM should be attempting to invoke a tool - especially one that doesn’t exist - which is what’s causing the error.
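For context, here is a minimal sketch of how the call is set up, assuming the OpenAI Python SDK; the model name and message contents are placeholders, not my actual prompt. The point is that no tools are bound, temperature is 0, and the system prompt asks for JSON only, so a tool-call attempt is unexpected:

```python
from openai import OpenAI

client = OpenAI()

# Sketch of the configuration being discussed: temperature pinned to 0,
# no `tools` parameter passed, and the system prompt demands a JSON reply.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0,
    response_format={"type": "json_object"},  # ask the API itself to enforce JSON output
    messages=[
        {"role": "system", "content": "Follow the rules below and reply ONLY with a single JSON object."},
        {"role": "user", "content": "..."},  # placeholder user input
    ],
)

# Expected: a JSON string in the message content, not a tool/function call.
print(response.choices[0].message.content)
```

With no tools declared in the request, the model should have nothing to invoke, which is why the error looks like a prompt-adherence problem rather than a sampling-temperature one.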
That said, I’ll experiment with a higher temperature as you suggested, but given the behavior I’m seeing and the way the prompt is designed, I don’t expect that to resolve the core issue.
Thanks again for your input!