I’ve been exploring the Agents Solution in UiPath, and I recently encountered an issue with how it handles LLM calls.
Specifically, I noticed that a system prompt that works perfectly with the Content Generation activity fails when used in an Agent. I tested the same prompt directly in Claude and ChatGPT, and it worked as expected in both cases.
Below are screenshots showing the different behaviors:
From my observations, it looks like the Agent is forcing a tool call in the background, possibly behavior UiPath layers on top of the raw LLM call. That would explain why the prompt fails under Agents while still working normally in every other context.
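To make the suspicion concrete, here is a rough sketch of the difference I have in mind. The payload shapes, field names, and model name are my assumptions about what a generic chat-completion request looks like, not anything I can actually see inside UiPath:

```python
import json

# What the Content Generation activity appears to send: a plain chat
# completion, where the model is free to answer with the JSON text my
# system prompt asks for.
plain_request = {
    "model": "claude-3-5-sonnet",  # illustrative model name
    "temperature": 0,
    "messages": [
        {"role": "system", "content": "...contents of prompt.txt..."},
        {"role": "user", "content": "...the actual input..."},
    ],
}

# What I suspect the Agent builds behind the scenes: a tool definition
# plus a forced tool choice. If the model is pushed to call a tool my
# prompt never mentions, that forced call and the JSON-only instruction
# conflict, which would explain the failure.
forced_tool_request = {
    **plain_request,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "some_internal_tool",  # hypothetical name
                "parameters": {"type": "object", "properties": {}},
            },
        }
    ],
    "tool_choice": "required",  # force the model to call a tool
}

print(json.dumps(forced_tool_request, indent=2))
```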
I’ve attached the system prompt I used for your reference: prompt.txt (6.0 KB)
Just to clarify, in my initial tests the temperature was set to 0.2. I later reduced it to 0 specifically to ensure the LLM would follow the rules defined in the system prompt more strictly.
If you review the attached prompt, you'll see that it explicitly instructs the model to return a JSON-formatted response. Given that, the LLM shouldn't be attempting to invoke a tool at all, let alone one that doesn't exist, and that attempted call is exactly what's producing the error.
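For what it's worth, the failure mode I'm describing would look something like the sketch below. The tool name, response shape, and dispatch logic are all hypothetical, just to show why a forced call to a tool that was never registered would error out before any JSON ever reaches the output:

```python
# Hypothetical assistant turn: instead of the JSON text my prompt asks
# for, the model emits a call to a tool name that was never defined.
model_turn = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",  # illustrative id
            "type": "function",
            "function": {
                "name": "return_json_result",  # hypothetical, not a real tool
                "arguments": "{\"status\": \"ok\"}",
            },
        }
    ],
}

registered_tools = {}  # no tools are configured for this Agent

# A runtime that tries to dispatch the call has nothing to dispatch to,
# so it fails rather than returning the model's text.
for call in model_turn["tool_calls"]:
    name = call["function"]["name"]
    if name not in registered_tools:
        print(f"would fail here: model requested unknown tool '{name}'")
```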
That said, I’ll experiment with a higher temperature as you suggested, but based on the behavior and prompt design, I don’t believe that will resolve the core issue.