I’ve encountered what seems to be a limitation (or possibly a bug) when working with larger datasets (200+ rows and 50+ columns) in a Conversational Agent setup.
When the data is returned from the tool, the agent responds with:
“… the full dataset returned by the tool is marked ‘omitted for brevity’ …”
Looking at the trace, the DataTable appears to be returned correctly by the tool, so the issue seems to lie in the handoff between the tool and the agent, not in data generation itself.
Note: I’ve tried the gpt-5 and gpt-5-mini models with max tokens configured, and the behavior is the same. However, the same process works fine with an Autonomous Agent.
Workaround:
As a temporary solution, summarizing or reducing the dataset before passing it to the agent seems to help.
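For reference, this is roughly what I’m doing to shrink the table before the handoff. It’s just a minimal sketch; the helper name `ReduceForAgent`, the column list, and the row cap are all my own choices, not anything from the SDK:

```csharp
using System;
using System.Data;
using System.Linq;

class DataTableReducer
{
    // Hypothetical helper: keep only the columns the agent actually needs
    // and cap the row count before passing the table to the agent.
    public static DataTable ReduceForAgent(DataTable source, string[] keepColumns, int maxRows = 50)
    {
        var reduced = new DataTable(source.TableName);

        // Copy only the requested columns, preserving their data types.
        foreach (var name in keepColumns)
            reduced.Columns.Add(name, source.Columns[name].DataType);

        // Copy at most maxRows rows into the reduced table.
        foreach (DataRow row in source.Rows.Cast<DataRow>().Take(maxRows))
            reduced.Rows.Add(keepColumns.Select(c => row[c]).ToArray());

        return reduced;
    }
}
```

With the table trimmed this way, the “omitted for brevity” truncation no longer appears in my tests, which is what makes me suspect a payload-size limit.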
Has anyone else experienced this behavior? Would appreciate any insights or confirmations.
Possible Root Cause (Hypothesis)
It seems likely that the Conversational Agent pipeline internally limits serialized payload size or truncates large structured objects (like DataTables) to prevent excessive token usage or serialization overhead.
This would explain why:
The tool’s output is intact.
The agent’s input shows “omitted for brevity”.
The Autonomous Agent (which likely has a different streaming mechanism) handles it correctly.
Questions for the UiPath Team / Community
Is this a known limitation in Conversational Agents when handling large datasets between tools and the agent context?
Is there a recommended approach to transfer large data objects (e.g., via a storage bucket, queue, or temporary file reference) instead of passing the raw DataTable? (A sketch of the pattern I have in mind follows below.)
Will this be enhanced or documented in future updates of the Conversational Agent SDK?
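To make the second question concrete, here is the kind of reference-passing pattern I’m imagining: the tool persists the full table outside the agent context and returns only a lightweight reference. This is purely a sketch under my own assumptions (the helper name `PersistDataTable`, the naive CSV serialization, and the local-file target are illustrative; a storage bucket upload would replace the `File.WriteAllText` step):

```csharp
using System;
using System.Data;
using System.IO;
using System.Linq;
using System.Text;

class DataTableHandoff
{
    // Hypothetical pattern: write the full DataTable to a CSV file and
    // return only its path, so the agent receives a reference rather than
    // the entire serialized payload.
    public static string PersistDataTable(DataTable table, string outputDir)
    {
        var path = Path.Combine(outputDir, $"{Guid.NewGuid():N}.csv");
        var sb = new StringBuilder();

        // Header row with all column names.
        sb.AppendLine(string.Join(",", table.Columns.Cast<DataColumn>().Select(c => c.ColumnName)));

        // Data rows (naive CSV; production code should escape commas/quotes).
        foreach (DataRow row in table.Rows)
            sb.AppendLine(string.Join(",", row.ItemArray));

        File.WriteAllText(path, sb.ToString());
        return path; // The tool returns this reference instead of the raw table.
    }
}
```

If there is an officially supported way to do this handoff (or plans for one), I’d be glad to hear it.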