Currently, we have a solution where a workflow loops through emails and sends the subject, email body, and sender as string arguments to an agent. The agent analyzes these strings, determines whether the email is for processing, identifies the important fields, and sends them out as output arguments. If there are many mistakes in the outcome, we modify the prompts. Please clarify these points for me:
- Based on our design, is it correct that the AI agent is not learning and will not improve in quality no matter how many emails it processes?
- Is there a better way to run this process so that we can minimize how often we have to modify the prompts?
- Can we use Document Understanding for this use case?
@Ezekiel_Gomez1
The agent does not learn from its executions. Do the emails you are processing have a large volume? If yes, have you considered the Communications Mining option?
What types of mistakes/issues are you having? Any specific issue?
If you want to process documents, you can try DU (IXP), as it is now feasible for most types of documents.
Have you created evaluation sets? How many? If you are using an agent in production, you need a large number of evaluation sets to ensure it does not break or give incorrect results.
Yes, that is correct for a system using only static prompts: you would have to keep changing them to fix errors, or later if you plan to expand your process scope or change your business rules. (Essentially, you are changing the logic, i.e. the prompt, based on keywords in the subject or body. Even though we are using an agent to make the decision, it is still rule-based, just written in plain text instead of built as logic.)
My approach would be to classify the types of emails that are in scope for processing.
We are looking at three attributes (Subject, Body, and Sender) to decide whether to route the mail for processing (Type of Email). Create a mapping table in a database with these four attributes/columns (Subject, Body, Sender, Type of Email) for all known scenarios.
This table will be referenced by the agent to make the decision (we have implemented a mini-RAG app), augmenting the data table with new mail attributes over time.
The system prompt will be along the lines of: you are an expert (liaison) who references the mapping table and the (Subject, Body, Sender) attributes of the newly received email and makes the decision.
For unknown scenarios, we implement a Human in the Loop process that creates a task in Action Center for manual review. The decision taken based on the business user's input is then added as a new row to our mapping table (use an RPA workflow to insert that row). This effectively creates a feedback loop for the agent to refer to.
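To make the mapping-table-plus-feedback-loop idea concrete, here is a minimal Python/SQLite sketch. The table name, columns, and exact-match lookup are assumptions for illustration only; a real mini-RAG app would use similarity retrieval rather than exact matching, and in UiPath the insert would be done by an RPA workflow, not this script.

```python
import sqlite3

# In-memory DB standing in for the real mapping table (names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE email_mapping (subject TEXT, body TEXT, sender TEXT, email_type TEXT)"
)
conn.execute(
    "INSERT INTO email_mapping VALUES "
    "('New claim', 'claim details attached', 'cust@acme.com', 'NewCase')"
)

def classify_email(subject: str, sender: str):
    """Look up a known scenario; None means unknown -> route to Action Center."""
    row = conn.execute(
        "SELECT email_type FROM email_mapping WHERE subject = ? AND sender = ?",
        (subject, sender),
    ).fetchone()
    return row[0] if row else None

def add_feedback(subject: str, body: str, sender: str, email_type: str) -> None:
    """Insert the business user's decision as a new known scenario (feedback loop)."""
    conn.execute(
        "INSERT INTO email_mapping VALUES (?, ?, ?, ?)",
        (subject, body, sender, email_type),
    )

# Known scenario resolves directly; an unknown one becomes known after human review.
assert classify_email("New claim", "cust@acme.com") == "NewCase"
assert classify_email("Invoice due", "billing@other.com") is None
add_feedback("Invoice due", "please pay", "billing@other.com", "Invoice")
assert classify_email("Invoice due", "billing@other.com") == "Invoice"
```

The key design point is that the agent never has logic hard-coded in its prompt for specific scenarios; the prompt stays stable while the table (the grounding data) grows with each human-reviewed decision.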
The other option, as you asked:
And yes, you can use IXP (Communications Mining) or UiPath Document Understanding for data extraction from emails. Here you would have to train a custom ML model to classify the emails and extract from them. You can also use the generative extractor instead.
Document Understanding has an inherent ability to learn: Data Manager allows you to label documents, and human input from the Validation Station can be used to retrain the ML models, improving accuracy over time.
Hope it helps!
What types of mistakes/issues are you having? Any specific issue?
- It categorizes the emails wrongly, usually because of new scenarios, but there are times when it is a known scenario and it will still mark it wrong.
If you want to process documents, you can try DU (IXP), as it is now feasible for most types of documents.
- Currently we are not reading attachments, just the email itself, but that might be a future use case.
Have you created evaluation sets? How many? If you are using an agent in production, you need a large number of evaluation sets to ensure it does not break or give incorrect results.
- We have 2 sets, with 25 and 18 evaluations.
More context on the process: we have to analyze whether the email is a new case or not. Sometimes the customers were already in a discussion with another party where they mention details of the case, and then they forward that email thread to us. In such scenarios it is still a new case even though there was an existing discussion in the thread. This is one of the difficult scenarios for us, because usually we say that if there is an existing discussion, consider it an old case.
Is it possible to train a custom ML model ourselves by feeding it selected emails (to control unit consumption) and then use the model as grounding for our existing agent? We receive an average of 800 emails per day and operate 365 days a year, so we are also being cautious about AI and agent unit consumption.
Hello @Ezekiel_Gomez1,
You can leverage two UiPath products to solve your problem: IXP (Communications Mining) and Agents.
In IXP, you can apply filters such as sender email or domain to control which emails are ingested by the platform, thereby managing the number of emails consumed.
If your filter is based on the subject or requires additional criteria, you can build agents and configure an agent trigger with multiple condition filters based on the incoming email. This can control your unit usage.
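To illustrate the gating idea (in practice these filters are configured in IXP ingestion settings or on the agent trigger, not written as code), here is a rough Python sketch of pre-filtering emails by sender domain and subject keywords so that only in-scope mail ever consumes agent units. The domain allow-list and keywords are made-up examples:

```python
from dataclasses import dataclass

# Hypothetical allow-list and keywords; in a real setup these would come
# from your IXP filter or agent-trigger condition configuration.
ALLOWED_DOMAINS = {"acme.com", "example.org"}
SUBJECT_KEYWORDS = ("case", "claim")

@dataclass
class Email:
    subject: str
    sender: str

def should_ingest(email: Email) -> bool:
    """True only for emails whose sender domain and subject pass the filters."""
    domain = email.sender.rsplit("@", 1)[-1].lower()
    subject = email.subject.lower()
    return domain in ALLOWED_DOMAINS and any(k in subject for k in SUBJECT_KEYWORDS)

# Out-of-scope mail is dropped before it reaches the agent, saving units.
assert should_ingest(Email("New claim #42", "a@acme.com"))
assert not should_ingest(Email("Weekly newsletter", "news@spam.io"))
```

At 800 emails a day, even a filter that drops a modest fraction of out-of-scope mail before the agent runs translates directly into lower unit consumption.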
Thanks,
Karthik