Context Grounding in the Content Generation Activity
We’re very excited to announce the general availability (GA) of Context Grounding in GenAI Activities. You can now reference Context Grounding indexes, or ground on a file directly, in your custom prompt activities using your favorite AI Trust Layer LLM.
This unlocks a bevy of high-value use cases where proprietary data, or data contained in individual documents, needs to be referenced at runtime in order to faithfully answer the prompt.
By using the Content Generation activity, you can set up dynamic prompts that reference an index that has been created in the AI Trust Layer admin page.
By using the new AI Trust Layer section in Admin, you can create a Context Grounding index that can be referenced by the Content Generation activity. Indexes are created and synced from documents stored in UiPath Orchestrator storage buckets (Azure and S3 compatible). You can also identify a location in SharePoint/OneDrive or Google Drive to sync documents from.
You can also automate the ingestion of new documents and data into an index by combining Storage Bucket activities with the Update Context Grounding Index activity.
Note: it is best practice to ensure that the index has successfully synced/updated before running an activity that needs to reference the newly synced index. This can be accomplished by adding delays or by executing the Content Generation activity in a separate workflow. Please be on the lookout for a trigger activity for Indexes in the future!
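To make the ingest-then-query pattern concrete, here is a minimal sketch in Python. Every helper name in it (upload_to_storage_bucket, update_context_grounding_index, is_index_sync_complete, run_content_generation) is a hypothetical stand-in for the corresponding Storage Bucket, Update Context Grounding Index, and Content Generation activities, not a UiPath API; the point is only the ordering and the wait-for-sync step.

```python
import time

# Hypothetical stand-ins for the real activities; these are NOT UiPath APIs.
# In an actual workflow these steps would be Storage Bucket activities, the
# Update Context Grounding Index activity, and the Content Generation activity.

def upload_to_storage_bucket(bucket: str, path: str) -> None:
    """Placeholder: upload a new document to an Orchestrator storage bucket."""
    print(f"Uploaded {path} to bucket '{bucket}'")

def update_context_grounding_index(index: str) -> None:
    """Placeholder: trigger a re-sync of the Context Grounding index."""
    print(f"Requested sync of index '{index}'")

def is_index_sync_complete(index: str) -> bool:
    """Placeholder: in practice you would wait a fixed delay or run the
    grounded prompt in a separate, later workflow."""
    return True

def run_content_generation(index: str, prompt: str) -> str:
    """Placeholder: Content Generation activity grounded on the index."""
    return f"[answer to '{prompt}' grounded on index '{index}']"

def ingest_and_query(bucket: str, document: str, index: str, prompt: str) -> str:
    upload_to_storage_bucket(bucket, document)
    update_context_grounding_index(index)

    # Best practice from the note above: do not reference the index until the
    # sync has finished. A simple delay works today; a trigger activity for
    # indexes may replace this in the future.
    while not is_index_sync_complete(index):
        time.sleep(30)

    return run_content_generation(index, prompt)

if __name__ == "__main__":
    print(ingest_and_query(
        bucket="policies-bucket",
        document="new-policy.pdf",
        index="policy-index",
        prompt="Summarize the changes introduced in the latest policy document.",
    ))
```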
If the files don’t need to be persisted in an index, your other option is just-in-time Context Grounding: reference a file directly in your workflow to ground your prompt in that file’s content.
Context Grounding supports PDF, TXT, CSV, DOCX, XLS, and JSON files. Be on the lookout for support for new document types, including images, in upcoming releases.
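For contrast, a just-in-time grounding call might look like the following short sketch. The helper run_content_generation_with_file is again hypothetical, a stand-in for the Content Generation activity configured to ground on a file rather than an index: the file is read at runtime and never persisted.

```python
def run_content_generation_with_file(file_path: str, prompt: str) -> str:
    """Hypothetical stand-in for the Content Generation activity grounded
    directly on a file instead of a Context Grounding index."""
    return f"[answer to '{prompt}' grounded on the contents of {file_path}]"

# The document is used only for this prompt and is not persisted in an index.
answer = run_content_generation_with_file(
    "quarterly-report.pdf",
    "What was the quarter-over-quarter revenue growth?",
)
print(answer)
```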
Context Grounding FAQ.
New Activities
Besides the Update Context Grounding Index activity mentioned above, we’ve also added two new curated activities: Image Classification and Context Grounding Search.
New Models
We’re very excited to offer access to Anthropic’s Claude 3.5 Sonnet, which supports a context window of 200k tokens (most other models in the AI Trust Layer support a maximum of 120k). Importantly, to use this model you must accept its terms and conditions, which are available in the Automation Ops AI Trust Layer policies (see below). Without deploying an AI Trust Layer policy with the model enabled, your requests to the model will fail.
We’ve also added GPT-4o-mini-2024-07-18 and GPT-4o-2024-08-06. See the full list of supported models and regional availability here.
Model selection is available on the Image Analysis and Content Generation activities. Other activities are already optimized to use the most effective model for their curated prompts.
The full release notes are available here.