We’re starting to integrate GenAI/LLMs into some UiPath workflows (document processing, summarization, email drafting, ticket classification, etc.).
The productivity gains look great, but we handle sensitive customer and financial data, so privacy/compliance is a big concern.
Before allowing bots to call external AI services, we’re evaluating things like:
• Is data used for model training?
• Are prompts stored or logged externally?
• Can we anonymize/redact PII before sending to AI?
• Private/VPC or on-prem deployment options?
• Audit logs + access controls?
• Any enterprise-safe architecture patterns?
Curious how others are approaching this:
• Are you using public APIs (OpenAI, etc.) or private models?
• Do you mask/redact data before sending? How?
• Any best practices or architectures that worked well in UiPath?
• Are some orgs blocking GenAI completely?
Would love to learn from real-world setups.