We have an Agent with two string input parameters (email subject and email body), and we are concerned about sensitive information in these strings. Are these strings sent to OpenAI, and does OpenAI store them anywhere? Do we have to scrub our strings before passing them to the AI as input parameters? Can someone point me to any documentation detailing these concerns?
So does this mean that all the information we sent to the LLM might be stored by the LLM provider?
With PII masking enabled, it would not be.
Unless specifically configured otherwise, it might be. Here is an overview; check the last rows.
cheers
Hello @Ezekiel_Gomez1,
Under the AI Trust Layer, you need to enable PII inflight masking. You can define the data types and a threshold, based on which PII redaction will occur before the data is sent to the LLM.
You can add any type of entities you want in the AI Trust Layer Policy under Admin.
The inflight masking will remove the actual information and pass on the entity type, as shown below:
Still, this data will go to Azure Cognitive Services, but not to third-party LLMs.
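To make the idea concrete, here is a minimal Python sketch of what inflight masking does conceptually. This is not the actual AI Trust Layer implementation (which is configured in the product and uses Azure Cognitive Services for entity detection); the patterns and function below are illustrative assumptions showing how detected entities get replaced with their entity-type label before the text reaches the LLM.

```python
import re

# Hypothetical entity detectors; the real feature uses configurable
# entity types and a confidence threshold, not hand-written regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace each detected entity with a placeholder naming its type."""
    for entity_type, pattern in PATTERNS.items():
        text = pattern.sub(f"<{entity_type}>", text)
    return text

masked = mask_pii("Contact john.doe@example.com or +1 555 123 4567")
print(masked)  # Contact <EMAIL> or <PHONE>
```

The key point is that the LLM only ever sees the placeholder (the entity type), never the underlying value.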
Thanks,
Karthik
So when we were not using PII masking, does that mean we sent all the information to the LLM and they kept it in their models?
Hello @Ezekiel_Gomez1,
It will send the data without filtering, and the protection and reuse of your data will depend on the third-party models you are using. In an enterprise setting, they will ideally neither breach nor retain your data, but it all comes down to their data protection terms and conditions.
Thanks,
Karthik
Thank you for this. I am still new to all of this, and there is much to learn, as always.
This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.