We have heard your inquiries regarding the overall performance of Orchestrator cloud accounts, and we acknowledge the need for a faster environment, in which operations happen in the blink of an eye.
For that, we will be introducing Data Retention and Archiving policies for queue items in Orchestrator, which provide built-in data off-loading capabilities. We are relying on your collaboration to strengthen the operational performance of your cloud Orchestrator.
This change will impact all Automation Cloud organizations, so please read carefully.
The implementation of the data retention and archiving policies is broken down into three phases, giving you enough time to be informed and to plan your next steps:
Implementation phases
Phase 0 is a pre-deployment phase, in which we inform all Automation Cloud organizations about the upcoming policies: how your account is impacted, the feature behavior, and the rollout mechanism. At the end of phase 0, the feature UI and functionality will be deployed to all cloud environments, but no policy will be activated.
Phase 1 is a six-week period between the policies' deployment and their activation, giving you time to adjust and prepare your queues. Don't worry about overlooking your account's preparation before the policies' live date: an Application Information counter will display the remaining days until retention policies apply. We will also provide a link to the feature documentation, where you'll find guidelines about the available policy configuration options. At the end of phase 1, all policies, either the default one or the ones you configured, are applied.
Phase 2 means the policies are active and your cloud account data is off-loaded based on their configuration. Phase 2 has no end date, which means that if you configure a new policy, it applies immediately.
Targeted Resources
The policies are applied to the queue items (and related queue item events and queue item comments) in the associated queues.
That means all the queues in your tenant will be mapped to a Data Retention Policy (the default one or the one you configured), and this policy applies to the corresponding queue items.
The policy mechanism
Here are some highlights to bear in mind when you prepare your queues for data offload:
* You can configure the policy via the UI while creating a new queue or editing an existing one.
* A new API called Queues-Retention will be available, allowing you to configure the policy programmatically (see the sketch after this list).
* You can choose a retention duration of up to 180 days, that is, six months; any lower duration works, but nothing higher.
* The duration refers to queue items that have not been modified for longer than specified.
* Only queue items in final states are subject to the policy: Failed, Successful, Abandoned, Retried, and Deleted.
* A background job will regularly check queue items against their policy and take the necessary actions.
* Queue items affected by the policy can be either deleted or archived, depending on what you choose as the outcome:
  * If you set the policy to Delete, all items older than the specified duration will be removed.
  * If you set the policy to Archive, all items older than the specified duration will be archived into an existing storage bucket that is NOT read-only.
* To put it briefly, you must create (or reuse) a non-read-only storage bucket whose main purpose is holding your queue items archive, and then select it.
* One storage bucket can be used to archive queue items from different queues.
* You can use the Orchestrator built-in storage bucket functionality for this purpose, as well as your own external storage bucket.
* To retrieve the archived information, simply access the archive files from the designated storage bucket. We will subsequently publish the format and structure used in our documentation.
* The Default Policy keeps your queue items for 120 days and then deletes them. Be careful, as deletion cannot be undone.
* We will preserve items' Unique References, to guarantee that uniqueness validations still work after the policy has been applied.
* Queue item events and queue item comments will be included within the archive.
* Policy output will NOT affect data synchronization towards UiPath Insights. That is, any board containing the archived or deleted queue items will still be valid and contain their information.
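For those who want to script this, below is a minimal Python sketch of what a call to the upcoming Queues-Retention API could look like. Note that the endpoint path, payload fields, and values are assumptions for illustration only; the actual contract will be published with the Phase 1 documentation.

```python
import requests

ORCH_URL = "https://cloud.uipath.com/myorg/mytenant/orchestrator_"  # your Orchestrator base URL
TOKEN = "<access-token>"  # obtained through your usual OAuth flow

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# Hypothetical payload shape: the real Queues-Retention contract will be
# published with Phase 1, so all field names below are illustrative only.
policy = {
    "queueId": 12345,      # the queue the policy is attached to
    "action": "Archive",   # or "Delete"
    "retentionDays": 180,  # cannot exceed 180
    "bucketId": 678,       # required for Archive: a non-read-only storage bucket
}

resp = requests.post(f"{ORCH_URL}/queues-retention", headers=headers, json=policy)
resp.raise_for_status()
print("Retention policy configured:", resp.json())
```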
Your collaboration
We do have a default policy of 120 days plus deletion, but you are not bound to use it. Every queue, however, will have a retention policy applied. You can use the Orchestrator UI or API to configure your own retention policy, based on your business requirements.
If our default policy does the trick for you, then you're all set, and you don't have to do anything specific from this point onward.
If you choose to archive items, you will be able to retrieve the data from the bucket via file download; the data itself will no longer be available in the Orchestrator database and the corresponding Orchestrator views, as it will no longer be considered hot data. A sketch of the download flow follows below.
As mentioned in the post, Insights functionality will not be affected: you will still have all QueueItems available in Insights even if they have been archived or removed in Orchestrator.
The choices available will be archiving or removal from the Orchestrator hot storage.
Please be aware this will affect only transactions that are in final states and have been unchanged for longer than the configured retention duration (at most half a year).
If you do not opt in for archiving, the default removal policy (deletion) will apply.
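To make the retrieval path concrete, here is a minimal Python sketch that downloads an archive file through the existing Buckets OData API. The bucket ID, folder ID, and archive file path are placeholders, and the archive file layout itself will only be published with Phase 1.

```python
import requests

ORCH_URL = "https://cloud.uipath.com/myorg/mytenant/orchestrator_"
TOKEN = "<access-token>"
BUCKET_ID = 678                       # the bucket selected in the Archive policy
ARCHIVE_PATH = "<archive-file-path>"  # actual layout to be documented with Phase 1

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "X-UIPATH-OrganizationUnitId": "<folder-id>",  # folder the bucket belongs to
}

# Ask Orchestrator for a short-lived, pre-signed download URL for the file.
resp = requests.get(
    f"{ORCH_URL}/odata/Buckets({BUCKET_ID})/UiPath.Server.Configuration.OData.GetReadUri",
    headers=headers,
    params={"path": ARCHIVE_PATH},
)
resp.raise_for_status()
download_url = resp.json()["Uri"]

# Download the archive itself; for most providers the pre-signed URL needs
# no extra auth header (check RequiresAuth in the response to be sure).
archive = requests.get(download_url)
archive.raise_for_status()
with open("queue_items_archive.csv", "wb") as f:
    f.write(archive.content)
```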
Hi @VISHNU07,
The API will be made available when we enter Phase 1 and will be described in the corresponding documentation that will be published with Phase 1.
We currently expect to reach Phase 1 readiness in 2-4 weeks.
Thank you,
Alex.
Hi @KaylaC,
As mentioned in the last bullet in the post:
“* Policy output will NOT affect data synchronization towards UiPath Insights. That is, any board containing the archived or deleted queue items will still be valid and contain their information.”
This change will not affect Insights, as data will continue to be synchronized and stored in Insights; the policies only affect the Orchestrator data.
Thank you,
Alex.
Hi @Alexandru_Szoke ,
Currently I have a few queues created with the Unique Reference option turned on. With this new policy, I cannot have transactions stored in a queue for more than 180 days. Now I have a scenario where a transaction that was already added to the queue can reappear in the input after 180 days; currently, since I have the unique reference key on and the data is still available in the queue, the duplicate would not get added. With this policy getting established, my current logic will be affected. How can I overcome this issue without making any changes to my process?
@Alexandru_Szoke Thank you for the reply. Yes, I went over that point, but I was not sure how it works. Are you saying that even after transactions are deleted or archived from a queue that has the unique reference key on, duplicates still won't be added? Or does the retention policy not apply at all to queues that have the unique reference key on?
@rishi8686
The retention policy will apply, and the queue items will be removed/archived.
We will ensure, though, that all unique reference keys are still kept and used when performing the validation upon addition of new queue items to that queue.
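To make this concrete, here is a minimal Python sketch against the existing AddQueueItem endpoint: even after the original item has been off-loaded, adding an item with the same reference to a unique-reference queue is still expected to be rejected. The queue name, folder ID, and reference value below are placeholders.

```python
import requests

ORCH_URL = "https://cloud.uipath.com/myorg/mytenant/orchestrator_"
TOKEN = "<access-token>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "X-UIPATH-OrganizationUnitId": "<folder-id>",
    "Content-Type": "application/json",
}

payload = {
    "itemData": {
        "Name": "InvoiceQueue",          # a queue with unique references enforced
        "Priority": "Normal",
        "Reference": "INV-2023-000123",  # same reference as an item off-loaded earlier
        "SpecificContent": {"Amount": 100},
    }
}

resp = requests.post(
    f"{ORCH_URL}/odata/Queues/UiPathODataSvc.AddQueueItem",
    headers=headers,
    json=payload,
)

# Since the unique reference keys are preserved, the duplicate is expected
# to be rejected (HTTP 409 Conflict today) rather than silently accepted.
if resp.status_code == 409:
    print("Duplicate reference rejected, even though the original item was off-loaded.")
else:
    resp.raise_for_status()
    print("Item added:", resp.json()["Id"])
```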
I had hoped from the headline that the new functionality would provide an easy way to configure away from the 24-hour deadline for "New" queue items to be processed or be set to "Abandoned", but I see that is not what this change addresses.
Is there any chance that this (New to Abandoned) limit may also be made easily configurable in the future?
Hi @steve.oram,
Would you like this to be configurable simply as a number of days?
Should it apply only to New items?
Should it be configurable per queue?
Can you elaborate on the use case you would use it for?
Thank you,
Alex.
Hi @Thong_Mai_Tr_ng_Hoang
Yes, there is a specific format we intend to publish once we enter Phase 1.
We are aiming at multiple daily per-queue CSVs, limited to x-thousand entries each, with a flattened queue item representation including Comments and Events.
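Until that is published, here is a minimal Python sketch of how such a flattened per-queue CSV could be consumed; every column name below is a guess for illustration, not the final schema.

```python
import csv

# Hypothetical columns for a flattened queue item row; the real schema will be
# published with Phase 1. Comments and Events are imagined as serialized cells.
with open("queue_items_archive.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(
            row.get("Reference"),
            row.get("Status"),
            row.get("LastModificationTime"),
            row.get("Comments"),  # e.g. a serialized list of comment texts
        )
```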
Thank you,
Alex.