Orchestrator in Automation Cloud™ - Data Retention and Archiving for QueueItems

Phase 1 is enabled in Community and is planned to reach Enterprise later this week.

The Documentation can be found under:

We will also include a mention in our Cloud Release Notes.

Hi @Jack_Johnson

We will be publishing the exact format used in a couple of days here:

Since the export is CSV-based, it can be parsed, and a given line can be used to create a new queue item via an automation/robot or the API.
Clone functionality per se is not available, but it can be accomplished if needed with the means described above.
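
As an illustration only, a minimal Python sketch of that approach; the export format is not yet published, so the Reference and SpecificContent column names below are assumptions, and the URL, token, and folder id are placeholders:

```python
# Minimal sketch: re-create one exported queue item via the Orchestrator API.
import csv
import json
import requests

ORCH_URL = "https://cloud.uipath.com/myorg/mytenant/orchestrator_"  # placeholder
TOKEN = "..."        # bearer token from your auth flow
FOLDER_ID = "12345"  # placeholder Orchestrator folder (organization unit) id

def readd_queue_item(queue_name: str, csv_row: dict) -> None:
    """POST one exported CSV row back into a queue via AddQueueItem."""
    payload = {
        "itemData": {
            "Name": queue_name,
            "Priority": "Normal",
            "Reference": csv_row.get("Reference"),
            # Assumed column: SpecificContent serialized as a JSON string.
            "SpecificContent": json.loads(csv_row.get("SpecificContent", "{}")),
        }
    }
    resp = requests.post(
        f"{ORCH_URL}/odata/Queues/UiPathODataSvc.AddQueueItem",
        json=payload,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "X-UIPATH-OrganizationUnitId": FOLDER_ID,
        },
    )
    resp.raise_for_status()

with open("queue_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("Reference") == "item-to-clone":  # pick the line you need
            readd_queue_item("MyQueue", row)
```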

Kind Regards,
Alex.

Hi @Alexandru_Szoke,

I am not the one who talked about it first.

But in my opinion, it should also support a granularity of hours, so that we don’t have to wait 24 hours.

And yes, configurable per queue.

I would also suggest a way to reset a queue without having to delete it and create an identical one afterwards.

Thank you very much,
Carlos Abegão


Hi @carlos.m.abegao

Can you share some additional details on the scenario where you would use hours as the granularity?
Would you want items deleted/archived after, say, 3 hours, or rather immediately after they reach a terminal state?

As for resetting the queue: would you see it as a retention policy applied immediately, or just as a way of deleting all content?

Thank you for the feedback,
Alex.

Hi,

My answer was not about Data Retention, but about the functionality of a Queue Item passing from “New” to “Abandoned”.

Resetting would mean deleting all content.

All of this was with testing in mind.

Thank you,
Carlos Abegão

Hi,

Will this feature be available for On-Premise Orchestrator? Maybe in the future?

Thanks,
Sree Latha.

Hi @Sreelatha278

We plan to introduce a certain flavor of this feature to on-prem as well, post 22.10.

Thank you,
Alex.


Hi @Alexandru_Szoke ,
I have logic in one of my processes where it gets student data (student_dt) from a database in the Init state to add to a queue. Before adding the data to the queue, I have a For Each Row that checks, with the Get Queue Items activity, whether a student with that student_dt already exists in the queue. If the queue item count is zero, it goes ahead and adds the item to the queue; otherwise, in the Else branch, I log a message saying the student already exists in the queue and skip adding the item. The queue is not toggled to Unique Key Reference = Yes, since I have the uniqueness check right within the code. My concern is: with the new queue retention policy, will the above logic be affected, given that items in the queue will be either deleted or archived?

Hi @rishi8686
The above logic will be affected by queue retention policies: the uniqueness check will only apply to unarchived items.
You can either migrate to unique references, or move the uniqueness check outside the queue, e.g. into Data Service, a local file or database, or a custom log of processed students.
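As an illustration only, a minimal sketch of such a local check, using SQLite as the custom log of processed students; add_to_queue below is a hypothetical wrapper around your Add Queue Item call, not an existing activity:

```python
# Minimal sketch: local uniqueness check backed by SQLite.
import sqlite3

conn = sqlite3.connect("processed_students.db")
conn.execute("CREATE TABLE IF NOT EXISTS seen (student_id TEXT PRIMARY KEY)")
conn.commit()

def add_if_new(student_id: str, item_data: dict) -> bool:
    """Dispatch the item only if this student was never dispatched before."""
    try:
        # A PRIMARY KEY violation means this student was already dispatched.
        conn.execute("INSERT INTO seen (student_id) VALUES (?)", (student_id,))
        conn.commit()
    except sqlite3.IntegrityError:
        print(f"Student {student_id} already exists in the log, skipping")
        return False
    add_to_queue(item_data)  # hypothetical: your Add Queue Item call
    return True
```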
Thank you,
Alex.

My Queue has been saying Retention Policies will be applied today. But so far nothing has happened?

Hi @rmorgan

We are in the process of rolling things out while making sure we don’t generate any unwanted performance impact; it will take us around one month to catch up with the entire backlog.
The part of the documentation above explains how we are proceeding.
You will know things have started applying for your tenant when that message disappears.
We will change the message content to reflect this.
Thank you,
Alex.


For Cloud Orchestrator, when Actions are created for older Robot versions, the queue items go into the Abandoned state after 24 hours, and SMEs on holiday would not get time to complete the actions in Action Center. This is a specific use case, but there are many other instances where Persistence alone would not work, because Robot upgrades are not being done by the client infra team.

Hi @Raghavendraprasad
You can set the policy to more than 30 days if that timeframe does not allow for a review by SMEs in your cases.
Thank you,
Alex.


Hi @Alexandru_Szoke,

“You will know things have started applying for your tenant when that message disappears.
We will change the message content to reflect this.”

The message quoted above states that we would know the queue retention policies were being applied for the tenant when that message disappears. Yesterday the message did in fact disappear, but I still have items in my queue from around 8 months ago.

Is there an actual timeline for when this will be applied? If so, could you please let me know here in this thread?

I have some corresponding changes that must be made to a process in production, and without this concrete knowledge I cannot know when those changes must be applied. Thank you for any and all insight.

Hi @jordan.shepardson,
We roll this out to 10% of tenants per day in our north-europe and east-us scale units (the rest are fully rolled out).
On north-europe, the retention job collided with Azure DB patches and did not finish its batch of processing. It will be retried tonight.
If you share the tenant and the queue identifier you are referring to, we can look into the situation in detail and give you more details about the rollout in private.
Thank you,
Alex.

Hi @jordan.shepardson
Please recheck this today, as last night’s run was successful.
Your data should now match your expectations. If not, please get in touch so we can investigate.
Thank you,
Alex.

@Alexandru_Szoke
At first I thought this was a bug, but as I see, it is a feature.
I am using unique references in my scenario.
We need to re-run a certain number of items from the past (some hundreds of them).
Some of them have already disappeared due to the default 30-day retention policy.
The rest I will delete (they will become Removed).
And now I am running the dispatcher to create those items.
And suddenly I am getting a duplicate reference error on items I cannot find in the queue (because they are inaccessible to me). :frowning_face: :face_with_head_bandage:

UGH!
Sorry to say it, but instead of giving users more possibilities, you are tying our hands.
Now what? I can:
1. Change my dispatcher code to use a slightly altered Reference (no, not good).
2. Delete the whole queue and start from scratch (ehh… please no).
3. What are my other options? What would you do in this scenario?

Seriously, I am furious. You should offer these extra features as optional, something that can be turned on/off.

Hi @Roman_hruska,

Thank you for getting in touch and providing the feedback; we regret this has caused an unpleasant experience and will be incorporating your scenario and feedback into our future product improvements.

For the scenario at hand, we are looking into providing a unique reference override mechanism, so that the duplicate reference error can be avoided in intentional cases; this is useful for retrying existing transactions by re-inserting them.

As a direct option in the scenario you described (which one can end up in regardless of retention use, the difference being whether the item causing the error is accessible), our recommendation would be to use a dedicated queue for such re-run scenarios, where the input data is pre-vetted so that no uniqueness checks are necessary.

Unique references are enforced at addition time, not during execution in case of failures; hence we suggest using unique references when the producer will by design attempt to add duplicates and their processing is not idempotent (reprocessing is harmful).
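
To make the intentional re-add path concrete, a minimal sketch; add_queue_item is a hypothetical helper around the Add Queue Item endpoint that raises on non-2xx responses, and the assumption that a duplicate reference surfaces as an HTTP 409 is ours, not a documented contract:

```python
# Minimal sketch: route intentional re-runs to a dedicated, pre-vetted queue.
import requests

def dispatch(item: dict) -> None:
    try:
        add_queue_item("MainQueue", item)  # unique references enforced here
    except requests.HTTPError as err:
        if err.response is not None and err.response.status_code == 409:
            # The reference already exists in MainQueue: this is an
            # intentional re-run, so send it to a queue without uniqueness.
            add_queue_item("ReRunQueue", item)
        else:
            raise
```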

A goal of this extra feature is ensuring better performance for queue processing in the cloud, by offloading old and processed historical data from the operational database, allowing us to optimize certain high volume workloads.

Kind Regards,
Alex.

Hello Alexandru,

thank you for your reply.
I found a workaround myself, don’t worry. Take it as feedback and as a scenario where I indeed want to ensure the uniqueness of a reference, but at the same time want to be able to delete some of the items from the queue and then run the dispatcher again to re-add and re-run them.

I could simply Clone them, but they are gone due to the retention.
Creating a new queue is not a very systematic solution; it is again a workaround.
Overall, I think you underestimate how end users operate with Queues.
That brings me to some of my feedback here:

I understand the need to improve the performance of your servers. But I think you have to be more graceful and make the retention time much longer, let’s say 2 years. Surely it is not that much data; it is just text in the end. A queue export of circa 4,000 items is a CSV of about 1.5 MB. :floppy_disk:

Well, I had a similar experience.

A queue has ‘Unique Reference’ enabled.
Say a queue item was added on April 1st, 2022.
The same queue item was processed successfully on the same day.

History of changes to the queue retention settings:
‘Delete’ after ‘180’ days, set on July 7th, 2022
‘Delete’ after ‘90’ days, set on August 1st, 2022
‘Delete’ after ‘30’ days, set on August 2nd, 2022

Could you please tell me which of the above settings was applied to the queue item that was added on April 1st, 2022?

For example, suppose the last setting was applied to the queue item, that is: ‘Delete’ after ‘30’ days, set on August 2nd, 2022.

Does this imply that the queue item will be deleted by August 29th, 2022?
(30 + 120 = 150 days, starting from April 1st, 2022)
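
A quick sketch just to double-check that date arithmetic; it assumes the 30 days are counted on top of the roughly 120 days already elapsed since creation, which is exactly the semantics we are unsure about:

```python
# Verifies only the calendar math of the hypothesis above.
from datetime import date, timedelta

added = date(2022, 4, 1)
print(added + timedelta(days=150))  # -> 2022-08-29
```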

Could you please help us calculate when that same queue item can be re-added to the queue, or calculate the total retention period for this case?
We would like to be able to re-add the same queue item to the queue without getting the ‘Duplicate reference’ error.

Please note that the same queue item is NOT seen in the transaction lists.
So it might have been deleted by the retention policy, and we have no archives.

Even with the help of the Export option on the Queue page, we are not able to find that same queue item that was added on April 1st, 2022.