Hi, does the specific content from the queue item get stored after it’s archived?
Also, is it possible to clone a queue item from the archive?
Phase 1 is enabled in Community and is planned to reach Enterprise later this week.
The documentation can be found under:
We will also include a mention in our Cloud Release Notes.
We will be publishing the exact format used in a couple of days here:
Since the export is CSV-based, it can be parsed and a specific line can be used to create a new queue item via an automation/robot or the API.
A clone functionality per se is not available, but it can be accomplished if needed with the means described above.
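A minimal sketch of that approach in Python, assuming the archive export has `Reference` and `SpecificContent` columns (the actual column names are an assumption until the official format is published). It parses the CSV, finds a row by reference, and builds the request body for the Orchestrator `/odata/Queues/UiPathODataSvc.AddQueueItem` endpoint:

```python
import csv
import io
import json

# Hypothetical archive export; real column names may differ once the
# official format is published.
ARCHIVE_CSV = """Reference,SpecificContent
INV-001,"{""Amount"": 120}"
INV-002,"{""Amount"": 250}"
"""

def build_add_queue_item_payload(archive_csv: str, reference: str, queue_name: str) -> dict:
    """Find an archived row by reference and build the body for the
    Orchestrator endpoint /odata/Queues/UiPathODataSvc.AddQueueItem."""
    for row in csv.DictReader(io.StringIO(archive_csv)):
        if row["Reference"] == reference:
            return {
                "itemData": {
                    "Name": queue_name,
                    "Reference": reference,
                    "Priority": "Normal",
                    "SpecificContent": json.loads(row["SpecificContent"]),
                }
            }
    raise KeyError(f"Reference {reference!r} not found in archive export")

payload = build_add_queue_item_payload(ARCHIVE_CSV, "INV-002", "MyQueue")
print(payload["itemData"]["SpecificContent"])  # {'Amount': 250}
```

The payload would then be POSTed to Orchestrator with the usual authentication headers, effectively "cloning" the archived item into the live queue.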
I am not the one who talked about it first.
But in my opinion, it should even allow granularity in hours so that we don’t have to wait 24 hours.
And yes, configurable per queue.
I would also suggest a way to reset a queue without having to delete it and recreate the same one afterwards.
Thank you very much,
Can you share some additional details on the scenario where you would use h as granularity?
Would you want them deleted/archived after 3h, or rather immediately after they reach a terminal state?
As for resetting the queue: would you see it as a retention policy applied immediately, or just as a way of deleting all content?
Thank you for the feedback,
The answer was not about data retention, but about the functionality of a queue item passing from “New” to “Abandoned”.
Resetting was deleting all content.
All of this was with testing in mind.
Will this feature be available for on-premise Orchestrator? Maybe in the future?
We plan to introduce a certain flavor of this feature to on-prem as well post 22.10.
Hi @Alexandru_Szoke ,
I have logic in one of my processes where, in the Init state, it gets student data (student_dt) from a database to add into a queue. Before adding the data to the queue, a For Each Row loop checks whether a student from student_dt already exists in the queue using the Get Queue Items activity. If the queue item count is zero, it goes ahead and adds the item to the queue; otherwise, in the Else branch, it logs a message saying the student already exists in the queue and skips adding the item. The queue is not toggled to “unique key reference: yes”, since I have the uniqueness check right within the code. My concern is: with the new queue retention policy, will this logic be affected as the items in the queue are deleted or archived?
The above logic will be affected by queue retention policies: the uniqueness check will only apply to unarchived items.
You can either migrate to unique references, or perform the uniqueness check elsewhere, e.g. in Data Service, a local file or database, or a custom log of processed students.
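As an illustration of the last option, here is a minimal sketch of a local uniqueness check, assuming a JSON file (a hypothetical `dispatched_students.json`) is an acceptable store for already-dispatched student keys; in production this could just as well be Data Service or a database table:

```python
import json
from pathlib import Path

class ProcessedLog:
    """File-backed set of already-dispatched student keys.

    This survives queue retention because the record of processed
    students no longer depends on items staying visible in the queue.
    """

    def __init__(self, path: str):
        self.path = Path(path)
        self.seen = set(json.loads(self.path.read_text())) if self.path.exists() else set()

    def is_new(self, student_key: str) -> bool:
        return student_key not in self.seen

    def mark(self, student_key: str) -> None:
        self.seen.add(student_key)
        self.path.write_text(json.dumps(sorted(self.seen)))

log = ProcessedLog("dispatched_students.json")
for student in ["S-100", "S-101", "S-100"]:
    if log.is_new(student):
        log.mark(student)  # add the item to the queue here
    else:
        print(f"{student} already dispatched, skipping")
```

The dispatcher would consult this log instead of Get Queue Items, so archived or deleted queue items no longer cause duplicates to be re-added.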
My Queue has been saying Retention Policies will be applied today. But so far nothing has happened?
We are in the process of rolling things out while ensuring we don’t generate any unwanted performance impact; it will take around one month to guarantee we have caught up with the whole backlog.
The above part of the documentation explains how we are proceeding.
You will know things have started applying for your tenant when that message disappears.
We will change the message content to reflect this.
For Cloud Orchestrator, when actions are created for older Robot versions, the queue items go into the Abandoned state after 24 hours, and SMEs on holiday do not get time to complete the actions in Action Center. This is a specific use case, but there are many other instances where persistence alone would not work because robot upgrades are not being done by the client’s infra team.
You can set the policy higher than 30 days if that timeframe does not yield a review by SMEs in your case.
“You will know things started applying for your tenant when that message will disappear.
We will change the message content to reflect this.”
In this earlier message, it was stated that you would know the queue retention policies for the tenant were being applied when the message disappeared. As of yesterday, the message has in fact disappeared, but I still have items in my queue from around 8 months ago.
Is there an actual timeline that this will be applied? If so, may you please let me know here in this thread?
I have some changes that must be made accordingly to a process in production, and without this concrete knowledge I cannot know when these changes must be applied. Thank you for any and all insight.
We roll this out to 10% of tenants per day in our north-europe and east-us scale units (the rest are fully rolled out).
On north-europe the retention job collided with Azure DB patches and did not finish its batch of processing. It will be retried tonight.
We can look into the situation in detail if you share the tenant and queue identifier you are referring to, and give you more details about the rollout in private.
Please recheck this today, as last night’s run was successful.
Your data should now match your expectations. If not, please get in touch so we can investigate.
I first thought this was a bug, but as I see, it is a feature.
I am using unique references in my scenario.
We need to re-run a certain number of items from the past (some hundreds of them).
Some of them have already disappeared due to the default 30-day retention policy.
The rest, I will delete (they will become Removed).
And now I am running the dispatcher to create those items.
And suddenly I am getting duplicate reference errors on items I cannot find in the queue (because they are inaccessible to me).
Sorry to say that but instead of giving the users more possibilities you are tying our hands.
Now what? I can:
1. Change my dispatcher code to use a slight variation of the reference (no, not good).
2. Delete the whole queue and start from scratch (ehh… please no).
3. What are my other options? What would you do in this scenario?
Seriously, I am furious. You should give these extra features as optional. Something that can be turned on/off.
Thank you for getting in touch and providing the feedback. We regret this has caused an unpleasant experience and will be incorporating your scenario and feedback into our future product improvements.
For the scenario at hand, we are looking into providing a unique reference override mechanism so that the duplicate reference error can be avoided in intentional cases, which is useful for retrying existing transactions by re-inserting them.
As a direct option in the scenario you described (which one can end up in regardless of retention use, the difference being whether you can access the item causing the error), our recommendation would be to use a dedicated queue for such re-run scenarios, where input data is pre-vetted so that no uniqueness checks are necessary.
Unique references work at addition time, not for ensuring unique execution of items in case of failures; hence we suggest using unique references when the producer will by design attempt to add duplicates and their processing is not idempotent (reprocessing is harmful).
A goal of this extra feature is ensuring better performance for queue processing in the cloud, by offloading old and processed historical data from the operational database, allowing us to optimize certain high volume workloads.
Thank you for your reply.
I found a workaround myself, don’t worry. Take this as feedback and as a scenario where I do want to ensure uniqueness of a reference, but at the same time I want to be able to delete some of the items from the queue and then run the dispatcher again to re-add and re-run them.
I could simply Clone them but they are gone due to the retention.
Creating a new queue is not a very systematic solution. It is again a workaround.
Overall I think you guys underestimate how the end users are operating with the Queues.
That brings me to some of my feedback here:
I understand the need to improve the performance of your servers, but I think you have to be more graceful and make the retention time much longer, let’s say 2 years. Surely it is not that much data; it is just text in the end. A queue export of roughly 4,000 items is a CSV of about 1.5 MB.