Processing Order of Queue Items Across Multiple Queues

I have found quite a bit of conversation on the topic of processing order for queue items, but most, if not all, of the posts I read relate to the use of a single queue.

My concern is for automations serving multiple business units in our organization at the same time.

We will have multiple processes needing to run simultaneously using 2 VMs/Unattended licenses, and we want to prioritize the available queue items so that time-sensitive tasks are picked up first from our queues, no matter which folder they reside under.

In our scenario, a job runs one time per queue item so that a resource isn’t tied up on non-time-sensitive tasks while new time-sensitive records are added to a queue.

In my reading, I understand that there are at least 4 factors that play a part in how items are picked up from 1 queue.

  • Deadline assigned to a queue item
  • Priority assigned to the queue item
  • Priority assigned at the Orchestrator level (Process/Trigger)
  • First in/First out order of operation

Here’s a good YouTube video that demos 3 of the 4 very well (it does not cover Orchestrator-level priority):
https://www.youtube.com/watch?v=Hah8pq5SQWo

There is also a very in depth chat about this topic on another forum post here:
Order processing queue - Help / Orchestrator - UiPath Community Forum

And of course, the official UiPath documentation that I referenced :slight_smile:
Queue Item Priority: https://docs.uipath.com/orchestrator/automation-cloud/latest/user-guide/about-queues-and-transactions#processing-order
Orchestrator Priority: Orchestrator - Jobs
*Not sure why the document isn’t available for Automation Cloud, so maybe it isn’t as applicable, but it’s what I could find.

My experience when testing has not provided a clear answer so I thought I would see if anyone else has ironed out the details for making it work. Here are the results from 2 different tests.

Test 1 - Using Queue Item Priority Only
*Please note: I understood queue item priority was likely to only be considered within a queue, or possibly within the folder where multiple queues live together, but I wanted to test and see how it would work.

| Folder | Queue | Priority | Added | Exp. Order | Act. Order |
|---|---|---|---|---|---|
| Folder 1 | F1 - Queue1 | Normal | 1 | 1 | 1 |
| Folder 1 | F1 - Queue1 | Low | 2 | 5 | 7 |
| Folder 1 | F1 - Queue2 | High | 3 | 2 | 2 |
| Folder 2 | F2 - Queue1 | Low | 4 | 6 | 5 |
| Folder 2 | F2 - Queue2 | High | 5 | 3 | 4 |
| Folder 3 | F3 - Queue1 | Normal | 6 | 4 | 6 |
| Folder 3 | F3 - Queue2 | Low | 7 | 7 | 3 |

Test 2 - Orchestrator AND Queue Item Priority
| Folder | Queue | Priority | O. Priority | Added | Exp. Order | Act. Order |
|---|---|---|---|---|---|---|
| Folder 1 | F1 - Queue1 | Normal | Medium | 1 | 1 | 6 |
| Folder 1 | F1 - Queue1 | Low | Medium | 2 | 5 | 7 |
| Folder 1 | F1 - Queue1 | High | Medium | 3 | 4 | 3 |
| Folder 2 | F2 - Queue1 | Low | Low | 4 | 7 | 4 |
| Folder 2 | F2 - Queue2 | High | Low | 5 | 6 | 5 |
| Folder 3 | F3 - Queue1 | Normal | High | 6 | 2 | 2 |
| Folder 3 | F3 - Queue2 | Low | High | 7 | 3 | 1 |

In some instances, items were picked up just as I would expect, but in the same breath other items were picked up in entirely the opposite order from what I would have thought.

I tried looking at the order the folders and queues were added to Orchestrator, and I looked at the queue item keys hoping to find some logic to explain why items were picked up in these orders, but nothing adds up.

I’ve submitted a support ticket and will report back on that outcome as well. Thanks in advance folks!!


I would make it simple. One queue is for high priority items, a second queue for lower priority items, etc. Then which gets processed first depends upon which queue(s) your jobs are working out of.

Interesting idea. We’ve historically used folders/subfolders for separating business units and the business processes being automated, as it provides a lot of “separation of duties” type segmentation that makes our IT Security folks happy.

On one hand, what you’ve said is possible so long as we grant the business process users access to a shared folder.

On the other hand, I don’t know how much I like the idea of mingling process information for multiple business units in the same queues. Although, I could go down a rabbit hole of just using the general High/Medium/Low queues for initiating the work and triggering jobs in the actual BU’s queue from those generic queue items…

Thanks for the idea! I may have to go down this road with our teams if there isn’t an actual solution to properly pick up queue items based on the priority setup UiPath has in place.

I still can’t believe we would have Orchestrator level prioritization options and still not have it working across multiple folders.

Anyone else have a best practice for handling this? I still don’t have an answer on my support ticket so I thought I’d check back here again.

I’ll throw in my thoughts.

There are two factors here, queue item priority and job priority.
If your queue items have differing priorities, the next item grabbed from the queue will depend on all the factors explained by myself and Jeevith in the thread you linked.

However, the next job to start is based on job priority. If all your jobs run with the same priority, then execution will seem random: the jobs are triggered when items are added to the queue, but the items are not necessarily picked up in that order because of queue item priority.

You’d need to also control job priority.
It’s possible to do this if you sorted the queues into priorities; at that point, the jobs triggered by them can also be created with an increased priority, which will cause them to run before other pending jobs.
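
As a rough illustration of that idea (not something from your setup), here’s a minimal Python sketch of starting a job with an elevated priority through the Orchestrator REST API. The URL, folder id, release key, and especially the `JobPriority` property name are assumptions on my part that you’d want to confirm against your Orchestrator version’s API (Swagger) reference:

```python
import requests

# All of these values are placeholders for illustration only.
ORCH_URL = "https://cloud.uipath.com/<org>/<tenant>/orchestrator_"
TOKEN = "<bearer token>"
FOLDER_ID = 12345                    # folder (organization unit) owning the process
RELEASE_KEY = "<process release key>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "X-UIPATH-OrganizationUnitId": str(FOLDER_ID),  # scope the call to one folder
}

# Start one job for the process and request a higher job priority.
# NOTE: "JobPriority" is my assumption for the property name in newer
# Orchestrator versions - confirm it in your tenant's API reference.
body = {
    "startInfo": {
        "ReleaseKey": RELEASE_KEY,
        "Strategy": "ModernJobsCount",
        "JobsCount": 1,
        "JobPriority": "High",
    }
}

resp = requests.post(
    f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
    headers=headers,
    json=body,
)
resp.raise_for_status()
print(resp.json())
```

The same call could be made from a dispatcher workflow via an HTTP Request activity if you’d rather keep everything inside UiPath, but I’d verify the priority option exists in your version first.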

I do think this architecture gets quite messy though; your setup of one job per queue item does give me the ick.

Thank you for your thoughts! I’d like to clarify your statement about “Job Priority”.

Unless I’m forgetting something, jobs inherit their priority from either the Trigger or the Process.

In that case, this is what I was testing in my second test, which still did not provide consistent results:
" * Priority assigned at the [Orchestrator] level (Process/Trigger)"

Am I missing something on this?

Also, with regard to your comment about using a 1-to-1 ratio of jobs to queue items: I understand why this is not appealing and thought I would ask if you have a suggestion for a better way to handle this scenario as a whole.

We have some automations that run during the day due to daily posting deadlines where up to 1,000 records may need to be worked. Not breaking this up would result in long run times that could potentially cause trouble for us with other high-priority work.

Thanks again for your reply!

Perhaps I am not seeing the picture correctly, but I don’t see how breaking up the jobs so it’s one job per item increases efficiency or allows you to hit deadlines.

Let’s do a worked example so I can explain and you can tell me what I am missing.

Let’s say you have 3 queues.
20 items go into queue one, which triggers 20 jobs.
10 items go into queue two, which triggers another 10 jobs.

Then 30 minutes later 5 normal queue items and 5 high priority items get added to queue 3, again triggering 10 more jobs.

Now, assuming all the jobs have equal priority, the first 30 jobs triggered will have to finish before the jobs connected to queue 3 start. When queue 3’s jobs do start, you will indeed get the high-priority items first, but it takes longer to get to them since you wasted time starting and stopping the other jobs.

Let’s say, conservatively, that it takes 30 seconds to start and stop each job; you did that 19 times more than needed before getting to queue 3, meaning you waste nearly 10 minutes.

Are you somehow managing job priority elsewhere?

I think a more sensible option to explore would be to design the processes to be transactional, but look at some way to make automated stop checks or something.

Your example matches my first test, which is to not use “job priority”. In that case, the idea of running 1 job per 1 queue item is inefficient because the queue item priority doesn’t take effect until queue 3 records are worked, just as you stated.

However, my concern is on the ability to use this “job priority” to drive those 5 higher priority records in queue 3 to get picked up as soon as any other jobs that were running before they arrived are complete.

If I don’t break up the work, then those 5 top priority items have to sit and wait while those entire batches of 20 & 30 are knocked out, since the work would have started in those queues first.

The scenarios we’re facing are more like 100-500+ records being worked that are of normal priority while 10-20 high priority records that need to be worked within a short amount of time show up.

Your point of adding automated checks for these types of records may be an option in the processes that have large workloads but low priority; I’m not sure if that will work across folders in Orchestrator but I will look into it so thank you for that note.

The concern over the Orchestrator level priority not working as expected remains and I am still waiting on UiPath Support for an answer on that but I will post back here when I have an answer.

Thanks again for taking the time to respond on this!!

hi,

I assume you are using the REFramework. While fetching the queue item itself, you can check the number of high-priority queue items in that queue as well as the other queues; based on that, you can terminate your job or continue the process instead of a 1-to-1 mapping of jobs to queue items.

You can also add a reference to your high-priority queue items and pass the reference as an in argument to your process. That way you can process those references first if the UiPath in-built priority for queue items is not behaving as expected.
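
For illustration, here is a minimal sketch of that check done directly against the Orchestrator OData API (Python). The URL, folder id, queue definition id, and the `Status`/`Priority` literals are placeholders and assumptions to verify against your own tenant:

```python
import requests

# Placeholder values for illustration only.
ORCH_URL = "https://cloud.uipath.com/<org>/<tenant>/orchestrator_"
TOKEN = "<bearer token>"
FOLDER_ID = 12345            # folder that owns the queue being checked
QUEUE_DEFINITION_ID = 678    # hypothetical queue definition id

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "X-UIPATH-OrganizationUnitId": str(FOLDER_ID),
}

# Count New items flagged as High priority in one queue. The "New"/"High"
# literals follow the OData values I have seen; confirm them against your
# Orchestrator's /odata/QueueItems metadata.
params = {
    "$filter": (
        f"QueueDefinitionId eq {QUEUE_DEFINITION_ID} "
        "and Status eq 'New' and Priority eq 'High'"
    ),
    "$top": "1",
    "$count": "true",
}

resp = requests.get(f"{ORCH_URL}/odata/QueueItems", headers=headers, params=params)
resp.raise_for_status()
high_priority_count = resp.json().get("@odata.count", 0)

if high_priority_count > 0:
    print(f"{high_priority_count} high-priority item(s) waiting - yield to them")
else:
    print("No high-priority work pending - continue with the current transaction")
```

Inside the workflow itself, the Get Queue Items activity with its priority and status filters should be able to do a similar check without hand-rolled API calls, as long as the robot account has access to the folder that owns the queue.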

Hi, thank you for the feedback! We are working items in a loop when there hasn’t been a need to drive 1 job per 1 queue item, often using a State Machine setup, although not necessarily the REFramework due to its shortcomings when it comes to persistence.

Similar to my response to Jon Smith’s feedback, I need to test that I can get queue items across folders, as these processes are for different business units. For multiple reasons, our setup is such that each business unit has its own folder/processes/queues/etc., so a bot user may not be assigned to a folder where other high-priority queue items exist.

I will report back on my findings when I get a chance to try this out.

I don’t understand. In my example I thought I demonstrated how breaking the jobs up slows down the time to get to the high-priority jobs, and it doesn’t speed it up. Can you explain again how it speeds it up?

I think I see a point that you’ve mentioned and that I didn’t account for in my earlier response. However, I believe this point ties directly into my question of how job priority should and/or does work.

You mentioned that with my suggested setup of 1 job per 1 queue item, 30 jobs are initiated for Q1 & Q2 and then later another 10 jobs are initiated in Q3.

The order of work you’ve described, where the first 30 must complete before the next 10 can be considered, is what I would expect to happen without the use of “Job Priority”, i.e. the Process or Trigger priority settings that a job inherits its priority from.

My expectation, if Job Priority is in use, would be that while some of the work for the first 30 records is still pending in queues 1 & 2, and 10 jobs are initiated in queue 3 (5 of which are a higher priority), those 5 high-priority records would take precedence and be picked up ahead of the remaining jobs from the initial 30.

Does that make sense? That is where I’m trying to get to with the use of the Orchestrator level priority options that were tested.

What I am trying to get to the bottom of is this: you claim that splitting the jobs already helps, and I don’t understand why. I get that you want the job priority to be tied to queue item priority, but it’s not, so how is splitting your jobs helping?

I agree implementing job priority could help, but since that’s not in place, it seems to me that splitting the jobs just makes everything worse. Since you say it doesn’t, I’d like to understand what I am missing and what advantage you gain before leveraging job priority.

I apologize for being blunt here, but I’m not quite sure why you’re hung up on what we already do. I would like to stop circling what seems to be a point you cannot get past so that we can focus on the point of this post.

I would like to know if people have successfully used job priority to drive the order tasks are worked across folders in a Tenant, AND if not, how folks handle situations where some work has to take precedence over other work.

Possibly worth noting, we operate in an environment that has fairly limited resources (bot VMs/licenses).

It sounds like you have quite a bit of experience, so if you don’t mind my asking, do you have experience with either the use of job priorities in situations where it spans multiple folders in a Tenant or with ensuring that the highest priority work across your organization gets picked up and worked in a timely manner?

So far, the ideas I’ve taken away from this post are:

A. Figure out if I can incorporate a check in each lower priority and/or higher volume process to look for high priority work in other queues across all folders in our Tenant and pause/end/suspend when found

B. Limit the number of records worked in a batch while working tasks in a loop, which speeds up the pace at which the lower-priority items are completed and hopefully allows higher-priority items to be picked up; though the idea that other higher-priority jobs will start just because a lower-priority job ended is not clearly true based on my test cases.

C. Use the Reference field on queue items to help sorting work or flagging of priority work and picking up records.

D. Dump all work in 1 queue where queue item priority can be used.

Thank you for the time and thoughts on this topic!!

OK, I will drop it. The reason I ask is that if I don’t understand why you are doing what you currently do (because, as I said, it makes things worse here, not better), that suggests I am missing some information if you claim it does make things better, and it’s hard for me to give advice if I don’t understand your problem properly.

Yes I have; job priority works well, and a year or two ago they gave us a lot more options. I have often used it to drive urgent jobs to be next to execute. It works as advertised.

I’d advise against all your suggested topics.

The checks that you mention are very cumbersome. You need a robot to iterate over all the folders and check all the queue items in each folder. The API doesn’t do a search across all folders for queue items.

What I would suggest is making a separate ‘priority check’ process. If you make it a background process it can run concurrently with the foreground process and not use any more licences.

Have the ‘priority check’ do this annoying iterating over the folders, and if it finds any high-priority queue items, have it trigger a job connected to that queue using a higher priority, then send a stop request to one of the running jobs.

The advantage here is that you centralise the code for checking other jobs in one place, keeping your performers simple. As before, I’d suggest abandoning the one-job-per-queue-item approach, as I cannot understand why you do it, but I might be missing some information.
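
To make that concrete, here is a rough sketch of one pass of such a ‘priority check’ in Python against the Orchestrator REST API. The URL, the folder-to-release mapping, the `JobPriority` property, and the exact stop payload are assumptions you would need to verify against your Orchestrator version’s API reference:

```python
import requests

# Placeholder values for illustration only.
ORCH_URL = "https://cloud.uipath.com/<org>/<tenant>/orchestrator_"
TOKEN = "<bearer token>"
BASE_HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Hypothetical mapping: folder id -> release key of the performer that should
# be started when that folder's queue has high-priority work waiting.
FOLDER_TO_RELEASE = {12345: "<release key for BU1 performer>"}


def folder_headers(folder_id):
    """Scope an API call to a single Orchestrator folder."""
    return {**BASE_HEADERS, "X-UIPATH-OrganizationUnitId": str(folder_id)}


def get_folders():
    resp = requests.get(f"{ORCH_URL}/odata/Folders", headers=BASE_HEADERS)
    resp.raise_for_status()
    return resp.json()["value"]


def high_priority_new_items(folder_id):
    params = {"$filter": "Status eq 'New' and Priority eq 'High'"}
    resp = requests.get(f"{ORCH_URL}/odata/QueueItems",
                        headers=folder_headers(folder_id), params=params)
    resp.raise_for_status()
    return resp.json()["value"]


def start_high_priority_job(folder_id, release_key):
    # "JobPriority" is an assumed property name - verify it in your
    # Orchestrator API reference before relying on it.
    body = {"startInfo": {"ReleaseKey": release_key,
                          "Strategy": "ModernJobsCount",
                          "JobsCount": 1,
                          "JobPriority": "High"}}
    resp = requests.post(
        f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
        headers=folder_headers(folder_id), json=body)
    resp.raise_for_status()


def soft_stop_one_running_job(folder_id):
    """Ask one currently running job in the folder to stop gracefully,
    freeing a runtime for the urgent job. In practice you would pick the
    lowest-priority running job, possibly in a different folder."""
    resp = requests.get(f"{ORCH_URL}/odata/Jobs",
                        headers=folder_headers(folder_id),
                        params={"$filter": "State eq 'Running'", "$top": "1"})
    resp.raise_for_status()
    running = resp.json()["value"]
    if not running:
        return
    resp = requests.post(
        f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StopJobs",
        headers=folder_headers(folder_id),
        json={"jobIds": [running[0]["Id"]], "strategy": "SoftStop"})
    resp.raise_for_status()


# One pass of the priority check: scan the monitored folders, and wherever
# high-priority work is waiting, queue up an urgent job and stop a running one.
for folder in get_folders():
    fid = folder["Id"]
    if fid not in FOLDER_TO_RELEASE:
        continue
    if high_priority_new_items(fid):
        start_high_priority_job(fid, FOLDER_TO_RELEASE[fid])
        soft_stop_one_running_job(fid)
```

Run as a background process on a schedule, something along these lines keeps the cross-folder awareness in one place, so the performers themselves never need to know about other business units’ queues.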