How can I get a Document Understanding job to process multiple queue items?

I am using the Document Understanding framework, and my job is correctly getting suspended while it waits for validation, but that job only processes one queue item. I have a queue trigger set up, but it doesn't immediately start a new job. Is there any way to have one job process multiple queue items, like the REFramework does, even if it goes into suspension?

Hi @botman

The Document Understanding Framework was built to process one document per run.

If you have the resources (licenses, machines, etc.), you can set the Queue Trigger to run more than one job simultaneously.

@rikulsilva Thanks for the insight on the framework. I plan on increasing the number of simultaneous jobs when we get it into production, which should help somewhat, but the problem will grow as the volume increases.

Do you know if they plan on adjusting the framework, or if anyone has found a workaround? I was imagining a step prior to suspending the job that would check whether there are new queue items and trigger a new job.

Given the nature of DU, I don't think this will change.

You can implement a dispatcher that gets the documents to process and uses the Start Job activity to run the DU process, but the end result is the same.

You can manage the volume in Orchestrator by reserving licenses for the DU process, or by raising the process priority so it runs first if necessary.

If you change DU to work with more than one document at a time, you will find it difficult to troubleshoot and to retry failed items.

Understood, thanks for the information. Do you happen to know how often the queue trigger fires? I find the behavior inconsistent in this scenario: when multiple queue items arrive within a couple of minutes of each other, only the first item gets picked up for processing.

you’re welcome

It is 30 minutes, and it is an Orchestrator setting. You can change it, but this behavior looks more related to the simultaneous-jobs option in the queue trigger settings.

The framework is pretty undercooked, to be honest, much like the REFramework, which leaves a lot to be desired.

I would recommend starting from scratch.
A job can suspend with multiple triggers (so it can wait for more than one queue item) if you use multithreading for the wait activities. The downside is that, once the job resumes, the status of every queue item it is waiting on gets checked. If that is 10 items, absolutely no problem. If it's 500, it becomes a problem: making 500 API calls to Orchestrator in one go from the robot gets treated like a DDoS attack and you'll get errors. There is also a built-in limit on the number of triggers per job; I cannot recall the exact figure, but you cannot get close to it anyway because of the issue above.

Waiting on them single-threaded also doesn't work, even if you modify the framework not to suspend while there is another item in the queue. Once you decide to suspend, if the actions are not completed in order, you still have a bottleneck, because the process won't resume until the first action it is waiting on is completed. Perhaps better than the above, but still not good throughput.

Instead, I find it's best to run separate jobs, handled by what would usually be the dispatcher. The robot that actually works the transactions should instead start a job that is responsible for handling the process. When you get more advanced you can loop queues back in and do 'robot orchestration', etc., but learn to walk before you run.
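The dispatcher pattern above can be sketched outside of Studio as well. A minimal sketch, assuming a plain Orchestrator URL, a bearer token, and the DU process release key (all three are placeholder assumptions, not values from this thread), that posts to the StartJobs OData endpoint to start one short-lived DU job per document:

```python
# Hypothetical dispatcher sketch: instead of one long-running DU job,
# start one Orchestrator job per document via the Orchestrator REST API.
# ORCH_URL, ACCESS_TOKEN, and DU_RELEASE_KEY are placeholder assumptions.
import json
import urllib.request

ORCH_URL = "https://your-orchestrator.example.com"    # assumption
ACCESS_TOKEN = "<bearer-token>"                       # assumption
DU_RELEASE_KEY = "<release-key-of-the-DU-process>"    # assumption

def build_start_job_payload(release_key: str, jobs_count: int = 1) -> dict:
    # Request body for the StartJobs OData endpoint: start `jobs_count`
    # jobs of the given process release.
    return {
        "startInfo": {
            "ReleaseKey": release_key,
            "Strategy": "ModernJobsCount",
            "JobsCount": jobs_count,
        }
    }

def start_du_job() -> None:
    # Fire one DU job for the document the dispatcher just queued.
    payload = build_start_job_payload(DU_RELEASE_KEY)
    req = urllib.request.Request(
        f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # raises on HTTP errors
        resp.read()
```

The Start Job activity does the equivalent call from inside a workflow; the point either way is one job per document rather than one suspended job looping over the whole queue.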

So, TL;DR: this is a design flaw in the template you are given, and I recommend starting from scratch.


One way to handle this is to complete the Create Form Task activities and save their outputs to an array.

Then, after all items are processed, use a wait activity inside a For Each Parallel in the End Process state, so that all items are processed and their forms created first, and the job then waits for all the tasks at once. The downstream processing can be done by a separate bot: add the items to a different queue once the wait resumes.
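The fan-out/fan-in pattern described above (create every task first, then wait for all of them at once) can be sketched in Python with asyncio, purely as an analogy for the Create Form Task and parallel-wait activities; none of this is UiPath code:

```python
# Analogy-only sketch of the fan-out/fan-in pattern: create all
# validation "tasks" up front, then wait for them in parallel.
# asyncio stands in for the UiPath task-creation and wait activities.
import asyncio

async def create_validation_task(item: str) -> str:
    # Stand-in for Create Form Task: returns a handle to wait on later.
    await asyncio.sleep(0)  # simulate non-blocking task creation
    return f"task-for-{item}"

async def wait_for_validation(task_handle: str) -> str:
    # Stand-in for the wait-and-resume step on one task.
    await asyncio.sleep(0)  # simulate waiting for human validation
    return f"{task_handle}:validated"

async def main(items: list[str]) -> list[str]:
    # Fan out: create every task before waiting on any of them.
    handles = [await create_validation_task(i) for i in items]
    # Fan in: wait for all tasks at once instead of one by one,
    # so one slow validation does not block the creation of the rest.
    return await asyncio.gather(*(wait_for_validation(h) for h in handles))

results = asyncio.run(main(["doc1", "doc2", "doc3"]))
```

The design point mirrors the post above: because creation is finished before any wait begins, a slow first validation no longer serializes the whole batch.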


It looks like they added a new option in the queue trigger that solves the problem.




Does it also work for suspended jobs?
Did you happen to check?