I have started preparing to migrate from our own process-scheduling solution to Orchestrator.
The use case is the following:
1/ A PowerShell script checks a mailbox and, for each received e-mail, creates a new transaction in a queue.
2/ A queue trigger starts the process. The process takes only one transaction from the queue and finishes.
3/ In case of a system failure (a frequent case), the transaction should be retried up to a defined limit.
The problem is the slowness of the retry process. The queue trigger reacts very slowly even to retried transactions: it takes several minutes to retry, even though no other job is running and the robot is therefore available.
I tried to “motivate” Orchestrator to speed up by setting:
1/ the due date of the transaction
2/ a high priority on the transaction
3/ a critical priority on the trigger
but nothing helps.
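For context, the queue item my script creates already sets both the priority and the due date. Sketched in Python below rather than PowerShell; the OData endpoint name and the shape of the request body reflect my understanding of the Orchestrator API, so treat the details as an assumption:

```python
import json
from datetime import datetime, timedelta, timezone

def build_add_queue_item_payload(queue_name, mail_subject, mail_from):
    """Builds the body for Orchestrator's
    POST /odata/Queues/UiPathODataSvc.AddQueueItem endpoint (assumed shape)."""
    due = (datetime.now(timezone.utc) + timedelta(minutes=5)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "itemData": {
            "Name": queue_name,         # the queue the trigger watches
            "Priority": "High",         # Low / Normal / High
            "DueDate": due,             # ISO 8601; meant to nudge scheduling
            "Reference": mail_subject,  # handy for de-duplication
            "SpecificContent": {        # the payload the performer reads
                "Subject": mail_subject,
                "From": mail_from,
            },
        }
    }

payload = build_add_queue_item_payload("MailQueue", "Invoice 123", "sender@example.com")
print(json.dumps(payload, indent=2))
```

Even with `Priority` and `DueDate` set like this, the retried item still waits several minutes.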
Well, the way of working I use is not very typical. The transaction is really there just to trigger the job. When the job fails, it marks the transaction as failed and finishes. I expected the queue trigger to fire again immediately to retry the job, but it does not.
We are setting up an on-prem Orchestrator. As I understand it, this behavior can be adjusted in that deployment scenario.
You need a looping process, not a once-and-done one. That's the issue here: you are waiting for a new job to be created to process the retry. Your process should loop and process all available (new) items; then your retry would be processed immediately.
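A minimal sketch of what I mean, in Python. Here `get_next`, `mark_done`, and `mark_failed` are stand-ins for the Get Transaction Item / Set Transaction Status activities, not real UiPath APIs:

```python
# Looping performer: keep pulling until the queue is empty, so a retried
# item that appears while the job is still running gets picked up
# immediately instead of waiting for a new trigger-created job.

def run_performer(get_next, process, mark_done, mark_failed):
    handled = 0
    while True:
        item = get_next()  # returns None when the queue has no new items
        if item is None:
            break          # end gracefully instead of idling
        try:
            process(item)
            mark_done(item)
        except Exception as exc:
            mark_failed(item, str(exc))  # Orchestrator re-queues it as a retry
        handled += 1
    return handled

# Tiny in-memory stand-in for the queue, just to show the flow:
queue = ["mail-1", "mail-2"]
done, failed = [], []
n = run_performer(
    get_next=lambda: queue.pop(0) if queue else None,
    process=lambda item: None,
    mark_done=done.append,
    mark_failed=lambda item, reason: failed.append(item),
)
print(n, done, failed)  # 2 ['mail-1', 'mail-2'] []
```

The point is the `while True` around Get Transaction Item: the job stays alive as long as there is work, including retries created mid-run.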
And I don't think the problem is actually how quickly the process ends, but in how the queue trigger treats retried transactions. The trigger fires only for a "brand new" transaction, not for a retried one; retried transactions are started on the 30-minute cycle.
Thx for the help. It would require a complete process redesign to match the Orchestrator "philosophy".
I'm not sure this is correct. It could be that, because the automation is already running when the queue item is retried, the trigger doesn't create a second job. I've had this issue in various scenarios.
As a simple test, try setting the queue trigger to allow two jobs to run at the same time. Then run your automation, simulate a failure/retry, and see whether the trigger starts a second job.
OK, then since you're on cloud and can't change the queue-check timer, I think the only real solution is to have your automation loop so it immediately picks up the retry. Another, less elegant option would be to put your automation on a time trigger that runs every 5 minutes or so; if there are no items in the queue, it just ends gracefully.