Speed up queue trigger

Hi,
I have started preparing to migrate from my own process scheduling solution to Orchestrator.

The use case is the following:
1/ A PowerShell script checks a mailbox and, for each received e-mail, creates a new transaction in a queue (sketched below).
2/ A queue trigger starts the process. The process takes only one transaction from the queue and finishes.
3/ In case of a system failure (a frequent case) the transaction should be retried up to a defined limit.
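
For illustration, a rough sketch of how step 1 pushes the queue item through the Orchestrator REST API (the URL, token, folder id, queue name and mail fields are all placeholders, and token acquisition is left out):

```powershell
# Sketch only: add one queue item per received e-mail.
# $orchestratorUrl, $token, the folder id, the queue name and the mail fields are placeholders.
$orchestratorUrl = "https://cloud.uipath.com/MyOrg/MyTenant/orchestrator_"
$token           = "<bearer token obtained via an external application / OAuth flow>"

$body = @{
    itemData = @{
        Name            = "MailQueue"            # queue name in Orchestrator
        Priority        = "High"                 # Low / Normal / High
        SpecificContent = @{
            MailSubject = "Invoice 123"          # whatever the process needs
            ReceivedAt  = (Get-Date).ToString("o")
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "$orchestratorUrl/odata/Queues/UiPathODataSvc.AddQueueItem" `
    -Headers @{ Authorization = "Bearer $token"; "X-UIPATH-OrganizationUnitId" = "<folder id>" } `
    -ContentType "application/json" `
    -Body $body
```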

The problem is the slowness of the retry process. The queue trigger reacts very slowly to retried transactions. It takes several minutes to retry even though no other job is running and the robot is therefore available.

I tried to “motivate” Orchestrator to speed things up by setting:
1/ the due date of the transaction
2/ a high priority on the transaction
3/ a critical priority on the trigger
but nothing helps.

Is there any way to speed up the queue trigger?

Thx


Hey @J0ska

Are you using cloud or on-prem Orchestrator?

Thanks
#nK

Hi,
at the moment, cloud.
Rgds

Okay, so there is an environment variable configuration that can actually speed this up for an on-prem installation.

But let’s think about it from the cloud perspective.

The retry which you configured, is that a queue retry?

Yes, it is.

Okay, I’m a little surprised there is a delay.

Because when the bot is running and a system exception occurs, the item is immediately retried in the queue, and the same job continues to process the retried item.

Could you please confirm the above, or explain how it happens in your case?

Thanks
#nK

Well, the way of working I use is not very typical :frowning:
The transaction is really there just to trigger the job.
When the job fails, it marks the trx as failed and finishes.
I expected that the queue trigger would fire again immediately to retry the job, but it does not :frowning:

We are setting up an on-prem Orchestrator. As I understand it, the behavior could be adjusted in that deployment scenario.

Cheers

If you set the queue retry, then the item is retried immediately after it fails and the job processes the transaction again without any delay.

I’m not sure we are talking about the same thing. The trx is retried immediately, but the queue trigger doesn’t fire immediately to start a job.

Currently I have deployed a test process that does nothing but mark the trx as failed and exit.
This is the timeline:
2:56 PM
3:00 PM
3:30 PM
4:01 PM


You need a looping process, not a once-and-done. That’s the issue here: you are waiting for a new job to be created to process the retry. If your process loops and processes all available (new) items, the retry will be processed immediately.
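
Just to illustrate the shape I mean, here is a control-flow sketch in PowerShell; Get-NextTransaction and Set-TransactionResult are hypothetical stand-ins for the Get Transaction Item and Set Transaction Status activities in the workflow:

```powershell
# Control-flow sketch only. Get-NextTransaction / Set-TransactionResult are hypothetical
# stand-ins for the Get Transaction Item / Set Transaction Status activities.
function Get-NextTransaction { return $null }                        # stub: next New item, or $null
function Set-TransactionResult { param($Item, $Status, $Reason) }    # stub: mark Successful / Failed

do {
    $item = Get-NextTransaction
    if ($null -eq $item) { break }          # queue drained (including retries) -> job ends
    try {
        # ... process the item ...
        Set-TransactionResult -Item $item -Status 'Successful'
    }
    catch {
        # a Failed item with retries left is re-queued immediately,
        # and this same loop picks it up on the next iteration
        Set-TransactionResult -Item $item -Status 'Failed' -Reason $_.Exception.Message
    }
} while ($true)
```

The key point is that the same job keeps pulling items, so a retried transaction is picked up on the next iteration instead of waiting for the trigger to create a new job.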

Hi,
it is a bit difficult to loop because of the clean-up needed in case of failure.
I do the clean-up by recycling the robot machine whenever a failure occurs :slight_smile:

This probably has no solution in the cloud.

Cheers

Okay, I see the issue. Since it’s a dummy process, it ends in a flash.

There is actually no need for a queue trigger just to fire a job.

Could you please confirm whether you are actually using the REFramework in your real process?

Looping and error handling are common things that you should learn how to do.

@postwick I am not sure if you know, but @J0ska is a wizard himself on the forum; asking him to learn to loop and handle errors is like asking an expert chef whether he has heard about knives :slight_smile:

I am sure he thought about it hard before asking us and is just looking for some input towards an alternative solution for the cloud environment.

@J0ska I am not sure whether the frequency of the heartbeat / queue status check can be increased in cloud Orchestrator. Currently it happens every 30 minutes, if I remember correctly.

What if you ditched the queue trigger and instead ran an unattended robot every 10 minutes from an API call? Let the queue collect the cases and the unattended robot can then batch-process the items.
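
Something along these lines could start the job from a scheduled task or similar (a sketch only; the URL, token, folder id and release key are placeholders):

```powershell
# Sketch: start one unattended job via the Orchestrator API.
# $orchestratorUrl, $token, the folder id and the release key are placeholders.
$orchestratorUrl = "https://cloud.uipath.com/MyOrg/MyTenant/orchestrator_"
$token           = "<bearer token>"
$releaseKey      = "<release key (GUID) of the process to start>"

$body = @{
    startInfo = @{
        ReleaseKey = $releaseKey
        Strategy   = "ModernJobsCount"   # modern folders: start N jobs on any available robot
        JobsCount  = 1
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "$orchestratorUrl/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs" `
    -Headers @{ Authorization = "Bearer $token"; "X-UIPATH-OrganizationUnitId" = "<folder id>" } `
    -ContentType "application/json" `
    -Body $body
```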


No, I do not use the REFramework but my own framework.

And I do not think the problem is actually how quickly the process ends, but rather how the queue trigger treats retried trx. The trigger fires only for a “brand new” trx, not for a retried one. Retried items are only picked up in the 30-minute cycle.

Thx for the help. It would require a complete process redesign to match the Orchestrator “philosophy”.

Cheers

I would not call myself “a wizard” but thank you :laughing:

And yes, I will need to reconsider my current way of working, which was okay without Orchestrator but brings some challenges with it.

Cheers

I’m not sure if this is correct. It could be that because the automation is already running when the queue item is retried, the trigger doesn’t create a second job. I’ve had this issue in various scenarios.

As a simple test, try setting the queue trigger to allow two jobs to be running at the same time. Then run your automation and simulate a failure/retry. See if the trigger does start a second job.

I tried this but there was no change. After a failed job it still waits until the next “half hour”.

Cheers

Ok, then since you’re on cloud and can’t change the queue check timer, I think the only real solution would be to have your automation loop so it immediately picks up the retry. Another, less elegant option would be to put your automation on a time trigger that runs every 5 minutes or so – if there are no items in the queue, it just ends gracefully.
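
If you go the time-trigger route, the “ends gracefully if empty” part can even be checked up front, for example via the API before any real work starts (a sketch; inside the workflow a Get Transaction Item that returns nothing serves the same purpose, and the URL, token, folder id and queue id are placeholders):

```powershell
# Sketch: skip the run if the queue has no New items.
# $orchestratorUrl, $token, the folder id and the QueueDefinitionId (123) are placeholders.
$orchestratorUrl = "https://cloud.uipath.com/MyOrg/MyTenant/orchestrator_"
$token           = "<bearer token>"

$filter  = [uri]::EscapeDataString("QueueDefinitionId eq 123 and Status eq 'New'")
$pending = Invoke-RestMethod -Method Get `
    -Uri "$orchestratorUrl/odata/QueueItems?`$filter=$filter&`$top=1" `
    -Headers @{ Authorization = "Bearer $token"; "X-UIPATH-OrganizationUnitId" = "<folder id>" }

if (-not $pending.value -or $pending.value.Count -eq 0) {
    Write-Output "Queue is empty - nothing to do."
    return
}
# ...otherwise continue and process the items...
```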

Hi @Nithinkrishna,

Just curious to know more about the configuration you hinted at above. What approach are you referring to?

Thank you.