Speed up queue trigger

Yes, it is.

Okay, I am a little surprised that there is a delay.

Because when the bot is running and a system exception occurs, the item is immediately retried in the queue, and the same job continues to process the retried item.

Could you please confirm the above, or explain how it happens in your case?

Thanks
#nK

Well, the way of working I use is not very typical :frowning:
The transaction is really just there to trigger the job.
When the job fails, it marks the transaction as failed and finishes.
I expected the queue trigger to fire again immediately to retry the job, but it does not :frowning:

We are setting up an on-prem Orchestrator. As I understand it, the behavior could be adjusted in that deployment scenario.

Cheers

If you set the queue retry, then the item is retried immediately after it fails, and the job processes the transaction again without any delay.

I am not sure we are talking about the same thing. The transaction is retried immediately, but the queue trigger doesn’t fire immediately to start the job.

I have currently deployed a test process that does nothing but mark the transaction as failed and exit.
This is the resulting timeline:
2:56 PM
3:00 PM
3:30 PM
4:01 PM


You need a looping process, not a once-and-done. That’s the issue here. You are waiting for a new job to be created to process the retry. Your process should loop and process all available (new) items, then your retry would be processed immediately.
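To make the idea concrete, here is a minimal plain-Python sketch of that looping pattern (not actual UiPath code; `get_transaction_item` and `set_transaction_status` are hypothetical stand-ins for the corresponding UiPath activities, backed by an in-memory list only so the loop shape is runnable):

```python
# Plain-Python sketch of a looping performer (not UiPath code).
# get_transaction_item / set_transaction_status are hypothetical stand-ins
# for the UiPath "Get Transaction Item" / "Set Transaction Status" activities.

from typing import Optional

_pending: list[dict] = [{"id": 1}, {"id": 2}]  # pretend queue contents

def get_transaction_item() -> Optional[dict]:
    """Stand-in: return the next pending (new or retried) item, or None."""
    return _pending.pop(0) if _pending else None

def set_transaction_status(item: dict, status: str, reason: str = "") -> None:
    """Stand-in: report the item's final status back to the queue."""
    print(f"item {item['id']}: {status} {reason}".strip())

def run_performer() -> None:
    # Loop until the queue is drained. Because the job keeps looping, a
    # retried item created by a failure would be picked up by this same job
    # right away, instead of waiting for the trigger's next 30-minute check.
    while (item := get_transaction_item()) is not None:
        try:
            # ... business logic for one transaction goes here ...
            set_transaction_status(item, "Successful")
        except Exception as exc:
            set_transaction_status(item, "Failed", reason=str(exc))

if __name__ == "__main__":
    run_performer()
```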

Hi,
it is a bit difficult to loop because of the clean-up needed in case of failure.
I do the clean-up by recycling the robot machine when a failure occurs :slight_smile:

Probably this has no solution in the cloud.

Cheers

Okay, I see the issue now. Since it is a dummy process, it ends in a flash.

There is actually no need for a queue trigger to fire a job.

Could you please confirm whether you are actually using the REFramework in your real process?

Looping and error handling are common things that you should learn how to do.

@postwick I am not sure if you know, but @J0ska is a wizard himself in the forum; asking him to learn to loop and handle errors is like asking an expert chef if he has heard about knives :slight_smile:

I am sure he thought about it hard before asking us and is just looking for some input on an alternative solution for the cloud environment.

@J0ska I am not sure how the frequency of the heartbeat / queue status check can be increased in cloud Orchestrator. Currently it happens every 30 minutes, if I remember correctly.

What if you ditched the queue trigger and instead ran an unattended robot every 10 minutes from an API call? Let the queue collect the cases, and the unattended robot can then batch-process the items.
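As a rough sketch of that idea, something like the snippet below could be called from an external scheduler. It assumes the Orchestrator `StartJobs` OData endpoint; the URL, token, release key, folder id, and payload fields are placeholders, so double-check them against the API reference for your Orchestrator version:

```python
# Rough sketch: start an unattended job via the Orchestrator REST API, to be
# called e.g. every 10 minutes by an external scheduler. All values below are
# placeholders; verify the endpoint and payload for your Orchestrator version.

import requests

ORCH_URL = "https://cloud.uipath.com/<org>/<tenant>/orchestrator_"  # placeholder
ACCESS_TOKEN = "<bearer token>"                                     # placeholder
RELEASE_KEY = "<process release key (GUID)>"                        # placeholder
FOLDER_ID = "<folder id>"                                           # placeholder

def start_batch_job() -> None:
    # Queue a single job for the process; the job then batch-processes
    # whatever items have accumulated in the queue since the last run.
    response = requests.post(
        f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "X-UIPATH-OrganizationUnitId": FOLDER_ID,
        },
        json={
            "startInfo": {
                "ReleaseKey": RELEASE_KEY,
                "Strategy": "ModernJobsCount",
                "JobsCount": 1,
            }
        },
        timeout=30,
    )
    response.raise_for_status()

if __name__ == "__main__":
    start_batch_job()
```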


No, I do not use the REFramework but my own framework.

And I do not think the problem is actually how quickly the process ends, but rather how the queue trigger treats retried transactions. The trigger fires only for a “brand new” transaction, not for a retried one. Retried transactions are only started in the 30-minute cycle.

Thanks for the help. It would require a complete process redesign to match the Orchestrator “philosophy”.

Cheers

I would not call myself “a wizard” but thank you :laughing:

And yes, I will need to reconsider my current way of working, which was okay without Orchestrator but brings some challenges with it.

Cheers

I’m not sure if this is correct. It could be that because the automation is already running when the queue item is retried, the trigger doesn’t create a second job. I’ve had this issue in various scenarios.

As a simple test, try setting the queue trigger to allow two jobs to be running at the same time. Then run your automation and simulate a failure/retry. See if the trigger does start a second job.

I tried this, but there is no change. After the failed job it still waits until the next “half hour”.

Cheers

Ok, then since you’re on cloud and can’t change the queue check timer, I think the only real solution would be to have your automation loop so it will immediately pick up the retry. Another, less elegant, option would be to put your automation on a time trigger that runs every 5 minutes or so; if there are no items in the queue, it just ends gracefully.

Hi @Nithinkrishna,

Just curious to know more about the configuration you hinted at above. What approach are you referring to?

Thank you.

Hi @J0ska
The default check interval is 30 minutes, and you’re right: the condition for instantly triggering a new job is a transaction with “New” status. Unprocessed transactions (including retried ones) need to wait for the 30-minute interval.

Now, for on-premises Orchestrator, the “Queue.ProcessActivationSchedule” parameter can be used to adjust the default 30-minute check interval (with values between 0 and 59). This value is set in the “UiPath.Orchestrator.dll.config” file (you will need to restart IIS and recreate the trigger).
This parameter is not exposed in cloud Orchestrator, but you can contact the support team to apply such changes.
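For the on-premises case, the change would look something like the fragment below inside the appSettings section of `UiPath.Orchestrator.dll.config` (a sketch based on the parameter described above; the value of 5 is just an example):

```xml
<!-- Sketch only: shortens the queue trigger check interval to 5 minutes. -->
<!-- After editing UiPath.Orchestrator.dll.config, restart IIS and recreate the trigger. -->
<appSettings>
  <add key="Queue.ProcessActivationSchedule" value="5" />
</appSettings>
```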


Thanks to everyone contributing to this discussion. The behaviour was clearly explained by @Emira.
It looks like a pretty “dummy” fixed-retry-interval mechanism.

Cheers


Yep, it’s the one mentioned by @Emira


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.