Cool-off period between jobs

Is it possible to configure the Orchestrator in such a way that once a robot has completed a task, it waits a set amount of time before picking up a new task, staying idle so to speak?

Problem scenario:
We run multiple bots, with multiple (foreground) console processes. One at a time per bot, of course. Most jobs are schedule-triggered, a few are based on queue contents. Allocation is always dynamic. All scripts use REF + Queue as a framework. The number of queue items can vary greatly, depending on what happens in the business. This results in a scheduling challenge: during peak hours we get a few pending jobs, which start as soon as a bot becomes available in the pool.

The challenge here is that the bot reports back to the Orchestrator that it is signed off and ready for work, while in fact it is still running a Windows log-off procedure in the background. The bot is displayed in the Orchestrator as available at that point. When a new task is assigned to this bot while it is still shutting down, this leads to logon errors for the new task, making it fail automatically. Manual intervention is then required to restart the job, or we have to wait until the next schedule round.

In my opinion, giving the bots some slack via an Orchestrator or server setting would mitigate this problem. A delay of 30-60 seconds, for example, would greatly improve stability.

Any thoughts on this?

Hi @Jeroen_van_Loon,

We have been in a similar situation, although in our case it occurred when logging in to our robot VDIs rather than when logging off. Our App-V packager for Windows loads all applications once the user is logged in. This means that the time it takes for Windows to be ready and for the Robot service to start will vary. Our workaround has been to manually estimate the time it takes for App-V to finish loading, which turned out to be no more than 45 seconds after some 20 manual checks. We added another 35 seconds for good measure and possible delays in loading time.

We therefore added a manual delay in our modified REFramework (our performer/dispatcher templates), which ensures that at runtime the robot waits for App-V to finish loading.

Another thing we do is ensure the delay value is an argument to the dispatcher/performer process. This way we can control the delay time if needed from UiPath Orchestrator → Triggers. Updating the delay from Orchestrator has worked well because some of our dispatchers can easily run without needing to wait for App-V to load. For such dispatchers, we use a very minimal delay of 10 or 20 seconds when setting up the triggers.


Walkthrough
Save a string asset (example 00:00:80) → pass the asset name in the trigger → REFramework gets the asset name → the initial state fetches the asset from Orchestrator → convert to TimeSpan → start delay
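For illustration, the steps above can be sketched in Python. In the actual REFramework this would be a Get Asset activity, a TimeSpan conversion, and a Delay activity; the function names below are hypothetical.

```python
import time


def parse_timespan(value: str) -> int:
    """Convert an "HH:MM:SS" asset string to total seconds.

    Note: .NET's TimeSpan.Parse rejects a seconds field above 59,
    so a value like "00:00:80" would need to be stored as "00:01:20"
    in the actual Orchestrator asset.
    """
    hours, minutes, seconds = (int(part) for part in value.split(":"))
    return hours * 3600 + minutes * 60 + seconds


def startup_delay(asset_value: str) -> None:
    # In the real workflow: Get Asset -> convert to TimeSpan -> Delay.
    time.sleep(parse_timespan(asset_value))
```

Because the delay string is read from an Orchestrator asset at runtime, it can be tuned per trigger without republishing the process.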

Argument configuration in REFramework:


Project.json edits

Trigger parameter:

Overall, this workaround/solution has scaled quite well and we have not had any login issues for robots due to App-V load time. That said, our process execution time for each job has increased by the delay we input in the trigger.

Trade-off: Stability vs. Execution speed. Stability please :slight_smile:

Thanks for the very complete contribution, but I consider your example a separate challenge. The problem we face cannot be solved by script code and workarounds.

In my problem, the scripts won’t initialize at all, since no console session can be set up if the job starts too soon. So there is nothing to execute that could ‘fix’ things.

I would need to know more about your environment and how your machines are configured, but it sounds like you may want to override the timeout by setting a system environment variable named UIPATH_SESSION_TIMEOUT, which I believe defaults to 60 seconds.

You’ll have to restart the Robot Service afterwards.
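For reference, a sketch of how that could look on the robot machine, run from an elevated command prompt. The timeout value of 120 is an example, and the service name `UiRobotSvc` is an assumption that should be verified against your Robot installation:

```shell
:: Set the session timeout machine-wide (/M makes it a system variable)
setx UIPATH_SESSION_TIMEOUT 120 /M

:: Restart the Robot service so it picks up the new value
net stop UiRobotSvc
net start UiRobotSvc
```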

We already prolonged that session timeout, but it doesn’t mitigate this effect. It’s a timeout, not an idle time.

Okay, how about providing us the exact error message and details about what you have troubleshot so far, along with more about your environment setup and configuration?

We are making assumptions from the description provided, and it sounds like you’ve already done some legwork.