Managing heavy workloads caused by long response times

We have a requirement to work with Google Calendar: fetch the events associated with a user’s calendar and generate a report.
With the traditional approach of fetching event info through UI Explorer, each user-related fetch takes about 33 seconds of bot processing time.
We have up to 4,000 records, so a full run takes more than 2,000 minutes (4,000 × 33 s ≈ 2,200 min). As per the requirement, the bot must run every 2 hours, which makes it impossible to finish before the next scheduled run starts.

How do we manage such heavy process loads?

You can use the Orchestrator Queue functionality to distribute the workload among several robots.

  1. Create a queue in Orchestrator for the work items.
  2. Create a “Dispatcher” process that adds one work item to the queue for each user-related fetch. This is a simple process that populates the queue with the data needed for each event fetch.
  3. Create a “Performer” process that takes one work item from the queue and processes it, repeating as long as there are items in the queue. “Processing” here means fetching the calendar events for the current item.
  4. Aggregate the results from all robots.

The “Dispatcher” process runs first and adds one work item to the queue for each fetch you need from the calendar. Then you can run the “Performer” process on multiple robots; each one processes different items from the queue (fetches different data from the calendar) until all data is fetched. If you start the “Performer” process on 10 robots, all the data should be fetched roughly 10 times faster than with a single robot.
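The pattern above is language-agnostic; as a minimal sketch, here it is in Python with an in-process queue standing in for the Orchestrator queue (the user list and `fetch_events` are placeholders for the real calendar fetch):

```python
import queue
import threading

work_queue = queue.Queue()  # stands in for the Orchestrator queue

def dispatcher(users):
    """Dispatcher: add one work item per user-related fetch."""
    for user in users:
        work_queue.put(user)

def fetch_events(user):
    # Placeholder: in the real process this calls Google Calendar.
    return "events-for-" + user

def performer(results, lock):
    """Performer: take items from the queue until it is empty."""
    while True:
        try:
            user = work_queue.get_nowait()
        except queue.Empty:
            break  # no more items to process
        events = fetch_events(user)
        with lock:
            results.append(events)

# Run the dispatcher once, then several performers in parallel.
results, lock = [], threading.Lock()
dispatcher(["alice", "bob", "carol"])
workers = [threading.Thread(target=performer, args=(results, lock))
           for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Each performer pulls whatever item happens to be next, so adding workers divides the total fetch time without any coordination beyond the shared queue, which is exactly what the Orchestrator queue provides across robots.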

Also, you may be able to reduce the time it takes to process one work item by fine-tuning the delays in the activities (DelayBefore, DelayAfter, WaitForReady, etc.).


@Silviu - just a clarification here: the dispatcher and performer you describe are two different projects, right? In layman’s terms, the dispatcher project adds the items to the queue, and the performer fetches the data from the queue once all the dispatcher processes are complete.

If my perception is wrong here, how are the dispatcher and performer integrated in the same project?

@Pradeep.Robot: Yes, you’re right, dispatcher and performer are two different projects.

But the projects can be independent, you don’t have to wait for the dispatcher process to complete. Once you have at least one item in the queue the performer can start processing it. The idea is that you can have different schedules for each one. The dispatcher is scheduled to run when you should have new data in the data source (every two hours as in the initial post), on a single robot. The performer can be scheduled to run on multiple robots, depending on the amount of data to process.

Thanks for your quick response. Slightly off topic: I have a situation where transaction items should be assigned to specific bots dynamically at run time. Say I have 5 bots; is it possible to decide in my performer which transaction item is mapped to which bot?

You could do that by using different queues.
Then, you can store the queue name in an asset with different values for each robot.

Would that work in your case?
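Illustratively (all robot, queue, and asset names below are made up), the asset-based routing amounts to this: the dispatcher adds each item to the queue dedicated to its target robot, and each robot’s performer reads its own queue name from a per-robot asset value:

```python
# Made-up names throughout. One queue per robot; each robot's
# "TargetQueue" asset (a per-robot value in Orchestrator) names the
# queue its performer should consume.
queues = {"Queue-A": [], "Queue-B": []}
target_queue_asset = {"Robot-A": "Queue-A", "Robot-B": "Queue-B"}

def dispatch(item, target_robot):
    # The dispatcher picks the robot by picking the queue.
    queues[target_queue_asset[target_robot]].append(item)

def perform(robot_name):
    # The performer reads its queue name from its per-robot asset.
    my_queue = queues[target_queue_asset[robot_name]]
    return [my_queue.pop(0) for _ in range(len(my_queue))]

dispatch("invoice-1", "Robot-A")
dispatch("invoice-2", "Robot-B")
dispatch("invoice-3", "Robot-A")
```

The mapping logic lives entirely in the dispatcher, so the performer stays generic; changing which robot handles which items only requires changing the dispatch rule or the asset values.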

Hi @Silviu

a quick query about the dispatcher and performer

Suppose we create a schedule in which the performer runs half an hour after the dispatcher, but for some reason an exception occurs in the dispatcher. Then there is nothing in the queue for the bot to process. How do we overcome this scenario?

Please suggest an approach to handle such scenarios.

Hi @Himanshu.joshi,

There are different ways to handle this, depending on your requirements and constraints.

The default configuration of the REFramework is to stop execution when there are no more items to process. In that case, if the dispatcher fails and no items are added to the queue, the performer process will exit shortly with an information message that there are no more items to process.

Any problem that prevents the dispatcher from executing successfully should be reported as an Error in Orchestrator, and a notification should be sent to a human supervisor. After the problem with the dispatcher is fixed, the performer can be run manually to process the items.

Another way would be to link the performer and dispatcher together: at the end of the dispatcher process, if everything went fine, the performer process can be started using the Start Job activity.



The last option, starting a job from the dispatcher, sounds interesting, but how can we run the process on the specific robots or VMs it is allocated to? There is no option to select a specific robot in the Start Job activity’s properties.

Indeed, with the Start Job activity you can’t control exactly which robot will execute the process, unless you have only one robot per environment.

You could do it using the API via the Orchestrator HTTP Request activity.
The endpoint is a POST to /odata/Jobs/UiPath.Server.Configuration.OData.StartJobs.

The request payload is a JSON body with a startInfo object, where the ReleaseKey is the ID of the process, RobotIds is an array with the IDs of the robots to execute the process, and Strategy is set to Specific.
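A sketch of building such a request, assuming the standard Orchestrator StartJobs endpoint; the URL, release key, robot IDs, and token below are all placeholders, and the request is constructed but deliberately not sent:

```python
import json
import urllib.request

# Placeholder values: substitute your Orchestrator URL, the process
# ReleaseKey, a valid bearer token, and the IDs of the target robots.
ORCH_URL = "https://orchestrator.example.com"

payload = {
    "startInfo": {
        "ReleaseKey": "00000000-0000-0000-0000-000000000000",
        "Strategy": "Specific",   # run on the explicitly listed robots
        "RobotIds": [101, 102],   # IDs of the robots to execute the job
    }
}

req = urllib.request.Request(
    ORCH_URL + "/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # placeholder token
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; not executed here.
```

In the thread’s scenario you would use the Orchestrator HTTP Request activity instead of raw HTTP, but the endpoint and payload shape are the same.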


@Silviu: What if I want to use the same project for both dispatcher and performer with multi-bot functionality? One option I can think of is to make the solution config/asset-controlled per bot: for example, the first bot works as both dispatcher and performer, and the rest work only as performers, via a separate config file for each bot.
But I don’t think this is the best solution for this situation. Do you have any suggestions?

Dispatcher and Performer are different processes as one is feeding the input data into an Orchestrator Queue and the other is doing the actual processing of the data stored in the Queue.
If you are using a single robot, you can implement a solution where both the Dispatcher and Performer are part of the same project: first execute the “dispatching” part and, after that finishes, start the “performer” part. But if you plan to use several robots, I would say it’s much easier to keep the two parts as separate processes. If you still want a single process for both parts executed on multiple robots, you’ll have to implement a semaphore to make sure the Dispatcher part is not executed by multiple robots in parallel.
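The semaphore idea can be sketched as follows, with a local lock standing in for the shared flag (in Orchestrator this could be, for example, an asset that is checked and set before the dispatcher part; all names below are illustrative):

```python
import threading

dispatcher_lock = threading.Lock()  # stands in for a shared flag/asset
dispatch_runs = []   # which robots actually ran the dispatcher part
processed = []       # (robot, item) pairs from the performer part
list_lock = threading.Lock()

def combined_process(robot_name, source_items, work_queue):
    """One project doing both parts; the dispatcher part is guarded."""
    # Only the first robot to acquire the semaphore populates the queue.
    if dispatcher_lock.acquire(blocking=False):
        with list_lock:
            dispatch_runs.append(robot_name)
        work_queue.extend(source_items)
    # Every robot then runs the performer part until the queue is empty.
    while True:
        try:
            item = work_queue.pop()
        except IndexError:
            break
        with list_lock:
            processed.append((robot_name, item))

work_queue = []
threads = [threading.Thread(target=combined_process,
                            args=("robot-%d" % i, ["a", "b", "c"], work_queue))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whichever robot wins the semaphore dispatches exactly once; every robot, including the winner, then drains the queue as a performer. This is the single-project variant; as noted above, two separate processes are usually simpler.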