I am talking about queue triggering the bot as well. If there are multiple robots in an environment, with the dispatcher bot running on one machine and the triggered performer bot running on different machines, then you are right: for each transaction it will log in to the site. That is exactly what I do not want. I want the performer bot triggered in a predictable manner so that all transactions are handled on one robot. That way, you don't log in to the same site once for each transaction. My site will flag me if I log in to it "too many times" within a limited span of time, as it suspects a DOS attack.
Anyway, while this is the issue at hand, I have a workaround for it. For such dispatcher/performer bots, we need to make sure they run in an environment that has only one robot. Then the performer bot can't run on a different machine.
By the way, are you triggering a job for each queue item that is added? In other words, are you processing in a loop as in our REFramework? That said, setting the "Max_Pending_Jobs" parameter to one may ensure that only one robot is processing at any point in time.
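The loop-based pattern mentioned above can be sketched generically: the performer logs in once, then drains every pending queue item in that single session, instead of spawning one job (and one login) per item. A minimal sketch, where the queue, login, and item handling are hypothetical stand-ins and not UiPath APIs:

```python
from collections import deque

class Performer:
    """Processes all pending queue items in one session (one login total)."""

    def __init__(self, queue):
        self.queue = queue        # hypothetical stand-in for an Orchestrator queue
        self.login_count = 0      # track logins to show only one happens
        self.processed = []

    def login(self):
        # In a real bot this would open the site and authenticate once.
        self.login_count += 1

    def run(self):
        self.login()              # a single login per job, not per transaction
        while self.queue:         # drain every pending item in this session
            item = self.queue.popleft()
            self.processed.append(item)

# Usage: one job handles all five transactions with a single login.
queue = deque(["tx1", "tx2", "tx3", "tx4", "tx5"])
bot = Performer(queue)
bot.run()
```

Five queue items, one login: this is why the loop pattern sidesteps the "too many logins" flagging, whereas a job-per-item trigger spread across machines does not.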
We should be able to select a specific robot for a queue trigger, just as we do for a time-based trigger.
We have different Robot IDs for different processes due to access rights, so having Orchestrator select the next available robot to process the queue does not make sense: the robot it selects may not have access to the particular application the process uses, and the queue items will fail to process. This should have been the number one factor to consider.
@badita - Is there any news on which version of Orchestrator will have this feature available?
After many hours of testing today we also ended up with this particular solution. Originally we had a single environment that contained two full-time unattended robots on two different servers. As the main robot scrapes through and processes tickets, it sometimes also passes error messages to the secondary robot to handle a very time-consuming process of collecting errors, which is then passed back to the ticket.
So we were not aware of this thread in this forum, but we landed on the same solution.
We created a new environment
We removed the secondary robot from the main environment
We attached the robot to the newly created environment
Further, we also had to delete all the processes tied to the secondary robot, because those were attached to the original environment
Imported the processes all over again, but this time we selected the secondary robot on the new environment.
This way we are still able to run the triggers how we want.
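For anyone who wants to script the same re-wiring instead of clicking through the UI, Orchestrator exposes OData endpoints for environments and robots. A sketch that only builds the requests rather than sending them; the endpoint names and the `robotId` payload key reflect my understanding of the classic-folder API and should be verified against your Orchestrator version, and the base URL and IDs are hypothetical:

```python
BASE = "https://orchestrator.example.com"  # hypothetical Orchestrator URL

def create_environment(name):
    # Step 1 above: create a new environment.
    return {"method": "POST",
            "url": f"{BASE}/odata/Environments",
            "body": {"Name": name}}

def remove_robot(env_id, robot_id):
    # Step 2: detach the secondary robot from the main environment.
    return {"method": "POST",
            "url": f"{BASE}/odata/Environments({env_id})"
                   "/UiPath.Server.Configuration.OData.RemoveRobot",
            "body": {"robotId": str(robot_id)}}

def add_robot(env_id, robot_id):
    # Step 3: attach the robot to the newly created environment.
    return {"method": "POST",
            "url": f"{BASE}/odata/Environments({env_id})"
                   "/UiPath.Server.Configuration.OData.AddRobot",
            "body": {"robotId": str(robot_id)}}

# Build the three calls; send them with any HTTP client plus a bearer token.
steps = [create_environment("PerformerOnly"),
         remove_robot(1, 42),
         add_robot(2, 42)]
```

Deleting and re-importing the processes would still need the corresponding Releases endpoints, which I have left out here.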
But I think this solution is not the best. Does there really need to be a "by design" limitation that prevents us from deciding where a certain task should run? When creating a time-based trigger, we can select which robot runs it and where; why not with queue triggers? What is the point of preventing this?
Let's say one wants to split up a job: a quick and easy process runs on one machine and creates a queue item that must be handled by a secondary robot on a secondary machine. It is silly and beyond understanding why this has not yet been implemented.
The robot that adds a queue item to a given queue should NOT be the factor that decides where the robot handling that queue runs. PLEASE fix.