@JanisK the workaround I ended up implementing is to create a new environment with only the one robot you want to run on. Create the process in the new environment and have the schedule use the new process. Does that work in your situation?
@LeoRX Thanks! Unfortunately this approach would not work for us because we use the same environment for all processes.
This is why we want to keep the same environment:
We define the environment name in the TFS Release Pipeline and deploy the processes to Orchestrator through its API from a TFS Release. We use TFS to build the NuGet package and release it to each environment, and we use the same template for every new project. Maintaining multiple environment names in TFS would add significant management overhead.
Sorry @badita, I hadn’t noticed your response until now.
Yes, that Credential Locker might be a great feature and could help to mitigate the issue I described.
We understand, but this approach does not scale. Using one environment means you need to use robot-specific schedules… which will end up in a management “hell” with a large number of bots.
Well, I disagree. This is simply bad design from you as a software provider. Why do you enable selecting the execution target for one type of schedule and not for another? Clearly, processes should run on specified target machines and users to ensure proper access-rights control and segregation.
You should seriously consider adding this option as soon as possible.
“It runs in the same environment associated to the selected process”. What exactly does that mean? Any machine in that environment, or the same robot every single time? This is important because my performer process has to add records in a given site, one for each queue item. So I designed the performer bot so that the first transaction opens a browser and logs in to the site. The rest of the transactions work with that logged-in session. The final transaction logs out of the site and closes the tab.
@badita - Mihai - If I understand correctly, this is equivalent to ‘Allocate Dynamically’ - so if a job has to run on a specific Robot (machine/user combination), you could create an environment with just that Robot, deploy the process to that environment, and use it for the trigger.
@savantsa - If you only trigger one robot, and you iterate through all the pending transactions in the queue, then what you suggest would work. What we are discussing here is the trigger which starts a job.
Without knowing the full process it is hard to say for sure, but if you trigger multiple robots, each performer would log into the site with the first transaction and, when there are no more transactions, close the site and the tab. As long as the site does not have an issue with concurrent logins by the same user (assuming both performers log in with the same credentials), it should not be a problem - we do this all the time when we have a large volume of transactions to process.
I am talking about the queue triggering the bot as well. If there are multiple robots in an environment, and the dispatcher bot is running on one machine while the triggered performer bot runs on different machines, then you are right: it will log in to the site for each transaction. That is exactly what I do not want. I want the performer bot triggered in a predictable manner so that all transactions are handled on one robot. That way, you don’t log in to the same site once per transaction. My site will lock me out if I log in “too many times” within a limited span of time, as it suspects a DoS attack.
Anyway, while this is the issue at hand, I have a workaround solution for this. For such dispatcher/performer bots, we need to make sure that it is run on an environment that has only one robot. Then the performer bot can’t run on different machines.
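The login-once pattern described above can be sketched as follows. This is a hypothetical illustration of the workflow logic, not UiPath code; `Session`, `run_performer`, and the transaction handling are placeholder names I have made up for the sketch.

```python
class Session:
    """Stands in for a logged-in browser session on the target site."""
    def __init__(self):
        self.logged_in = False

    def login(self):
        self.logged_in = True

    def logout(self):
        self.logged_in = False


def run_performer(transactions):
    """Process all pending transactions with a single login/logout cycle.

    The first transaction triggers the login; the rest reuse the same
    session; after the last one, the session is closed. This only works
    as intended when every transaction runs on the same robot.
    """
    session = Session()
    processed = []
    for i, item in enumerate(transactions):
        if i == 0:
            session.login()      # first transaction: open browser, log in
        processed.append(item)   # middle transactions: reuse the session
    if session.logged_in:
        session.logout()         # final step: log out, close the tab
    return processed
```

If the jobs were spread across multiple robots instead, each robot would run its own `run_performer` and therefore its own login, which is exactly the repeated-login behaviour the site flags.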
By the way, are you triggering a job for each queue item that is added? In other words, are you processing in a loop like in our REFramework? That said, setting the “Max_Pending_Jobs” parameter to one may ensure that only one robot is processing at any point in time.
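The “Max_Pending_Jobs” idea can be illustrated as a simple gate: the trigger only launches another job while fewer than the limit are pending. This is a conceptual model of the behaviour, not Orchestrator’s actual implementation; `should_start_job` is a made-up name for the sketch.

```python
def should_start_job(pending_jobs: int, max_pending_jobs: int = 1) -> bool:
    """Return True when the queue trigger may launch another job.

    With max_pending_jobs=1, a newly added queue item never spawns a
    parallel job while one is already pending, so only one robot is
    processing at a time.
    """
    return pending_jobs < max_pending_jobs
```

For example, `should_start_job(0)` is `True` (no job pending, start one), while `should_start_job(1)` is `False` (one job already pending, wait).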
I am trying version 2020.10 and don’t see Credential Locker feature implemented yet. Do you have any estimate when can it become available?
Why make everything so complicated? Just let us select the Robot for a Queue Trigger the same way we do for a time based Trigger. There’s zero reason not to have both with the same options.
Yes I agree, this only makes sense.
We should be able to select a specific robot for a Queue Trigger as we do for the time based Trigger.
We have different Robot IDs for different processes due to access rights, so having Orchestrator select the next available robot to process the queue does not make sense: the Robot it selects may not have access to the particular application required, and the queue items will fail to process. This should have been the number one factor to consider.
@badita - Is there any news on which version of Orchestrator will have this feature available?
Hello all, this topic is one that I have found very useful and will hopefully drive the implementation of an Execution target for Queue based triggers.
Is there any word from the UiPath product team on the status of this change?
Thanks in advance.
Any updates on this?
Nothing here that I have seen updated as of yet
After many hours of testing today we also ended up with this particular solution. Originally we had a single environment that contained two full-time unattended robots on two different servers. As the main robot scrapes through and processes tickets, it sometimes also passes error messages to the secondary robot to handle a very time-consuming process of collecting errors, which is then passed back to the ticket.
So we were not aware of this thread in this forum, but we landed on the same solution.
- We created a new environment
- We removed the secondary robot from the main environment
- We attached the robot to the newly created environment
- Further, we also had to delete all the processes tied to the secondary robot, because those were also attached to the original environment
- Imported the processes all over again, but this time selected the secondary robot in the new environment.
This way we are still able to run the triggers how we want.
But I think this solution is not the best. Does there really need to be a “by design” limitation that prevents us from deciding where a certain task should run? When creating a time-based trigger, we can select which robot and where to run, so why not with queue triggers? What is the point of preventing this?
Let’s say one wants to split up the job: a quick and easy process runs on one machine and creates a queue item that must be handled by a secondary robot on a secondary machine. It is silly and beyond understanding why this has not yet been implemented.
The robot that adds a queue item to a given queue should NOT be the factor that decides where the robot handling that queue runs. PLEASE fix.
Is there a fix for this issue or are they hoping that by ignoring it, the issue will go away?
Sorry to be blunt, but wtf?!
I was really looking forward to working with Queue Triggers and tested it today. Really disappointing to not have this option I was using on Time Triggers all the time.
Is this a cycle of people don’t use it much → no company resources allocated to this issue → people won’t use it more?
Thank you and best regards,
Just to refresh the discussion a bit, I went to the first post that states:
Then I went to cloud.uipath.com and compared the screens for Time and Queue triggers.
Would you mind sharing your specific use case? It looks like on the latest version of Orchestrator one can set a specific machine to run the process.
Hi, thanks for the reply!
I realize now that I was looking at an old version of Orchestrator (on-prem).
I am happy that the functionality is available now!