Does anyone know if it is possible to set the execution target for the ‘Queue’ trigger type, as can clearly be done for the ‘Time’ trigger type?
Queue triggers are available in Orchestrator; have a look at the link below.
I did look through the doco, but can’t seem to find whether it’s possible to set the execution target for the ‘Queue’ trigger type.
Can you elaborate on your question? What is the execution target?
Whenever an item is added to the queue, we can trigger a job as shown below.
For the Time Trigger, there is a tab for the Execution Target setting; however, this tab does not seem to be available for the Queue Trigger.
Even though more than one robot is defined in an environment, sometimes we need each robot to run specific jobs… So I am just curious how to select specific robots, as with the Time Trigger…
Our team also needs this feature to select the Execution Target (the specific robots where the process runs) for the Queue Trigger.
It looks like Orchestrator 2019.10.14 does not allow that, so we have to use the “Time Trigger” to schedule jobs that must run on specific robots, which is not optimal for utilizing these robots’ capacity.
Queue trigger does not support specific robots. Any free robot within the environment will pick up the job.
This is not a good design. We should be allowed to choose a specific robot, just like for time triggers.
It is always good design to give your users all possible options and let them choose how to use things. Artificial limitations will always impact someone negatively.
Example when it is not applicable:
-External system A allows one user account to be used for one session only. Therefore, separate user accounts must be used for processes running simultaneously.
-We must run 3 instances of the process that logs into system A.
-We have 3 user accounts for system A available and have Orchestrator assets with user account credentials assigned to 3 specific robots.
-Currently we use a Time Trigger to start the process on these 3 robots simultaneously, approximately when the new items should be available in the Queue the process consumes.
-Using multiple environments is not feasible, because:
– We have dozens of processes that we upload to Orchestrator from TFS Release pipeline through Orchestrator API and use the same environment
– There are other systems like System A for which we have 2 to 5 accounts available, set as credentials on different robots.
– Setting specific machines once in a trigger is more manageable than juggling multiple environments.
Please consider adding execution target for Queue Trigger type, like you have for Time Triggers.
In the upcoming 20.4, the user and machine will be separated. On a trigger you’ll be able to select the user but not the machine. Will this be fine?
No, that would not be very helpful in our scenario, since we use one user account to run unattended processes on all machines.
Ok, so you’re using one Windows user account for all the machines.
You have 3 different credentials for app login which can’t be used simultaneously.
In the future we’re thinking of implementing something like a credential locker… which has a workaround (see below):
- create a queue Q containing 3 items of datatype string: “cred1”, “cred2”, “cred3”
- before logging into your third-party app, do GetQueueItem(Q). This will lock “cred1”; the next job will use “cred2” since “cred1” is “in progress”
- when finishing the job and logging out from the app, put “cred1” back into the queue
Would this work?
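The queue-as-credential-lock idea above can be sketched as a plain-Python simulation. This is only an illustration of the pattern, not UiPath code: in Orchestrator, the queue Q and the GetQueueItem activity play the role of `cred_pool` and `run_job` here, and all names are illustrative.

```python
import queue

# Simulated credential pool: in Orchestrator this would be queue Q
# containing the string items "cred1", "cred2", "cred3".
cred_pool = queue.Queue()
for name in ("cred1", "cred2", "cred3"):
    cred_pool.put(name)

def run_job(job_id):
    # Equivalent of GetQueueItem(Q): removes one credential from the
    # pool, so concurrent jobs cannot pick the same one.
    cred = cred_pool.get()
    try:
        # ... log into the third-party app with `cred` and do the work ...
        return f"job {job_id} ran with {cred}"
    finally:
        # Equivalent of putting the item back into the queue on logout:
        # the credential becomes available to the next job.
        cred_pool.put(cred)

print(run_job(1))  # first job takes "cred1", then returns it to the pool
```

Because the pool hands out each credential to at most one job at a time, three simultaneous jobs would get three different credentials, which is the behaviour needed for System A’s one-session-per-account limit.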
@JanisK the workaround I ended up implementing is to create a new environment with the one robot you want to run on, create the process in the new environment, and schedule using the new process. Does that work in your situation?
@LeoRX Thanks! Unfortunately this approach would not work for us because we use the same environment for all processes.
This is why we want to keep the same environment:
We define the environment name in the TFS Release Pipeline and deploy the processes to Orchestrator through its API from the TFS Release. We use TFS to build the NuGet package and release it to each environment, and we use the same template for every new project. Managing multiple environment names in TFS would make things much harder to maintain.
Sorry @badita, I haven’t noticed your response till now.
Yes, that Credential Locker might be a great feature and could help to mitigate the issue I described.
We understand, but this approach does not scale. Using one environment means you need to use specific-robot schedules… which will end up in a management “hell” with a large number of bots.
Well, I disagree. This is simply bad design on your part as a software provider. Why do you enable selecting the execution target for one type of schedule and not for another? Clearly, processes should run on specified target machines and users to ensure proper access-rights control and segregation.
You should seriously consider adding this option as soon as possible.
“It runs in the same environment associated to the selected process.” What exactly does that mean? Any machine in that environment, or the same robot every single time? This is important because my performer process has to add records to a given site, one for each queue item. So I designed the performer bot so that the first transaction opens a browser and logs into the site, the rest of the transactions work with that logged-in site, and the final transaction logs out of the site and closes the tab.
@badita - Mihai - If I understand correctly, this is equivalent to ‘Allocate Dynamically’ - so if a job has to run on a specific Robot (machine/user combination), you could create an environment with just that Robot, deploy the process to that environment, and use it for the trigger.
@savantsa - If you only trigger one robot, and you iterate through all the pending transactions in the queue, then what you suggest would work. What we are discussing here is the trigger which starts a job.
Without knowing the full process it is hard to say for sure; however, if you trigger multiple robots, each performer would log into the site with the first transaction and, when there are no more transactions, could log out of the site and close the tab. As long as the site does not have an issue with concurrent logins by the same user (assuming you use the same credentials for both performers), it should not be a problem - we do this all the time when we have a large volume of transactions to process.
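The performer shape described above (log in on the first transaction, drain the queue, log out when no transactions remain) can be sketched as a plain-Python simulation. All names here are illustrative, not UiPath activities: in a real process the shared `transactions` deque would be the Orchestrator queue and the pop would be a Get Transaction Item call.

```python
from collections import deque

# Simulated transaction queue shared by all performers; in Orchestrator
# this is the process queue, consumed via Get Transaction Item.
transactions = deque(["record-1", "record-2", "record-3"])

def performer(robot_name):
    """Log in once, process every available transaction, then log out."""
    actions = []
    logged_in = False
    while transactions:
        item = transactions.popleft()  # next transaction, if any
        if not logged_in:
            # First transaction only: open browser and log into the site.
            actions.append(f"{robot_name}: login")
            logged_in = True
        actions.append(f"{robot_name}: add {item}")
    if logged_in:
        # No more transactions: log out and close the tab.
        actions.append(f"{robot_name}: logout")
    return actions

print(performer("Robot-A"))
```

With two performers draining the same deque, each one logs in at most once and logs out once, regardless of how the items are split between them, which is the behaviour described in the post above.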