Fixing a user to a specific Machine in Modern Folders


Is it possible to fix a particular user to a specific physical or virtual machine when using Modern Folders?

The scenario I’d like to achieve is that, when allocated a job, user1 will only ever execute on machine1 and user2 will only ever execute on machine2.

The reason for this is that I want granular control over each user’s access and over the applications installed on each machine. If I can’t specify which user runs on which machine, it feels like I need to give every user the same access to all machines and install every application required by every process on all machines.

Hi @grlawlf,
Welcome to the Community!
Modern folders are made to work with Active Directory environments. To achieve what you need, you could create a separate folder for each machine and define only one user (robot) in each of those folders.

1 Like

Thanks for coming back to me @Pablito. What you suggest will achieve the end result, but it requires multiple folders to do so, so it’s probably easier to stick with a Classic folder for processes involving machines and users that need non-standard set-ups.

In a modern folder with floating unattended bots, how does Orchestrator work out which machine to execute the process on once you select the user?

You can define separate templates and connect particular robots to the template you need with its machine key.

Hi @Pablito,

I get connecting the robots with a machine key but when you execute the process you specify a user not a robot or a machine. What I can’t work out is once I have allocated a process to run under a particular user, how does Orchestrator decide which robot or machine to launch and execute the process on.

1 Like

Yes, you’re right :smiley: By robots I meant “users”. I’m still trying to adjust and switch my thinking over to the modern folders way :laughing:

This might help you. When you go to the top and select the tenant (inside the Orchestrator panel), you will have the option to create folders. Note that you can also create sub-folders, like this:

So what I meant is that you can divide processes into separate sub-folders, each with different machines attached. Then you can add, for example, two users to each sub-folder, and you will know that if you select a process in folder #2, it will run on a machine that is part of folder #2.

Hopefully this makes sense to you :slight_smile:
I know it requires creating folders, etc., but since it’s a step toward working with AD, you can look at it from a similar perspective: users can be divided into groups, and groups can be divided into OUs in Active Directory.

Thanks @Pablito. Like you, I’m struggling to get my head round the new paradigm. I understand what you’ve explained about using the folder structure to segregate the machines and users and get the benefit of AD integration. The last thing really is: in Modern Folders, once I have allocated a process to run under a particular user, how does Orchestrator decide which robot or machine to launch and execute the process on?

I talked to our devs to be super sure. For this kind of deployment, the heartbeat of the servers/computers on which the Robot is installed determines which one will take the job first.

Thanks for supporting on this @Pablito.

I guess the best solution and one which leverages the benefits of AD integration is multiple folders.

1 Like

As of now the modern paradigm assumes that within a folder any user is able to execute on any associated machine. Therefore you need to segregate groups of users and machines via different subfolders.

However, 20.4 brings more flexibility:

  • to specify U-M pairs at trigger or start-job level
  • to restrict which users are able to log in on a machine or template (e.g. on T1 only U1 and U2 are allowed)
  • to associate standard machines with modern folders (not only templates), which is useful for classic-to-modern migration without changing the machine key
  • assets per UserXMachine, similar to assets per robot in classic folders

This will solve the problem and close the gaps with classic robots.

The modern paradigm was designed to solve large infrastructure management issues: when you have 200 machines and 50 users, it is very difficult to manage thousands of U-M pairs (classic robots), plus autoscaling (just start a machine from a template in AWS and it will work; there is no need to add a machine, create a robot for that machine, set a password, etc.).

In 20.4 we’ve added the classic-folder granularity options. However, use them only when needed. Most of the time we would recommend that any user be able to run on any machine within a folder; this approach scales well in large environments.
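For anyone driving this from the Orchestrator REST API rather than the UI, the U-M pairing at start-job level is expressed in the `StartJobs` request body. The sketch below only builds the JSON payload; the `MachineRobots` pairing field and all IDs shown are assumptions for illustration, so verify the exact field names against the API reference for your Orchestrator version.

```python
import json

# Sketch of a StartJobs request body that pins a job to one explicit
# user-machine pair instead of letting it run on any machine in the folder.
# The MachineRobots field and the placeholder IDs are assumptions for
# illustration; check your Orchestrator version's API documentation.


def build_start_job_payload(release_key: str, machine_id: int, robot_id: int) -> dict:
    """Build a StartJobs body that runs one job on a specific
    machine/user (robot) pair."""
    return {
        "startInfo": {
            "ReleaseKey": release_key,  # identifies the process (release)
            "JobsCount": 1,             # start a single job
            "MachineRobots": [          # explicit U-M pairing (assumed field)
                {"MachineId": machine_id, "RobotId": robot_id}
            ],
        }
    }


# Hypothetical IDs: machine1 = 101, user1's robot record = 201.
payload = build_start_job_payload("process-release-key", 101, 201)
print(json.dumps(payload, indent=2))
```

The body would then be POSTed to the `/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs` endpoint with the usual authentication and folder headers.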

1 Like

Hi Mihai @badita,

I’m currently struggling with the same situation: I need to run U1 on M1 only and U2 on M2 only (due to local user configurations), but at the same time I need my robots to have access to Data Services without giving “writer” access to “everyone”.

When you say it is possible:

do you mean this can now be done in modern folders somehow?

Edit: I have found info on User-Machine Mappings, but I do not see this option in Orchestrator Cloud Enterprise - any info on when we can expect this?


Go to Tenant → Settings and check whether you have enabled the account-machine mappings feature.

I am still struggling to understand how this all works. We are trying to migrate to modern folders so we can upgrade Orchestrator.

Say we have Automation 1 and Automation 2, Server S1 and Server S2, and User U1 and User U2. We need Automation 1 to run on Server S1 with User U1, and Automation 2 to run on Server S2 with User U2.

How do we control this if Automation 1 and Automation 2 are in the same folder?

In classic we would:

  • Create Robot R1 with User U1 and Server S1
  • Create Robot R2 with User U2 and Server S2
  • Create Environment E1 and put Robot R1 into it
  • Create Environment E2 and put Robot R2 into it
  • Create Process for Automation 1 in Environment E1
  • Create Process for Automation 2 in Environment E2

How would we do this with modern folders, having both Processes in the same folder?

Honestly, this just tells me you don’t really understand how all this works in the real world: different automations use different apps installed on the server and need different permissions, etc., and we can’t just install every app on every server and give every user every permission.

1 Like


@loginerror @badita @Pablito

Hi @Pablito ,

Could you please share whether there is any effective way to monitor and action jobs across all folders and subfolders in one place, other than Monitoring at the Tenant level (which has insufficient data points and doesn’t allow taking any action to stop/kill jobs)?