I have been looking around on Google and YouTube and can’t find anything about best practices or any kind of steps related to setting up different environments in UiPath.
I come from Automation Anywhere A360, where (in a standard scenario with dev, uat and prod environments) you have 3 separate Control Rooms, each one with its own URL and separated from the others. Hence you would have 1 Control Room for dev, another one for uat and another one for prod (I include this information in case the reader is familiar with it). Each Control Room is isolated from the others and has its own set of users, roles, devices (VMs) and code (bots).
My question is: what are the best practices in UiPath Orchestrator in order to have separate environments (dev, uat and prod)? From my investigation I am considering these:
3 separate Orchestrators
3 different tenants
1 tenant and 3 folders (dev, uat, prod)
What is the best or standard way to go through with this?
Packages for a tenant will be the same, not different. Also, one capability of folders is to segregate the same licenses across the folders as needed.
And, say, if there is a requirement to run specific functional bots on specific robots only, then we can use folders to segregate users per folder.
Also, when you deploy new packages for development purposes, it should not affect prod, as the packages are stored at the tenant level and not the folder level.
Also, if you consider Apps etc., there is no folder segregation there; and for sharing connections in Integration Service we need to use a shared folder only, so there might be conflicts between dev and prod.
There are multiple options here. As mentioned before, ideally it would be great to have 3 Orchestrators, but that might have cost impacts (3 Orchestrators ensure correct separation of environments and less risk).
For the other 2 options, 1 Tenant + 3 folders or 3 Tenants with 1 folder each, you need to weigh the pros and cons of each one.
Case 1: 1 Tenant + 3 Folders
Pros: Since you will have 1 Tenant feed, your packages will be uploaded to the parent tenant, and you will be able to easily “move” from Dev to UAT, to PRD, since you can create the processes, assets, queues… and then share them with the rest of the modern folders.
Cons: All the environments are connected. It’s easy to accidentally modify an asset or a queue thinking you’re affecting UAT or DEV, when you’re actually modifying PRD instead.
Case 2: 3 Tenants + 1 Folder
Pros: Less risk. Despite using the same Orchestrator, you have some environment separation, and less risk of accidentally messing with the PRD environment.
Cons: Sharing assets, queues, packages… might not be as straightforward as case 1, but there are tools to overcome that problem and to support migrating processes, queues, assets… from one environment to another.
Not an orchestrator guru here, but just dropping some thoughts to take into consideration when choosing
Thanks a lot for your responses muchachos! However, let me reason a bit (and correct me if I’m wrong, as I’m still quite a newbie with UiPath terminology). Everything I’m about to state is related to the latest Orchestrator (i.e. modern folders), no matter if it’s on-prem, PaaS or cloud (i.e. Automation Cloud).
Assumptions:
Packages: When you guys refer to packages, I assume you mean the workflows and their dependencies (i.e. .nupkg files). These are located in Orchestrator and can be placed at the tenant level or at the folder level (in the latter case, you publish your workflow to the specific folder, not to the tenant).
If the folder-level feed option is used (provided you create the folder with ‘Create a new package feed for this folder’ enabled), you can directly isolate the packages by folder. Hence you can isolate packages, assets, queues, robots and user permissions by folder, having each folder completely isolated from the others. You would give developers access to the “dev” folder, and they would have rights to publish their code only there. Please correct me if this is wrong!
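To make the feed idea concrete, here is a toy Python model (this is NOT the UiPath API, all names are made up) of how a per-folder feed isolates packages, while folders without their own feed fall back to the shared tenant feed:

```python
# Toy model (not UiPath's API): one tenant-level package feed shared by all
# folders, versus per-folder feeds created with
# "Create a new package feed for this folder".

class Folder:
    def __init__(self, name, own_feed=False):
        self.name = name
        self.feed = [] if own_feed else None  # None: falls back to the tenant feed

class Tenant:
    def __init__(self):
        self.tenant_feed = []  # shared feed, visible to every folder without its own feed
        self.folders = {}

    def add_folder(self, name, own_feed=False):
        self.folders[name] = Folder(name, own_feed)

    def publish(self, folder_name, package):
        f = self.folders[folder_name]
        # With a folder-level feed, the package stays inside that folder.
        (f.feed if f.feed is not None else self.tenant_feed).append(package)

    def visible_packages(self, folder_name):
        f = self.folders[folder_name]
        return list(f.feed) if f.feed is not None else list(self.tenant_feed)

t = Tenant()
t.add_folder("dev", own_feed=True)
t.add_folder("prod", own_feed=True)
t.publish("dev", "MyProcess.1.0.1-dev.nupkg")

print(t.visible_packages("dev"))   # dev sees its own package
print(t.visible_packages("prod"))  # prod's feed stays empty: the dev package is isolated
```

Without `own_feed=True` both folders would read from `tenant_feed`, which is exactly the tenant-level scenario described earlier where a dev publish is visible everywhere.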
Thoughts:
Something I did not know (thx @Anil_G) is that with UiPath Apps there is no folder segregation, so this is not good.
@ignasi.peiris, in the (ideal) case of having 3 Orchestrators, one for each environment: how do you “move” the packages across different Orchestrators? I assume that you (as a developer) can either “publish” via UiPath Studio to the dev Orchestrator (or establish a CI/CD pipeline in Azure DevOps for instance, but I am going to omit that for now), and then someone takes the .nupkg file and imports it into uat or prod? Is it like that, or is it done differently?
Regarding the package move, from the 3 orchestrator case,
You would normally be connected to the DEV orchestrator, and you’d publish from studio directly there.
Now to answer the question “How do we move the packages to UAT and PRD” we have multiple options as well:
(Fully manual) As you correctly assumed, simply log in to the DEV Orchestrator where the package is sitting (you are right: by package we mean the .nupkg that contains the source code), hit download, and store it somewhere locally on your PC. Right after that, whoever has permission to deploy will log in to the destination Orchestrator (UAT/PRD) and upload the previously downloaded .nupkg file. By doing this, you have your package “moved” to the 2nd environment, but only the package. Assets, queues, library dependencies… are still only on the original Orchestrator, and would also need manual creation/movement.
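As a sketch of this manual move, the following Python builds the Orchestrator OData URLs involved. The endpoint paths and the package-key format are assumptions based on the on-prem Orchestrator REST API; verify them against your instance’s Swagger before relying on them:

```python
# Sketch of the fully manual "download from DEV, upload to UAT/PRD" move.
# Endpoint paths below are assumptions; check your Orchestrator's /swagger page.
from urllib.parse import quote

def download_package_url(base_url: str, package_key: str) -> str:
    # package_key identifies the .nupkg, e.g. "MyProcess.1.0.1"
    # (the exact key format varies by Orchestrator version)
    return (f"{base_url}/odata/Processes/"
            f"UiPath.Server.Configuration.OData.DownloadPackage(key='{quote(package_key)}')")

def upload_package_url(base_url: str) -> str:
    return f"{base_url}/odata/Processes/UiPath.Server.Configuration.OData.UploadPackage"

# The move itself would then be two authenticated HTTP calls:
#   1. GET  download_package_url(DEV, key) with an "Authorization: Bearer <dev token>"
#      header, saving the response body as MyProcess.1.0.1.nupkg locally;
#   2. POST upload_package_url(PRD) as multipart/form-data with the saved file,
#      using a token issued by the destination Orchestrator.
# Assets, queues and triggers still need to be recreated by hand afterwards.

print(download_package_url("https://dev-orchestrator.local", "MyProcess.1.0.1"))
```

The HTTP calls themselves are left as comments on purpose: authentication (local users vs. external applications) differs between deployments, so only the URL shapes are sketched here.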
Depending on the Orchestrator version you’re using, you can explore Solutions Management (see the Solutions Management overview), which would allow you to create “packs” of “Process A + Assets (A, B, C…) + Queues (Q1, Q2)…”. In short, it would be like “zipping” all the configurations needed for the process to run, and then you just install that “zipped pack” on the 2nd Orchestrator.
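The “zipped pack” idea can be illustrated with a toy model (this is not the real Solutions Management API, just the concept of moving a process together with everything it depends on):

```python
# Toy model (not the real Solutions Management feature): "zip" a process
# together with the assets and queues it needs, then install that pack on a
# second Orchestrator in one step instead of recreating each item by hand.

def make_pack(process, assets, queues):
    return {"process": process, "assets": list(assets), "queues": list(queues)}

def install_pack(orchestrator, pack):
    # orchestrator is a plain dict standing in for the destination environment
    orchestrator["processes"].append(pack["process"])
    orchestrator["assets"].extend(pack["assets"])
    orchestrator["queues"].extend(pack["queues"])

dev_pack = make_pack("ProcessA.1.0.0.nupkg", ["AssetA", "AssetB"], ["Q1", "Q2"])
prd = {"processes": [], "assets": [], "queues": []}
install_pack(prd, dev_pack)
print(prd["queues"])  # ['Q1', 'Q2']: everything arrives together with the package
```

Compare this with the fully manual option above, where only the .nupkg moves and every asset and queue has to be recreated separately.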
Custom CI/CD pipelines, which you can customize as you prefer, automating test cases, publishing… (we can omit this for now).
The 3rd option mentioned (CI/CD) is really really good.
We’ve been working with this approach on Azure DevOps since 2021, and it brings a lot of benefits, like the ability to put approvals in the publish flow, more control over who can publish packages to Orchestrator, and even the ability to make some configurations on the robot server (like creating ODBC connections) without needing to access it. It is a powerful approach.
Let me point out a few more issues with the same-tenant-and-3-folders setup, from experience.
When it comes to environment management, there are a few things we need to think about:
How we are to manage updates
Disaster recovery
When it comes to development environments, things are usually messy. We change many things from time to time, configure stuff, upload and test processes, delete… (basically we do everything possible).
In production, we need to have proper control, and less involvement from different people unless it’s really needed.
We could do this in the same instance using modern folders… agreed… but when it comes to system updates, if we upgrade the Orchestrator version, we affect all three environments at the same time. And this may end up with consequences we can’t foresee and that take time to resolve.
Coming to the point of having folder-level packages: yes, it’s possible with modern folders. But that way there are no separate robots anymore; the users themselves act as robots, and if the robot licenses are reused across folders from dev to uat and prod, then a robot/user connected to Orchestrator could have access to all 3 environments and publish to any of them.
Isolation can be achieved, but at the same time some features are missing, and a few concerns, like upgrade issues, can come up.
1 repo per process, and 1 build pipeline + 1 release pipeline that checks which branch was committed:
Homolog → sends the package and creates/updates the process in the test environment
Master → does the same thing, but sends to the production environment
We have other utility pipelines to reboot the servers, create ODBC connections, install or update specific software, move secret files and so on. For these we need the Azure agent installed on those servers.
Let me recap to see if I understood you (at a very high level):
When the developer is developing the process, he/she pushes the code to a “test” branch (not master). Whenever the developer pushes to this test branch, the build pipeline runs but not the deployment one (Workflow Analyzer static analysis, etc.).
When pushing to the master branch (the production code branch), do the approvals, etc. (the deploy pipeline) occur?
One pipeline deploys the automation to the test environment (approval not required) → homolog branch
The other pipeline deploys the automation to the production environment (approval required) → master branch. For this branch the deploy occurs when approved by our coordinator, after all related things are done (change request documentation, PDD, SDD, UAT, etc.)
We disable the option to publish packages to the tenant feed directly from Studio, to keep both tenants as equal as possible.
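The branch-to-environment routing described above can be sketched like this (hypothetical names; the actual Azure DevOps pipelines and approval gates implement the same decision):

```python
# Minimal sketch (hypothetical names, not a real pipeline) of the routing
# described above: homolog deploys to the test environment without approval,
# master deploys to production only behind an approval gate, and any other
# branch gets a build (with static analysis) but no deployment.

RULES = {
    "homolog": {"environment": "Test",       "approval_required": False},
    "master":  {"environment": "Production", "approval_required": True},
}

def plan_deployment(branch: str, approved: bool = False) -> dict:
    rule = RULES.get(branch)
    if rule is None:
        # feature/test branches: run the build pipeline only
        return {"action": "build-only"}
    if rule["approval_required"] and not approved:
        return {"action": "wait-for-approval", "environment": rule["environment"]}
    return {"action": "deploy", "environment": rule["environment"]}

print(plan_deployment("feature/login"))          # build-only
print(plan_deployment("homolog"))                # deploy to Test, no approval gate
print(plan_deployment("master"))                 # blocked until the coordinator approves
print(plan_deployment("master", approved=True))  # deploy to Production
```

The point of the sketch is that the environment choice lives in the pipeline, not in Studio, which is why disabling direct tenant-feed publishing from Studio keeps the environments consistent.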
Nice! Related to your setup, do you use a cloud Orchestrator (Automation Cloud, I think it’s called)? I’m still not clear on how distinct environments are handled with Automation Cloud (I know that on-prem you can set up distinct Orchestrators for each environment, but I don’t know how this is handled in their SaaS offering).