That is an excellent point, touché. Looking forward to not entering the same password 100 times!
OK, I think I get it: robots will no longer be the fixed “user × machine” pairs we have now; machines will be assigned to users dynamically, as needed.
Right now we can specify the “Execution target” when creating a trigger. Will that screen be changed so that we can choose a user instead?
Yes. And you’ll need to make sure that, within a folder, that user can log on and execute on any machine (any pair UxMy works within the folder).
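To illustrate the allocation model described above (this is an illustrative sketch, not actual Orchestrator code, and the user/machine names are hypothetical): once a folder contains a set of users and a set of machines, any user can be paired with any machine at run time, so the valid allocations are simply the Cartesian product of the two sets.

```python
from itertools import product

# Hypothetical folder contents -- names are illustrative only,
# not real Orchestrator entities.
folder_users = ["svc_finance_01", "svc_finance_02"]
folder_machines = ["VM-RPA-01", "VM-RPA-02", "VM-RPA-03"]

def valid_allocations(users, machines):
    """Under the modern-folder model, every user x machine pair
    inside the folder is a valid allocation (any UxMy works)."""
    return list(product(users, machines))

pairs = valid_allocations(folder_users, folder_machines)
print(len(pairs))  # 2 users x 3 machines = 6 possible allocations
```

The point of the model is exactly this flexibility: Orchestrator can pick any of the six pairs instead of being locked to one fixed user-machine binding.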
@badita
I think it would be helpful to the community if you split your request into several parts, since the two things you propose to change are not closely connected.
For instance, not being able to schedule to a Studio robot does not affect our work that much.
The other point, not being able to schedule to a SPECIFIC robot, is a very important feature for us.
Use cases? One concern is security: imagine a scenario where a specific robot, and only that robot, is granted very wide rights, because it is not a GUI-client robot but one that works as an API robot, like a service bus. Security might not allow this workload to move to different robots, since those robots would then require the same rights to firewalls, file systems, databases, etc. as well.
Another scenario involves very complex and sensitive robots, which might not be “happy” about running on a new machine.
I also think it might be worthwhile to break down and summarize your intentions clearly. Reading through the discussion, there seems to be some misunderstanding of Machine vs. Robot.
Within our organization we have the following setup:
Infrastructure Environment
Development: Used for infrastructure development and testing only, before promoting infrastructure changes to the other two environments. It is typically kept turned off.
Non-Production: This is where developers design and publish their packages. We use Organizational Units / Folders to separate “development” from Pre-Production. Currently the Studio robots and a handful of Non-Production robots are in a Classic Folder for development use; we’ll be looking to move the Studio robots to a Modern Folder. We then have a few Non-Production robots in our Pre-Prod Classic Folder, and these robots are associated with machines that mimic our Production machine and robot profiles as closely as possible.
Production: Most of our developers are part of our CoE and administer the UiPath platform. Our Production environment is reserved for production releases only. Currently we use only unattended robots.
Licenses
Studio: Developers use Studio on their local machine or on a shared VM. They may initiate a job from Orchestrator, but there is no reason to schedule one. Being able to adjust parameters on a job for testing during development can be helpful at times, but it is not necessary.
Non-production: We have a mix of use cases primarily divided into “Unattended for Development” testing and “Unattended for UAT/QA” testing that mimics production.
Production: Only used within our Production infrastructure.
Currently we use one machine per robot, but we are moving towards a high-density model as we determine our machine sizing and risk tolerance; in the event a machine goes down, a process/environment would be split among two or more machines.
Regarding security, we have a mixed approach depending on the services the processes interact with.
This ranges from Domain Security Groups, where possible, that are specific to Prod/Pre/Dev robots (all robots may access services in their respective environment), to Business Domain or process-specific security groups where we need to be more explicit and only certain robots are allowed to access certain resources.
We also have firewall rules in place; most machines can access destinations in their respective environment (Prod to Prod, etc.), but at times we have to be more explicit about which machines are allowed onto the various VLANs. Some services also take the extra step of using an IP ACL on the destination host (another reason we are moving towards the HD model).
So if we have the ability to 1) define which machines are usable within a Folder/Environment and 2) define which robot profile/credential is usable within a Folder/Environment, I don’t believe we would have any issues with what is being proposed.
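For reference, targeting a specific robot programmatically is already possible via Orchestrator's REST API. Below is a minimal sketch of building a `StartJobs` request body using the classic-folder `"Specific"` strategy; the endpoint is `/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs`. The release key and robot IDs here are placeholder values, and field names should be verified against the documentation for your Orchestrator version.

```python
import json

def build_start_job_payload(release_key, robot_ids):
    """Build a StartJobs body that targets specific robots
    (classic 'Specific' strategy) instead of letting Orchestrator
    pick any available machine from the pool."""
    return {
        "startInfo": {
            "ReleaseKey": release_key,   # identifies the process to run
            "Strategy": "Specific",      # run only on the listed robots
            "RobotIds": robot_ids,       # explicit robot allocation
        }
    }

# Placeholder values for illustration only.
payload = build_start_job_payload("your-release-key", [101])
print(json.dumps(payload, indent=2))
```

This kind of explicit allocation is exactly what the security and IP-ACL scenarios above depend on, which is why losing the ability to pin a job to one robot would matter.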
I think the bigger challenge with a robot being able to run a process/job on any machine within a given pool is the initial setup of anything that cannot be customized when the local Windows profile is created, or that cannot be controlled via policies. It’s something we are smoothing out, but either we need to build for the initial prerequisites, or we need to ensure we configure the prerequisites for each machine/profile.
It is up in the cloud and partially documented. Full docs and videos will follow soon, and if something is very wrong we’ll fix it in the next six months :).
Why not make it optional to keep the legacy scheduler?
If you remove the scheduling functionality from Orchestrator, you’ll have eliminated the primary use case for Orchestrator (for our organization).
Hi @JTKC-BCBS. Scheduling is still there on Non-Production and Unattended bots. Why is this the primary use case for Studio?
Ah, if it is only being disabled for Studio machines and not for unattended robots, then yes, no problem.