Studio scheduling will be disabled - thoughts? (pending jobs)

Thanks. That’s why we’re getting feedback. To measure the annoyance versus the benefits :slight_smile:

1 Like

@badita But for some processes we might need specific machine requirements or certain applications to be installed. If we disable this option, then we have to make sure all robots have all applications installed, with the same configuration. Does this hold true?

1 Like

Then create a folder with a few machines (even one) and a few processes - a "Terminals" folder. Would this work? But why mix 10 different machines within one environment?

2 Likes

Yes! Thank you:)

1 Like

@badita But aren’t machines shared across the tenant level, not just a single folder?

1 Like

The per-server license change allowed us to maximize our licensing utilization. We are aware of the maintenance overhead our model will incur, but other models will incur similar overhead - such as annual security reviews to determine that all the permissions granted to a shared account are correct and valid. It’s easier to review and get manager confirmation of permissions for a unique, limited account dedicated to a certain process than for an account that has permissions spanning many different applications, shared folders, mailboxes, departments, etc.

1 Like

Thanks. That’s why we’re getting feedback. To measure the annoyance versus the benefits

That’s why you should never remove or change features that people rely on when you add new features.

3 Likes

It might be necessary to run a process on a specific machine in situations where software used in the workflow is only available on some machines.

E.g., Acrobat DC has a nifty “convert to PDF” browser toolbar button. It’s very useful when a web page is 1000+ pages (a traditional “print to PDF” approach could fail or time out), but an organization might have a limited number of Acrobat DC licenses available.

3 Likes

@badita
Does this address the issue I noticed the other day with Orchestrator triggers on an unattended bot: if the Enterprise-licensed Unattended bot is busy, the trigger will send the job to a Studio Development bot that is in the same environment as the licensed Unattended bot?

i.e.:

  • Studio Development Bot and Unattended Bot in Environment “Testing”
  • Orchestrator Queue Trigger is triggered, but Unattended bot is busy
  • Orchestrator Queue Trigger sends the job to available Studio Development Bot

This is what folders are for (which will replace environments). You put similar machines within the same folder. One machine can be part of multiple folders.

Let’s suppose your deployment grows in size and you need two robots in order to process PDFs. In the old model you would create a new robot, set the password, etc. When scheduling, you would have to create two schedules (one for bot1 and one for bot2), since scheduling targets specific bots. In the new paradigm, all you have to do is add the second machine to the PDF folder.
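To make the difference concrete, here is a toy sketch in plain Python (hypothetical names like `Folder.dispatch`, `svc_pdf`, and `VM-01` are illustrative only - this is not the Orchestrator API). It models a modern folder as a pool of interchangeable machines, where scaling up means adding a machine to the pool instead of creating a second robot and a second schedule:

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    busy: bool = False

@dataclass
class Folder:
    """Toy model of a modern folder: a pool of machines for one user account."""
    name: str
    user: str
    machines: list = field(default_factory=list)

    def dispatch(self, job: str) -> str:
        """Run the job on any free machine in the pool (no per-bot schedule)."""
        for m in self.machines:
            if not m.busy:
                m.busy = True
                return f"{job} -> {self.user} on {m.name}"
        return f"{job} -> pending (all machines busy)"

# New model: one schedule against the folder; capacity is just pool size.
pdf = Folder("PDF", user="svc_pdf", machines=[Machine("VM-01")])
print(pdf.dispatch("invoice-batch"))   # runs on VM-01
pdf.machines.append(Machine("VM-02"))  # scale up: add a machine, touch no schedule
print(pdf.dispatch("invoice-batch"))   # runs on VM-02
```

In the old per-robot model, the `append` line would instead have been a new robot definition, a new password, and a duplicated schedule.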

3 Likes

That is an excellent point :slight_smile: touché. Looking forward to not entering the same password 100 times!

1 Like

OK, I think I get it: the robots will no longer be the fixed “user × machine” pairs we have now; machines will be assigned to the user as needed.

Right now we can specify the “Execution target” when creating a trigger; will that screen be changed so we can choose a user instead?

1 Like

Yes. And you’ll need to make sure that, within a folder, that user can log on and execute on any machine (any pair U×M must work within a folder).
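Since any user can land on any machine in the folder, the set of combinations you have to validate up front is the full cross product. A minimal sketch (plain Python; the user and machine names are made up for illustration) that enumerates every pair as a pre-flight checklist:

```python
from itertools import product

# Hypothetical folder contents - names are illustrative only.
folder_users = ["svc_pdf", "svc_terminals"]
folder_machines = ["VM-01", "VM-02", "VM-03"]

# Every user x machine pair must be able to log on and execute,
# so list them all for the admins to verify before going live.
pairs = list(product(folder_users, folder_machines))
for user, machine in pairs:
    print(f"verify logon/execute: {user} @ {machine}")
```

With 2 users and 3 machines that is 6 pairs to verify; the checklist grows multiplicatively as the folder grows, which is worth keeping in mind when sizing pools.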

1 Like

@badita
I think it would be helpful to the community if you split your request into several parts, as the two things you propose to change are not totally connected.
For instance, not being able to schedule to a Studio robot does not affect our work that much.

The other thing about not being able to schedule to a SPECIFIC robot is a very important feature to us.

Use cases? One issue can be security: imagine a scenario where a specific robot, and just that robot, is granted very wide rights, because it is not a GUI-client robot but one that works as an API robot, like a service bus. Security might not allow this service to move to different robots, as those robots would then require the same rights to firewalls, file systems, databases, etc. as well.
Another scenario is very complex and sensitive robots, which might not be “happy” about running on a new machine.

1 Like

I also think it might be worthwhile to break down and summarize your intentions clearly. Reading through the discussion, there seems to be some misunderstanding of Machine vs. Robot.

Within our organization we have the following setup:

Infrastructure Environment
Development: Used for infrastructure development and testing only, before promoting the infrastructure changes to the other two environments. It is typically turned down.

Non-Production: This is where developers design and publish their packages. We use Organizational Units / Folders to provide separation between “development” and Pre-Production. Currently the Studio and a handful of Non-Production robots are in a Classic Folder for development use; we’ll be looking to move the Studio robots to a Modern Folder. We then have a few Non-Production robots in our Pre-Prod Classic folder, and these robots are associated with machines that mimic our Production machine and robot profiles as closely as possible.

Production: Most of our developers are part of our CoE and administer the UiPath platform. Our production environment is isolated to production releases only. Currently we only make use of unattended robots.


Licenses
Studio: Developers use Studio on their local machine or on a shared VM. They may initiate a job from Orchestrator, but there is no reason to schedule one. Being able to adjust parameters on a job for testing during development can be helpful at times, but it is not necessary.

Non-production: We have a mix of use cases primarily divided into “Unattended for Development” testing and “Unattended for UAT/QA” testing that mimics production.

Production: Only used within our Production infrastructure


We are currently using 1 machine per robot, but we are going to move towards a high-density model as we determine our machine sizing and risk tolerance in the event a machine goes down; a process/environment would be split among 2 or more machines.

Regarding security, we have a mixed approach depending on the services the processes are interacting with.

This ranges from domain security groups specific to Prod/Pre/Dev robots, where all robots are allowed to access services in their respective environment, to business-domain or process-specific security groups, where we need to be more explicit and only certain robots are allowed to access certain resources.

We also have firewall rules in place; most machines can access destinations in their respective environment (Prod to Prod, etc.), but at times we have to be more explicit about which machines are allowed on the various VLANs. We also have some services that take the extra step of using an IP ACL on the destination host (another reason we are moving towards the HD model).

So if we have the ability to 1) define which machines are usable within a Folder/Environment and 2) define which robot profiles/credentials are usable within a Folder/Environment, I don’t believe we would have any issues with what is being proposed.

I think the bigger challenge with a robot being able to run a process/job on any machine within a given pool is the initial setup for anything that cannot be customized when the local Windows profile is created, or that cannot be controlled via policies. It’s something we are smoothing out, but either we need to build in the initial prerequisites or ensure we configure the prerequisites for each machine/profile.

1 Like

It is up in the cloud and partially documented. Full docs and videos will follow soon, and if something is very wrong we’ll fix it in the next six months :).

2 Likes

A post was split to a new topic: Pending Allocation - Modern Folder, Floating Unattended Robot

Why not make it optional to keep the legacy scheduler?
If you remove the scheduling functionality from Orchestrator, you’ll have eliminated the primary use case for Orchestrator (for our organization).

Hi @JTKC-BCBS. Scheduling is still there… on non-production and unattended bots. Why is this the primary use case for Studio?

1 Like

Ahh, if it is only disabled for Studio machines and not unattended robots, then yes, no problem :slight_smile:

2 Likes