Robot groups for assets / asset sharing

orchestrator
configuration
i_considering
assets

#1

I think a similar sentiment has been expressed in the original post - maybe not exactly, but still similar:

(Just to get it out of the way before someone shouts “But Security!” - Credential-type assets should probably be out of scope for this idea.)
(Also note - I’m mostly familiar with 2016.2 and have only an ok’ish view of 2017.1.)

Basically what I’m after is having a way to group robots to have the same asset value.

Currently we have 2 options:

  • general asset - all robots get same value
  • per robot asset - each robot has individual value

Neither of those help with scaling in a reliable way:

  • using general assets limits separation between dev/test and prod, as well as between robots
  • using per robot assets limits smooth scaling, as it’s a PITA to manage

Of course, those limitations can be worked around (config files embedded in the project or placed in reliable paths, scripts that add config for a new robot via the API, etc.), but all of those have their own limitations. The most important ones IMHO are 1) you can’t get a clear view of what the config for a particular robot is, and 2) there isn’t a straightforward way to add a robot with the same config.
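To illustrate why the scripted workaround is a PITA: every new robot needs its own entry in every per robot asset. A rough sketch of what such a script ends up building (the payload shape and field names here are my assumptions loosely based on the Orchestrator OData Assets API, not the exact contract - check your version’s docs):

```python
# Hypothetical sketch of the "add config for a new robot via the API" workaround.
# Field names (ValueScope, RobotValues, etc.) are assumptions; verify against
# your Orchestrator version before using.

def build_per_robot_asset_payload(asset_name, robot_values):
    """Build a request body for creating a Per-Robot text asset.

    robot_values: dict mapping robot name -> value for that robot.
    Every robot must be listed individually - that's the scaling pain.
    """
    return {
        "Name": asset_name,
        "ValueScope": "PerRobot",  # as opposed to a global asset
        "ValueType": "Text",
        "RobotValues": [
            {"RobotName": name, "ValueType": "Text", "StringValue": value}
            for name, value in sorted(robot_values.items())
        ],
    }

payload = build_per_robot_asset_payload(
    "InvoiceApiUrl",
    {"DevRobot01": "https://dev.example/api", "ProdRobot01": "https://prod.example/api"},
)
```

Note how adding a tenth Dev robot means touching this payload (and every other per robot asset) again - exactly the maintenance overhead a grouping mechanism would remove.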

And I’m not ok with that :slight_smile:

Some ideas how that could be improved:

Asset per environment

While the OUn general assets proposed by @Susana in the linked topic serve a similar purpose, it’s not the same use case - the main goal with OUns (as I understand it at least) is clear separation. Even if you adopt an OUn = Stage way of working (as some did), you still need to manage the assets separately. Sooner or later someone will forget to add/update something, especially if you have more than 2 stages (pretty likely).

A Per Environment asset should work in a similar way to a Per Robot one, just with a different grouping.

That way, if you add a new dev to a project, connect their machine (VM, laptop, whatever) and add it to a Dev environment… that’s your config mostly done (add credentials and you’re set). Same with adding a Test robot or a Prod robot.
It also alleviates the issue of recreating a robot (because it moved to a different machine, etc.): you no longer lose the per robot config, because it isn’t per robot anymore.

From a limitations perspective, there is a potential conflict when a robot belongs to multiple environments whose asset values differ.
This could be solved by using JobStart data for unattended runs (since Orchestrator knows which environment the job was triggered from) and by picking the environment for Studio runs in Studio options (which would be nice to finally get, as having options only in .config files is not exactly great UX).
For attended robots, I think the robot should know which environment it got the release from? If not, IMHO that could be added for consistency of information.
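The resolution rule described above could be sketched like this (a minimal, hypothetical design - all names are mine, not an existing Orchestrator API): the job’s source environment disambiguates when present, otherwise a conflict is only an error if the robot’s environments actually disagree.

```python
# Minimal sketch of Per-Environment asset resolution with conflict handling.
# Data shapes and function names are illustrative, not an actual API.

def resolve_asset(asset, robot_environments, job_environment=None):
    """asset: {"global": fallback_value, "per_environment": {env: value}}"""
    per_env = asset.get("per_environment", {})
    if job_environment is not None:
        # Unattended run: JobStart data tells us the triggering environment.
        return per_env.get(job_environment, asset.get("global"))
    # Attended/Studio run without an explicit environment: only unambiguous
    # if all of the robot's environments agree on the value.
    values = {per_env[e] for e in robot_environments if e in per_env}
    if len(values) > 1:
        raise ValueError("conflicting asset values; pick an environment explicitly")
    return values.pop() if values else asset.get("global")

asset = {
    "global": "http://shared",
    "per_environment": {"Dev": "http://dev", "Prod": "http://prod"},
}
resolve_asset(asset, ["Dev", "Prod"], job_environment="Prod")  # -> "http://prod"
resolve_asset(asset, ["Dev"])                                   # -> "http://dev"
resolve_asset(asset, ["Staging"])                               # -> "http://shared"
```

The key design choice is that a robot in multiple environments is fine in itself; it only becomes an error at lookup time, and only when the values genuinely conflict.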

Asset per robot group

Similar to an asset per environment, and with the same goal.
If used together with OUn/Tenant separation, it should be manageable to design it without conflicts.

What it would also ease is feature toggling and/or gradual rollout of new functionality in large-scale deployments.

Imagine a deployment with, just to have a number, 40 robots, split into 4 groups - A, B, C, D with 10 in each.
You start with toggling new feature for group A.
If everything goes well for the test period, you toggle group B.
Then group C.
The application you’re targeting with the new feature (let’s say a REST API that checks some registry) starts rejecting requests because you’re sending too many from one network range.
You toggle off group C.
Groups A and B now run stably, since the request volume is back under the limit.
Group C is stable again too, on the old behavior.
Group D was never impacted, ensuring that whatever happens, at least some robots keep working as before.
You can talk with the owner of that API and negotiate a higher limit. Once done, you toggle next groups and see if everything is ok.

While it is possible to do this now with per robot assets (and/or different package versions for different environments), it’s clearer with groups than with individual robots. And much harder to mess up (oops… turned it on/off on the wrong robot, bye bye production).
It also enables easier crisis management (e.g. a network segment goes down and you agree with the customer to use test robots for prod to avoid an SLA meltdown - update their credentials, move them to the prod group and you’re done).
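The rollout scenario above boils down to one flag per group instead of one flag per robot. A toy sketch (group names, robot names and the flag store are all illustrative) of why that’s harder to mess up:

```python
# Toy sketch of group-based feature toggling for the 40-robot rollout scenario.
# All names are illustrative; a real setup would store flags as assets.

# State after rolling the feature back for group C: one flip, ten robots reverted.
ROLLOUT_FLAGS = {"A": True, "B": True, "C": False, "D": False}

# 40 robots split evenly into four groups: Robot00-09 -> A, ... Robot30-39 -> D.
ROBOT_GROUPS = {
    f"Robot{i:02d}": group
    for group, start in [("A", 0), ("B", 10), ("C", 20), ("D", 30)]
    for i in range(start, start + 10)
}

def feature_enabled(robot_name):
    """Look up the flag by group: a single toggle covers 10 robots at once,
    so there is no 'oops, flipped the wrong individual robot' failure mode."""
    return ROLLOUT_FLAGS[ROBOT_GROUPS[robot_name]]
```

With per robot assets you would instead maintain 40 individual boolean values, and the rollback step means editing 10 of them without missing (or mistyping) one.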

Thoughts?


#2

If I understand this correctly, most of these will be solved by Containers, which are a grouping of Assets, Queues and Processes (plus the robots & packages associated with them). Basically, it’s a group of all the entities required in an end-to-end business process.

You’ll be able to duplicate them as a whole, which would help in your second scenario (the on/off toggling).

But it’s set for 19.1 since it’s a pretty major change.


#3

Hi guys,

I also like the idea of process-specific assets. For example, rather than having to search through each individual asset, you would be presented with a grid (i.e. a table) of only the settings that apply to that specific process.

@Mihai_Dunareanu For me this could potentially replace the config file direction that we’re moving in, which I’m personally not a fan of - everything else in RPA / technology is moving away from multiple inputs and spreadsheets and towards databases. This could be segmented into process variables (e.g. timeouts), assets, etc. Maybe you already had this idea :wink:

Also see my suggestion on static queues for a similar line of thought.

Thoughts?