Best practice: Asset management across multiple Orchestrator instances?

Wondering how you’re all dealing with asset management across multiple Orchestrator instances (dev --> test --> prod).

For example, suppose multiple processes are being created, some of which require NEW assets while others require UPDATED assets, and the devs make those changes on the dev Orchestrator server. We know that pushing a new process to the dev server is as easy as clicking “Publish”.

Often a release manager will grab that package and manually drop it onto the test instance of Orchestrator, and then eventually move it to prod.

However… what is your process for bringing the right assets along from Dev --> Test --> Prod?

How do you keep track of which ones are new vs. updated vs. deleted, and make sure the right changes are applied in the upper environments?
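One lightweight way to answer the new/updated/deleted question is to export the asset list from each environment and diff them programmatically. The sketch below is purely illustrative (the asset names and values are made up, and it assumes you can dump each environment's assets into a simple name-to-value mapping, e.g. from an Orchestrator export):

```python
# Hypothetical sketch: classify asset changes between two environments.
# Assumes each environment's assets have been exported as a dict of
# name -> value (the names/values below are invented for illustration).

def diff_assets(source, target):
    """Compare source (e.g. dev) against target (e.g. prod) asset dicts."""
    new = sorted(set(source) - set(target))
    deleted = sorted(set(target) - set(source))
    updated = sorted(
        name for name in set(source) & set(target)
        if source[name] != target[name]
    )
    return {"new": new, "updated": updated, "deleted": deleted}

dev = {"ApiUrl": "https://api.example.com/v2", "RetryCount": 5, "Timeout": 30}
prod = {"ApiUrl": "https://api.example.com/v1", "Timeout": 30, "LegacyFlag": True}

changes = diff_assets(dev, prod)
print(changes)
# → {'new': ['RetryCount'], 'updated': ['ApiUrl'], 'deleted': ['LegacyFlag']}
```

The release manager could run a report like this before each promotion and use the output as a checklist of exactly which assets to create, change, or remove in the upper environment.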


Bump. Does anyone use anything more sophisticated than Excel for this?

To be honest, I find that storing global values across multiple projects and environments is easiest when using a config file (Excel, JSON, or another format), stored in a location that each user/robot can access, typically in the same directory as the workflow it is associated with.

So, for me, I try to avoid using Assets for those types of values.
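To make the config-file approach concrete, here is a minimal sketch of what that could look like with JSON. The keys, values, and file layout are all hypothetical; the point is just that the file travels with the project, so promoting dev → test → prod means promoting the file alongside the package:

```python
# Hypothetical sketch: a per-project JSON config kept next to the workflow,
# used instead of Orchestrator Assets for non-credential values.
# All keys and values here are invented for illustration.
import json
import os
import tempfile

config = {
    "ApiUrl": "https://api.example.com",
    "RetryCount": 3,
    "ReportFolder": r"\\shared\reports",
}

# Write the config next to the workflow (a temp dir stands in for the
# project folder in this example).
path = os.path.join(tempfile.mkdtemp(), "Config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

# Each robot/user reads the same file at run time.
with open(path) as f:
    settings = json.load(f)
print(settings["RetryCount"])
# → 3
```

One nice side effect is that the config file is plain text, so it can live in source control with the project and its changes show up in ordinary diffs, which partly answers the "what changed between environments?" question.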

For Credentials, it’s a different story. We don’t really have a good solution in place currently; we just update the password in each Orchestrator manually (doing our best to avoid human error). We are going to move to CyberArk for credential storage, I believe, which should work better, so that might be something to consider.
