We have 3 Orchestrator environments: Development - Acceptance - Production.
The current way of working is that developers create a package of a robot script. That file is manually uploaded to the Acceptance environment and, once tested, we manually upload that same file to the Production environment.
It would be nice if we could “push” a version from one environment to the other, so that the Orchestrators are linked.
Even better would be the ability to select a version to move, including some assets. Right now, assets have to be created manually in each system.
Is there such an automated deployment feature, or will there be one in future versions? (If not, can I add this wish to the backlog? :))
Moved to Ideas, so your wish is a little closer to coming true.
Something that would help with migration is planned (18.2 at the earliest), but until then, a question:
How are your 3 environments separated? Different Orchestrator instances, different tenants on the same instance, or organizational units?
@Ovi: Thanks for moving my item to “ideas”.
@Cosin: The 3 environments are installed on 3 different servers/machines.
Just wanted to revive this thread to ask whether this feature is still planned for a future release? @Cosin mentioned 18.2 as earliest possible, but as far as I can tell this feature has not been introduced yet. On another thread, some users mention using SQL to insert exported Assets into a second Orchestrator database directly, but we’d really like an official way to migrate Processes and Assets from Dev/Testing to Production servers. I imagine this will be more achievable once Asset “containers” are introduced so we can migrate only the Assets pertaining to a specific Environment or Process.
I’m very glad you’re thinking about how to improve your process deployment procedures. However, I’m curious to know a few things about how you’re using assets.
For a process, do you use a config file that lists the assets used by that process?
For example, ReFramework keeps them in an Excel file alongside other environment settings specific to the process.
In Orchestrator, do you follow any naming conventions for assets to map them more easily to a specific process?
For example, you have process ABCD, and all the assets used in that process are named ABCD_Asset1, ABCD_Asset2.
If you’re using any other ways of mapping assets to a process, it would be really useful for us to know how you do it.
We have a “SystemConfig” Excel file in which we keep track of all assets and which process uses them. We do apply a naming convention as you say, but we also have shared assets, which don’t have a prefix. The Excel file helps us a lot: in it we can also record the specific values per environment (Acceptance/Production), and we keep track of which environment each asset has been created in. So if, e.g., process ABCD is moved to Production, we filter on that process (including general assets) and check which assets have not been created in Production yet.
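That filtering step can be sketched in a few lines. This is a minimal illustration in Python, assuming a hypothetical SystemConfig layout (one record per asset with the owning process, or “General” for shared assets, plus the set of environments the asset already exists in) — the actual spreadsheet layout is the poster’s own:

```python
# Sketch of the "which assets are still missing in Production" check,
# assuming a hypothetical SystemConfig layout: one dict per asset with
# the owning process ("General" for shared assets) and the environments
# the asset has already been created in.
def missing_in_production(rows, process):
    """Assets used by `process` (incl. general ones) not yet in Production."""
    return [r["Asset"]
            for r in rows
            if r["Process"] in (process, "General")
            and "Production" not in r["Environments"]]

# Example data, purely illustrative:
system_config = [
    {"Asset": "ABCD_Url",     "Process": "ABCD",    "Environments": {"Acceptance"}},
    {"Asset": "ABCD_Timeout", "Process": "ABCD",    "Environments": {"Acceptance", "Production"}},
    {"Asset": "SmtpServer",   "Process": "General", "Environments": {"Acceptance"}},
    {"Asset": "WXYZ_Url",     "Process": "WXYZ",    "Environments": {"Acceptance"}},
]

print(missing_in_production(system_config, "ABCD"))
# -> ['ABCD_Url', 'SmtpServer']
```

The same logic works directly against the Excel sheet if you read it with a spreadsheet library first.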
I can’t speak for my entire dev team, but we are not really using Assets that are specific to a single process. Variables that are specific to a process are kept in an Excel file stored alongside the other project files or next to the shared component being invoked.
The Assets we do use, which are mostly Credentials, start with a prefix that identifies the system they belong to, such as a certain web portal or application; for example, SAP_Credentials. And since we have many teams supporting different parts of the company, we add another prefix to identify which team uses the Asset; for example, TEAM1.SAP_Credentials.
However, it’s very difficult to keep that naming convention completely consistent, as some devs use a period instead of an underscore or vice versa.
So the short answer is: we use a prefix to identify the system that the Asset is associated with.
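For what it’s worth, the period/underscore drift can be tolerated when mapping assets back to teams and systems. A small, hypothetical parser sketch (the TEAM.SYSTEM_Suffix convention is from this thread; the parsing rules are my assumption):

```python
# Illustrative parser for the TEAM1.SAP_Credentials-style convention
# described above, tolerating both '.' and '_' as separators.
import re

def parse_asset_name(name):
    """Split e.g. 'TEAM1.SAP_Credentials' into (team, system, suffix).
    Team may be absent for older two-part names like 'SAP_Credentials'."""
    parts = re.split(r"[._]", name)
    if len(parts) == 3:
        return tuple(parts)               # (team, system, suffix)
    if len(parts) == 2:
        return (None,) + tuple(parts)     # no team prefix
    raise ValueError("unexpected asset name: " + name)

print(parse_asset_name("TEAM1.SAP_Credentials"))  # -> ('TEAM1', 'SAP', 'Credentials')
print(parse_asset_name("SAP_Credentials"))        # -> (None, 'SAP', 'Credentials')
```

Normalizing on read like this avoids having to rename every asset that was created with the “wrong” separator.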
Actually, @Bogdan_Popescu said they developed something to integrate with Azure DevOps / VSTS, so version control and multi-environment deployment could work through that. We are probably going to try that approach.
Thanks @ClaytonM, that would be great. Hopefully it doesn’t require a version upgrade; we are still on 2017.1/2018.1, aiming for Q2 next year for the 2018.3 upgrade.
Thank you guys for your feedback! It is really valuable for us!
Regarding the integration with Azure DevOps / TFS, we’ll release a public preview of the module very soon.
@Bogdan_Popescu Is there any news about this module?
It would certainly be good to have an export and import feature on the assets screen.
I, too, am looking forward to the asset export/import feature. We use tenants, and it would be nice to have an export feature where you are asked to choose which specific assets to export, so they can be imported into another tenant.
While a GUI would be nice, if this is something critical it can be done in fewer than 20 lines of PowerShell.
Using the APIs, you can get all the assets, export them to CSV, delete the unrequired ones, and bulk-upload the rest to any number of tenants/organizational units/other Orchestrator web apps/IIS servers.
The bulk upload takes less than a second for more than 100 assets.
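To make the flow concrete, here is a sketch of the same get/export/filter/upload steps, written in Python rather than PowerShell. The endpoint paths (`/api/Account/Authenticate`, `/odata/Assets`) match the on-premises Orchestrator API of that era, but field names and payload shapes vary between versions (e.g. typed fields like `StringValue` may be required instead of `Value`), so verify everything against your version’s API documentation before relying on it:

```python
# Hedged sketch of asset migration via the Orchestrator API.
# Endpoints and field names are assumptions to verify against your version.
import csv
import json
import urllib.request

def authenticate(base_url, tenant, user, password):
    """POST /api/Account/Authenticate and return the bearer token."""
    body = json.dumps({"tenancyName": tenant,
                       "usernameOrEmailAddress": user,
                       "password": password}).encode()
    req = urllib.request.Request(base_url + "/api/Account/Authenticate",
                                 data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

def get_assets(base_url, token):
    """GET /odata/Assets and return the list of asset records."""
    req = urllib.request.Request(base_url + "/odata/Assets",
                                 headers={"Authorization": "Bearer " + token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

def export_csv(assets, path):
    """Dump name/type/value of each asset to a CSV file for review."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Name", "ValueType", "Value"])
        writer.writeheader()
        for a in assets:
            writer.writerow({k: a.get(k, "") for k in ("Name", "ValueType", "Value")})

def keep_assets(assets, wanted_names):
    """Drop the unrequired assets before the bulk upload."""
    wanted = set(wanted_names)
    return [a for a in assets if a["Name"] in wanted]

def upload_assets(base_url, token, assets):
    """POST each asset to /odata/Assets on the target tenant/instance."""
    for a in assets:
        body = json.dumps({"Name": a["Name"],
                           "ValueType": a["ValueType"],
                           "Value": a["Value"]}).encode()
        req = urllib.request.Request(base_url + "/odata/Assets",
                                     data=body,
                                     headers={"Content-Type": "application/json",
                                              "Authorization": "Bearer " + token})
        urllib.request.urlopen(req).close()
```

Run `get_assets` against the source tenant, `export_csv` for the audit trail, `keep_assets` to drop what you don’t want migrated, then `upload_assets` with a token for the target tenant. Note that Credential-type assets do not return their secret values through the API, so those still need manual handling.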
If you have shell access.
In our case, that would not be possible without us going through our IT helpdesk.
We are looking into building a deployment pipeline to connect our Orchestrator instances and allow automated PROD-promotion after approval, e.g. by using Jenkins and a webhook from Orchestrator to trigger a workflow. However, we don’t want to create customized pipelines if this is set to become out-of-the-box functionality.
Is the above feature planned to be included, if yes, what is the expected timeline?
I have used a Jenkins pipeline script that can do the deployment, but not assets: GitHub - skirankumars31/Jenkins-Deployment-Script
It would be nice to have an export configuration that exports everything related to a process, so that it can be imported into another environment.