My organization is currently standing up our production environment, and we have around 30 unattended processes we are moving from Studio to Orchestrator. While testing, we noticed that we all have different Chrome settings, so we are preparing to standardize what our Chrome settings will be. Keep in mind that we have multiple robot accounts, so we have to mirror these Chrome settings in every single robot account.
The two major settings we are debating are "ask where to save each file before downloading" and "allow pop-ups".
For "ask where to download", one option is to download files automatically to a downloads folder and then move each file to its destination folder once the download finishes. The other option is to have a Save As dialog appear and enter the file location ourselves.
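For the first option, the "move it when it finishes" step is the tricky part, because Chrome writes an in-progress download as a `*.crdownload` temp file and only renames it when the download completes. This is a minimal sketch of that pattern in Python (the folder names and timeout are placeholders; in practice this logic would live in a workflow activity rather than a script):

```python
import shutil
import time
from pathlib import Path

def move_when_complete(downloads_dir, dest_dir, timeout=120, poll=0.5):
    """Wait until Chrome has no in-progress (*.crdownload) files left in
    downloads_dir, then move the newest finished file to dest_dir.
    Returns the destination path, or raises TimeoutError."""
    downloads = Path(downloads_dir)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        partials = list(downloads.glob("*.crdownload"))
        finished = [p for p in downloads.iterdir() if p.is_file()]
        if finished and not partials:
            # Newest file by modification time is the one just downloaded.
            newest = max(finished, key=lambda p: p.stat().st_mtime)
            target = dest / newest.name
            shutil.move(str(newest), str(target))
            return target
        time.sleep(poll)
    raise TimeoutError("download did not complete in time")
```

One caveat with polling the newest file: if two downloads land in the same folder at nearly the same time, you can grab the wrong one, which is exactly the parallel-processes concern raised below.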
For "allow pop-ups", one option is to block all pop-ups and then go into the settings and enter the specific websites that need pop-ups enabled. The other option is to allow all pop-ups, and a similar thing could be done for any specific websites that need them blocked.
I'm in the losing camp for both of these debates, but it appears we are going with automatically downloading files and blocking all pop-ups.
My concern with automatically downloading files is that two parallel processes running Chrome and downloading into the same folder could interfere with each other. That said, I'm not even sure yet whether running parallel Chrome sessions is possible, so I folded on this debate since the automatic approach appears to run faster.
My concern with blocking all pop-ups is that we will have to manually enter which websites need pop-ups enabled for every account. As we scale, we will have to add our list of websites to every new robot account's Chrome settings. Also, whenever a new process needs pop-ups enabled, we will have to log into every robot account and enable the website, which will get painful once we have 20 or 30 robot accounts. If we allowed all pop-ups instead, I think we could simply handle the unwanted pop-ups in code as we develop each process.
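One possible way out of the per-account maintenance problem: Chrome supports machine-level enterprise policies, which on Windows live under `HKLM\SOFTWARE\Policies\Google\Chrome` and apply to every user account on that machine, so the allowlist would only need to be deployed once per robot machine rather than once per robot account. A hedged sketch (the URLs are placeholders, and you would want to verify the exact policy names against the Chrome Enterprise policy list for your Chrome version):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"DefaultPopupsSetting"=dword:00000002

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\PopupsAllowedForUrls]
"1"="https://app.example.com"
"2"="https://portal.example.com"
```

A `.reg` file like this (or the equivalent Group Policy object) could be pushed by IT to all robot machines, which would also keep the settings identical across robots.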
One final note: all of our processes are built on the Robotic Enterprise Framework (REFramework), so we want all robots to have the same settings so that multiple robots can work a single queue at the same time.
Has anybody else run into these issues? If so, which route did you take, or did you come up with an alternative solution?