It is a nice idea, but currently you can already trigger a robot with a robot. For example, you can create an asset (Bool type) in Orchestrator and set its value to “true” at the end of a successful Robot 1 run. Robot 2 will then run at its scheduled time only if that value is true; otherwise it will simply stop.
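To make the workaround concrete, here is a minimal sketch of the guard logic Robot 2 would run. The in-memory dictionary and the function names are hypothetical stand-ins for the Orchestrator asset you would actually read and write from the workflows; this is an illustration of the pattern, not UiPath code.

```python
# Hypothetical stand-in for an Orchestrator Bool asset shared by both robots.
assets = {"Robot1Succeeded": False}

def robot1():
    # ... Robot 1's actual work would go here ...
    # Last step of a successful run: raise the flag for Robot 2.
    assets["Robot1Succeeded"] = True

def robot2():
    # First step of Robot 2's scheduled run: check the flag.
    if not assets["Robot1Succeeded"]:
        return "stopped"  # Robot 1 has not finished successfully, so just stop
    # Reset the flag so the next scheduled run waits for a fresh Robot 1 success.
    assets["Robot1Succeeded"] = False
    # ... Robot 2's actual work would go here ...
    return "ran"

print(robot2())  # fires before Robot 1 has run, so it stops
robot1()
print(robot2())  # now the flag is set, so it runs
```

One design point worth noting: Robot 2 resets the flag itself, so each Robot 2 run consumes exactly one successful Robot 1 run rather than re-triggering on a stale value.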
I don’t feel that answer is convenient or complete.
I’m currently using pipelined jobs with lots of queues, and adding an asset means I need to rewrite the jobs or add a controller (which needs to be scheduled and may also need logic and a lot of arguments).
The feature proposed above would be a must-have for me when dealing with pipelined transactional jobs, because I could implement each sub-process separately from the global one, while keeping it easily reusable when the logic changes.
It would also give Orchestrator more control, with a global view, more easily. Doing it as in the example above, all with robots, adds a lot of processes to the list, demands more attention to changes and dependencies, and requires knowing the full pipeline implementation, arguments, versions, etc. in each process… which does not really scale in terms of process implementation (and maintenance).
Thanks for the consideration :).
PS: I know some other vendors offer this in their ‘orchestrator’, and I really like this feature :)!
I can see some logic in allowing something like this in Orchestrator, because theoretically you could have reusable processes and chain them up without modifying any of them to get some outcome (we do have some generic ones ourselves, like customer comms).