Greetings, we are experiencing a problem similar to this thread: Process "Stopping" for hours, using Orchestrator 2020.10.14.
Several jobs are scheduled on an hourly basis for an unattended robot. Most of the time everything works fine, but on occasion a job gets stuck at "Stopping" forever, blocking the robot's queue.
Of course it does; the point is to avoid a support engineer having to get access to the Orchestrator Jobs page and press Kill to release the robot from the stuck task.
The customer is planning to scale the deployment out from one robot to 5-10 in the coming period; having to manually kill jobs at random intervals isn't workable in that scenario.
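
As a stopgap we are considering a small watchdog script that does the Kill for us through the Orchestrator API instead of the dashboard. Below is a minimal sketch, assuming the standard on-prem authentication and Jobs OData endpoints; the URL, tenant name and account are placeholders, and your folder setup may require extra headers:

```python
import requests

ORCH_URL = "https://orchestrator.example.local"  # placeholder on-prem Orchestrator URL
TENANT = "Default"                               # placeholder tenant name
USER = "monitor-svc"                             # placeholder service account
PASSWORD = "***"                                 # pull from a secret store in practice

# Authenticate against the on-prem Orchestrator (2020.10) and get a bearer token.
auth = requests.post(
    f"{ORCH_URL}/api/Account/Authenticate",
    json={"tenancyName": TENANT, "usernameOrEmailAddress": USER, "password": PASSWORD},
)
auth.raise_for_status()
headers = {"Authorization": f"Bearer {auth.json()['result']}"}
# Note: with modern folders you may also need an X-UIPATH-OrganizationUnitId header
# pointing at the folder the jobs run in.

# Find jobs that are still in the 'Stopping' state.
jobs = requests.get(
    f"{ORCH_URL}/odata/Jobs",
    params={"$filter": "State eq 'Stopping'", "$select": "Id,ReleaseName,CreationTime"},
    headers=headers,
)
jobs.raise_for_status()
stuck_ids = [j["Id"] for j in jobs.json().get("value", [])]

# Force-kill anything that is stuck, same effect as pressing Kill in the dashboard.
if stuck_ids:
    kill = requests.post(
        f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StopJobs",
        json={"jobIds": stuck_ids, "strategy": "Kill"},
        headers=headers,
    )
    kill.raise_for_status()
    print(f"Killed {len(stuck_ids)} stuck job(s): {stuck_ids}")
else:
    print("No jobs stuck in 'Stopping'.")
```

In practice we would only kill jobs that have been in "Stopping" longer than some threshold and run the script on a schedule, but that still feels like papering over the underlying problem.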