I'm having an issue similar to ones others have raised, but I haven't seen a solution that applies to my case.
We have 3 processes deployed to our production Orchestrator.
These are called by daily time-based triggers, queued at specific times of day, a minute apart, so that the robot can process all of them one by one, in order.
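For context, the staggering looks roughly like this (illustrative Quartz cron expressions as used in the trigger's advanced mode, assuming a 09:00 start; our actual times differ):

```
0 0 9 ? * *   # Process A at 09:00 daily
0 1 9 ? * *   # Process B at 09:01 daily
0 2 9 ? * *   # Process C at 09:02 daily
```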
We are seeing intermittent failures: 2 of the jobs randomly don't run at all, and no job is even registered when the trigger time is reached. I would understand if they never ran, but I can see they already ran this week.
The odd thing is that once the trigger time passes, the trigger reports it will run again in a day, even though no job ever happened, and there is no failure listed under Jobs (or I don't know where to look for one).
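To audit this, I've been pulling the job history through the OData API so that missing runs show up as gaps. A minimal sketch, assuming a bearer token and an Orchestrator base URL (both hypothetical placeholders here; a folder-scoped setup may also need the X-UIPATH-OrganizationUnitId header):

```python
# Sketch: list yesterday's jobs via the Orchestrator OData API.
# ORCH_URL and TOKEN are placeholders for your own environment.
import requests
from datetime import datetime, timedelta, timezone

ORCH_URL = "https://your-orchestrator/odata/Jobs"  # hypothetical URL
TOKEN = "..."  # bearer token from your usual auth flow

since = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
params = {
    "$filter": f"CreationTime gt {since}",
    "$select": "ReleaseName,State,CreationTime",
    "$orderby": "CreationTime",
}
resp = requests.get(ORCH_URL, params=params,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
for job in resp.json()["value"]:
    print(job["CreationTime"], job["ReleaseName"], job["State"])
```

The triggers that misfire simply leave no row at all, which matches what I see in the Jobs page.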
I have tried recreating these triggers from scratch to rule out an error in that process, but there is no change: whether they run or not is still random.
The very first set of jobs that I set up runs every time without fail.
So it seems to be related to execution times, and I think this documented behaviour is what is biting me: "If the same process is scheduled on the same Robot multiple times and their execution time overlaps, only one process is queued".
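If I read that right, the behaviour would look something like this (a toy simulation of my understanding, not UiPath's actual code; process and robot names are made up):

```python
# Toy model of the documented behaviour: when a trigger fires for a
# process/robot pair that already has a job pending or running, the new
# request is dropped rather than queued.
def fire_triggers(triggers, queued):
    """queued holds (process, robot) pairs already pending or running."""
    for process, robot in triggers:
        if (process, robot) in queued:
            # Overlap: the request is silently discarded, so no job
            # (and no visible failure) is ever recorded for this trigger.
            print(f"skipped: {process} on {robot} (already queued)")
        else:
            queued.add((process, robot))
            print(f"queued:  {process} on {robot}")

running = {("ProcessA", "Robot1")}  # 09:00 run still executing at 09:01
fire_triggers([("ProcessA", "Robot1"), ("ProcessB", "Robot1")], running)
# skipped: ProcessA on Robot1 (already queued)
# queued:  ProcessB on Robot1
```

That would explain why nothing appears in Jobs and why the trigger still cheerfully reports its next run.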
I guess the way around this might be to create a new process for each group of executions.
So instead of having 3 processes I could perhaps create 6, one set of 3 for each set of arguments.