Manage/Review/Export All Job Triggers from Orchestrator

Hi Community :wave:

I want to know if there is an easy way to manage all triggers and scheduling clashes within Orchestrator.

We have limited licences, so I would like to be able to see and review all schedules to avoid over-allocation of licences at certain points in the day.

How can I achieve this or get an export?

Any suggestions are welcome.

Thank you.

Hi @Steven_McKeering ,

As far as I know, there is no option to find schedule clashes in Orchestrator. This would be a problem when a small robot fleet runs a lot of processes. Good idea on your part; we should have some mechanism to identify these clashes.

Anyway, I will check with my production support experts and get back to you. Thanks again for your question; it sparked the idea. Thanks.


There is no built-in way at the moment, but if you are eager to do so, you could write a Schedule Manager that manipulates the triggers using Orchestrator’s API.

Keep in mind that, depending on how you manage the initiation of all your jobs, triggers alone may not give you the full picture: there are time-based triggers as well as queue-based triggers, plus ad hoc jobs that may be kicked off by other users or by integrated systems/events.

UiPath Insights or other products like Splunk can help visualize usage so you can plan for capacity, etc.

The snapshot below shows a single process that runs ad hoc jobs invoked on demand by end users. About 10 robots are dedicated to the process, as it has an SLA of 40 seconds. As you can see, the robots are not all constantly busy, but we need to handle sudden spikes in concurrent requests.

The visualization is created from job start/end log events; the last log event indicates when the job completed and how long it ran.

If you strictly use time-based triggers with cron expressions, it should be straightforward to pick your favourite language and use a cron-parsing library to look for overlapping schedules, based on the maximum number of concurrent jobs you can handle across the board.
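
For example, here is a minimal sketch in Python, assuming the triggers have been exported as (name, cron expression, typical run duration) tuples. The trigger list and licence count below are made up, the third-party croniter library does the cron parsing, and plain 5-field cron stands in for Orchestrator’s Quartz-style expressions, so treat it as an illustration rather than a drop-in tool:

from datetime import datetime, timedelta
from croniter import croniter

# Hypothetical trigger export: (name, cron expression, typical run duration).
triggers = [
    ("Invoices_Dispatcher", "0 * * * *", timedelta(minutes=20)),
    ("Invoices_Performer", "30 * * * *", timedelta(minutes=45)),
    ("HR_Daily_Report", "0 9 * * 1-5", timedelta(minutes=90)),
]
MAX_CONCURRENT = 3  # e.g. the number of Unattended runtimes you own

def expand(name, cron, duration, start, end):
    """Yield one (start, end, name) interval per firing inside the window."""
    it = croniter(cron, start)
    while (fire := it.get_next(datetime)) < end:
        yield (fire, fire + duration, name)

day = datetime(2024, 1, 8)
intervals = [iv for t in triggers
             for iv in expand(*t, day, day + timedelta(days=1))]

# Sweep line: +1 at every job start, -1 at every end; ends sort before
# starts at the same instant, so back-to-back jobs don't count as a clash.
events = sorted((t, step, name)
                for s, e, name in intervals
                for t, step in ((s, 1), (e, -1)))
running = 0
for when, step, name in events:
    running += step
    if running > MAX_CONCURRENT:
        print(f"{when}: {running} concurrent jobs - clash around {name}")

Running the window over a representative week rather than a single day would also catch weekly triggers.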


Thanks, Tim, for your valuable suggestion. Very informative. I have scheduled a meeting today to discuss this idea with my team. Nice to hear your comments on this concept. Thanks once again.


It’s a fun problem, and it can be as simple or as complex as you want it to be, depending on your use cases and how much queueing theory you want to get into.

I’d love to hear what you come up with as a solution to your challenges.


Hi @codemonkee,

I am very interested to see pseudo code for that panel visualization.
Is it a timechart sliced on robot execution time? Could you share the Splunk query here?

Hi @Steven_McKeering ,

As mentioned earlier, I discussed this with my prod support team and learned that they have enabled alerts in Orchestrator and configured SMTP, so they receive an alert whenever a process schedule is missed due to a long-running job. Currently, this is how they identify and analyze the clashes. Thanks.


@jeevith - This particular one is pretty simple and just one variation that we use for this specific use case.

The event log from Orchestrator is the Robot Log, transformed into JSON: more or less the same payload that is sent to the database, but we also include the rawMessage as a child attribute.

Query for whichever properties you want in order to reduce the result set (e.g. Process Name, Message=*execution ended), then eval rawMessage.totalExecutionTimeInSeconds × 1000 (the Timeline visualization expects milliseconds), and pipe it into a table of _time, robotName, and duration.

This is then visualized as a Timeline (gantt charts don’t exist out of the box).

^^ Now, with the above, depending on how well you filter your result sets and how many jobs you are looking at… it could make for a TON of elements in the chart for the browser to draw, slowing it to a halt.

So I’m playing around with transaction to bucket processes that have a lot of jobs in close succession, displaying them as a single element in the chart. Because transaction groups the events together, you have a bit more work to do to get your start/end times and duration. The example below is per Process rather than per Robot.

index="uipath" processName="*" message="*execution ended"
| rename rawMessage.totalExecutionTimeInSeconds as ExecutionTimeInSeconds
| eval epoch_time=strptime(_time,"%s")
| eval newts=epoch_time-ExecutionTimeInSeconds
| eval ExecutionTimeInSeconds = (ExecutionTimeInSeconds * 1000)
| eval startTime = strftime(newts, "%F %T.%3N")
| transaction processName maxspan=20m mvlist=true
| eval endTime = strftime((_time + duration), "%F %T.%3N")
| eval initialTime = mvindex(mvsort(startTime), 0)
| eval etis = mvindex(ExecutionTimeInSeconds, mvfind(startTime, mvindex(mvsort(startTime), 0)))
| eval etisdur = (strptime(endTime, "%Y-%m-%d %H:%M:%S.%3N") - strptime(initialTime, "%Y-%m-%d %H:%M:%S.%3N")) * 1000
| eval processName = mvindex(processName, 0)
| table initialTime processName etisdur
| sort processName

Without the transaction span
Note how the elements are darker; that bottom one can have 10-15 jobs overlapping within the same 20-120 second time frame, and Splunk would create a visual element for each record.

With the transaction span
Still not ideal, but you get mostly the same level of detail, although some very small non-running periods can be masked by the 20-minute transaction span, and it’s a lot easier on the browser’s rendering.

Sorry @Steven_McKeering, I kind of hijacked your topic!

As for how you can get the data specifically, I would either look at the API for JobTriggers or go directly to the database table for Triggers.

https://cloud.uipath.com/<account_name>/<tenant_name>/orchestrator_/swagger/index.html#/JobTriggers
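
For the export itself, here is a minimal sketch in Python, assuming the classic /odata/ProcessSchedules OData endpoint and a bearer token you have already obtained; the exact entity and field names vary by Orchestrator version, so verify them against the swagger page above:

import csv
import requests

# Placeholders: keep the <account_name>/<tenant_name> segments from the
# swagger URL above, and supply your own token and folder id.
BASE = "https://cloud.uipath.com/<account_name>/<tenant_name>/orchestrator_"
HEADERS = {
    "Authorization": "Bearer <access_token>",
    "X-UIPATH-OrganizationUnitId": "<folder_id>",  # folder whose triggers you want
}

resp = requests.get(f"{BASE}/odata/ProcessSchedules", headers=HEADERS)
resp.raise_for_status()

# Dump the time triggers to CSV for review, or to feed the clash-checking
# sketch earlier in the thread. Field names are from the classic
# ProcessSchedules entity; adjust to whatever your swagger exposes.
with open("triggers.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["Name", "Process", "Cron", "Enabled"])
    for s in resp.json().get("value", []):
        out.writerow([s.get("Name"), s.get("ReleaseName"),
                      s.get("StartProcessCron"), s.get("Enabled")])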

Hope the above posts give you some ideas!


Post away @codemonkee - the conversation is very good! :popcorn:


