@jeevith - This particular one is pretty simple and just one variation that we use for this specific use case.
The event log from Orchestrator is the Robot Log transformed into JSON, more or less the same payload that gets sent to the database, except we also include the rawMessage as a child attribute.
Query for whichever properties you want to reduce the result set (e.g. processName, message="*execution ended"), then eval rawMessage.totalExecutionTimeInSeconds x 1000 (the Timeline viz expects milliseconds) and pipe it into a table of _time, robotName, and duration.
This is then visualized as a Timeline (Gantt charts don’t exist out of the box).
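For reference, the simple variant ends up roughly like this (a sketch, using the same index and field names as the query further down; the Timeline viz reads _time, a label field, and a duration in milliseconds):

index="uipath" processName="*" message="*execution ended"
| eval duration = 'rawMessage.totalExecutionTimeInSeconds' * 1000
| table _time robotName duration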
^^ Now, with the above, depending on how well you filter your result set and how many jobs you are looking at, you could end up with a TON of elements in the chart for the browser to draw, causing it to slow to a halt.
So I’m playing around with transaction to help bucket processes that have a lot of jobs in close succession together, so they display as a single element in the chart. Because transaction groups the events together, you have a bit more work to get your start/end times and duration. The example below is per Process rather than per Robot; a per-Robot variation is sketched after it.
index="uipath" processName="*" message="*execution ended"
| rename rawMessage.totalExecutionTimeInSeconds as ExecutionTimeInSeconds
| eval epoch_time=strptime(_time,"%s")
| eval newts=epoch_time-ExecutionTimeInSeconds
| eval ExecutionTimeInSeconds = (ExecutionTimeInSeconds * 1000)
| eval startTime = strftime(newts, "%F %T.%3N")
| transaction processName maxspan=20m mvlist=true
| eval endTime = strftime((_time + duration), "%F %T.%3N")
| eval initialTime = mvindex(mvsort(startTime), 0)
| eval etis = mvindex(ExecutionTimeInSeconds, mvfind(startTime, mvindex(mvsort(startTime), 0)))
| eval etisdur = (strptime(endTime, "%Y-%m-%d %H:%M:%S.%3N") - strptime(initialTime, "%Y-%m-%d %H:%M:%S.%3N")) * 1000
| eval processName = mvindex(processName, 0)
| table initialTime processName etisdur
| sort processName
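For the per-Robot view, the same approach should work with robotName swapped in as the transaction key (an untested sketch; _time is already epoch seconds, so I skip the strptime step here and leave out the etis eval since the table doesn’t use it):

index="uipath" robotName="*" message="*execution ended"
| rename rawMessage.totalExecutionTimeInSeconds as ExecutionTimeInSeconds
| eval newts = _time - ExecutionTimeInSeconds
| eval startTime = strftime(newts, "%F %T.%3N")
| transaction robotName maxspan=20m mvlist=true
| eval endTime = strftime((_time + duration), "%F %T.%3N")
| eval initialTime = mvindex(mvsort(startTime), 0)
| eval etisdur = (strptime(endTime, "%Y-%m-%d %H:%M:%S.%3N") - strptime(initialTime, "%Y-%m-%d %H:%M:%S.%3N")) * 1000
| eval robotName = mvindex(robotName, 0)
| table initialTime robotName etisdur
| sort robotName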
Without the transaction span
Note how the elements are darker; that bottom one can have 10-15 jobs overlapping within the same 20-120 second time frame, and Splunk would be creating a visual element for each record.
With the transaction span
Still not ideal, but you get mostly the same level of detail, although some very small non-running periods can be masked by the 20 minute span of the transaction, and it’s a lot easier on the browser’s rendering.
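If those masked gaps matter, transaction also takes a maxpause option, so something like the below might split buckets on idle gaps instead of only capping the total span (the 2m is just a guess to tune, I haven’t tried it yet):

| transaction processName maxspan=20m maxpause=2m mvlist=true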