Global Logs of Jobs Older Than 30 Days

Hello Team,

How do I get the global logs of all the jobs that ran more than 30 days ago from an on-prem Orchestrator?

Thanks,

Select Folder > Automation > Logs

Unless you specifically look for logs by going through a particular machine, robot, or job, the above will display all logs by default.

Thanks for the reply. I need logs which are older than 30 days.

There’s an “All” option in the filter.

Hi

Welcome back to the UiPath forum.

You can get all the job logs older than 30 days in Orchestrator directly with this option.

Or, if you want to see only those jobs and logs and you have configured Elasticsearch with your Orchestrator, you can download the logs filtered by date.
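If Elasticsearch is configured, a date-range query is enough. A minimal sketch, assuming the default monthly index pattern (<tenant>-yyyy.MM); the host, index name, and timeStamp field are assumptions, so verify against your own mapping:

```powershell
# Sketch only: query Elasticsearch for robot logs in a fixed date window.
# Host, index, and the timeStamp field name are assumptions - check your mapping.
$query = @{
    query = @{ range = @{ timeStamp = @{ gte = "2024-01-01"; lt = "2024-02-01" } } }
    size  = 1000
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -ContentType "application/json" -Body $query `
    -Uri "http://elastic.example.local:9200/default-2024.01/_search"
```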

Cheers @Vish2148

I got the logs, but what I am trying to do is get logs between particular dates that are more than 30 days old. With “All” selected, it returns logs from the very first run onwards, which is more than a million rows. Sadly, as it is an on-prem Orchestrator, we can’t work on the SQL database or use Elasticsearch.

Anyway, thanks for the help @codemonkee

I’m assuming that if it’s an on-prem Orchestrator and you’re unable to work on the SQL database, someone else manages the infrastructure? I would probably talk with them to see if they can dump a subset of the data for you from the database (assuming the logs are not routed anywhere else as well).
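Even a simple extract would cover this. A rough sketch of what to ask them for, assuming the default dbo.Logs table and the SqlServer PowerShell module; server, database, and dates are placeholders:

```powershell
# Sketch only: pull a date-bounded subset of the Orchestrator logs table to CSV.
# ServerInstance, Database, and the date window are placeholders.
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "ORCH-SQL01" -Database "UiPath" -Query @"
SELECT TimeStamp, Level, RobotName, ProcessName, Message
FROM dbo.Logs
WHERE TimeStamp >= '2024-01-01' AND TimeStamp < '2024-02-01'
"@ | Export-Csv -Path .\LogsSubset.csv -NoTypeInformation
```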

From an infrastructure perspective, I would also see if they can redirect the logs (using NLog in Orchestrator) to another repository: Elasticsearch, UiPath Insights, Splunk, Snowflake, another DB, etc. This would free the data up without putting additional load on the Orchestrator DB, and it would also allow general maintenance to be performed by cleaning up older records in Orchestrator.

It’s not ideal, but from the Logs UI you can export the data as a CSV. It doesn’t give you all the fields nor the RawMessage metadata, but it is something. If you can filter the logs further (if you know the machine, process, or identity), that could help trim it down for you.
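Another route that needs neither DB access nor Elasticsearch is pulling the logs over the OData API with a date filter. A minimal sketch, assuming local (non-AD) credentials; the URL, tenant, credentials, folder ID, and date window are all placeholders:

```powershell
# Sketch only: authenticate against an on-prem Orchestrator, then page through
# RobotLogs for a fixed date window and export to CSV.
$orch = "https://orchestrator.example.local"
$cred = @{
    tenancyName            = "Default"
    usernameOrEmailAddress = "admin"
    password               = "****"
} | ConvertTo-Json

# Capture the bearer token from the on-prem authentication endpoint
$token = (Invoke-RestMethod -Method Post -Uri "$orch/api/account/authentication" `
    -ContentType "application/json" -Body $cred).result

$headers = @{
    Authorization                 = "Bearer $token"
    "X-UIPATH-OrganizationUnitId" = "1"   # folder ID; required for folder-scoped logs
}

# Page through the logs 1000 at a time (the API caps $top at 1000)
$filter = "TimeStamp ge 2024-01-01T00:00:00Z and TimeStamp lt 2024-02-01T00:00:00Z"
$skip = 0
$all  = @()
do {
    $page = Invoke-RestMethod -Headers $headers -Uri `
        "$orch/odata/RobotLogs?`$filter=$filter&`$orderby=TimeStamp&`$top=1000&`$skip=$skip"
    $all += $page.value
    $skip += 1000
} while ($page.value.Count -eq 1000)

$all | Select-Object TimeStamp, Level, RobotName, ProcessName, Message |
    Export-Csv -Path .\RobotLogs.csv -NoTypeInformation
```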

Our DB used to have 8M records in the logs table and we didn’t notice any performance issues, but we’ve since introduced scheduled jobs to maintain only the last 15-30 days, depending on the table. Everything is sent to Splunk, where we keep it for a few months and generate any summary data we need for further retention.

Unload_RobotJobLogsByDate_ps1.txt (37.3 KB)
FolderRobotNames_csv.txt (188 Bytes)

Create two sub-directories [Input and Output]. Place ‘FolderRobotNames.csv’ in the Input folder and edit it [add folder, robot {leave empty for all robots}, and date range {leave empty for the previous day}]

Edit the ps1 - add tenant, local profile/password, and on-prem URL

Run
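For example, assuming you rename the attached .txt back to .ps1, run it from the directory that contains the script and the Input/Output folders:

```powershell
# Bypass the execution policy for this one invocation only
powershell.exe -ExecutionPolicy Bypass -File .\Unload_RobotJobLogsByDate.ps1
```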

Note: Known issue - the script does not extract logs for the same bot across different days [just pick the last date]