Orchestrator Logs - too large, potentially causing errors

Hi all,

I accidentally put a “Log Message” inside a For Each loop rather than outside it, so I created over 10k pages of logs instead of the usual 25, and now I can’t open any job logs (even though I’ve since corrected it, and that level of logging only ran once). I’ve never had an issue opening log files in Orchestrator before, so I can only assume it’s related - any suggestions?
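For anyone curious about the scale of the difference, here is a plain-Python sketch (item count is hypothetical; UiPath's Log Message activity isn't Python, this only illustrates the volume):

```python
# Sketch of why a Log Message inside a For Each explodes log volume.
# The item count is a made-up example, not taken from the thread.
items = range(500_000)  # transaction items processed by the loop

logs_inside = 0
for _ in items:
    logs_inside += 1      # logging inside the loop: one line per iteration

logs_outside = 1          # logging after the loop: a single summary line

print(logs_inside, logs_outside)  # 500000 vs 1
```

Same workflow, same data; the only difference is where the single Log Message activity sits relative to the loop.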

Thanks,

Alex

They’re opening now - no idea why, nothing changed in the meantime, but I’m no longer receiving errors in Orchestrator or being shown blank logs. I guess time really does heal all?

Yeah, not sure, but my guess would be that the robot machine where the Execution Logs are stored ran out of hard drive space. Depending on the drive partitioning, this can be a problem. But you can also change the default location for the Execution Logs in the NLog.config file, I believe.
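For reference, redirecting the file target in NLog.config looks roughly like the fragment below. This is a sketch, not a drop-in config: the exact target and logger names (`WorkflowLogFiles`, `WorkflowLogging`) and the path shown here are assumptions - check the NLog.config shipped with your Robot version before editing, and keep a backup.

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd">
  <targets>
    <!-- Hypothetical example: point execution logs at a drive with more free space -->
    <target type="File" name="WorkflowLogFiles"
            fileName="D:\UiPathLogs\${shortdate}_Execution.log"
            layout="${time} ${level} ${message}" />
  </targets>
  <rules>
    <logger name="WorkflowLogging" writeTo="WorkflowLogFiles" final="true" />
  </rules>
</nlog>
```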

If this is what happened, then maybe it got cleaned up and fixed itself. :man_shrugging:


I doubt it was hard drive space with only that amount. 10k pages is at most 500k log messages, which would be a couple of hundred MB and would throw a ton of local system errors while not affecting Orchestrator at all.
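A quick back-of-envelope check on that estimate (both per-page and per-message figures are assumptions, not numbers from the thread):

```python
# Rough size estimate for 10k pages of logs.
# messages_per_page and bytes_per_message are assumed values for illustration.
pages = 10_000
messages_per_page = 50        # assumed Orchestrator page size
bytes_per_message = 400       # assumed size of a rendered log line incl. metadata

messages = pages * messages_per_page
size_mb = messages * bytes_per_message / 1_000_000

print(messages, round(size_mb))  # 500000 messages, ~200 MB
```

Even doubling either assumption keeps the total well under 1 GB, which supports the point that disk space alone is an unlikely culprit.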

I’d probably guess a local log buffer choke, which would lead to sustained strain on the Logs service on the Orchestrator side.
Second guess is log read query timeouts due to too much to fetch, even with max filtering.
Third would be DB indexing needing to catch up, adding strain and leading to timeouts. This is supported by other jobs’ logs not opening either.

A local hard drive space issue is possible, but it would probably manifest itself much more brutally, and local logs are never cleaned up automatically, so it shouldn’t have fixed itself.


True, true. I just remember there was a time when, for whatever reason, the logs were taking up GBs rather than MBs. Things work much better now, so it shouldn’t be a problem. However, if the disk space is being taken up by other locations like the Temp folder, it could potentially get cleaned up just enough for the robots to generate data.

I remember needing to explain to our local IT that each user profile takes anywhere from 200 MB to 1 GB just to create, so with 10 user accounts you can get to 10 GB pretty quickly. And if your drive is partitioned so that the C drive has little free space, it becomes an issue. I’m just speaking from my own experience with our own Windows Server configuration, so it obviously depends on how well the VM servers are set up.

Your suggestion is probably more likely though, and I agree.


I’ll observe this closely in future, and might even stress test it again by putting the Log Message back inside the For Each loop to recreate the issue.

I would go with @andrzej.kniola’s second guess, but it wasn’t only the bloated job’s logs that wouldn’t open - all subsequent job logs after I’d corrected the issue (back down to maybe 15 pages of logs) wouldn’t open either, which makes me think it was a more generalised problem than a timeout?

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.