Delays with real-time logging from Orchestrator to Splunk (NLog)

Greetings all - We have configured our Orchestrator environment to send robot logs to Splunk via NLog, using the following target configuration in our UiPath.Orchestrator.dll.config file.

The problem we are seeing is that sometimes there is a considerable delay (> 1 hour) between when a job event occurs and when the event data shows up in Splunk. This seems to happen mostly during very busy periods when there is a high volume of logging activity.

Is anyone else out there using the Splunk NLog target and able to share any information? We are thinking we may be able to tune some of the parameters (e.g. batch size, buffering, connections); a rough tuning sketch follows the configuration below.

<target name="Splunk" xsi:type="BufferingWrapper" flushTimeout="5000">
  <target
    xsi:type="SplunkHttpEventCollector"
    serverUrl="https://http-inputs.splunkcloud.com/services/collector/raw"
    token="token value***"
    channel="channel-guid"
    source="${logger}"
    sourceType="_json"
    index="test"
    retriesOnError="0"
    batchSizeBytes="2048"
    batchSizeCount="20"
    includeEventProperties="true"
    includePositionalParameters="false"
    includeMdlc="false"
    maxConnectionsPerServer="10"
    ignoreSslErrors="true"
    useProxy="false"
    proxyUrl="http://proxy:8888"
    proxyUser="username"
    proxyPassword="secret" />
</target>
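
To make the tuning question concrete, here is a rough sketch of the direction we are considering: wrapping the Splunk target in an AsyncWrapper (on top of the BufferingWrapper) so logging calls never block the Orchestrator request, and raising the batch sizes so fewer, larger HTTP posts are made. The specific numbers below (queueLimit, bufferSize, batchSizeBytes, batchSizeCount) are placeholders we have not validated, not recommendations:

<target name="Splunk" xsi:type="AsyncWrapper" queueLimit="10000" overflowAction="Block">
  <target xsi:type="BufferingWrapper" bufferSize="200" flushTimeout="5000" overflowAction="Flush">
    <target
      xsi:type="SplunkHttpEventCollector"
      serverUrl="https://http-inputs.splunkcloud.com/services/collector/raw"
      token="token value***"
      channel="channel-guid"
      source="${logger}"
      sourceType="_json"
      index="test"
      retriesOnError="0"
      batchSizeBytes="65536"
      batchSizeCount="100"
      includeEventProperties="true"
      maxConnectionsPerServer="10"
      ignoreSslErrors="true"
      useProxy="false" />
  </target>
</target>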

I’m not 100% sure, but based on similar things we’ve experienced, I think what happens is that the log entries are written to local log files on disk by the Robot.exe process as the job runs, and a background process then reads those files and pushes them to Orchestrator (and, in your case, to Splunk).

Hi @chris_tsurumaki,

We use Splunk for our troubleshooting and reporting.

From what I read in your post, I would normally suspect a forwarder syncing on a delayed schedule. But in your case, since you are using NLog to push data directly to the index, there may be consolidation errors and delays on the HTTP Event Collector side.

We solved this by writing a dedicated NLog target that logs to a text file; the text file is then tailed by the Splunk forwarder, which incrementally adds the key-value pairs to your Splunk index.

All of our robot VDIs have the Splunk forwarder installed by default, which helps us a lot; we do not need to maintain it or check whether it is up and running.

This way we know that whatever is written to our Orchestrator logs is also written to the Splunk index via the dedicated text file. A sketch of the idea follows below.
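
To illustrate (the file path, layout fields and archive settings here are placeholders, not our exact production values): a dedicated File target writes one key=value line per event, and the universal forwarder on the machine monitors that folder and ships the lines to the index.

<targets>
  <target name="SplunkFile" xsi:type="File"
          fileName="C:\Logs\UiPath\robot-events.log"
          keepFileOpen="true"
          concurrentWrites="false"
          archiveAboveSize="10485760"
          maxArchiveFiles="20"
          layout="time=${longdate} level=${level} machine=${machinename} logger=${logger} message=${replace-newlines:${message}}" />
</targets>
<rules>
  <!-- Robot logs typically flow through the Robot.* logger in Orchestrator's NLog rules -->
  <logger name="Robot.*" minlevel="Info" writeTo="SplunkFile" final="false" />
</rules>

The forwarder side is then just a standard monitor stanza on that folder in inputs.conf.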

Another thing to note is that Splunk admins do not like logs with newlines inside them, and some searches skip events that have a newline character in the Message field.
Unexpected errors in a UiPath process generate human-readable logs containing newlines, so remember to replace or remove newlines in your logs. Your Splunk admins will love you for it!
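
If you build the layout yourself, one way to do this is NLog's replace-newlines layout renderer; a minimal example (the pipe replacement character is just a suggestion, use whatever your Splunk admins prefer):

layout="${longdate} ${level} ${replace-newlines:replacement=|:${message}}"

This keeps each event on a single line even when the message itself contains multi-line error text.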

I have a walkthrough in this thread which might give you some more ideas: Logs - ElasticSearch - #2 by jeevith

Hope this helps.

I am tagging @codemonkee here, as he is another forum member with extensive knowledge of Splunk usage with UiPath.

Hi @jeevith - thank you very much for your helpful reply. As per your note, I think we are seeing the delays due to queueing and/or consolidation, since we are currently sending events directly from Orchestrator to the Splunk HTTP Event Collector URL.

Logging to a local text file and letting the forwarder ship it seems like a better way to do this and avoid the performance issues.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.