Orchestrator robot logging to Splunk real-time

Greetings all - We have configured our Orchestrator environment to send robot logs to Splunk via NLog, using the following target configuration in our UiPath.Orchestrator.dll.config file. This is working; however, we have noticed that we intermittently lose some events from the robot logging.

Is anyone else out there using Splunk NLog targets, and do you have any information to share? We're thinking we may be able to tune some of the parameters (e.g. batch size, buffer, connections)?

<target name="Splunk" xsi:type="BufferingWrapper" flushTimeout="5000">
token="token value***"
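For reference, a fuller version of this target might look like the sketch below. The BufferingWrapper attributes (bufferSize, flushTimeout, overflowAction) are standard NLog; the inner target's attribute names are assumptions based on the NLog.Targets.Splunk HTTP Event Collector package and may differ in your package version, so treat this as illustrative rather than a drop-in config.

```xml
<!-- Sketch only: BufferingWrapper knobs are standard NLog; the inner
     SplunkHttpEventCollector attributes depend on your Splunk NLog
     target package version -->
<target name="Splunk" xsi:type="BufferingWrapper"
        bufferSize="200"
        flushTimeout="5000"
        overflowAction="Flush">
  <target xsi:type="SplunkHttpEventCollector"
          serverUrl="https://splunk.example.com:8088"
          token="token value***"
          retriesOnError="3" />
</target>
```

Raising bufferSize and using overflowAction="Flush" (rather than the default of discarding on overflow) are the usual first knobs to try when events are being dropped under load.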

Hi @chris_tsurumaki,

My first suggestion is to talk to your Splunk admin: Splunk index parsers do not like logs with newline characters, they despise them! There can be an internal parser rule in Splunk that renders the forwarder useless for logs containing a newline character. The logs you suspect are missing may well contain newlines. We have been there a few times, "head scratching"!

For example, in one of our process logs we had multiple keys, so the developer had used Key:Value + Environment.NewLine + Key:Value.

The logs were registered in Orchestrator (Orchestrator supports rendering newlines in log texts). The forwarder tried to send them to the Splunk index, but the Splunk parser had a rule that logs with newlines be ignored; we did not know this, as the Splunk admins work outside the RPA team. The SPL query returned nothing interesting!

Our resolution was to adopt an internal best practice to NEVER use Environment.NewLine again in any developer-written log messages or Write Line activities. We also ensure that the auto-generated error messages (application exceptions) from UiPath are stripped of newline characters before logging them.
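As a rough sketch of that stripping step (shown here in Python purely for illustration; in a UiPath workflow you would do the equivalent with .NET string methods such as String.Replace, and the " | " separator is just one choice of replacement):

```python
def strip_newlines(message: str) -> str:
    """Flatten a log message to a single line so a line-based Splunk
    parser treats it as one event instead of ignoring it."""
    # Normalize Windows (\r\n) and bare \r line endings first,
    # then collapse every newline into a single visible separator.
    normalized = message.replace("\r\n", "\n").replace("\r", "\n")
    return " | ".join(part for part in normalized.split("\n") if part)

print(strip_newlines("Key1:ValueA\r\nKey2:ValueB"))  # Key1:ValueA | Key2:ValueB
```

Running every message through a helper like this before it reaches the Log Message activity keeps the downstream index parser happy without losing the key/value content.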

That said, in my experience the most stable part of our entire RPA infrastructure setup is the Splunk Forwarder! So I would be very doubtful that it is what is causing this.

Other things which might help you and your team:

Example Config
You can refer to the way we have set up our NLog config here, along with our base search philosophy in Splunk.

Remember that the order of the <rules> and <logger> tags is the most important thing, as that is the sequence NLog uses.
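To illustrate why that ordering matters, here is a minimal sketch of an NLog rules block (the logger names and target names are placeholders, not our actual config). Rules are evaluated top to bottom, and final="true" stops further matching, so reordering these two entries changes which logs reach Splunk:

```xml
<rules>
  <!-- Evaluated top to bottom; final="true" stops matching here -->
  <logger name="Robot.*" writeTo="Splunk" final="true" />
  <!-- Everything else, Info and above, goes to the database target -->
  <logger name="*" minlevel="Info" writeTo="database" />
</rules>
```

If the catch-all `name="*"` rule were placed first with final="true", the Splunk rule below it would never fire.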

Importance of dedup
Another heads up is regarding using dedup in your query.
Your Splunk index will have both robot and Orchestrator logs as sources. To avoid duplicates, we can use either of these:

  1. dedup _time consecutive=true
  2. dedup _message consecutive=true

I never trust the robot source type; I advise you to stick to the dedup approach above or explicitly use only the Orchestrator logs. There is a reason why the above dedup approaches are better than "source=orchestrator" in your SPL query.
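For context, here is how that dedup slots into a full search. The index name below is a placeholder for whatever your environment uses; `_time` and `_raw` are standard Splunk fields, so this sketch avoids assuming any custom field extractions:

```spl
index=your_rpa_index
| dedup _time consecutive=true
| table _time, _raw
```

The `consecutive=true` option only removes duplicates that appear back to back in the results, which is the pattern you get when the same event arrives via both the robot and Orchestrator sources.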

I seem to have gotten excited seeing another team choosing Splunk!
Lastly, @codemonkee is another fellow forum member who has amazing insight into Splunk usage in RPA teams.

Hope this helps!

Hi Jeevith - thank you very much for the reply and insights. Will check into the newline issues. We have been using the Splunk NLog target with Orchestrator up until now, but I'm wondering if maybe we would be better off using the Universal Forwarder approach instead.

Will consult with our Splunk team here as well.

Thanks again!