Splunk connection with UiPath

Hello Everyone,

I have to connect Orchestrator with Splunk.
What configuration do I need to do for that?
Can anyone share material or docs related to that integration, with step-by-step configuration steps?

Shweta B


Hi @shweta_B

Please see this post:

As well as this topic:

I hope these are helpful.

Hi @shweta_B,

Thanks for making a new post!

There are a few options for forwarding your Orchestrator logs over to Splunk. It would help to have an idea of what your infrastructure looks like or what your desired state is, along with what you’ve attempted so far and any specifics you are having trouble with.

  1. As pointed out in the links posted by @loginerror, you can, of course, use the Splunk Universal Forwarder as a simple solution.

    Those posts don’t go into any meaningful details, but you could configure the Splunk agent to monitor the Robot and Orchestrator, both for the File system logs and Windows Event logs.

  2. NLog - UiPath relies on NLog for both Robots and Orchestrator, and it is configured in Web.config in the webroot. This is a good place to centrally configure where your logs go and in what format. NLog is responsible for routing your logging events to where they need to go, whether that’s the SQL database, Windows Event Logs, Elasticsearch, the file system, Splunk, or many other NLog targets. More details can be found in the Logging Configuration documentation.

    In 2018.3.1 (the version we are currently running), the default logging configuration looks like:

      <logger name="BusinessException.*" minlevel="Info" writeTo="businessExceptionEventLog" final="true" />
      <logger name="Robot.*" final="true" writeTo="database" />
      <logger name="Quartz.*" minlevel="Info" writeTo="eventLogQuartz" final="true" />
      <logger name="*" minlevel="Info" writeTo="eventLog" />

    Where BusinessException, Quartz, and * are directed to the Event Viewer, while Robot.* is written to the database.
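As an illustration, you could also define an additional target in the same Web.config and route Robot logs to it for an agent to pick up. The target name, path, and layout below are assumptions for the sketch, not UiPath defaults:

```xml
<!-- Hypothetical file target: name, path, and layout are examples only -->
<target xsi:type="File" name="robotJsonFile"
        fileName="C:\logs\orchestrator\robot-logs.json">
  <layout xsi:type="JsonLayout">
    <attribute name="time" layout="${longdate}" />
    <attribute name="level" layout="${level}" />
    <attribute name="message" layout="${message}" />
  </layout>
</target>

<!-- Route Robot events to both the database and the file -->
<logger name="Robot.*" writeTo="database,robotJsonFile" final="true" />
```

A forwarder or CloudWatch agent can then tail that file.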

As you originally asked on another thread related to AWS, I’ll mention a few things specific to that as well.

  1. AWS Services - CloudWatch, Kinesis Data Firehose, Lambda. By configuring these services, you can have CloudWatch collect your logs (similar to the Splunk Universal Forwarder) and use Firehose to direct the logs through a Lambda, letting Firehose worry about forwarding the data over to Splunk, including retries and all that jazz. The Lambda would be responsible for massaging your data into the format that you want before it’s sent over to Splunk.

At a broad level, those are the different avenues that I’ve investigated. Depending on your needs, familiarity, network restrictions, company policies, etc., you may need to pick one or a combination thereof.

For example

  • If you wanted to direct the logs from Orchestrator as a central location, but not worry about configuring additional agents, there are Splunk HTTP Event Collector (HEC) target plugins such as NLog.Targets.Splunk and a few others out there. (Note that I have not used this NLog plugin before, but its development does appear to be active.)

  • For our setup, we use a combination of #2 (NLog) and the AWS services approach for a few reasons, but mainly for common configuration with other systems in our company.

    We send our logs Robot > Orchestrator; NLog is configured to direct to SQL and a file (JSON format); CloudWatch monitors the logs (file and Event Viewer) and is in turn subscribed to Firehose. Firehose sends the logs over to a Lambda (Node.js), which transforms the events into a format accepted by a Splunk HEC and sends them back to Firehose, which then forwards them to Splunk.

    As mentioned above, there are different NLog plugins for Splunk that would simplify this, but I haven’t tested them; and by using Firehose, if we ever decide to move away from Splunk, it would be a simple adjustment at one point in the chain for the various systems in our network.

    +-------+    +------------+
    | Robot +--->+Orchestrator|
    +-------+    +-----+------+
                       |NLog
              +--------+--------+
              |                 |
              v                 v
        +----------+        +------+      +----------+
        |SQL Server|        | FILE +----->+CloudWatch|
        |   RDS    |        +------+      +----+-----+
        +----------+                           |
                                               v
                  +--------+              +----+----+      +------+
                  | Lambda +<------------>+Firehose +----->+Splunk|
                  +--------+              +---------+      | HEC  |
                                                           +------+
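If you go the NLog.Targets.Splunk plugin route mentioned above, the wiring in Web.config would look roughly like the sketch below. The xsi:type and attribute names are based on that plugin’s README, so verify them against the version you install; the server URL and token are placeholders:

```xml
<!-- Sketch only - confirm attribute names against your plugin version -->
<extensions>
  <add assembly="NLog.Targets.Splunk" />
</extensions>

<targets>
  <target xsi:type="SplunkHttpEventCollector" name="splunk"
          serverUrl="https://splunk.example.com:8088"
          token="YOUR-HEC-TOKEN" />
</targets>

<rules>
  <logger name="Robot.*" writeTo="splunk" final="true" />
</rules>
```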

Hopefully, these ideas will get you started.



Based on the second method (the NLog method), there seem to be only two options, which are sending the log to the database and to robotElasticBuffer.
If I would like to send the log to Splunk, can the NLog method still be used?

Please review the links provided; specifically, the NLog Targets page will guide you to the available plugins that support Splunk.

Basically we have two ways to send the logs; please correct me if I’m wrong.
Option 1
Install the Splunk Universal Forwarder agent on the Orchestrator and Robot machines.
Since the Orchestrator and Robot logs are written to the Windows Event Log, we can use the Splunk Universal Forwarder agent to push the logs to Splunk.
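For reference, the Universal Forwarder’s inputs.conf could be set up along these lines. The monitor path and sourcetypes are assumptions for the sketch; point them at wherever your Robot/Orchestrator logs actually land:

```ini
[WinEventLog://Application]
disabled = 0
sourcetype = uipath:eventlog

[monitor://C:\logs\orchestrator]
disabled = 0
sourcetype = uipath:orchestrator
```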

Option 2
Robot and Orchestrator logs are sent to Orchestrator.
At the Orchestrator, configure NLog to send those logs to Splunk.

Hi @clementson - I mentioned it in my quoted comment above. Both the Robot (hosts) and Orchestrator make use of NLog, so you can configure it wherever it makes sense to do so for your purposes.

If you’d prefer to use the Splunk agent over NLog, you can; you could also configure NLog to redirect the log events somewhere else and then have a Splunk agent pick those up. (You’ll notice in reviewing your own NLog configuration that not everything goes into the Windows Event Logs; some is directed to a flat file, and some is sent to SQL Server or, if you have Elasticsearch set up, sent there.)

I suggest reading the NLog configuration for both Robots and Orchestrator and making an informed decision on how the flow works and what you would like to do with it.

@shweta_B were you able to come up with a solution? Curious if your use case includes the use of more than 1 RPA solution for monitoring. Cheers!