I apologize if this has already been covered (I tried doing a quick search) or if this is posted in the wrong place (first time using the forums).
I’m trying to figure out what the purpose of using Elasticsearch/Kibana would be in Orchestrator (full disclosure, I’m extremely new to the Elastic Stack). Is Orchestrator literally just dumping the same logs that you can find in Orchestrator into Elasticsearch so that you can do analytics and stuff using Kibana? Does it add details that you can’t get in Orchestrator? Is it pushing more than just logs? With how new I am to the Elastic Stack, I was fighting an uphill battle just trying to find the data in the Elasticsearch instance to see if it had any value beyond what I can see in Orchestrator. I’m not necessarily looking for a super in-depth, time-consuming answer, but if anyone had any insight it would be super helpful.
Out of the box, if you connect Orchestrator to Elasticsearch/Kibana, it sends the Robot logs. If you are using an on-prem Orchestrator, you can review UiPath.Orchestrator.dll.config and the NLog targets defined there, and you also have the option of defining your own targets if you want other information sent to ES.
So to your question: yes, the same logs are being dumped to ES, though not all of them out of the box.
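To make that concrete, the relevant NLog wiring in UiPath.Orchestrator.dll.config looks roughly like the fragment below. This is a sketch, not a copy of any one version's config: attribute names vary between Orchestrator releases, and the `uri` and index pattern are placeholders for your own environment.

```xml
<!-- Sketch of the robot-log NLog wiring in UiPath.Orchestrator.dll.config.
     Treat uri and the index pattern as placeholders; exact attributes
     differ between Orchestrator versions. -->
<nlog>
  <targets>
    <!-- Buffered wrapper so log writes don't block on Elasticsearch -->
    <target name="robotElasticBuffer" xsi:type="BufferingWrapper" flushTimeout="5000">
      <target name="robotElastic" xsi:type="ElasticSearch"
              uri="http://localhost:9200"
              index="${event-properties:item=indexName}-${date:format=yyyy.MM}"
              includeAllProperties="true"
              layout="${message}" />
    </target>
  </targets>
  <rules>
    <!-- Route robot logs to the buffered Elasticsearch target -->
    <logger name="Robot.*" writeTo="robotElasticBuffer" final="true" />
  </rules>
</nlog>
```

Adding your own `<target>` plus a matching `<logger>` rule is the mechanism for sending other log streams to ES (or anywhere else NLog has a target for).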
The general idea, IMO, is to offload the reporting workload from the core database for performance; generally speaking, it is also recommended to keep your database size in check for similar performance reasons.
ES indexes the data, and Kibana is what you build the dashboards/reports with.
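If you want to poke at the raw documents before building dashboards, you can query the index directly. A minimal sketch of a search body you could POST to `http://<es-host>:9200/<your-index>-*/_search` is below; the field names (`processName`, `timeStamp`) follow the default UiPath robot-log schema, but verify them against your own index mapping, and the process name is purely illustrative.

```python
import json

def robot_log_query(process_name, size=50):
    """Build an Elasticsearch search body that pulls the most recent
    robot-log documents for one process. Field names are assumptions
    based on the default UiPath robot-log schema -- check your mapping."""
    return {
        "size": size,
        "sort": [{"timeStamp": {"order": "desc"}}],
        "query": {
            "bool": {
                "filter": [
                    {"term": {"processName.keyword": process_name}},
                ]
            }
        },
    }

# Hypothetical process name, just to show the shape of the request body
body = robot_log_query("MyInvoiceProcess")
print(json.dumps(body, indent=2))
```

Running `GET /_cat/indices?v` against the cluster first is an easy way to discover what index names Orchestrator actually created for your tenant.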
UiPath Insights is similar in that it requires a secondary SQL Server to store the reporting data and another IIS-based site for the Insights UI. Some data is not available from the logs alone and only exists in the Orchestrator database, which is part of what Insights draws on for its analytics.
We don’t use either of those components in our setup; most of our data is sent to Splunk via NLog and a Splunk Forwarder for similar purposes. We then trim many of the database tables to the last 15-30 days, depending on the table, and any required reporting is done against the same data in Splunk.
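For reference, one common way to wire that up is an NLog file target that a Splunk Universal Forwarder monitors. The fragment below is illustrative only (not our actual config): the path, target name, and rule are placeholders.

```xml
<!-- Illustrative NLog file target in UiPath.Orchestrator.dll.config;
     path and names are placeholders, not a real deployment's values -->
<targets>
  <target name="robotLogsFile" xsi:type="File"
          fileName="C:\logs\orchestrator\robot-${shortdate}.json"
          layout="${message}" />
</targets>
<rules>
  <logger name="Robot.*" writeTo="robotLogsFile" final="true" />
</rules>
```

A Splunk Universal Forwarder then monitors that directory (via a `[monitor://...]` stanza in inputs.conf) and ships the events to the indexer, where the same searches and dashboards replace what Kibana would otherwise give you.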
Thanks!! This response might be the most helpful response to any forum post I’ve ever posted in the history of forums ha. This is definitely helpful; between this and me finally learning a bit more about how to interact with Elasticsearch, I think I’ve got everything I need now!