We want Info and higher logs to end up in ElasticSearch, but only Warning and higher to be sent to the database; therefore final is set to false for the first rule, which UiPath recommended we do.
To write to ElasticSearch the target should be set to robotElasticBuffer according to the documentation (and that is what we have been using for the last few years).
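For context, the rules section of our nlog config looks roughly like this (a sketch only; the logger name and target names follow the default Orchestrator config, and the attributes in your environment may differ):

```xml
<rules>
  <!-- Info and above goes to ElasticSearch; final="false" so the next rule is still evaluated -->
  <logger name="Robot.*" minlevel="Info" writeTo="robotElasticBuffer" final="false" />
  <!-- Warning and above goes to the database -->
  <logger name="Robot.*" minlevel="Warn" writeTo="database" final="true" />
</rules>
```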
Could you please provide the error message received in the Event Viewer logs? This is a generic error and we need to understand your case better.
Also, please tell us which version you upgraded Orchestrator from.
Hi, we have now had a private call with UiPath tech support and we haven't got it resolved yet, but after enabling debug logging for NLog it seems this could be a cause? We use an on-prem ELK cloud with this cluster set up as a single node, and it shouldn't try to access/ping anything other than the URL we provide in the nlog settings.
ElasticSearch: Failed to send log messages. status=200 Exception: Elasticsearch.Net.ElasticsearchClientException: Failed to ping the specified node... Call: Status code 200 from: HEAD / ---> Elasticsearch.Net.PipelineException: Failed to ping the specified node. ---> Elasticsearch.Net.PipelineException: An error occurred trying to read the response from the specified node.
   at Elasticsearch.Net.RequestPipeline.Ping(Node node)
I have checked your case again and I wasn't able to reproduce the error using the same version of ES.
I have checked your settings related to the target and logger and they are fine. "robotElasticBuffer" is the right target name.
Could you please check whether the URI is like "127.0.0.1:9200" or "http://127.0.0.1:9200"?
Hi, this actually solved the issue and it's now working! (This is a test environment, hence the setup with only one node.)
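For reference, the working target definition now includes the scheme explicitly, roughly like this (a sketch: the BufferingWrapper/target names follow the default Orchestrator config, and auth/index attributes are omitted):

```xml
<target xsi:type="BufferingWrapper" name="robotElasticBuffer" flushTimeout="5000">
  <!-- The explicit http:// scheme is what made the difference; a bare "127.0.0.1:9200" failed the ping -->
  <target xsi:type="ElasticSearch" name="robotElastic" uri="http://127.0.0.1:9200" />
</target>
```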
But this then means there was a change in the client version you use between 2018.4 and 2019.10, since the same setup works with 2018.4 without any problems. What is the actual root cause?
Of course there was a change in the client version... 2018.4 doesn't support Elasticsearch clusters with version 7.x, and lots of customers expect us to support newer releases. As for "the actual root cause", it's the fact that ES7 clients are not officially supported to work with ES6 clusters (or any other cluster versions). We tested the client that we use against a bunch of scenarios & cluster versions and it looked fine for our usage, but this situation is one of the things we apparently missed. (I understand that it does work with a 6.8.3 one-node cluster in Azure, so maybe it's only an issue with 6.8.2 & below? In any case, it should only be an issue on one-node clusters.)
@virgilp for testing purposes we added a couple more nodes, but the same issue remains with 3 nodes. Without setting DisablePing to True it fails.
How do you configure the StaticConnectionPool? Since we only provide one URI in the config, how do you determine which nodes exist in order to set up the connections and check whether they are alive or not?
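To illustrate the question, here is a rough sketch of how the Elasticsearch.Net low-level client is typically wired up (this is not Orchestrator's actual code): each URI from the config becomes a node in a StaticConnectionPool, and DisablePing suppresses the HEAD / probe that appears in the exception above.

```csharp
// Illustrative sketch only; not Orchestrator's implementation.
using System;
using System.Linq;
using Elasticsearch.Net;

static class ElasticClientSketch
{
    public static ElasticLowLevelClient Build(string[] uris, bool disablePing)
    {
        // Every configured URI becomes a node in a static pool; with a single
        // URI (e.g. a load-balancer endpoint) the pool contains exactly one node.
        var pool = new StaticConnectionPool(uris.Select(u => new Uri(u)));

        var settings = new ConnectionConfiguration(pool);
        if (disablePing)
        {
            // Skip the HEAD / "ping" the client normally issues before talking
            // to a node it has not yet verified as alive.
            settings = settings.DisablePing();
        }

        return new ElasticLowLevelClient(settings);
    }
}
```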
Hi, it seems to be tied to the fact that we use an LB for Elastic, so all failover logic is actually handled at the LB level, and therefore we are also confident in using DisablePing set to True for our specific case. Thanks for your expertise and guidance on this matter @virgilp!
You can now set multiple nodes (comma-separated) in the "uri"; an LB is probably better, but for simpler/less critical deployment setups that works too.
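For example, something along these lines (the hostnames are hypothetical):

```xml
<!-- Hypothetical hostnames; each URI becomes a node in the client's connection pool -->
<target xsi:type="ElasticSearch" name="robotElastic"
        uri="http://es-node1:9200,http://es-node2:9200,http://es-node3:9200" />
```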
You're welcome! Thanks for circling back. I think you're right: the issue is likely caused by the LB, not by the fact that the cluster is made up of a single node.