Elasticsearch is not able to receive logs from Orchestrator
Scenario 1: Orchestrator is not configured as expected
Certain setups might fail if the initial configuration needed for the integration of Orchestrator and Elasticsearch is not yet in place. The following configuration items should be checked:
- Ensure the Elasticsearch URL can be browsed from the Orchestrator server. This confirms that Elasticsearch is reachable from Orchestrator.
- Ensure the same Elasticsearch URL has been passed to the uri attribute of the robotElasticBuffer target under the nlog section in Orchestrator's web.config.
- Check the <rules> section under the same nlog section to validate that the "Robot.*" logger rule has a writeTo containing the robotElasticBuffer value. This is the rule that sends logs to Elasticsearch. If there are multiple rules, ensure the robotElasticBuffer rule can actually be reached: if an earlier matching rule has final set to "true", the subsequent rules won't be evaluated.
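For reference, the relevant portions of the nlog section typically resemble the sketch below. This is illustrative only: the uri value, the flushTimeout, and the index layout vary by environment and Orchestrator version, so compare against your own web.config rather than copying verbatim.

```xml
<nlog>
  <targets>
    <!-- Buffering wrapper around the Elasticsearch target; uri must point to your Elasticsearch instance -->
    <target xsi:type="BufferingWrapper" name="robotElasticBuffer" flushTimeout="5000">
      <target xsi:type="ElasticSearch" name="robotElastic"
              uri="http://elasticsearch-server:9200"
              index="${event-properties:item=indexName}-${date:format=yyyy.MM}"
              documentType="logEvent"
              includeAllProperties="true"
              layout="${message}" />
    </target>
  </targets>
  <rules>
    <!-- Robot logs go to both the database and Elasticsearch; final="true" stops later rules -->
    <logger name="Robot.*" writeTo="database,robotElasticBuffer" final="true" />
  </rules>
</nlog>
```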
Scenario 2: Orchestrator is not receiving logs
In some scenarios, the Robot's delivery of logs to Orchestrator can be interrupted, which also prevents logs from reaching Elasticsearch, since it is Orchestrator that forwards them. The best way to confirm whether Orchestrator is receiving logs is to check whether they arrive in the database:
- Remove any Elasticsearch URI values from the robotElasticBuffer target to temporarily turn off the Elasticsearch configuration.
- Ensure the <rules> section has a writeTo pointing to the database target.
- Check the row count of dbo.Logs, execute a new job, and validate whether the row count increases.
- If the row count increases, there are no issues with logs reaching Orchestrator. Additionally, the job logs can be checked on the Orchestrator screen.
- On the other hand, if the row count does not increase, the logs are not reaching Orchestrator. The next troubleshooting steps are to check the log settings on the Robot machine, the Event Viewer logs, and the Robot's local log database (C:\Windows\SysWOW64\config\systemprofile\AppData\Local\UiPath\Logs\execution_log_data).
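The row-count check above can be run from SQL Server Management Studio. A minimal sketch, assuming the Orchestrator database is named UiPath (the database name is an assumption; adjust to your installation):

```sql
-- Database name is an assumption; use your Orchestrator database
USE [UiPath];

-- Note the count, run a job, then re-run and compare
SELECT COUNT(*) AS LogCount
FROM dbo.Logs;
```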
Scenario 3: Elasticsearch has gone into a read-only mode
In cases where the drive holding Elasticsearch's data (the data path can be checked in the elasticsearch.yml configuration file, available under C:\ProgramData\Elastic\Elasticsearch\config by default) runs low on free space, Elasticsearch automatically sets its indices to read-only mode to prevent data loss. This can be validated by running the following commands in the Dev Tools of the Kibana dashboard:
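A sketch of the settings request, using the default-2019.01 index name explained in the next paragraph (adjust the tenant and date to your environment):

```
GET /default-2019.01/_settings
```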
In the above command, "default" is the tenant name, "2019" is the current year, and "01" is the current month; these must be adjusted to the tenant and date on which the command is executed. Together, default-2019.01 is the index holding the "default" tenant's logs for January 2019.
The above GET command returns the settings of the index; check the value of the read_only_allow_delete attribute. If it is true, the index is in read-only mode and will have to be taken out of it.
As a further confirmation, a test message can be posted with a command of the following shape (logEvent is Orchestrator's default document type and may differ in your setup); while the index is read-only, it should fail with a "FORBIDDEN/12/index read-only / allow delete (api)" error:

POST /default-2019.01/logEvent
{
  "message": "Hello Elasticsearch!"
}
If the above checks match, the index has been put into read-only mode. If the disk is still low on free space, either allocate enough free space or migrate the data to a new drive with sufficient space; be sure to make the necessary configuration changes if the data location is changed. To take the index out of read-only mode, run the following command:
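A sketch of that command, again assuming the default-2019.01 index from above; setting index.blocks.read_only_allow_delete to null clears the read-only block:

```
PUT /default-2019.01/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```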
Once done, try posting a test message from Kibana and if you are able to, the index has been moved out of read only mode and Orchestrator should be able to send logs to Elasticsearch.
Applicable up to Orchestrator 2018.4.1 and Elasticsearch 6.5.2