OData Log Filtering Limitation with Elasticsearch

When filtering logs from Orchestrator through OData/Swagger, only a limited amount of 10,000 logs can be displayed if the Robot logs are stored in Elasticsearch. This does not happen when logs are stored only in the SQL database. How can this be solved?

  1. When reading from Elasticsearch using the RobotLogsController, only the following filter conditions can be used (these can change in future versions):

// RobotName eq ...
// JobKey eq ...
// MachineId eq ...
// Level ge ...
// TimeStamp gt ...

  2. Additionally, the $top and $skip parameters can be set in order to skip and display a given number of logs (e.g. $skip=100 offsets the first 100 logs and displays all, or at most $top, of the logs after that).
  3. When logs are stored in Elasticsearch, there is a limitation of 10,000 total retrieved logs when using only one or neither of the parameters ($top, $skip), or, when both are combined, a maximum of $top + $skip <= 10000 retrieved logs.

If the user attempts to exceed that limit, an error is displayed stating the following:

{
  "message": "Depth of pagination is limited in Elasticsearch by the max_result_window index setting. Make sure skip + take is lower than 10000.",
  "errorCode": 1015,
  "resourceIds": null
}
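The $top/$skip arithmetic above can be sketched as a small client-side guard. This is a minimal sketch, assuming a hypothetical Orchestrator base URL and the `odata/RobotLogs` endpoint path; the 10,000 limit is the Elasticsearch `index.max_result_window` default described in this article.

```python
# Sketch: build an OData query URL for robot logs and guard against the
# Elasticsearch pagination window (default index.max_result_window = 10000).
from urllib.parse import urlencode

MAX_RESULT_WINDOW = 10000  # Elasticsearch default; see the steps below to raise it


def build_logs_url(base_url, robot_name, top, skip):
    """Return an OData query URL, or raise if skip + top exceeds the window."""
    if top + skip > MAX_RESULT_WINDOW:
        raise ValueError(
            f"$top + $skip = {top + skip} exceeds max_result_window "
            f"({MAX_RESULT_WINDOW}); increase the index setting first"
        )
    params = {
        "$filter": f"RobotName eq '{robot_name}'",  # one of the supported conditions
        "$top": top,
        "$skip": skip,
    }
    return f"{base_url}/odata/RobotLogs?{urlencode(params)}"
```

For example, `build_logs_url(url, "UiRobot", 2000, 9000)` raises the error before the server would return errorCode 1015, because 2000 + 9000 > 10000.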

  4. To alleviate the above issue, the "index.max_result_window": "10000" setting in Elasticsearch can be increased by following the steps below:
    1. From Kibana, go to Management > Index Management and select the index.


    2. Edit the settings and add a new row in the right panel, for example:

"index.max_result_window": "33333"

After that, save the setting.


  5. Another way is to open Kibana's Dev Tools > Console and run a command like:

PUT _settings
{
  "index" : {
    "max_result_window" : 33333
  }
}

The above modifies the index.max_result_window setting for ALL indexes.

To alter a single index, add its name after the PUT command, like this:

PUT /index_name/_settings
{
  "index" : {
    "max_result_window" : 55555
  }
}

Check that the response is "acknowledged" : true.
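The same settings update can be issued programmatically against the Elasticsearch REST API. Below is a minimal standard-library sketch; the host, port, and index name are assumptions, and sending the request obviously requires a reachable cluster.

```python
# Sketch: build a PUT /<index>/_settings request that raises
# index.max_result_window (host and index name are placeholders).
import json
import urllib.request


def build_settings_request(host, index, max_result_window):
    """Return a ready-to-send urllib Request updating max_result_window."""
    body = json.dumps({"index": {"max_result_window": max_result_window}}).encode()
    return urllib.request.Request(
        f"{host}/{index}/_settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )


# Sending it (needs a live cluster); the response should be {"acknowledged": true}:
# with urllib.request.urlopen(build_settings_request("http://localhost:9200",
#                                                    "index_name", 55555)) as resp:
#     print(json.load(resp))
```

Separating request construction from sending keeps the value and target index easy to inspect before touching the cluster.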

Note: Although the above approach is the correct way to avoid the stated issue, there currently seems to be either a bug or a limitation in Elasticsearch regarding this particular setting, as reported on several internet forums.