In my example I use a flat file written to disk in JSON format. While I wouldn't argue that a flat file is the best choice in every case, it has its time and place (I selected it under time constraints at the time). That said, in the last two years I haven't had any issues with missing log messages, at least not at that stage of the pipeline; if you are seeing that, I would dig into why.
A flat file on disk is by no means your only choice; you could just as easily direct the logs to another database, or straight into an ELK stack, or what have you.
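To illustrate the flat-file route: once the logs land on disk as one JSON object per line, a forwarder just has to read and parse them. This is only a minimal sketch under that assumption; the file name and field names (`level`, etc.) are hypothetical and not any particular product's actual schema.

```python
import json

def read_log_lines(path):
    """Yield parsed entries from a JSON-lines log file.

    Skips blank or malformed lines instead of crashing, since a
    real forwarder should tolerate a partially written last line.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                # Malformed line; in a real forwarder you'd count/report these.
                continue

# Hypothetical usage: pull out just the errors before shipping them on.
# errors = [e for e in read_log_lines("robot_logs.json")
#           if e.get("level") == "Error"]
```

The same parsing loop works whether the destination is another database, an ELK stack, or anything else that accepts structured events.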
The concern I would have with pointing another system at it (I haven't watched closely how Orchestrator interacts with SQL Server) is the additional load and locking on Orchestrator's database, which could have a performance impact on your Orchestrator/Robot activities.
Are you looking at a replica using SQL Server's built-in functionality, or another solution that reads the data directly out of the original database (e.g. Splunk's DB Connect)? Did you have a target solution/platform in mind for the reporting piece?