I’d like to share my issue: I’ve created a robot that performs web scraping. Here are its steps (roughly sketched in Python after the list):
1. Log in to the web interface using my username and password.
2. Navigate to a specific URL.
3. Retrieve items with the “Extract Table Data” activity (I haven’t set an extraction limit) and click the “Next” button to page through the results.
4. Export the results to an Excel file.
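For clarity, here is a rough Python/Selenium equivalent of what the robot does; the URL, credentials, and selectors are placeholders, not my actual ones:

```python
# Rough Python/Selenium equivalent of the workflow above.
# The URL, credentials, and selectors are placeholders.
import io

import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

# Step 1: log in to the web interface.
driver.get("https://example.com/login")
driver.find_element(By.NAME, "username").send_keys("my_user")
driver.find_element(By.NAME, "password").send_keys("my_pass")
driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

# Step 2: navigate to the specific URL.
driver.get("https://example.com/report")

# Step 3: extract the table on every page, clicking "Next" until it disappears.
pages = []
while True:
    html = driver.find_element(By.TAG_NAME, "table").get_attribute("outerHTML")
    pages.append(pd.read_html(io.StringIO(html))[0])
    next_button = driver.find_elements(By.LINK_TEXT, "Next")
    if not next_button:
        break
    next_button[0].click()

# Step 4: export the combined results to an Excel file.
pd.concat(pages, ignore_index=True).to_excel("results.xlsx", index=False)
driver.quit()
```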
My problem is that the site occasionally returns an HTTP 500 error, which forces me to refresh the page. I tried wrapping the extraction activity in a Try Catch that catches only NodeNotFoundException and refreshes the page in the Catch block, but this approach doesn’t work; what I’m after is essentially the retry-and-refresh pattern sketched below.
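Again in Python/Selenium terms (the function name, attempt count, and delay are hypothetical choices, not something I have working):

```python
# Hypothetical retry-and-refresh wrapper: run the extraction, and if the
# table is missing (e.g. a 500 error page came back), refresh and retry.
import time

from selenium.common.exceptions import NoSuchElementException


def extract_with_retry(driver, extract_page, max_attempts=3, delay_s=5):
    """Call extract_page(driver); on failure, refresh the page and retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            return extract_page(driver)
        except NoSuchElementException:
            if attempt == max_attempts:
                raise  # still failing after the last attempt; surface the error
            driver.refresh()     # reload past the 500 error page
            time.sleep(delay_s)  # give the server a moment to recover
```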
Does anyone have any ideas on how to handle or work around these intermittent 500 errors?