Change in application / SDLC

As part of a project in a large organization, where each department owns its own application, roadmap, and release calendar, I'm looking for ideas on how we can ensure that the bots we create stay current and do not fail in PROD due to a screen change in the application being monitored.

Some ideas, each with possible issues:

  1. Ensure, as part of the ECAB meeting, that all changes are approved … not scalable
  2. As part of the PDD or requirements, highlight the impact to bots … not dependable (the BA can forget, or may not be familiar with the bots)
  3. Create synthetic validation bots in lower environments that check for screen integrity (a minimal sketch follows this list) … could produce a high number of false positives due to unrelated changes
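
To make idea 3 concrete, here is a minimal sketch of a synthetic validation bot, assuming a Selenium-driven web application. The URL and the selectors are hypothetical placeholders for whatever elements your production bots actually depend on. Checking only the selectors the bots actually use, rather than diffing whole screens, is one way to keep false positives from unrelated changes down.

```python
# Minimal synthetic validation bot (sketch): verifies that the UI elements
# our production bots depend on still exist in a lower environment.
# STAGING_URL and CRITICAL_SELECTORS are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

STAGING_URL = "https://staging.example.internal/app"  # assumed environment URL

# Only the selectors the bots actually click/read; checking whole screens
# would flag unrelated changes and inflate the false-positive rate.
CRITICAL_SELECTORS = {
    "login_button": (By.ID, "btnLogin"),
    "invoice_table": (By.XPATH, "//table[@id='invoices']"),
    "submit_button": (By.CSS_SELECTOR, "button[name='submit']"),
}

def validate_selectors() -> list[str]:
    """Return the names of bot-critical selectors that can no longer be found."""
    driver = webdriver.Chrome()
    missing = []
    try:
        driver.get(STAGING_URL)
        for name, locator in CRITICAL_SELECTORS.items():
            try:
                driver.find_element(*locator)
            except NoSuchElementException:
                missing.append(name)
    finally:
        driver.quit()
    return missing

if __name__ == "__main__":
    broken = validate_selectors()
    if broken:
        # Raise an alert (ticket, email, chat...) before the bot hits PROD.
        raise SystemExit(f"Selector check failed for: {', '.join(broken)}")
    print("All bot-critical selectors found.")
```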

Any other ideas that have been implemented in such environments?

The closest to safe is running the bots in a test/staging environment against the new application version, basically incorporating the bots into its testing phase.
Since you probably have QA for new versions already, your testers should be able to run a version of the bot process against the updated application.
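
If the bot exposes any kind of command-line entry point, QA can wrap a dry run of it into the existing test suite. Below is a hedged pytest sketch; the `run_bot.py` entry point, its `--dry-run` and `--target` flags, and the staging URL are all assumptions about your setup, not real tooling.

```python
# Pytest sketch: run the bot process against the updated staging application
# as part of the regular QA cycle. The bot entry point (run_bot.py), its
# flags, and the staging URL are hypothetical assumptions.
import subprocess
import sys

STAGING_URL = "https://staging.example.internal/app"

def test_bot_against_staging():
    result = subprocess.run(
        [sys.executable, "run_bot.py", "--target", STAGING_URL, "--dry-run"],
        capture_output=True,
        text=True,
        timeout=600,  # fail fast if the bot hangs on a changed screen
    )
    # A non-zero exit code surfaces as a QA defect before the release ships.
    assert result.returncode == 0, f"Bot failed on staging:\n{result.stderr}"
```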

It’s a tricky one to handle correctly, and from an SDLC perspective I’d say cross-matching the cycles for the bots and the application is the only way to go. Involving your QA team and giving them agency over it should lessen the burden, but the fact of the matter is that if the application changes, the bots need to be run against it. There’s no way around it.


Thank you, Andrzej. Having QA actually test the bots could be one of the steps; however, it could also add work items if the problem isn’t addressed upstream … meaning the more changes developers make, the more bots fail, the more defects get created, and the more last-minute changes are made to the bots.

I’m looking for something upstream, so that the change is captured in the requirements, or even during development …

Thanks for the reply though, and I agree there’s no way around it if that’s the last step.