Excellent questions. The Business Analyst role is quite important when you think about the impact it has on a successful RPA journey.
I would recommend you use the Process Assessment Tool as a starting point. It is good, but not sufficient on its own when you want to build a CoE and assess processes against each other.
We evaluated Automation Hub, but decided to stay lean and use the Excel expertise we already have to modify the Process Assessment Tool (an Excel workbook).
We started with the UiPath Process Assessment Tool and modified it to fit our requirements:
- One assessment sheet for every RPA candidate
- Editing / translating the ScoreCard sheet as per our needs
- A macro which pulls the values from all the individual assessment sheets into the “Multi Process Assessment” sheet
- A Pugh matrix with our objectively defined criteria, for example when Automation Potential is high but Estimated Time Saved is low. The Pugh matrix also ranks the processes, and we use that ranking to order the projects to be developed. No process gets special treatment; the one which is objectively the better RPA candidate gets the better rating / rank. (How to make one: pugh-matrix-v1.1.pdf (burgehugheswalsh.co.uk))
- Status of each project: have we started development / testing / production?
- Modifying the Assumptions Efforts sheet to fit our use case
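To make the ranking idea concrete, here is a minimal sketch in plain Python of how a Pugh-style weighted ranking of RPA candidates could work. The criteria, weights, candidate names, and scores below are all hypothetical illustrations, not our actual workbook contents; each candidate is scored -1 / 0 / +1 per criterion against a baseline process.

```python
# Hypothetical criteria and weights (our real matrix lives in Excel).
CRITERIA_WEIGHTS = {
    "automation_potential": 3,
    "estimated_time_saved": 3,
    "process_stability": 2,
    "implementation_effort": 1,  # lower effort scores positive
}

# Illustrative candidates, scored -1 / 0 / +1 per criterion vs. a baseline.
candidates = {
    "Invoice processing": {"automation_potential": 1, "estimated_time_saved": 1,
                           "process_stability": 0, "implementation_effort": -1},
    "HR onboarding":      {"automation_potential": 1, "estimated_time_saved": -1,
                           "process_stability": 1, "implementation_effort": 0},
    "Report generation":  {"automation_potential": 0, "estimated_time_saved": 1,
                           "process_stability": 1, "implementation_effort": 1},
}

def pugh_score(scores: dict) -> int:
    """Weighted sum of the -1/0/+1 scores for one candidate."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Highest weighted score first: this is the development order.
ranking = sorted(candidates, key=lambda name: pugh_score(candidates[name]),
                 reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, pugh_score(candidates[name]))
```

The point is that the ranking falls out of the weights and scores mechanically, so no process gets special treatment.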
But why invest so much time in this Excel file?
Traceability: If RPA needs to scale, then we need more developers. But that will only happen if we can show the value of RPA to the decision makers.
Quantification: In the individual assessment sheet you will see there is a question for process owners about the time it takes to handle one case manually. We use this case handling time not only to estimate future savings, but also, once the robot is running in production, to calculate the overall savings from automation. This way we avoid assumptions and biases in the time-savings calculation.
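The arithmetic behind this is simple; here is a sketch with purely illustrative numbers (the handling time, case volume, and hourly cost are assumptions, not our real figures):

```python
# Turn a measured case handling time plus production volume into savings.
manual_minutes_per_case = 12.0    # reported by the process owner at assessment
cases_processed_by_robot = 4_500  # e.g. counted from production robot logs, per year
hourly_cost_eur = 40.0            # assumed fully loaded cost of one work hour

hours_saved = manual_minutes_per_case * cases_processed_by_robot / 60
savings_eur = hours_saved * hourly_cost_eur

print(f"{hours_saved:.0f} hours saved, ~{savings_eur:,.0f} EUR")
```

Because the handling time comes from the process owner and the case count from production, neither side can inflate the savings figure.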
Historic information: All candidates which have been reviewed are allocated a sheet with all the information required. If the manual process changes, we can revisit them in the future without having to waste time preparing for the evaluation again.
Looking back, we would not have started our CoE in any other way. Our internal customers support our approach and are happy to see each of their processes evaluated objectively. They can also see where their process stands compared to other processes in the company in terms of value generated via automation. In the end, only the objectively best RPA candidates make it to the production phase.
With respect to challenges:
Any technical challenges you face will usually not be as critical as the process challenges in RPA.
The biggest challenge is process requirements. You will come across many processes which looked quite simple during the assessment stage, but while developing the solution you, and often the process owner themselves, realize that the manual process is much more complicated than described.
The only way to tackle this is to invest in good developers / solution architects who think in agile terms and are willing to own the design / development / testing / production stages. For example, if the process changes slightly, the automation need not change much if the workflows are broken into smaller chunks.
If the process is too complicated, just drop it and move on to the next candidate. There is no point in automating a process which might result in a large number of exceptions.
Think traceability via dashboards. RPA robot logs can be used to build dashboards which demonstrate value to upper management. If they see the value, more processes will be requested for automation, and the cycle continues. I have a post describing why you need a custom RPA dashboard (not just the one in UiPath Orchestrator): Logs - ElasticSearch - Help / Insights - UiPath Community Forum
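To show the kind of aggregation such a dashboard does, here is a hedged sketch that rolls robot log lines up into per-process outcome counts. The JSON-lines log format, field names, and sample entries below are hypothetical stand-ins, not the actual UiPath log schema; a real setup would feed similar aggregates into something like ElasticSearch / Kibana.

```python
import json
from collections import Counter

# Hypothetical robot log lines (JSON-lines); real logs have more fields.
raw_logs = """
{"process": "Invoice processing", "level": "Info", "message": "Transaction ended"}
{"process": "Invoice processing", "level": "Error", "message": "Business exception"}
{"process": "HR onboarding", "level": "Info", "message": "Transaction ended"}
""".strip().splitlines()

# Count completed vs. exception transactions per process.
counts = Counter()
for line in raw_logs:
    entry = json.loads(line)
    outcome = "exceptions" if entry["level"] == "Error" else "completed"
    counts[(entry["process"], outcome)] += 1

for (process, outcome), n in sorted(counts.items()):
    print(process, outcome, n)
```

Numbers like these, charted over time, are what make the value of RPA visible to management.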
Of course there are many other challenges, but with the right team they can be opportunities for learning and growing.
I hope this helps you and others!