I wanted to get the community’s thoughts on automating in-kind TFSA contribution workflows (stocks, funds, bonds), especially when legacy applications are involved.
Given challenges like:
Unstable or dynamic selectors
Inconsistent UI behavior across sessions
Limited or no API access
Do you think a hybrid approach (fine-tuned selectors, anchors, background automation like hotkeys or image fallback) is the right way to stabilize such workflows?
Or have you seen better results with a different approach?
Curious to hear how others would tackle this kind of scenario.
A hybrid approach is essential, but for in-kind transfers a human in the loop is often a regulatory or risk requirement. Design the bot to prepare the transaction and stop at the final confirmation screen for a human four-eye check.
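That hand-off pattern can be sketched roughly as below. This is a minimal illustration, not a real RPA API: `TransferRequest`, `ReviewQueue`, and the status strings are all hypothetical names invented for the example. The key design point is that the bot's last action is submitting to a review queue; only a human reviewer moves the transfer past the prepared state.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    account: str
    security: str
    quantity: int
    status: str = "PREPARED"   # bot-side terminal state: bot never confirms

class ReviewQueue:
    """Holds bot-prepared transfers until a human performs the four-eye check."""
    def __init__(self):
        self.pending = []

    def submit(self, req: TransferRequest):
        # The bot stops here: no automated confirmation of in-kind transfers.
        self.pending.append(req)

    def approve(self, req: TransferRequest, reviewer: str):
        # A second pair of eyes confirms; only now does the transfer proceed.
        req.status = f"CONFIRMED_BY:{reviewer}"

bot_prepared = TransferRequest("TFSA-001", "XIU.TO", 100)
queue = ReviewQueue()
queue.submit(bot_prepared)           # bot's final step
queue.approve(bot_prepared, "jane")  # human four-eye check
```

The same split also gives you a clean audit trail: every confirmed transfer records who approved it.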
For legacy apps used in in-kind TFSA contributions, a hybrid approach is usually the best option.
Because selectors keep changing, the UI behaves differently across sessions, and APIs are not available, relying on a single method is risky. A mix works better:
Use good selectors wherever possible
Add anchors or relative positioning when screens change
Use keyboard shortcuts / hotkeys for stability
Keep image automation only as a last backup
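The layered list above amounts to a fallback chain: try the cheapest, most reliable locator first and only escalate when it fails. Here is a hypothetical sketch of that idea; the three locator functions are stand-ins for real selector, anchor, and image-match calls in whatever RPA tool you use.

```python
def find_by_selector(screen):
    # Simulate an unstable selector on a legacy screen.
    raise LookupError("selector changed")

def find_by_anchor(screen):
    # Position relative to a stable label, e.g. "Contribution amount".
    return ("anchor", screen["label_offset"])

def find_by_image(screen):
    # Last-resort template match; slowest and most brittle.
    return ("image", None)

def locate(screen, strategies):
    """Try each (name, locator) pair in order; return the first that works."""
    for name, fn in strategies:
        try:
            return name, fn(screen)
        except LookupError:
            continue  # fall through to the next, more expensive strategy
    raise LookupError("all strategies failed")

method, target = locate(
    {"label_offset": (120, 40)},
    [("selector", find_by_selector),
     ("anchor", find_by_anchor),
     ("image", find_by_image)],
)
```

Because the selector fails here, the chain resolves via the anchor and never pays the cost of the image match.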
If this solution helps, please mark it as the solution, or let us know if you face any issues.
Use fine-tuned selectors with anchors.
Also, break the process into small workflows with checkpoints so you can retry when required.
Do not use image fallback or hotkeys unless nothing else works; keep them as the last option.
Add logs after major steps (and wherever else needed) so that debugging becomes easier.
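The checkpoint-plus-retry-plus-logging advice above can be combined in one small runner. This is a generic sketch, not tied to any tool: each step is a named callable, failures retry only that step, and every checkpoint and failure is logged.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("tfsa-bot")

def run_with_checkpoints(steps, max_retries=2):
    """Run (name, fn) steps in order; retry a failed step, log each checkpoint."""
    completed = []                       # checkpoint record for debugging
    for name, fn in steps:
        for attempt in range(1, max_retries + 2):
            try:
                fn()
                log.info("checkpoint passed: %s", name)
                completed.append(name)
                break
            except Exception as exc:
                log.warning("step %s failed (attempt %d): %s", name, attempt, exc)
        else:
            # Retries exhausted: stop here instead of corrupting later steps.
            raise RuntimeError(f"step {name} exhausted retries")
    return completed

# Demo: a step that fails once (e.g. a slow legacy session) then succeeds.
calls = {"n": 0}
def flaky_login():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError("session not ready")

done = run_with_checkpoints([("login", flaky_login),
                             ("open_transfer", lambda: None)])
```

Because only the failed step is retried, a transient timeout at login does not force the whole workflow to restart.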
Thanks everyone for the insights — really helpful perspectives.
It’s interesting to see a clear consensus around hybrid automation being the practical choice for in-kind TFSA workflows, especially given selector instability, inconsistent UI behavior, and lack of APIs in legacy platforms.
The points on:
Keeping image/OCR and hotkeys as last-resort fallbacks
Using layered resilience (anchors, retries, checkpoints, logging)
And the reminder that human-in-the-loop / four-eye checks are often a regulatory or risk requirement
all resonate strongly with the realities of these kinds of financial transactions.
Curious to go one step further — for those who’ve implemented this at scale:
Do you typically standardize a selector/anchor strategy across applications, or tune it screen-by-screen?
And where human-in-the-loop is required, do you pause at a final confirmation screen or route exceptions only?
It's a good, proven practice to standardize first and then fine-tune screen by screen. Do screen-level tuning only where you feel the standard strategy may not be stable.
For human-in-the-loop routing, you don't need to pause at every final confirmation; pause only where there is an exception or confidence is very low. This is ultimately a business decision based on the criticality of the process and the design.