Still no news about potentially implementing this feature? Orchestrator really is lacking compared to BP at this point.
Bumping this thread to check status: we are facing the same challenge again with another process where we want to update a transaction's specific content. Still missing this feature.
Hi @loginerror,
Has this request been updated in the feature request backlogs?
This feature could enable many elegant solutions and remove a lot of the additional workarounds, some of which are mentioned in this thread. They work, but without a way to update the SpecificContent of the same transaction item it is difficult to build robust enterprise automations.
Transaction items should support updates to specific content like in BluePrism.
I’ve found a workaround to this problem.
1. A transaction item with InProgress status must first be moved back to New status using the Postpone Transaction Item activity.
2. In this New state the item's SpecificContent can be modified via the API (for example with the Orchestrator HTTP Request activity).
3. Once that is done, revert the transaction item to InProgress status, which can be done with a Get Transaction Item activity.
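The API call in step 2 can be sketched as follows. This is a minimal sketch assuming the standard Orchestrator OData endpoint `PUT /odata/QueueItems({id})`; the base URL, token, folder ID, queue name and field values below are placeholders, and note that a PUT replaces the stored SpecificContent rather than merging into it.

```python
# Sketch of the API step from the workaround above: once Postpone Transaction
# Item has moved the item back to New status, a PUT to the Orchestrator OData
# QueueItems endpoint can overwrite its SpecificContent. All URLs, IDs and
# values here are placeholders, not real credentials or endpoints.
import json
import urllib.request

def build_update_request(base_url, token, folder_id, item_id,
                         queue_name, specific_content):
    """Build the PUT request that replaces a queue item's SpecificContent."""
    payload = {
        "Name": queue_name,                  # queue the item belongs to
        "SpecificContent": specific_content, # full replacement, not a merge
    }
    return urllib.request.Request(
        url=f"{base_url}/odata/QueueItems({item_id})",
        data=json.dumps(payload).encode("utf-8"),
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            # Folder (organization unit) the queue lives in:
            "X-UIPATH-OrganizationUnitId": str(folder_id),
        },
    )

# Example request (built only, no network call is made here):
req = build_update_request(
    "https://cloud.uipath.com/acct/tenant/orchestrator_",
    "ACCESS_TOKEN", 123, 456,
    "InvoiceQueue", {"Status": "Checked", "Amount": "120.50"},
)
print(req.get_method(), req.full_url)
```

The same request can of course be issued from the Orchestrator HTTP Request activity inside the workflow itself; the point is only to show the endpoint shape and the fact that the whole SpecificContent dictionary must be sent back, including the fields you are not changing.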
The interim workaround of updating a queue item in intermediate steps seems really unpleasant to me. I don't like the idea of the bot having to carry logic to determine where it left off and recover from there.
I hear your concern about items being delayed by repeated actions such as navigating to a page, but I'd urge you to consider that speed is not as urgent a priority in RPA. Stability and consistency are more important factors, in my opinion, since the robots can work all day and night. I have found it's usually not a problem to add another 10-20 seconds to a transaction item in order to split it.
This is not always the case, and I do have processes where 10 seconds really does add up because of the volume of data.
If I'm honest, I do not follow why it's a problem to create a new item (and I don't feel the current setup is an issue, as I don't design my bots this way). If your transaction has failed to process, you'll need to restart the application you were using in order to have a clean setup for the next item, so even if you then get the item back with the updated specific content you still have to do your navigating.
I really cannot see how you can be working on an item, fail, keep your position in the application, then want to try the same queue item immediately but feel you have somehow lost data…?
If you retry immediately you have lost no data; you still have it in memory. If you retry after processing other transactions, you have lost the time taken to navigate etc., as you indicate.
Perhaps you can clearly explain the scenario where you want to recover from a failure, avoid some navigation, and be able to retry an item later on, skipping steps and somehow being in the right place in the application without doing the prerequisites?
Gonna have to hard disagree on this.
Having a single bot that does so much work sounds awful to me. Testing must be an absolute nightmare, especially if you only need to test the functionality in stage 4 or 5: you'd need not only some messy logic in your workflows to skip steps 1, 2 and 3, but also some way to tell it to skip the remaining stages, or else sit through it completing the entire process each time.
The batch processing you describe sounds like you are doing way too much in a single process.
I design my bots following SOLID programming principles, and one of the fundamentals there is single responsibility. It's very common in coding to see people write a single monolithic class that does everything and becomes an absolute beast. In all my experience so far, RPA is the same: bots that do too much become too large, too untestable, and contain too much logic to handle all the complex scenarios.
If you make your processes more abstract and separate them out, you'll have a much simpler time in development and maintenance. What I find really cool is that you end up re-using certain bots across a ton of different business processes.
For example, I have a UiPath process that uploads files to Azure Storage blobs. This prevents me from repeating the same functionality across many different projects. It's already in a library, but say I had 50 processes that do the upload and the upload changes (as it does): I would need to update the library and then upgrade the dependencies in 50 packages. With my method I update the library and a single bot, since it handles files from any business process.
Far from it being unsustainable to separate your business process into multiple robot processes, I think it's the only thing that is sustainable. All my experience with splitting processes has made the code clearer, easier to test and easier to maintain, and all the legacy monolith bots I inherit are a nightmare.
There are some processes that are lengthy and that we are not able to separate, either because it doesn't make sense or because the system we have to use has an internal workflow that doesn't allow it, so the process has to be carried out end-to-end or the system cannot proceed.
I like building smaller robots as well, but in some cases, it is just not viable.
Respectfully, perhaps you are simply not able to think abstractly enough to separate them. A lot of people struggle to do this; it takes quite some skill, experience and understanding of SOLID to get it right.