Looping through datatable and passing value to next sub process

We are creating a bot that reads a DataTable from Excel and then takes the Reference Number from each row to input into a system and grab certain data from certain screens (using For Each Row in DataTable).

We have one big workflow for the whole process, but we understand that we should be splitting it into smaller sub-processes.

As a best practice, do we nest and invoke the sub-process flowcharts within the For Each loop?

Could someone provide an example of how it should be done please?


A flowchart within a For Each Row can be used, but instead you can simply use a flowchart, assign the row you are working on to a variable, and use an increment to move on to the next row.

This will then help by breaking the process down into smaller components (sub-flowcharts) via invokes.

Thanks for the quick response!

Sorry, I am new to the flow chart concept as we have been working on getting the process working first. How do I go about assigning the row and iterating through the table?

Hi there @fbxiii,
In an ideal world, you'd have one process (Load/Dispatcher) dedicated to:

  • Reading the given Excel document
  • Iterating through each row
  • Adding each Reference Number to an Orchestrator queue
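The Dispatcher/Performer split above can be sketched in plain Python, with a `deque` standing in for the Orchestrator queue; the names and the `process_item` callback are illustrative, not part of the actual UiPath activities:

```python
from collections import deque

# Hypothetical stand-in for an Orchestrator queue.
work_queue = deque()

def dispatcher(rows):
    """Load/Dispatcher: read each row, add its Reference Number to the queue."""
    for row in rows:
        work_queue.append(row["Reference Number"])

def performer(process_item):
    """Work/Performer: pull the next available item and work it until empty."""
    results = []
    while work_queue:
        reference = work_queue.popleft()
        results.append(process_item(reference))
    return results

rows = [{"Reference Number": "REF-001"}, {"Reference Number": "REF-002"}]
dispatcher(rows)
processed = performer(lambda ref: f"done:{ref}")
```

The point of the split is that either half can be rerun independently: the Dispatcher only loads work, and the Performer only consumes it.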

With a second process (Work/Performer) that retrieves the next available item and completes the case.

Obviously, not everyone has access to a Production instance of Orchestrator, so this may not be feasible.

In that circumstance, retrieve the Reference Number one row at a time, process it, then increment the counter and work the subsequent case.

For instance, have a counter (of type Integer) starting at 0 (if appropriate), and access your DataTable with yourDataTable.Rows(intCounter).Item("Reference Column").ToString. Process the case, then increment the counter (intCounter = intCounter + 1).
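As a rough illustration of this counter approach, here is a Python sketch using a list of dicts in place of the DataTable; the column name is taken from the example above, everything else is illustrative:

```python
# Simulated DataTable: one dict per row.
data_table = [
    {"Reference Column": "REF-001"},
    {"Reference Column": "REF-002"},
]

int_counter = 0
worked = []
while int_counter < len(data_table):
    # Access the current row by counter, like Rows(intCounter).Item(...)
    reference = str(data_table[int_counter]["Reference Column"])
    worked.append(reference)          # stand-in for "process the case"
    int_counter = int_counter + 1     # increment to move to the next row
```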

However, with this approach you run the risk of losing the counter value upon failure, making it difficult to know which cases were worked.

With all of this said, have you seen the ReFramework?

Thanks in advance,


As Josh said, there is a risk depending on the failure, but you can pass intCounter as an In/Out argument between the main work function and any exception handling if need be.

Attached is an example flow.
Main.xaml (14.8 KB)




Hi Josh.

We are currently just using the Trial version of UiPath as we are working on a Proof of Value exercise. We will take a look at ReFramework, thanks for the info.


Thanks Tim, that makes it much clearer.

I have built the state machine and workflows up to the Get Work process. I will look at your xaml in more detail on Monday and let you know how I get on.

Hopefully it helps!

The ReFramework has some good concepts and is being used as the standard for RPA devs and across many consulting firms. However, there are other things to consider as well.

Each sub-process component should use arguments that pertain to an individual item. You don't want to read the entire data set, or loop over multiple items, within the sub-process component; if you were to test the sub-process, you would tell it to run one single item, for example a single invoice number.

Ideally, you also don't want to rely on an entire data row being in a certain format, so use individual arguments for each value the sub-process needs. You will probably also want to update the status of the item only within Main rather than in the sub-process, though that can depend on the sub-process, as some may require the status be updated inside the sub-process itself.

So, when you invoke the sub-process, send it each value that the particular component needs rather than the entire row; this also allows flexibility in how the component is used.
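To illustrate the per-item-argument idea, here is a hypothetical Python sketch; `process_invoice`, its parameters, and the status handling are assumptions for illustration, not part of the original post:

```python
# Hypothetical sub-process that takes individual arguments rather than
# a whole row, so it can be tested on one item in isolation.
def process_invoice(invoice_number: str, amount: float) -> str:
    """Work a single item; returns a status for the caller (Main) to record."""
    if not invoice_number:
        raise ValueError("invoice_number is required")
    return "Completed"

def main_loop(rows):
    statuses = []
    for row in rows:
        # Pass only the fields this component needs, not the entire row.
        status = process_invoice(row["InvoiceNumber"], row["Amount"])
        statuses.append((row["InvoiceNumber"], status))  # status kept in Main
    return statuses
```

Because the sub-process has no knowledge of the row layout, you can call it directly with test values without building a DataTable first.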

Here are some thoughts on Framework use, and sorry but sometimes I get a little deep in this :smiley:

For example, say you have items to process: you get the first item, then invoke 3-5 sub-processes, such as downloading documents from a few web interfaces and updating their completion in your internal system. The loop then continues on to the next item and repeats the same steps. So, what if one of the sub-processes hits a random exception and a retry attempt is made? Do you want to perform all the sub-processes again for that item, or start off where it left off? Technically, you could add some logic into or before each sub-process so it is only performed based on a condition, like a file already existing or a status recorded in a file. However, that logic isn't always already present in every shared sub-process workflow you tend to reuse, and requiring it isn't always a good solution, although it is an approach that would definitely work.

Additionally, when your sub-process throws an exception, how do you get information back from it, such as which part of the process failed, so you know exactly which parts completed successfully? When an exception occurs, all arguments fail to be passed back out, so including that information in an Out argument won't work. The question then becomes: do you require that every shared workflow component / sub-process being invoked be surrounded by another Try/Catch that doesn't throw the exception but instead returns it as a variable? I'm not a fan of that either, but it is another approach that might work, at least until you get tired of always receiving exceptions as a variable instead of an actual thrown Exception, and until you collaborate with other developers who don't take this approach with their shared components.

I can't say I'm a fan of using counters as a way to iterate through items. It's like setting up a loop like this: For i As Integer = 0 To list.Count - 1
instead of like this: For Each item In list

And, you can get lost tracking which item you are currently on if you increment/decrement something incorrectly. So, I almost feel like the Get Transaction Item state should be part of the same state as the Process state, so a For Each can be used rather than a counter loop. Then again, it can work either way.
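The two loop styles being compared look roughly like this in Python; both visit the same items, but only the first has a counter that can be mismanaged:

```python
items = ["REF-001", "REF-002", "REF-003"]

# Counter-style loop: easy to get the bounds or the increment wrong.
seen_by_index = []
i = 0
while i < len(items):
    seen_by_index.append(items[i])
    i += 1

# For-each style: no counter to lose track of.
seen_by_foreach = [item for item in items]
```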

So those things being said, if you were to integrate your sub-processes into the vanilla ReFramework, you will find that the entire process for the item will start over again when exceptions occur.

I don't want to veer you off too far from the ReFramework, but a current solution I use to continue from the sub-process that failed (and one I'm looking to improve) is to store each section of the process in a list. You then track the current sub-process from that list and enter a Switch that jumps to that sub-process's index, looping back to get the next sub-process from the list. If an exception occurs, the sub-process string is used in the Log Message, and the index is used again in the Switch, so the item resumes at the last sub-process that still needed to be performed.

Here is just a snippet of an old version of this idea that I have been using successfully. Basically, it stores the string of the current sub-process from a list of sub-processes, then uses the index to jump to whichever sub-process it still needs to perform. If the whole process completes for that item, it resets the index to the start for the next item.
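A minimal Python sketch of this resume idea, assuming a hypothetical list of step names and a transient failure on the second step (the original is a UiPath Switch, so this is only the logic, not the workflow):

```python
# Steps of the process for one item; names are illustrative.
steps = ["GetDocuments", "UpdateSystem", "SendConfirmation"]

def run_item(run_step, start_index=0):
    """Run steps from start_index; on failure, return the index to resume at."""
    index = start_index
    while index < len(steps):
        try:
            run_step(steps[index])
        except Exception:
            return index              # resume later from this failed step
        index += 1
    return 0                          # item done; reset index for next item

attempts = []
def flaky(step):
    """Fails once on 'UpdateSystem' to simulate a transient error."""
    attempts.append(step)
    if step == "UpdateSystem" and attempts.count("UpdateSystem") == 1:
        raise RuntimeError("transient failure")

resume_at = run_item(flaky)                      # stops at the failed step
final = run_item(flaky, start_index=resume_at)   # retries from that step only
```

Note that the retry re-enters at "UpdateSystem" without repeating "GetDocuments", which is the whole point of tracking the index.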

Eventually, I'm hoping to have a Process workflow that fully integrates with the ReFramework, so that I, my colleagues, and other developers would no longer need to think about how to utilize or migrate a series of sub-processes into a Framework. In theory, even a caveman should be able to do it!

Wow, you read this far. GG :stuck_out_tongue:



Hey @ClaytonM would you mind sharing this workflow you just posted with a screenshot?
I’m trying to do something similar