How would you design this using the RE Framework (using Queues) with Dispatcher/Performer?

Hi,

This scenario should be common to a lot of automations. I’m curious how you would design it.

Input: Excel Files
Excel File Rows: PO-Line Item combinations

  • for example: a PO that has 3 line items would have 3 rows, a PO that has 1 line item would have 1 row. You get the idea. Something like this:
    image

The goal is to create a text file for each PO that contains all its line items. For example:
image
image

Must use the Queue. Can be more than 1 queue. At least 1 queue must have PO as transaction item.

Please note that the question is how would you design this using the RE Framework with Queues and with Dispatcher/Performer.

If you think Dispatcher/Performer is not suitable, why and what should be used instead?

Being such a simple process, I wouldn’t have a separate dispatcher/performer. I would just have one automation that first loads the file(s) into the queue then starts pulling from the queue and processing them. But separate dispatcher/performer wouldn’t be a problem, they’d just both be very simple.

Also, I wouldn’t use RE Framework for this. Too simple of an automation.

Hello,

thanks for an interesting question! :slight_smile: Let’s think:

  1. I’d use an Orchestrator queue and dispatcher/performer. Firstly because it enables you to scale the process - when you have 2 virtual machines, one can dispatch and the other can perform. It also enables you to use REFramework - without a dispatcher/performer the framework doesn’t work properly.

  2. The process design is simple. Dispatcher dispatches queue item (reference = PO number, SpecificContent = items). Performer performs the operation.
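That dispatcher shape can be sketched in a few lines. This is a Python stand-in, not UiPath code: the "PO"/"Item" column names and the dict-based queue-item model (Reference plus JSON-serialized SpecificContent) are illustrative assumptions of this sketch.

```python
import json
from collections import defaultdict

def build_queue_items(rows):
    """Group PO-line rows into one queue item per PO.

    Illustrative model only: Reference = PO number, and the PO's line
    items serialized as JSON into SpecificContent, as described above.
    """
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["PO"]].append(row["Item"])   # collect line items per PO
    return [
        {"Reference": po,
         "SpecificContent": {"Items": json.dumps(items)}}
        for po, items in grouped.items()
    ]
```

The performer would then deserialize `SpecificContent["Items"]` and write the text file, without re-reading the Excel file.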

All in all I wouldn’t agree that you don’t need a dispatcher/performer and REFramework. In my opinion it’s always better to use this approach if the process is transactional. It’s more stable and less prone to errors. You also get a better view for further process analysis, since failed/succeeded transaction items are shown. Not to mention very good exception handling in the framework - out of the box.

Best,

Artur

Using one process for both dispatching and performing is also quite dangerous - if you set up the queue to retry, it’ll repeat dispatching and performing the transaction item forever. :smiley:

Tested it out and works like a charm. :stuck_out_tongue:

That’s only if you design it wrong.

RE Framework is one big black box, not appropriate for simple automations. It’s even right there in the UiPath description: “being ready to tackle a complex business scenario”

This is not a complex business scenario.

“It’s always more stable and less prone to errors.”

Not if you code properly. It sounds like maybe the issue is not doing much coding from scratch and not knowing how to handle errors etc, partly because of falling back on “RE Framework already does it for me.”

“by showing failed/succeed transaction items.”

You can do this with any automation, you don’t need RE Framework for this.

" Not to mention very good exception handling in the framework - out of the box."

Write your own very good exception handling. It’s easy.

I wouldn’t agree that it’s not a complex process. That depends on what you’re trying to achieve. I’d still stick to using the template rather than saying: it’s a simple process so you can just write a sequence. :slight_smile: It’s just dangerous to think that way - sometimes even a seemingly simple process turns out not to be so simple once it’s developed. :smiley:

Huh? He told us the process. Load a file, write queue items, create a text file. That’s simple. It’s not dangerous to analyze processes and determine their complexity. It’s dangerous not to. You end up with simple automations that become complex because they’re designed poorly.

I’ve shared my opinion. :slight_smile: You can have a different one, that’s not bad. Meanwhile good day to you!


This is not appropriate for a dispatcher and performer model as described or suggested by others.
@artur.stepniak I do not believe you have thought this through.

The dispatcher cannot simply make a queue item for each line, because each line is not a unit of work. If you consider a PO number a unit of work then you don’t want multiple queue items for the same PO number, which you would get if you simply made a queue item for each line.

Furthermore, this would mean that during each transaction a file representing the PO number would either be generated or appended to. You would never have a clear indication that the file is ‘completed’: are there more transactions to be processed? Can the file be moved to the next step of your process? It is unknown.

So now at this point you need your dispatcher to read over the entire file, then only get unique PO numbers, but if it is going to all this effort now, just have it create the files instead of the queue items. All you are doing otherwise is adding another layer of complexity in another bot which must read the same file, filter by the same PO number and build the text file. What a cumbersome and bloated design.

Here is how I would do it. I would consider a file to be a unit of work (transaction). The bot would open the file and load the data into a DataTable.
I would then use a LINQ expression to get the unique values in the PO column as an IEnumerable.

Then, using a for-each loop, I would iterate over each string. In each iteration, filter the table you already read from Excel by the PO number, iterate over each filtered row to build a string containing all the data, then write that to a file.
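The steps above can be sketched like this. This is a Python stand-in for the described workflow, assuming "PO" and "Item" column names; the real implementation would use a DataTable and a LINQ expression instead of dicts.

```python
from collections import defaultdict
from pathlib import Path

def write_po_files(rows, out_dir):
    """Write one text file per unique PO, containing all of its line items.

    rows: list of dicts standing in for the Excel DataTable rows;
    the "PO" and "Item" column names are assumptions for this sketch.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    grouped = defaultdict(list)
    for row in rows:                      # stands in for filtering by unique PO
        grouped[row["PO"]].append(row["Item"])
    for po, items in grouped.items():     # one output file per PO
        (out / f"{po}.txt").write_text("\n".join(items))
    return grouped
```

Grouping once in memory avoids re-reading the source file per PO, which is the point of treating the whole file as the transaction.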

Thanks all for your insights.

@artur.stepniak I agree that having it as dispatcher/performer enables us to scale. That is the reason I want to make it work with the said approach.

However, as @Jon_Smith pointed out, if PO is the transaction (queue item), I have to find a way to add line items as specific content. And with that effort, I might as well write that to a text file already.

@Jon_Smith you raised good points. In your suggested design with the file being the unit of work, I assume it’s only 1 process and not 2 (dispatcher/performer)? I’m wondering how this would work if more bots are to be added in the future?


I would stick to REF, even though it can be done with simpler workflow design.
Why… most of the basic control mechanisms are there, and you don’t have to worry about that part of the design.

You can go with separate dispatcher / performer, but that is a bit overkill unless you indeed need scalability over multiple robots, in case of large volumes.

A single REF, as a hybrid: a dispatcher workflow in the Init state reads your Excel file and adds a queue item for each PO. The challenge is bundling the row data from the Excel file into a single queue item.

Here I see a few options.

  • You can repeatedly filter the Excel-extracted DataTable per unique PO, convert each filtered set to a JSON string, and add that as specific content to the queue item.
    In the process/performer part you translate that JSON string back into the line-item info.
    → In my opinion this is a bit messy on the dispatcher side, and with large orders containing many line items you might hit the specific content size limit. (I read in another post that this is 1 MB; either way, you might want to avoid unneeded storage of this data.)

  • Just add a queue item for each PO number, with the queue set to enforce unique references. Capture the duplicate-reference errors and ignore them. Add the source filename as a specific content parameter.
    In the process part you open the source file again based on the specific content, filter it on the PO, and you have the same data available again.
    → You’d need to ensure proper file access to the same Excel file (read-only / shared file) if you want multiple bots to process the same info in parallel.

Option 2 is probably the easiest to implement, especially if you want to lean on the existing REF instead of designing your workflow from scratch.
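Option 2 could be sketched roughly as follows. This is a Python stand-in for the dispatcher and performer halves; the dict-based queue-item shape, the duplicate-reference skip, and the "PO"/"Item" column names are all assumptions of this sketch, not UiPath API calls.

```python
def dispatch(rows, source_file):
    """One queue item per unique PO; only the source file path travels as specific content."""
    items, seen = [], set()
    for row in rows:
        po = row["PO"]
        if po in seen:        # stands in for the queue rejecting duplicate references
            continue
        seen.add(po)
        items.append({"Reference": po,
                      "SpecificContent": {"SourceFile": source_file}})
    return items

def perform(item, rows):
    """Re-filter the source data by the PO in the reference and build the file body."""
    po = item["Reference"]
    return "\n".join(r["Item"] for r in rows if r["PO"] == po)
```

The queue item stays tiny (just a PO number and a filename), at the cost of the performer re-reading and filtering the source file for every transaction.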