Dispatcher state added in ReFramework

In our project we have added a dispatcher state to ReFramework between the Init State and the Get Transaction Data State, i.e. a Load State where we read the input file; from there it goes to the Get Transaction Data State. That state reads one transaction item, which is then processed. After all transaction items are processed, it goes to the End State.

This works for a single file, but I want it to work for multiple files. My requirement says one or more files may be present to process. Whatever data is present in each file (like id and description) needs to be added to the queue, so that processing can then happen for each id. So my question is: how do I iterate over multiple files? For example: pick one file, add its data to the queue, process each id present in that file, then come back to the Load State, check whether another input file is present, add it to the queue and process it. After all files are complete, it should end. Since I am a beginner I am not able to figure this out. Any help will be appreciated.


Good afternoon @Kavita_M !

Not sure if it’s only me, but your image did not load, anyway, hopefully I got your point right:

What you can do is first find all the files in the input folder (for example) and, for each file, create a DataTable (or an array of strings) that can be used to feed your “Process” step as many times as there are files.

You can achieve that by doing the following:

  • On the dispatcher, use the following function:

Directory.GetFiles("YourPath")
Directory.GetFiles("YourPath", "*.xlsx") → If you want to get only .xlsx files.

This should return an array of strings, with the paths of the files.

The only step left would be to send each of them, one after the other, to the Process.

This can be done by making your “in_TransactionItem” of type String, which will hold each file path, one at a time, while “TransactionData” is your array of strings containing the paths.
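Outside UiPath, the same dispatcher idea can be sketched in a few lines of Python (not UiPath code; the folder name and the `.xlsx` filter are assumptions carried over from the example above):

```python
import glob
import os

def list_input_files(folder):
    """Equivalent of Directory.GetFiles(folder, "*.xlsx"):
    one path per input file, sorted for a stable processing order."""
    return sorted(glob.glob(os.path.join(folder, "*.xlsx")))

# The returned list plays the role of TransactionData, and each
# path becomes one String transaction item handed to Process.
```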

Hope it helps!


Hi @ignasi.peiris

Actually I don’t want to send the input file itself as the queue data; each input file has columns like id and description, and those need to be added to the queue so that processing happens for each id.

So, for example: one input file in Excel format has 20 rows that are added to the queue → process them → create an output file with those 20 rows from that same input file; another input file has 10 rows that need to be added to the queue and processed.

In the Load State I am reading the input file, which has 20 rows of data; the output is a DataTable, and that data is added to the queue.


Got it!

Then you can use the 2nd approach that I stated in the previous answer:

On the dispatcher, you’ll need to build a DataTable with one row per file, containing all the info you need for each file.

Example: 10 files to be processed → the dispatcher output will be a DataTable with 10 rows.

Your TransactionData variable will be of type “DataTable”, and your in_TransactionItem will be of type “DataRow”.

This way you’ll iterate the Process 10 times, each time with the info from one unique input file 🙂
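As a rough Python sketch of that dispatcher (not UiPath code; the column names are illustrative):

```python
import glob
import os

def build_dispatcher_table(folder):
    """Build the dispatcher output: one row (here a dict) per input
    file, carrying whatever info the Process step needs for that file."""
    rows = []
    for path in sorted(glob.glob(os.path.join(folder, "*.xlsx"))):
        rows.append({"FilePath": path, "FileName": os.path.basename(path)})
    return rows

# `rows` plays the role of the TransactionData DataTable, and each
# dict plays the role of the in_TransactionItem DataRow.
```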

Hope this helps now!


Maybe I am wrong, but what I understood is: read all the file data into a DataTable and then process each row. Let’s say I read 150 rows total from 3 input files (50 rows per input file) and process them. If I get an application issue, how will I know that transaction rows 1–50 from the 1st input file completed, so that I can create its output file?


No worries @Kavita_M , let me show you what would happen:

We created the dispatcher in the Init phase, so what’s important is that it sits inside the “First Run” part of the Init phase:

This is important because if the bot fails with a system/application exception, it will return to the Init phase, but it will NOT enter the “First Run” sequence again. Therefore we will not overwrite the original dispatcher data.

  • Let’s imagine it fails at the 10th transaction. That yellow line will be the path taken, which leads back to Init, but it will not go inside the “First Run” sequence. It will therefore “restart” all the apps and continue with the 11th transaction.

If you want to create an output file for each input file, what you can do is first record how many rows were originally in each file, and whenever you reach that transaction number, create the report and send it if needed.
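In plain terms, the per-file report points can be computed from the original row counts; a minimal Python sketch of that bookkeeping (not UiPath code):

```python
def report_boundaries(rows_per_file):
    """Given the original row count of each input file (in dispatch
    order), return the transaction numbers at which a per-file
    report is due."""
    boundaries = []
    total = 0
    for count in rows_per_file:
        total += count            # cumulative row count so far
        boundaries.append(total)  # report after this transaction number
    return boundaries

# Three files of 50 rows each -> reports after transactions 50, 100 and 150.
```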

Hopefully I did understand your request here!

Best Regards,


Thanks for the explanation @ignasi.peiris. I have one doubt about what you said: “If you want to create a Output file for each “input file”, what you can do is 1st of all, know how many rows were originally on each file, and whenever you reach that transaction number, create the report and send it if needed.”

Let’s say I have 10 rows of data in one input file and 20 rows in another, so the total is 30 rows. When the transaction number reaches 10, an application issue causes that item to be appended to the queue again. So how will my row counts line up with the transaction numbers in this scenario, since the 11th transaction will no longer be the first row of the other file’s data?


Hi @Kavita_M ,

To understand this in depth, could you show us with screenshots the actual implementation of your Load Transaction Data State and Get Transaction Data State?

My Assumptions of both the States are as Below :

Load Transaction Data State :

  1. Check if an input file is present in the specified directory; if present, read its contents as a DataTable.

  2. Add the rows present in the DataTable to a queue (using either Bulk Add Queue Items or Add Queue Item).

  3. Next, move the file to another folder to mark it as processed.

Get Transaction Data State :

  1. Normal Get Transaction Item Activity, retrieving the Transaction Items added to the Queue.

Now, in this case, when there is an application exception, it moves to Init and then back to Get Transaction Data to check if there are queue items. If there are, it processes them, meaning it processes all the items of the first file.

So we could write the output data in the Load Transaction Data State with the condition that the retrieved QueueItem is Nothing and OutputDT.Rows.Count > 0.

I believe that even with application exceptions, it will only write the data when there are no queue items left to process, i.e. when all the items of the first file have been processed.
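The suggested write condition boils down to this (a Python sketch, not UiPath code; `next_queue_item` stands for the Get Transaction Item output and `output_rows` for OutputDT’s rows):

```python
def should_write_output(next_queue_item, output_rows):
    """Write the report only when Get Transaction Item returned
    Nothing (no items left) and rows have been accumulated."""
    return next_queue_item is None and len(output_rows) > 0
```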

But maybe we would need to alter the condition as needed.

Let us know if this doesn’t work as expected.


Hi @Kavita_M ,

You have multiple options, feel free to use the one you understand the best!

1- On the Dispatcher DT, add a column with the filename. Every time the bot finishes the Process, you can have a parallel DataTable, which you’ll be filling with the status of each transaction.

Example:

  • inputDataTable.Clone will create a copy of the DT’s structure (columns only, no data), so you can fill it on each iteration with the “Add Data Row” activity.

So in each iteration, you’ll have a new row added.
If there is a System Exception and you retry a transaction, you’ll either have duplicate rows (where you know the last one is the most recent), or you can build logic to check whether the row already exists and, if so, update it instead of adding a new one.

To identify the end of a file, you can check whether the filename you’re processing is different from the previous one; if so, you create the report, otherwise you’re still in the same file.


  • If transactionItem(“File”) is equal to the previous transaction’s (or it is the 1st one) → you have not changed file yet.

  • Else, if the name is now “File2.xlsx” and in the previous transaction it was “File1.xlsx”, you’re in a new file, so you can take all the data you’ve been accumulating and prepare a report.

Once the report is done, clear the table so you can start filling it again with data from the new file.
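Putting option 1 together as a Python sketch (illustrative only, not UiPath code; `write_report` and the “File”/“Status” columns are assumed names):

```python
def process_transactions(transactions, write_report):
    """Each transaction is a dict carrying the source filename under
    "File". Accumulate per-transaction results; when the filename
    changes, flush a report for the previous file and start fresh."""
    results = []          # plays the role of inputDataTable.Clone
    previous_file = None
    for item in transactions:
        if previous_file is not None and item["File"] != previous_file:
            write_report(previous_file, results)  # new file: report the old one
            results = []                          # clear it for the new file
        results.append({"File": item["File"], "Status": "Success"})
        previous_file = item["File"]
    if results:
        write_report(previous_file, results)      # report for the last file
```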


Your assumption is correct for both states.

In the Load State we are reading all the input files from one folder and adding the data to the queue using Add Queue Item.

In Get Transaction Data we are using Get Transaction Item to get one transaction at a time.

In Process we are processing each transaction item irrespective of filename.

Now, if one input file has more than 50 items, I have to restrict each output file to 50 items.

What I did in the Load State is read all the data from all files, irrespective of filename, and add it to the queue using Add Queue Item.

In the Get Transaction Data State I wrote a condition before getting each transaction item from the queue: if (TransactionNumber - 1) mod 50 = 0, then create an output file, to whose name I append the filename.

In Process it is processing according to the business requirement.

In the Finished Processing transition I am creating an output file for the remaining data left.

This is working for all the data added to the queue from multiple files.

What the output should be:

If one input file “abcfile.xlsx” has 101 items, then 3 output files should be created: 1st file 50 items, 2nd file 50 items, 3rd file 1 item.

If two input files, abc and xyz, are present with 101 and 30 items respectively, then 4 output files need to be created: 1st file 50 items, 2nd file 50 items, 3rd file 1 item, and 4th file 30 items, since the 4th file is from a different input file.
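To make the expected splitting concrete, here is a rough Python sketch of the rule I am after (not UiPath code): start a new output file at every 50 rows or whenever the source filename changes.

```python
def split_into_output_files(items, max_rows=50):
    """`items` is a list of (source_filename, row) pairs in queue order.
    A new chunk starts when max_rows is reached OR the filename changes,
    so no output file ever mixes rows from two input files."""
    chunks = []
    current, current_file = [], None
    for filename, row in items:
        if current and (filename != current_file or len(current) == max_rows):
            chunks.append((current_file, current))  # flush the finished chunk
            current = []
        current.append(row)
        current_file = filename
    if current:
        chunks.append((current_file, current))      # flush the final chunk
    return chunks

# abc with 101 rows and xyz with 30 rows -> chunks of 50, 50, 1 and 30 rows.
```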

What I am getting now:

If two input files, abc and xyz, are present with 101 and 30 items respectively, it is creating 3 output files: 1st file 50 items, 2nd file 50 items, 3rd file 31 items.

Now I am stuck on how to check by filename as well. Looking for some way around this.
