Most efficient way to do this automation?

Hello,

I currently have an automation that handles some Excel files for me. I have no issues with it, but I'm curious to hear from you guys whether I could have made it more efficient. Whether it takes 1 or 2 minutes to run doesn't matter in this scenario - I'm simply curious.

The automation does the following:

  • Loops through all Excel files in a folder
  • Reads certain values, and writes them to specific cells

Next step

  • When done, it loops through the files again and reads the values we just wrote
  • Those values are written to another sheet
  • The file is converted to PDF

Next step

  • Once again we loop through all the files, now both PDF and Excel
  • Each file is moved to another location
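
For reference, here is the structure in rough Python terms (not my actual implementation - the folders and helper functions are just placeholders for the Excel activities):

```python
from pathlib import Path
import shutil

source = Path("input")       # placeholder folders, not my real paths
dest = Path("processed")

def process_cells(xlsx: Path) -> None:
    print(f"pass 1: read/write cells in {xlsx.name}")    # stand-in for the real Excel work

def copy_to_sheet_and_pdf(xlsx: Path) -> None:
    print(f"pass 2: copy values + convert {xlsx.name}")  # stand-in for sheet copy + PDF export

# Pass 1: read certain values, write them to specific cells
for f in sorted(source.glob("*.xlsx")):
    process_cells(f)

# Pass 2: read the values back, write to another sheet, convert to PDF
for f in sorted(source.glob("*.xlsx")):
    copy_to_sheet_and_pdf(f)

# Pass 3: move both the Excel files and the PDFs
dest.mkdir(exist_ok=True)
for f in list(source.glob("*.xlsx")) + list(source.glob("*.pdf")):
    shutil.move(str(f), dest / f.name)
```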

Would it have been more efficient to be completely done with one Excel file before moving on to the next one?

For example:

  • Loop through files
  • Read values
  • Write to another sheet
  • Convert to PDF and move both files
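
Using the same placeholders as in the sketch above, that alternative would collapse into one loop:

```python
# One loop, each file handled end-to-end (placeholders from the sketch above)
dest.mkdir(exist_ok=True)
for f in sorted(source.glob("*.xlsx")):
    process_cells(f)              # read values / write cells
    copy_to_sheet_and_pdf(f)      # other sheet + PDF export
    shutil.move(str(f), dest / f.name)
    pdf = f.with_suffix(".pdf")
    if pdf.exists():
        shutil.move(str(pdf), dest / pdf.name)
```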

I hope it makes sense.

Looking forward to seeing your replies! :slight_smile:

An approach you could have taken is to do this in a Do While loop, taking the count of Excel files in the directory first - outside of the Do While. Then perform a truncated version as you indicated in your last statement. The condition for your Do While loop would be something like excelFileCount > 0, and after one file is done you would reduce your count with an Assign: excelFileCount = excelFileCount - 1. This would save you from continually re-reading the Excel files.
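
Not UiPath, but the shape of that loop in Python, to make the counting explicit (folder and the per-file step are made up):

```python
from pathlib import Path

folder = Path("input")                      # placeholder directory
excel_files = sorted(folder.glob("*.xlsx"))
excel_file_count = len(excel_files)         # counted once, outside the loop

while excel_file_count > 0:                 # the Do While condition
    current = excel_files[excel_file_count - 1]
    print(f"handle {current.name} end-to-end")  # truncated per-file version goes here
    excel_file_count -= 1                   # the Assign: excelFileCount = excelFileCount - 1
```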

I would start by building a DataTable (DT) and adding the rows to it with Add Data Row, using an array. Then write the DT to the new sheet, convert to PDF, then use a Clear Data Table to start the next file.
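
The same pattern outside UiPath, sketched with pandas (the column names and output file are made up for illustration):

```python
import pandas as pd

rows = []                                    # plays the role of the DT
for name, value in [("A", 1), ("B", 2)]:     # stand-in for your reads; each append ~ Add Data Row
    rows.append({"Name": name, "Value": value})

pd.DataFrame(rows).to_excel("output.xlsx", sheet_name="Summary", index=False)  # one write per file

rows.clear()                                 # the Clear Data Table step, ready for the next file
```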

Both options you present are valid, though I would have gone for your second suggestion: completely handling 1 file end2end before moving to the next one.

It would fit nicely into the REFramework, where one process loop handles 1 entire file, and in your Init you just dispatch the filenames into a queue, once.
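
Roughly, as a Python sketch of that dispatcher/performer split (the queue and the per-file step are illustrative, not REFramework code):

```python
from pathlib import Path
from queue import Queue

# Init / dispatcher: push every filename into the queue, once
work = Queue()
for f in sorted(Path("input").glob("*.xlsx")):
    work.put(f)

# Process loop / performer: one transaction = one entire file
while not work.empty():
    f = work.get()
    print(f"process {f.name} end2end")   # read, write, convert, move - one file at a time
```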

Will it increase or reduce processing time? Probably not significantly. But the greatest benefit is your error handling.

In your first scenario, if one of the Excel files is locked by another user, corrupted somehow, or whatever can happen with Office files, your entire loop will run into an error, which will either repeat itself in the follow-up loops or stall the loop completely, resulting in 0 transactions being completed end2end. So you need rather redundant error handling to keep it working.

If it happens on 1 end2end transaction, you can easily finish the rest end2end with one single error handler.
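
In sketch form, that single error handler sits inside the per-file loop, so one bad file is logged and skipped while the rest still complete (placeholder per-file step again):

```python
from pathlib import Path

failed = []
for f in sorted(Path("input").glob("*.xlsx")):
    try:
        print(f"process {f.name} end2end")   # placeholder for the real per-file work
    except Exception as err:                 # locked, corrupt, whatever Office throws at you
        failed.append((f.name, err))         # log it and move on to the next file

print(f"{len(failed)} file(s) failed, everything else completed end2end")
```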

True - I have run into corrupt files or files opened by others - use a Try Catch. If you are using a state machine, you can route a BusinessRuleException path to the end state and then, in your Catch, throw a new BusinessRuleException.

Thanks for your inputs. It’s always interesting to see how others would have solved it.

@Jeroen_van_Loon I will take a look at the REFramework. Right now I have no real way of handling errors, but it's a small automation just for myself, so it's not a huge issue so far.

@Chris_Bolin What would be the benefit of adding it to a DT before manipulating the data?

It's faster and you have more control over the data… it avoids re-reading the Excel file… which, depending on the size, can cause latency in your automation
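
To illustrate the re-reading cost (openpyxl here just as a stand-in, and the file/cell references are made up; in UiPath, think many Read Cell activities vs one Read Range):

```python
import openpyxl

# Slow pattern: re-open the workbook for every single value you need
def get_cell(path: str, ref: str):
    wb = openpyxl.load_workbook(path, read_only=True)
    value = wb.active[ref].value
    wb.close()
    return value

# Faster: load once, then pull everything from memory
wb = openpyxl.load_workbook("input.xlsx", read_only=True)
sheet = wb.active
values = [sheet[ref].value for ref in ("A1", "B2", "C3")]
wb.close()
```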