Need to read text files with different structures and insert the results into a SQL data table

I need to read 30+ text files, each with a different structure. We need only the Mfg part no and quantity fields from each text file, and we need to insert those results into a SQL data table.

But the challenges are:

  1. A text file might use comma- or space-separated values
  2. Each text file's headers are different, and some text files have no header at all
  3. Each text file has a different structure

Please refer to the screenshot below.

Can you please help me with how to handle this scenario without a Switch activity?

An automation can’t just figure out the format of the files on its own. You either have to fix your source file generation process so the files are all the same, or build a way for the automation to branch depending on which type of file it’s loading.

“without Switch activity.”

Why? It’s the correct way to do it. Or an Else If. Or a Flowchart with Flow Decisions. There’s no way around it being your job to write the automation to branch based on the file format.
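For illustration, here is a minimal sketch of what that branching looks like, written in Python as pseudocode for the workflow logic (in UiPath it would be a Switch, Else If, or Flow Decision). The delimiter-sniffing heuristic is an assumption, not part of the original process:

```python
import csv

def load_rows(path: str) -> list[list[str]]:
    """Branch on a guessed separator; each branch is one 'case'.

    Adding a new format means adding a new branch, which is the
    enhancement work there is no way around.
    """
    with open(path, encoding="utf-8") as f:
        sample = f.readline()
        f.seek(0)
        if "," in sample:        # comma-separated file
            return list(csv.reader(f, delimiter=","))
        if "\t" in sample:       # tab-separated file
            return list(csv.reader(f, delimiter="\t"))
        # fall back to splitting on runs of whitespace
        return [line.split() for line in f]
```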

My manager doesn’t want to use the Switch method.

Right now we have only 30 txt files, but we may add a few more vendor txt files to the same process in the future. If we add a new vendor txt file, then we would need to add that vendor to the Switch activity, right?
He doesn’t want to do that enhancement every time.

Is there any way to read the text files in SQL and insert the results into a table?

I’m sorry your manager doesn’t want things to work the way they work. This is how they work. Automations can’t just do things on their own. Yeah, if you add a new file format you’ll have to add code to handle it. This is true whether you use a Switch or Else If or any other branching method. You have to tell it the format to use, period. No other choice.

Another option is creating an Excel “mapping” file that has very detailed information on every possible file:
One row per file, with its name, separator, column sequence in the file, what it maps to in your table, etc.

Then you simply loop through the mapping table, opening every file, picking up the information from its row, then standardizing it and uploading it to your SQL table.

This way, when a new file comes in, or there is a change in the format, you don’t need to modify the code, just the Excel mapping itself.
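To make the idea concrete, here is a minimal sketch of such a mapping-driven loop, in Python for illustration (in UiPath the same logic would be a Read CSV plus For Each Row arrangement). The mapping column names used here (FileName, Separator, PartNoColumn, QtyColumn, HasHeader) are assumptions; use whatever your mapping file actually defines:

```python
import csv

# Hypothetical mapping CSV, one row per vendor file:
#
#   FileName,Separator,PartNoColumn,QtyColumn,HasHeader
#   file1.txt,\t,MFG PART,QTY ONHAND,yes
#   file2.txt,",",PartNumber,Quantity,yes
#   file3.txt, ,0,3,no        (no header: columns given by index)

def load_standardized(mapping_path: str) -> list[tuple[str, str]]:
    """Return (part_no, qty) pairs from every file listed in the mapping."""
    out: list[tuple[str, str]] = []
    with open(mapping_path, newline="", encoding="utf-8") as m:
        for spec in csv.DictReader(m):
            sep = spec["Separator"].replace("\\t", "\t")
            with open(spec["FileName"], newline="", encoding="utf-8") as f:
                rows = list(csv.reader(f, delimiter=sep))
            if spec["HasHeader"] == "yes":
                header, rows = rows[0], rows[1:]
                part_i = header.index(spec["PartNoColumn"])
                qty_i = header.index(spec["QtyColumn"])
            else:
                # headerless file: the mapping stores column positions
                part_i = int(spec["PartNoColumn"])
                qty_i = int(spec["QtyColumn"])
            # keep only the two standard columns, drop everything else
            out.extend((row[part_i], row[qty_i]) for row in rows)
    return out
```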

I have been running something very similar for a few years now, and it deals with a few hundred files that are always changing. I have only made code changes a few times, for new and exotic file formats or process-related changes.

@ssavickas1 “One row per file, with its name, separator, column sequence in the file, what it maps to in your table, etc.”

If possible, can you please send a screenshot or something to help me understand it?

@rumabharathy
It’s a really tricky case.
Did you try the Generate Data Table From Text activity?
Select the Column Separators as [Comma][Space].

Something like this:

[screenshot of the Generate Data Table activity settings]
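If it helps to see what that activity is doing, here is a rough Python equivalent of listing both Comma and Space as column separators: each line is split on a comma or a run of whitespace. This is purely illustrative, and the sample data is made up:

```python
import re

def generate_data_table(text: str) -> list[list[str]]:
    """Split each non-empty line on commas or runs of whitespace."""
    return [
        re.split(r"[,\s]+", line.strip())
        for line in text.splitlines()
        if line.strip()
    ]

sample = "MFG-001, 12\nMFG-002 7"
print(generate_data_table(sample))  # [['MFG-001', '12'], ['MFG-002', '7']]
```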

Let’s say you need only two columns from all of these tables: part number and QTY on hand. These are the names of your “standard” columns in the SQL table.

So you loop through this mapping table. The first iteration will pick up “file1.txt”; the automation will know that it’s a txt file, read it, and know that the separator is a tab, so it will use that when reading/building the datatable. Next, you need the automation to rename the columns: it checks columns D and E in the mapping, knows what to look for specifically in that file, renames those columns to your standard versions, and drops everything else.

The same happens with the second row, and so on. It will require a few dictionaries, a few loops, and a few switches (it won’t work without them), but you use them not to handle every single format, but at a higher level: what to do when it’s tab-separated, what to do when it’s comma-separated, etc.

Once you set everything up, it all works very well. In my version, I use it to map the header row, filter columns, handle sheet names, etc.
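Once every file has been reduced to the two standard columns, the upload step is identical for all of them. A minimal sketch of that last step, using sqlite3 as a stand-in for the real SQL Server connection; the table and column names are assumptions:

```python
import sqlite3

def upload(rows: list[tuple[str, str]], db_path: str = "parts.db") -> None:
    """Insert standardized (part_no, qty) pairs into one shared table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS parts (mfg_part_no TEXT, qty INTEGER)"
    )
    # parameterized insert; one statement works for every vendor file
    con.executemany(
        "INSERT INTO parts (mfg_part_no, qty) VALUES (?, ?)",
        [(part, int(qty)) for part, qty in rows],
    )
    con.commit()
    con.close()
```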

@ssavickas1 I will try this method.
Thanks for your help!

@postwick Thanks for your response
