What's the best way to handle large record sets in a DataTable?

Hello guys, I have 8 CSVs, each containing about 380,000 records or more. I need to apply the same processing to all of them, as they are all transactions for a single day.
Although it was difficult to read them all into one DataTable, I managed to do it. I did this in chunks, but the workflow that handles this is not stable; sometimes it breaks with the error “Invoke workflow: Child job stopped with unexpected exit code 0x000003E8.”
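To show what I mean by reading in chunks, here is a rough plain-Python sketch of the idea (not my actual UiPath workflow; the chunk size, file contents, and function name are made up for illustration):

```python
import csv
import io

def read_in_chunks(reader, chunk_size):
    """Yield lists of up to chunk_size rows from a csv.reader,
    so the whole file is never held in memory at once."""
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:  # flush the final partial chunk
        yield chunk

# Hypothetical sample standing in for one of the large CSVs.
sample = "id,amount\n" + "\n".join(f"{i},{i * 10}" for i in range(10))
reader = csv.reader(io.StringIO(sample))
next(reader)  # skip the header row

total = 0
for chunk in read_in_chunks(reader, chunk_size=4):
    total += len(chunk)  # replace with the real per-chunk processing

print(total)  # 10 data rows processed across chunks of 4, 4, 2
```

In my case each chunk gets appended into the one big DataTable; that append is where the workflow sometimes dies.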

But now the issue is that I have a lot to do on this DataTable, which contains 8 million transactions. I have to do some join manipulations to remove duplicates, plus a whole lot of other activities,
but my process keeps breaking in several workflows with the same error: “Invoke workflow: Child job stopped with unexpected exit code 0x000003E8.”
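The duplicate removal I'm attempting is essentially the following (again a plain-Python sketch with made-up data; in UiPath I'm doing this with join activities on the full DataTable, which is what seems to blow up):

```python
def dedupe_by_key(rows, key_index=0):
    """Keep only the first occurrence of each key.
    Streams over the rows once, holding just the set of seen keys."""
    seen = set()
    unique_rows = []
    for row in rows:
        key = row[key_index]
        if key not in seen:
            seen.add(key)
            unique_rows.append(row)
    return unique_rows

# Hypothetical transactions: (transaction_id, amount), with repeats.
transactions = [
    ("T1", 100), ("T2", 250), ("T1", 100), ("T3", 75), ("T2", 250),
]
unique = dedupe_by_key(transactions)
print(unique)  # [('T1', 100), ('T2', 250), ('T3', 75)]
```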

I know it's due to the large DataTable, but how do I work around this?

Please help.