My impression was that one of the reasons why we have an in-memory Data Table is to create a source-agnostic data structure.
When I put my project through the debugger, I could not see any visible difference between the Data Table loaded from the Excel vs. the one that was created using the “Build Data Table” activity.
In other words, the dynamic form-binding mechanism should treat all Data Tables the same when binding, regardless of how the DataTable was created, and render the form.
A few questions:
I see that we are defining static column mappings here. So does this make it static binding then? If the Excel sheet is very wide, the form design becomes labor-intensive. I have Excel sheets that run almost up to column “AJ”!
What happens if the business owners decide to change the format/design of the underlying Excel sheet? (I’m in the middle of an Attended Automation design and one of our Excel formats changed midway between two meetings!)
We are going through two levels of translation: Excel to Data Table, and then Data Table to JSON. How will this solution perform at high volume (say, 65,000 records with 40 columns)?
Thanks for posting the solution with such clarity!
Andy
Forms don’t consider how you create the DataTable. If a DataTable exists when the form executes, Forms will render it.
The Forms DataTable can be dynamic, meaning you don’t need to give column names at design time. When you execute the workflow, we create a runtime (dynamic) schema while rendering the form, and then show it.
You can use real-time forms (the Do Block) to change the data inside the grid, but not its structure (that is, the columns are fixed once the form renders).
I gave this a shot by installing the latest package. Don’t know what I’m missing, but it does not seem to work on my end. Here are the results:
Can you provide a sample workflow that reproduces your problem? It looks interesting; I’ll try to check the difference between that workflow and the one I have that works when reading Excel into a DataTable.
@AndyMenon I don’t know if this is a viable solution, but there does indeed seem to be a problem here. While analysing the workflow and the difference between the Excel DataTable and the Build Data Table output, one sure difference is the type of the columns.
If you check the column types of the DataTable generated by Read Range, the type is System.Object, whereas the columns of the Build Data Table you used are all System.String.
So the best thing to do would be to change the column types, but I was not sure how to change the columns of a DataTable that already holds data (.NET does not allow changing a column’s DataType once the table contains rows).
Hence I created a new DataTable, dynamically added the columns from the Read Range DataTable to it with the type String, and then imported all the rows from the Excel DataTable into the new one. This way I could get the values to appear in the form.
Not sure if you were expecting a different kind of answer, but this is what I got. There might be more to dig up to handle column types other than String.
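The steps above can be sketched in C# (for example inside an Invoke Code activity). This is a minimal sketch, not the exact workflow from the post; `dtExcel` and `dtForForm` are illustrative variable names:

```csharp
using System;
using System.Data;

// dtExcel: the DataTable produced by Read Range (columns typed as System.Object).
// Build a new table with the same column names, but typed as string.
DataTable dtForForm = new DataTable();
foreach (DataColumn col in dtExcel.Columns)
{
    dtForForm.Columns.Add(col.ColumnName, typeof(string));
}

// Copy every row, converting each value to its string representation.
// (row[col] is never null in a DataTable; missing cells hold DBNull,
// whose ToString() returns an empty string.)
foreach (DataRow row in dtExcel.Rows)
{
    DataRow newRow = dtForForm.NewRow();
    foreach (DataColumn col in dtExcel.Columns)
    {
        newRow[col.ColumnName] = row[col].ToString();
    }
    dtForForm.Rows.Add(newRow);
}
```

The explicit per-cell copy avoids relying on ImportRow’s schema-matching behavior when the source and target column types differ.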
Yes, this is the issue. We are translating between two DataTables just to make the data acceptable to the form, and all of it has to happen before the form loads, so performance is sure to take a hit with large worksheets.
Thanks, I will keep your suggestion in mind. That said, I don’t create these Excel documents. They are created by business groups for whom we need to automate processes. Asking these groups to simply change their Excel format may not be a possibility all the time as these documents may be inputs to, or outputs from other processes.
For example, I receive an Excel downloaded from a CRM application. Using this input file, I have to generate an output Excel that has to be in a certain format to be accepted by a Cloud SaaS application. In both cases, we don’t have a say in changing the formats of the reports we work with.
We have to factor in these real world conditions. But I do agree that if we are building something from scratch we will have some say in how these input/output structures need to be defined.
We looked into it a bit more. Basically, the culprit here is the Read Range activity, which loads the DataTable with columns of type Object, a type that is not supported by the form.
Therefore, the solution is to process the loaded DataTable into another one whose columns have primitive types.
This way you will be able to dynamically load as many columns as you want.
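One hedged sketch of that post-processing step in C# (again assuming an Invoke Code activity; `dtRaw` and `dtTyped` are hypothetical names). It uses `Clone()` to copy the schema without any rows, which is the one moment when .NET still permits changing a column’s DataType:

```csharp
using System;
using System.Data;
using System.Linq;

// dtRaw: DataTable from Read Range with Object-typed columns.
// Clone() copies the schema only (no rows), so we may still retype
// every column to string before any data is present.
DataTable dtTyped = dtRaw.Clone();
foreach (DataColumn col in dtTyped.Columns)
{
    col.DataType = typeof(string);
}

// Copy the rows, converting each cell to its string form.
foreach (DataRow row in dtRaw.Rows)
{
    dtTyped.Rows.Add(row.ItemArray.Select(v => v.ToString()).ToArray());
}
```

This variant keeps the original column names automatically, since they come along with the cloned schema.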
EDIT
Another potential workaround I found is to convert the source file to CSV. When I loaded a sample file using the Read CSV activity, it worked properly, presumably because a CSV file is plain text and is read in as string columns.