Great to see your efforts in learning AI. I have reviewed your submission and here is some feedback.
Your solution includes the Dispatcher (REFramework) and the Document Understanding flow.
Feedback on the Dispatcher
The use of REFramework is a good choice for the dispatcher. However, there are a few things you can improve as part of best practices. You included the folder-reading activity in Process.xaml to capture all the files that require processing. The ideal approach would be to do this in the INIT state and push the result out as a DataTable from InitAllApplications.xaml. If you look at Main.xaml, there is a predefined DataTable we can use for transaction items. If we assign the values to that, we can pass them as DataRows to the Process state.
The other approach is to use a List instead of converting to a DataTable. Either way, it is always good to define it in INIT itself. Defining it in INIT has multiple benefits:
- In case of errors, we can always refer back to it and get the next item
- We can maintain a status per item (if we use a DataTable)
- We can skip the follow-up steps if there are no new files to process
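To make the dispatcher pattern above concrete, here is a minimal sketch in plain Python (purely illustrative; the function names are hypothetical stand-ins for the INIT, Get Transaction Data, and Process states, which in the real solution would be UiPath activities and DataTable assignments):

```python
from pathlib import Path

def init_transactions(input_folder):
    """INIT: read the input folder once and build the transaction table
    (in REFramework this would populate the DataTable defined in Main.xaml)."""
    return [{"FilePath": str(f), "Status": "New"}
            for f in sorted(Path(input_folder).glob("*.pdf"))]

def get_next_transaction(transactions):
    """Get Transaction Data: return the next unprocessed row, or None.
    Returning None lets the framework skip Process and go straight to End."""
    for row in transactions:
        if row["Status"] == "New":
            return row
    return None

def process(row):
    """PROCESS: handle one item and record its status on the same row,
    so after an error we can resume from the table instead of re-reading."""
    row["Status"] = "In Progress"
    # ... per-file work (e.g., uploading the file to a queue) goes here ...
    row["Status"] = "Success"
```

Because the table is built once in INIT and every row carries its own status, a retry after a failure can pick up exactly where it left off, and an empty table naturally ends the run.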
Feedback on the Document Understanding flow
Here I assume your idea was to trigger the DU job once per queue item? I say this because I don't see a loop back to get the next queue item. Can you also share whether you planned to trigger the DU job via a Queue Trigger? If so, that is perfect.
I also liked the fact that you have included the DU activities inside a Try Catch.
Taxonomy Manager specifics
Good naming conventions.
I noticed a few fields where you can use better data types. For example, a few fields referring to people's names use the data type “Text”. You can use the data type “Name” here; it will also automatically format the names into first, middle, and last names.
It's great that you added the date format for those tricky dates.
Your classification method is correct. Can you explain a bit about how you trained your classifier? What was your approach?
The Train Classifier Scope has to come after the manual classification step, i.e., after Present Classification Screen, so you need to swap the two activities. The reason is that we train only after, and based on, the manual validation results. If we train on the auto-classified results, we may end up training on wrong data (whenever the automatic classification is wrong).
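The ordering argument can be sketched like this (plain Python with hypothetical stand-in functions, only to illustrate the flow; in UiPath these would be the classification scope, the classification validation screen, and the Train Classifier Scope):

```python
def classify(doc):
    """Automatic classification; the result may be wrong."""
    return {"doc": doc, "type": "invoice", "validated": False}

def present_classification(results):
    """Manual validation step: a human confirms or corrects the type."""
    results["type"] = "receipt"  # pretend the human fixed a misclassification
    results["validated"] = True
    return results

def train_classifier(results):
    """Training must consume only human-validated results; otherwise a
    wrong auto-classification would poison the training data."""
    assert results["validated"], "train only after manual validation"
    return "trained on " + results["type"]

# Correct order: classify -> validate -> train
validated = present_classification(classify("doc1"))
print(train_classifier(validated))  # prints: trained on receipt
```

If the two steps were swapped, training would consume the unvalidated "invoice" label, which is exactly the failure mode described above.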
Form Extractor config
The Signature Fields option is only applicable to fields that contain a human signature. We configure it only when we need to detect whether a signature is present, and it should point only to that field. In your scenario, it is set on most of the fields, so you need to fine-tune that setting to apply only to the field that actually requires a signature.
This is probably why you get “YES”/“NO” as the output for most of those fields in the Excel file.
That said, I do notice a pattern in how you have configured it. Can you explain your thinking behind this? I would like to understand it better before being more specific.
The use of a Try Catch is wonderful. I just noticed that in the Catch section you are setting the transaction status to “Success”. I believe it should be “Failed” with an Application Exception.
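Here is a minimal sketch of the intended behavior (plain Python with a hypothetical function name; in REFramework this corresponds to the Set Transaction Status logic): reaching the Catch block means the transaction did not succeed, so it must be reported as Failed.

```python
def set_transaction_status(item, work):
    """Run one transaction and report its status.
    The catch path must report Failed, never Success."""
    try:
        work(item)
        return "Success"
    except Exception:
        # We only land here if the transaction failed; marking it
        # Failed (ApplicationException) makes it eligible for retry.
        return "Failed"

print(set_transaction_status("item1", lambda i: None))   # prints: Success
print(set_transaction_status("item2", lambda i: 1 / 0))  # prints: Failed
```

Reporting "Success" in the catch would hide failures from Orchestrator and prevent retries, which is why the status there matters.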
That's what I have for now. I look forward to your reply so we can talk through the pointers I mentioned.
Thanks again and keep learning!