Working on an on-prem AI center and automation suite installation.
We trained a document understanding model with a relatively small sample size (50 documents).
Unfortunately, the ML model always returns its predictions with 100% confidence.
Note that this is not a validated document (it has not been verified through Action Center or Validation Station) and it has never been in the training dataset.
Confidence aside, is the data being extracted correctly?
Also, sometimes because of Excel formatting you might see it as 1. Can you check by selecting the cell and looking at the value in the top display, or by checking the value in the Locals panel in Studio while in debug mode?
Hey @E_Metin
A high confidence rate doesn’t mean the extracted value will definitely be correct, but it is a good sign for your solution. Consider adding some cross-checks or business rules.
Thanks. There are existing regex validations, but we are getting false positives, which can only be eliminated by also checking the extraction confidence scores.
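For context, the intended cross-check looks roughly like this sketch (the field name, pattern, and threshold below are hypothetical, not from our actual workflow). It also shows why a confidence that is always 1.0 breaks the approach: the second check can never fire.

```python
import re

# Illustrative values only: the real field, pattern, and threshold differ.
CONFIDENCE_THRESHOLD = 0.85
INVOICE_NO_PATTERN = re.compile(r"^INV-\d{6}$")

def needs_human_review(value: str, confidence: float) -> bool:
    """Flag a field for Validation Station review when either check fails."""
    if not INVOICE_NO_PATTERN.match(value):
        return True   # regex rejects the value outright
    if confidence < CONFIDENCE_THRESHOLD:
        return True   # regex passed, but the model is unsure
    return False

# A regex false positive is only caught by the confidence check,
# so if the model always reports 1.0, nothing gets flagged:
print(needs_human_review("INV-123456", 1.0))   # not flagged
print(needs_human_review("INV-999999", 0.40))  # flagged: low confidence
```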
This problem does not happen on Automation Cloud, only on the on-prem Automation Suite installation. The AI Center version is 2024.10. We will be raising a ticket with UiPath support.