How to use the IntelligentOCR Package


We are also working on a keyword-based classifier wizard, to help you get started faster with automating classification.

The wizard will allow you to define keywords (single words, or multiple words that come one after another) or sets of keywords (multiple groups of words that must all be found at the beginning of the document for classification, but can appear in distinct places). It also allows you to review the learning data if you believe junk has squeezed in, and thus clean up your data.

We are also introducing a new InArgument, LearningData, through which you can, if you wish, provide the learning data as a string variable. Learning should ideally be centralized so that all robots can use and update it, so this makes it easier to read the data from wherever it is stored and simply feed its content in as a variable.

Hope this will ease the use of the classifier!


Hey @Ioana_Gligan, which UiPath Studio version is this? Mine is 2019.1 beta and it's not working in my Studio.

Hi @Siddhant_Dimri

The latest stable version is 2019.10.

Could you update and try with it?

@loginerror sure will do that.

Can we remove the Present Validation Station attended activity? It requires manual intervention.

You can remove it if you want to trust your extractors and classifiers 100% (which I don’t recommend), or if you don’t care about 100% accuracy.

You can also move it in a separate process if it suits your business case better.

But overall it is a pretty important component of the entire puzzle, which is why I added it to the sample workflow.


Hi Ioana,

I am getting the errors below, and I am not able to repair the dependencies. Could you please suggest a fix?


Harish Vemula

Please try to search for the MachineLearningExtractor on the Official feed with the Include Prerelease checkbox checked - the activity should appear in a 1.0.0-preview package of UiPath.MachineLearningExtractor.

Got it. Thank you Ioana.



Hi @Ioana_Gligan, thanks for your work on this! This package is awesome and very powerful… when it works.

I’m having issues with the Classify Document Scope properly detecting document types and classifiers. I experience this error after I add a new Document Type to the taxonomy through the Taxonomy Manager. Then I click “Manage Learning” on the Keyword Based Classifier activity inside the Classify Document Scope and I add some new keywords for the new document type.

Then I click the “Configure Classifiers” button in the Classify Document Scope and I check the box next to the new document type. Then I receive this error:

But it is! That document type is definitely in there. So for some reason, an error is showing up even though the required information is there.

It seems a fix to this is to remove the Intelligent OCR package dependency from Studio, then to install it again. After I do that, the error disappears.

Other times, the Taxonomy Manager is glitchy. Sometimes I can’t add new categories. This isn’t fixed by reinstalling the IntelligentOCR package in Studio.

Or if I add a new document type, it doesn’t show up until after I close the taxonomy manager and open it again.

Do you have any thoughts to share on these issues I’m having?

EDIT: I’m trying out the Intelligent OCR package on another computer and I don’t seem to be experiencing the same issues… I’ll continue to investigate…


@oscar thank you for the reports!

Please let me know if you can reproduce the same issues on the other computer.

Also, can you please share:

  • IntelligentOCR package version
  • Studio version
  • if possible, a sample workflow reproducing the issue, with a step-by-step guide?

It is indeed weird that this happens.

Related to the Taxonomy Manager - you can add a category once you select a group under which you want to create it. Try selecting an existing group (or creating one), and then creating a category.

Related to the Classify Document Scope - Configure Classifiers, just double checking that after checking the new doc type, you clicked save and not cancel? :slight_smile: Kidding aside, I will try to reproduce independently anyway. Thank you!


Hi @Ioana_Gligan, thanks for your fast response! I see I was just creating categories wrong, oops! My fault :stuck_out_tongue:

I’ll play around with it a bit more to see if I can reproduce my issues on my other PC and come back to share my results.

I’m wondering, is it possible to save these variables (DocumentObjectModel, ClassificationResults, and ExtractionResults) to an external file, then load them back into the workflow as variables from that file later on? This would be like the “Load Taxonomy” activity that reads the taxonomy.json file into a variable.

Except here it would be like “Load DocumentObjectModel” or “Load ExtractionResults”, etc…

My thinking is that I would like to preprocess all of my input documents before I present them to the user to validate. This would make it faster for the user to validate each document, since the document is already digitized, classified, and has the data extracted, and I can compare it to my database before the user validates the content.

I know I can save the DocumentObjectModel variable into a .JSON file using the “Deserialize JSON” activity, but I can’t think of a way to convert the .JSON file back into the DocumentObjectModel variable.

Does that make sense? Is this possible? Or do I need to do everything in the same workflow?

Thanks again for your insight!

Edit: I think this may be possible with the “Document Processing Contracts”? Is that right? Would you be able to explain how I could use this package to convert a DOM into a file, and then how to convert that file back into a DOM?

Edit 2: I figured it out. If you want to save a DOM to a text file, you just need to add the “UiPath.DocumentProcessing.Contracts” package to your project. Then you use the Serialize method on the DOM variable and assign it to your string variable. Then you can save that to a text file. Then when you load the text file, you just need to call the Deserialize method on the string variable and you can convert it back into a DOM. I’ve attached an image for other people to learn from :slight_smile:
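The serialize-to-file / deserialize-from-file round trip described above is a general pattern. In UiPath you would call the DOM's own Serialize/Deserialize methods from the DocumentProcessing.Contracts package; outside of Studio, the same idea can be sketched in Python with a plain dictionary standing in for the DocumentObjectModel (the object and file name below are illustrative, not part of the UiPath API):

```python
import json
import os
import tempfile

# Hypothetical stand-in for a DocumentObjectModel: in UiPath you would get
# the string from dom.Serialize(); here a plain dict plays that role.
dom = {"pages": [{"number": 1, "words": ["Invoice", "Total"]}]}

path = os.path.join(tempfile.gettempdir(), "dom.json")

# "Serialize": turn the object into a string and persist it to a text file.
with open(path, "w", encoding="utf-8") as f:
    f.write(json.dumps(dom))

# Later, possibly in a different process: read the file and "deserialize"
# the string back into an object.
with open(path, "r", encoding="utf-8") as f:
    restored = json.loads(f.read())

assert restored == dom  # the round trip preserves the object
```

Storing the serialized string in a file (or an Orchestrator queue item) is what makes the split into separate preprocessing and validation workflows possible.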


You found the right way! All objects have serialize/deserialize methods on them, so you can store and retrieve them.

Also, for real-life scenarios, you can look into breaking the process into three steps: automatic processing, then human interaction with the Validation Station, then post-processing. This way users don’t have to wait for automatic processing.

You can try to synchronize the three pieces using Orchestrator queues, long-running workflows, etc.

Have a nice day,



Hi @Ioana_Gligan, thanks for your suggestions. I’ll be sure to try out your ideas soon.

For now, I have a bug report to share with you related to the proper reading of the taxonomy in Studio and in the IntelligentOCR Activities. You can watch the video that shows the bug here: Loom | Free Screen & Video Recording Software | Loom

I’ve also attached the two projects so you can investigate them yourself: (823.8 KB) (821.9 KB)

I imagine that you won’t experience any issues with these files, since both projects will be on your local machine. But if you try setting up Google Drive File Stream and copying the project into it, I am 100% sure you would be able to recreate the issue.

  • I am using UiPath Studio 2019.10.1.
  • IntelligentOCR Activities version 4.0.1 (but it doesn’t matter which version of the activities you use, ALL of them have this same bug, including 4.2.0-preview).
  • Excel.Activities = 2.7.2
  • Mail.Activities = 1.7.2
  • System.Activities = 19.10.1
  • UIAutomation.Activities = 19.10.1

The issue relates to Google Drive File Stream and the Intelligent OCR activities. It seems that when a project that uses Intelligent OCR activities is stored in Google Drive File Stream, it prevents UiPath Studio from properly reading the taxonomy. This causes issues with the “Classify Document Scope” activity, as well as issues with adding new categories and items into the Taxonomy.

Even if the project folder in Google Drive File Stream is marked as “Available offline”, the issue will still persist.

The thing that made me realize the issue was with Google Drive File Stream, and not anything else, was that I copied the exact same project from File Stream to my Desktop, and the issue went away.

I hope that you can document this bug and get it resolved soon!

My workaround right now is to move the project from File Stream to my Desktop, work on it, then move it back to File Stream when I’m done.

Please let me know your thoughts and if you have any questions for me.



Oh my, @oscar,

This is a cool bug report! Thank you, truly! :slight_smile:

I’ll look into this and see how we can fix it.

Again, thank you!




I am using the Intelligent OCR code provided by UiPath and I see that the code is using a “Present Validation Station” stage. Each time the code processes a new pdf document, it will go to that stage and ask for manual validation. I was wondering if I can save the validation settings so the robot will get smarter and learn which items to extract on the pdf.

At the moment it asks for validation each time, and I do not see this as viable for unattended runs.

Please let me know if you guys have any idea on how to solve my issue.

Thank you!

Hi @bsamala

I think this will be further improved in future releases :slight_smile:


Hello @bsamala,

The training capability is directly linked to the extractor(s) and/or classifier(s) you use.
Currently the only trainable component is the Keyword Based Classifier (from the classifiers series).
We are working on some trainable components for data extraction - it is a work in progress.

There is no magic one-size-fits-all trainable extractor, which is why we offer an infrastructure in which you can combine and cascade extractors (within the Data Extraction Scope).

If your use case is simple, you might want to look into building your own extractor with a trainable component, as the contract for these activities is public and can be found in the UiPath.DocumentProcessing.Contracts .nupkg, which you can reference in a .NET project.

Hope this helps,



Hi @Ioana_Gligan, I’m back again. I saw the latest IntelligentOCR release and it’s really cool!

I have a couple of questions / suggestions.

1 - Why do we need to extract a value first in order to be able to select it?

For example, I would like to be able to just choose the value of “Yes” or “No” for a boolean value without extracting any data or matching it to a value on the document. Is this possible?

Another example for that would be for the person validating the document to be able to type in a note for the document without having to be extracted or linked to anything on the document.

And another example would be selecting an item from a set without needing the value to be extracted or linked to the document.

As a kind of workaround, I’ve been using a RegEx extractor with the pattern “(a)” (without the quotations) for these values. This matches the letter a, and then the person can select a value. But this is kind of messy…
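To illustrate why this workaround feels messy: a pattern like `(a)` just grabs the first letter “a” anywhere in the document text, which gives the field a dummy extracted value the user can then overwrite. A minimal Python sketch of what that pattern matches (the sample text is made up):

```python
import re

# Hypothetical document text; the pattern "(a)" matches the first
# letter "a" it finds, regardless of where it appears.
text = "Sample invoice addressed to ACME Corp."
match = re.search("(a)", text)
print(match.group(1) if match else None)  # → "a"
```

The matched value carries no real meaning, which is exactly why a proper “value without a document reference” feature would be cleaner.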

2 - How exactly should we use ‘keyword sets’ in the Keyword Based Classifier?

I’ve played around with it a lot. I have over 40 different document types I am classifying. It seems that it’s better for me to keep all of my keywords in a single ‘keyword set’ for a single Document Type as opposed to having multiple ‘keyword sets’ for each document type.

I notice that when I use multiple sets, the accuracy of the classification is reduced, but when I keep all of the keywords in one set, classification accuracy is better.



When should we use more than 1 keyword set for a document?

What are your suggestions for properly classifying documents in this way?

3 - If the document classification is incorrect and we manually update the document type, is it possible to run the extractor again with the new document type?

For this case, I just don’t want to manually input all details if the classification is wrong. Ideally, UiPath would be able to go through the extraction again, and save time on choosing all of the values.


Hello @oscar,

Great questions! I will try to answer each of them below:

1 - Why do we need to extract a value first in order to be able to select it?
This is because of two factors: (1) all data reported in the extraction results needs to have evidence, so that a person verifying what the first person did can check the validity of the information, and (2) the entire Validation Station experience is built specifically for documents and for finding information within those documents. If you just need an entry form for user input, there are other ways.

Now, we did have this request before - allowing “values” to be added without a real reference in the document - and we have it on our roadmap. I will keep you posted when it becomes available :slight_smile:

2 - How exactly should we use ‘keyword sets’ in the Keyword Based Classifier?
One entry in the keyword sets (like in your example with “test”, “word” and “note” on the same line) will require a document to contain all three keywords to get a high level of confidence.
You would use this for cases when you expect to find both “string 1” and “string 2” in a document in order for it to be classified as a certain type.
If you add them individually (“string 1” in one entry, “string 2” in another entry), then the keyword-based classifier will classify a document if EITHER “string 1” or “string 2” appears in it.
For multi-keyword entries, confidence is computed as the average of the confidences of each individual keyword, so if 2 out of 3 keywords match, a maximum of 66% will be reported.
I hope this information will help in better defining the differentiators between your classes!
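The averaging rule above can be sketched as a toy function. This is not the real classifier (its actual scoring is more involved); it is a minimal Python illustration of the described behavior, assuming a simple substring match counts as a keyword hit:

```python
def keyword_set_confidence(keywords, document_text):
    """Toy confidence for one keyword-set entry: the fraction of keywords
    found in the document, so an entry with 3 keywords where only 2 appear
    caps at roughly 66%."""
    text = document_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)

doc = "this note contains the word test"
print(keyword_set_confidence(["test", "word", "note"], doc))     # → 1.0
print(keyword_set_confidence(["test", "word", "missing"], doc))  # → 0.666...
```

This also shows why one large set can outperform several small ones when the keywords reliably co-occur: a single set rewards documents containing all of them, while separate single-keyword entries fire on any one match.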

BTW, kudos on using this to classify 40+ document types! Can you please PM me with more details on your use case? Would love to chat on your findings about classification.

3 - If the document classification is incorrect and we manually update the document type, is it possible to run the extractor again with the new document type?
You can do this “artificially” at this moment. We are actually working on a Classification Validation Station, which will let you validate the classification - and even perform document splitting - prior to performing data extraction.
But until that is out, you can do some workarounds: let’s say you have 2 document types in the taxonomy, and you know that it is possible for a document to be misclassified. Say the incoming document is classified as Type1 when instead it should be Type2. You could look into something like this:

  • before opening the VS, EDIT the Taxonomy object and remove all Fields from the Type2 entry (and any other entry that is not the one the doc has been classified as)
  • open the VS with any extraction results.
  • if the user sees the document is misclassified, they will select the right document type, but will have no fields to actually process by hand
  • grab the new classification in case it changed
  • then run data extraction with the new classification
  • then open the VS with the new doc type and the new extraction results.
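The control flow of the steps above can be sketched as follows. All function names here are hypothetical placeholders for the corresponding UiPath activities (Classify Document Scope, Data Extraction Scope, Present Validation Station); the sketch only shows the “re-extract if the user changed the type” branching:

```python
def run_with_reclassification(document, taxonomy, classify, extract, validate):
    """Sketch of the workaround: validate the results, and if the user
    corrected the document type, re-run extraction with the new type
    before a final validation pass."""
    doc_type = classify(document, taxonomy)          # initial classification
    results = extract(document, doc_type)            # extract for that type
    confirmed = validate(document, doc_type, results)  # user may change the type
    if confirmed != doc_type:
        # User picked a different document type: extract again with it,
        # then present the Validation Station once more.
        results = extract(document, confirmed)
        confirmed = validate(document, confirmed, results)
    return confirmed, results
```

In a real workflow the second validation pass is where the user finally fills in or confirms the fields for the corrected type.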

The framework gives you 100% flexibility; that is why all objects are public in a contracts package, and why you can come up with any combination of the available tools!

I hope this helps.