We’re happy to announce that you can now consume Document Understanding not only via robots using RPA, but also via APIs hosted in the cloud. These APIs provide a means to consume all skills available in Document Understanding - both pre-trained skills and custom ones built for your Document Types via labelling sessions - enabling a runtime experience from various programming languages.
With this in mind, we are launching the Document Understanding Cloud APIs, which allow you to consume the framework the same way you would via RPA, providing:
Discovery APIs - allowing consumers to access the available resources (projects, document types, classifiers, extractors) used by the Document Understanding Framework, as displayed below:
Digitization APIs - providing a digitization method, called as a first step, which responds with a documentId that is referenced by subsequent operations, plus a method for retrieving the corresponding result if required.
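To make the two-step digitization flow concrete, here is a minimal sketch. The route shapes below are assumptions for illustration, not the official specification - consult the Swagger page in Automation Cloud for the real paths.

```python
# Hypothetical sketch of the two-step digitization flow: submit a document,
# receive a documentId, then fetch the result for that documentId.
# The URL shapes are ASSUMPTIONS - check the Swagger spec for the real routes.

def digitization_start_url(base_url: str, project_id: str) -> str:
    """URL for submitting a document for digitization (assumed route)."""
    return f"{base_url}/projects/{project_id}/digitization/start"

def digitization_result_url(base_url: str, project_id: str, document_id: str) -> str:
    """URL for retrieving the digitization result of a documentId (assumed route)."""
    return f"{base_url}/projects/{project_id}/digitization/result/{document_id}"

# Usage with an HTTP client (not executed here; token and paths are placeholders):
#   resp = requests.post(digitization_start_url(BASE, PROJECT_ID),
#                        headers={"Authorization": f"Bearer {token}"},
#                        files={"file": open("invoice.pdf", "rb")})
#   document_id = resp.json()["documentId"]
```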
Classification APIs - allowing you to consume classification models to identify the Document Type of the input document (similar to how the Machine Learning Classifier enables classification via RPA)
Extraction APIs - allowing you to consume extraction models to retrieve the fields of the Document Type processed by the extractor (similar to how the Machine Learning Extractor provides this capability via RPA)
Validation APIs - allowing you to create Validation Tasks in Action Center, leveraging either the Classification Station or the Validation Station, depending on users’ needs.
The Classification & Extraction APIs are available for both synchronous (for documents up to 5 pages) and asynchronous (posting the request via a start method and retrieving the result via polling) consumption, supporting a variety of use cases - be it optimizing for performance or processing large documents.
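The asynchronous start/poll pattern can be sketched as below. The status values used here are illustrative assumptions - the actual response schema is documented in the Swagger spec.

```python
import time

# Hedged sketch of the asynchronous start/poll consumption pattern:
# repeatedly fetch the operation result until it reports completion.
# The "status" field and the "Succeeded" value are ASSUMPTIONS about the schema.

def poll(get_result, attempts: int = 30, delay: float = 2.0) -> dict:
    """Call get_result() until it reports success or attempts run out."""
    for _ in range(attempts):
        result = get_result()
        if result.get("status") == "Succeeded":
            return result
        time.sleep(delay)
    raise TimeoutError("operation did not complete in time")

# Usage (not executed): get_result would wrap a GET on the result endpoint,
#   final = poll(lambda: requests.get(result_url, headers=auth).json())
```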
The service is discoverable via a Swagger interface which can be accessed from Document Understanding in Automation Cloud.
To consume the APIs, we recommend starting from the Swagger specification and giving them a try before implementing. In the future, we also plan to offer SDKs for various programming languages; in the meantime, Swagger should provide all the required information. Trying out the APIs is as easy as 1-2-3:
Within your Automation Cloud account, access the Document Understanding center, click the REST APIs button at the top right, and select Framework to open the Swagger interface.
(steps valid for the current UI)
- Within your Automation Cloud account, access Admin in the left navigation
- Select External Applications
- Click Add Application.
- Application name = name it however you’d like (e.g. “du”)
- Click Add Scopes and you’ll see an “Edit Resource” menu expand from the right
1. Select Document Understanding from the Resource drop-down
2. Click the “Application Scope(s)” tab and select all checkboxes
3. Click the Save button
- Leave the Redirect URL blank
- Click the Add button
- A pop-up will show; copy the App ID and App Secret.
- These 2 will be used to authenticate into Swagger.
- Return to the Swagger page you opened in step 1
- Click the Authorize button
- In the pop-up, provide your App ID and App Secret, and click the Authorize button
Once authorized, you are ready to consume!
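The same App ID / App Secret pair can also be used programmatically via an OAuth2 client-credentials token request. The token endpoint and scope name below are assumptions based on Automation Cloud’s identity service - verify both against your tenant’s documentation.

```python
# Sketch of building an OAuth2 client-credentials token request from the
# App ID / App Secret created above. The token endpoint and scope string in
# the usage note are ASSUMPTIONS - confirm them for your tenant.

def token_request_body(app_id: str, app_secret: str, scopes: str) -> dict:
    """Form-encoded body for an OAuth2 client-credentials grant."""
    return {
        "grant_type": "client_credentials",
        "client_id": app_id,
        "client_secret": app_secret,
        "scope": scopes,
    }

# Usage (not executed; endpoint and scope are placeholders):
#   resp = requests.post("https://cloud.uipath.com/identity_/connect/token",
#                        data=token_request_body(APP_ID, APP_SECRET, "Du.Api.Framework"))
#   token = resp.json()["access_token"]
```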
We propose the following flow; however, you have the flexibility to implement your own.
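The proposed flow - digitize, classify, extract, then validate - can be sketched as a simple orchestration. Each API call is abstracted as a callable here, since the function names are illustrative only:

```python
# Sketch of the proposed end-to-end flow, with each API call injected as a
# callable so the orchestration order is visible. Names are illustrative.

def process_document(digitize, classify, extract, validate):
    """digitize -> classify -> extract -> validate, in order."""
    document_id = digitize()                 # Digitization API: returns a documentId
    doc_type = classify(document_id)         # Classification API: identifies the Document Type
    fields = extract(document_id, doc_type)  # Extraction API: retrieves the fields
    return validate(document_id, fields)     # Validation API: creates an Action Center task
```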
- Single Document Type per file - support for multiple Document Types and splitting capabilities will be added.
- Business Rules: currently, we do not provide the possibility to define Business Rules on a Document Type defined in the Document Understanding center - this is something we are currently working on.
- When discovering resources, some information you see in Document Understanding in Automation Cloud may not be available yet - we are working on adding it and reaching parity between the 2.
- Training - as of now, we do not automatically submit data from the Classification or Validation Station for training - it is in our backlog, and we plan to work on it soon.
- Document Data availability - we retain the Document Data for 7 days after the digitization request is submitted; afterwards, the data corresponding to the documentId is removed and can no longer be used in further operations (another digitization request will be required to do so).
Charging is based on AI Units, as described here, according to the consumption of the respective models for extraction and classification (e.g. extracting information from a 3-page document consumes 3 AI Units), with restrictions applicable to the respective license.
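The per-page arithmetic from the example above can be stated as a tiny helper. The one-AI-Unit-per-page rate is taken from the 3-page example; actual rates per model may differ, so treat this as an assumption:

```python
# Illustration of page-based AI Unit consumption, ASSUMING one AI Unit per
# processed page as in the 3-page example; actual per-model rates may vary.

def ai_units_for(pages: int, units_per_page: int = 1) -> int:
    """Estimated AI Units consumed when processing a document of `pages` pages."""
    return pages * units_per_page
```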
Do reach out if you give our APIs a try and let us know how it’s going! What are we missing? What would you like to see further? Looking forward to your thoughts!