Reading Scanned PDF Images with Slight Variations

I receive many different scanned PDF images of checks from different banks. The process is to type the person’s name that’s on the check into one column in Excel, the amount and check number into others, and then add a yes or no for whether the check was signed. I want to add these steps to my automation.

I’ve run into issues because each check is slightly different. I’ve tried Find Image and Get Text with Anchor Base, as well as screen scraping. The one piece of data that’s always near the name is FBO. However, sometimes it’s F.B.O or FBO:, F//B//O, etc. So the name gets picked up on the initial image I built the automation with, but I can’t get it to act dynamically and grab the information from other check images.
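One way to handle the FBO variations is to run the OCR output through a single tolerant regular expression rather than anchoring on one exact string. Here is a minimal Python sketch; the pattern and the allowed name characters are assumptions you would tune against your own checks.

```python
import re
from typing import Optional

# Match common variants of the FBO marker (FBO, F.B.O, FBO:, F//B//O, ...)
# and capture the name that follows it. The character class for the name
# is an assumption; widen it if your names contain other characters.
FBO_PATTERN = re.compile(
    r"F[.\s/]*B[.\s/]*O[.:]*\s*(?P<name>[A-Za-z ,.'-]+)",
    re.IGNORECASE,
)

def extract_name(ocr_text: str) -> Optional[str]:
    """Return the name following any FBO variant, or None if not found."""
    match = FBO_PATTERN.search(ocr_text)
    return match.group("name").strip() if match else None

print(extract_name("Pay to the order of F.B.O: Jane Doe"))  # Jane Doe
print(extract_name("F//B//O John Smith"))                   # John Smith
```

You could apply this after a plain Get OCR Text over the whole region, so the automation no longer depends on which punctuation variant a particular bank uses.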

Does anyone know if what I’m trying to do is possible? If so, any suggestions?

Thank you

Do they all have a common pattern? Like, do the cheques follow the same pattern?

They are all basic checks, so they have pretty similar patterns. The check number is always in the same place, the name always comes after some variation of FBO, and the signature is always in the lower right-hand corner.

If the pdf is a true image where you can’t highlight any text, then this will be tricky and a challenge if the text is pixelated or shifted ever so slightly image to image.

Essentially, the way OCR works is it tries to place the character inside a box and each box is of equal size, so if you set a scale to a certain size, it may work on one image, but when the image shifts slightly the characters don’t align in the box correctly causing disparities… such as “3” instead of “8”. So in order to try to get the greatest accuracy, you need to zoom in or out of the document until the characters fit more accurately inside the box that OCR is using with the scale that you have set. However, I make this sound easier than it is, lol.

You could look into using ABBYY, though, and more specifically FlexiCapture. ABBYY allows you to set which characters are valid. For example, you could exclude all special characters and numbers, or exclude all alphas, which makes the results more consistent. FlexiCapture also allows you to set a pattern for the documents to follow and will extract all the information you need, and it can be even more powerful than that with its scripting capabilities. I have not used the ABBYY OCR activity, though, so maybe I’m just talking about FlexiCapture. I have not gotten too far into the software and have only been through a trial.

I hope my suggestions help. It might be more beneficial if you are able to get the raw data from the pdf from the vendor somehow, so you are not looking at a pixelated scanned image, but oh well.



I will also add that we currently use UiPath to detect a signature from a scanned image, developed by me. What I did was set the zoom to about 150% and look for an image near the signature box. Then, I used Set Clipping Region to resize the element box around the signature. I then used OCR, with trial and error, to convert it to characters, any characters. Then, I just checked it in a condition, like if it does not equal “”, “/”, “.”, “,”, “|”, or any other single character that could identify a signature fail.

so there’s that idea too, just to check the signature.
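The condition described above can be sketched in plain Python for illustration (in UiPath itself this would just be an If condition on the OCR result). The exact set of noise characters is an assumption; extend it with whatever stray marks your OCR tends to produce on empty signature boxes.

```python
# After OCR'ing the clipped signature region, treat the box as "signed"
# only if the result is more than an empty string or a stray single
# character. The NOISE set is an assumption to tune per document batch.
NOISE = {"", "/", ".", ",", "|", "-", "_"}

def looks_signed(ocr_result: str) -> bool:
    """Return True if the OCR text from the signature box suggests a signature."""
    text = ocr_result.strip()
    return text not in NOISE

print(looks_signed("John Hancock"))  # True
print(looks_signed("/"))             # False
print(looks_signed("   "))           # False
```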

Thanks for your thorough response!

Yes, it’s a true scanned image (many are not the best looking, either). Every image will be slightly shifted because one check may be from Bank A and another from Bank B (probably 20-ish different banks possible).

The other software sounds great, but unfortunately I can’t spend any money buying new software. The raw data might be a possibility, but we usually have to make do with what we get.

The signature is actually another thing I was trying to look into. I will definitely be trying that out. This is one of the more important parts.

Hi ClaytonM,

Do you have any XAML for this?
I am trying to insert a signature, date, and name into a scanned PDF.



Hi Clayton. I am trying to do precisely this but having trouble after indicating the “anchor” image. How do I use that to identify the area below it where the signature should appear?

Example: the signature always appears beneath this text. I want to use this text as an anchor to select the region where the signature should appear. Also, this is on page 2 of the scanned document - do I need to first jump to page 2 or is there some way to dynamically find this field within the full document?


Hi ashley,

I can provide a more lengthy response with more details if you request (or search the forums for one of my more detailed replies),
but here is a brief reply to answer your questions.

  • I have mine coded to scroll the document starting from the last page, using keystrokes, for a set maximum number of pages. Since the image can be cut off at the bottom or top [on a scrolled page], this also requires some arrow keystrokes to adjust the scrolled page to pick up the image when it is not found (a Try/Catch is needed). There are probably multiple ways to scroll the document in order to [dynamically] find your image identifier.

(I’m assuming this is a scanned document where the signature identifier is an image)

  • You can use Image Exists and Find Image to return the identifier as an element variable. You will want to test many documents to ensure the image match is consistently accurate. If some documents need different images to be successful, then a Pick Branch can be used with multiple Image Exists and Find Image activities.

  • Using the element variable, you can use its Left, Top, Width, and Height to adjust the clipping region via the Set Clipping Region activity. I recommend using the Highlight activity to show your element region, so you can get it precise.

  • In addition, you can use the element in the TakeScreenshot activity to store it as image if desired.

  • The last part is just feeding the element into OCR to convert the pixels to characters, which you can use to determine whether it’s legible or not.
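The geometry step above (deriving the clipping rectangle from the anchor element’s bounds) can be sketched in plain Python for illustration; in UiPath this would be the Region you pass to Set Clipping Region. The gap and region height below are assumed values you would tune per document layout.

```python
from dataclasses import dataclass

# Illustrative sketch, not the UiPath API: given the bounds of the anchor
# element found by Find Image, compute a rectangle for the region just
# below it where the signature should appear.
@dataclass
class Rect:
    left: int
    top: int
    width: int
    height: int

def region_below(anchor: Rect, gap: int = 10, height: int = 80) -> Rect:
    """Rectangle of the same width as the anchor, `gap` px below it."""
    return Rect(anchor.left, anchor.top + anchor.height + gap,
                anchor.width, height)

sig_region = region_below(Rect(left=200, top=500, width=300, height=40))
print(sig_region)  # Rect(left=200, top=550, width=300, height=80)
```

Highlighting the computed region (as suggested above) is a good sanity check that the offsets land on the signature box across different scans.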

For more details, search for some of my replies (though, they are all pretty old)

Hope this helps.