How to find the accuracy of an ML model in AI Center?

Is there any way to find the accuracy of a model after training, without using the DU framework to test the documents?
Is there any way to test the accuracy and compare models from AI Center?

You can evaluate an ML model through the evaluation pipeline in AI Center. Right now this is the only metric available to gauge the performance of your ML model.

You can either create a full pipeline (containing both a training and an evaluation pipeline) or an evaluation pipeline alone. This will give you an idea of the performance of your ML model: which fields have higher extraction accuracy and which fields need more training.
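To make the per-field idea concrete, here is a small illustrative sketch (not the AI Center API; the function and data shapes are hypothetical) of how per-field extraction accuracy can be computed by comparing predicted field values against labelled ground truth:

```python
# Illustrative sketch: per-field extraction accuracy from labelled data.
# The data structures here are hypothetical, not AI Center's actual output.
from collections import defaultdict

def per_field_accuracy(ground_truth, predictions):
    """Each argument is a list of dicts mapping field name -> extracted value."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(ground_truth, predictions):
        for field, value in truth.items():
            total[field] += 1
            if pred.get(field) == value:
                correct[field] += 1
    return {field: correct[field] / total[field] for field in total}

truth = [{"invoice-no": "123", "total": "9.99"},
         {"invoice-no": "456", "total": "12.50"}]
pred = [{"invoice-no": "123", "total": "9.99"},
        {"invoice-no": "999", "total": "12.50"}]
print(per_field_accuracy(truth, pred))  # {'invoice-no': 0.5, 'total': 1.0}
```

A per-field breakdown like this is what lets you see which fields extract well and which need more labelled training examples.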

Here’s how you can provision an evaluation pipeline:

Can the metrics from the training pipeline be taken into consideration?
And how many documents should the evaluation pipeline consist of?

Should the evaluation pipeline be run on minor version 0 or 1 (the trained version)?

Hey Divya,

Your evaluation dataset can be small compared to your training dataset, but you need to make sure it is representative of all your training data. For example, if you trained on 10 variations of a document, and labelled 10 documents per variation for the training pipeline, then your evaluation dataset needs to cover all 10 variations, but each variation only needs 1-2 samples.
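The selection rule above can be sketched in a few lines. This is purely illustrative (the function name and the `(variation, document)` tuple shape are assumptions, not anything AI Center provides): take every variation, but only a couple of samples from each.

```python
# Illustrative sketch: build an evaluation set that covers every document
# variation with only 1-2 samples each. Data shapes are hypothetical.
import random
from collections import defaultdict

def build_eval_set(documents, samples_per_variation=2, seed=0):
    """documents: list of (variation_id, doc_name) tuples."""
    by_variation = defaultdict(list)
    for variation, doc in documents:
        by_variation[variation].append(doc)
    rng = random.Random(seed)  # fixed seed for reproducible selection
    eval_set = []
    for variation, docs in by_variation.items():
        k = min(samples_per_variation, len(docs))
        eval_set.extend(rng.sample(docs, k))
    return eval_set

# 10 variations with 10 labelled documents each -> 20 evaluation documents,
# covering all 10 variations with 2 samples apiece.
docs = [(v, f"var{v}_doc{d}.pdf") for v in range(10) for d in range(10)]
eval_docs = build_eval_set(docs)
print(len(eval_docs))  # 20
```

The point of the fixed seed is only to make the sample reproducible; any sampling that touches every variation would satisfy the advice above.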

Yes, the evaluation pipeline should always run on minor version 0.


But shouldn’t we do the evaluation on the trained version to check its accuracy?
Let’s say "1" is the trained version: should we run it on minor version 0 or 1?

Hello @Divya_Salve,

Yes, I believe we need to use the latest (trained) version for evaluation.

This is how it should be.

Here they are talking about the major version; please refer to the screenshot.
