Document Understanding - Modern DU Evaluation Set

Hi Community,

I’m looking into using a Modern DU project for an upcoming engagement; however, while trialling it, I did not find an option to define an evaluation dataset.

A bit of background: with AI Center, we can run an evaluation pipeline on a separate dataset that the model was not trained on to get metrics (e.g. accuracy, F1 score, etc.), which tells us how well the model generalises and performs on unseen data.
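To illustrate what I mean, here is a minimal, generic sketch of that kind of evaluation (plain scikit-learn with a toy dataset standing in for an annotated document set and extraction model, not UiPath-specific):

```python
# Generic sketch of an evaluation on a held-out set: score the model only
# on samples it never saw during training, then report accuracy and F1.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # toy data in place of labelled documents
X_train, X_eval, y_train, y_eval = train_test_split(
    X, y, test_size=0.2, random_state=42     # 20% held out, never used for training
)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
preds = model.predict(X_eval)                # predictions on unseen samples only

print("Accuracy:", accuracy_score(y_eval, preds))
print("Weighted F1:", f1_score(y_eval, preds, average="weighted"))
```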

I don’t see an option in the Modern DU project to define such a dataset separately.
Or is there something it does automatically in the background, similar to this, that I’m not aware of? I have checked the documentation and can’t find much info on this.

Your help/feedback on this is much appreciated!

@Monica_Secelean, is this something your team might be able to shed some light on? Thanks!

@warren_lee

You have a Measure tab now, which does not need an evaluation set as such. It provides details about the accuracy and whether you need to train on more samples, etc.

This removes the need for a separate evaluation set to get the model accuracy details.

Cheers