Hi Community,
I’m looking into using a Modern DU project for an upcoming engagement; however, while trialling it, I did not find an option to define an evaluation dataset.
Bit of background - with AI Center, we can run an evaluation pipeline against a separate dataset that the model was not trained on to get metrics (e.g. accuracy, F1 score, etc.), which tells us how well the model generalises and performs on unseen data.
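Just to make clear what I mean by evaluating on a held-out set, here is a rough sketch of the general idea (this is not the AI Center or DU API, just an illustration with placeholder data and scikit-learn):

```python
# Illustrative only - shows the general idea of scoring a model on data
# it was never trained on; not the actual AI Center/DU evaluation pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

# Placeholder data standing in for labelled documents
X, y = make_classification(n_samples=1000, random_state=42)

# Hold out an evaluation set that training never touches
X_train, X_eval, y_train, y_eval = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Metrics on the unseen evaluation set indicate how well the model generalises
preds = model.predict(X_eval)
print("accuracy:", accuracy_score(y_eval, preds))
print("F1:", f1_score(y_eval, preds))
```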
I don’t see that in the modern DU project - is there a way to define such a dataset separately?
Or is it doing something similar automatically in the background that I’m not aware of? I have checked the documentation and can’t find much info on this.
Your help/feedback on this is much appreciated!