English text classification dataset

Can anyone please share a dataset that I can use to retrain the out-of-the-box English text classification model in AI Fabric / AI Center?

@ppr @Jeremy_Tederry @nisargkadam23 @NIVED_NAMBIAR @loginerror: Guys, can you please help me with this?

Thanks,
Varun.

@Varun_Dharni What is the use case you have in mind? It depends on that.

@nisargkadam23 just a simple one for now: a CSV with two columns, input and target, that we can use to retrain the model. Can you share a dataset in that format?
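For reference, a minimal example of that two-column CSV format (the rows below are illustrative, not from the actual dataset):

```
input,target
The food was great and the service was fast,Positive
Waited 40 minutes and the soup arrived cold,Negative
```

Any extra columns are ignored by the pipeline; only the columns named input and target are read by default.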

@Varun_Dharni find attached data for Restaurant feedback. RestaurantFeedback.xlsx (48.7 KB)

Enjoy!

Make sure to split the data into train and test sets with an 80/20 ratio.
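A minimal sketch of such a split using only the Python standard library (the function name, file paths, and fixed seed are illustrative, not part of AI Center):

```python
import csv
import random

def split_csv(src, train_path, test_path, test_ratio=0.2, seed=42):
    """Shuffle a labelled CSV and write a train/test split.

    Expects a header row (e.g. input,target), which is copied to both files.
    """
    with open(src, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)          # keep the header row
        rows = [r for r in reader if r]
    random.Random(seed).shuffle(rows)  # fixed seed for a reproducible split
    n_test = round(len(rows) * test_ratio)
    for path, chunk in ((train_path, rows[n_test:]), (test_path, rows[:n_test])):
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(header)    # same header in train and test
            writer.writerows(chunk)
```

Upload the resulting train.csv and test.csv to the corresponding datasets in AI Center.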


@nisargkadam23 sure mate :slight_smile:

thanks,
varun.

@Varun_Dharni Please mark the comment as the solution, it helps.

:slight_smile: done

@nisargkadam23 @Jeremy_Tederry

it has been more than 2 hours and my ML Package status is still Undeployed.

Can you please help me here as well?

@Varun_Dharni Donā€™t worry about ML Packageā€™s status proceed with Train and Evaluation Pipeline once you deploy ML Skill it will automatically change the status.

When I try to run the pipeline it fails. This has happened twice now, any idea?

@nisargkadam23 until the pipelines run successfully I won't be able to use the ML Package, is my understanding correct?

@Varun_Dharni Yes, you are right about that.

@nisargkadam23 what would be the solution when the Train pipeline status comes back as Failed?

Highlighted above with an image.

Do you see any logs explaining why this is failing? You can check two places: ML Logs and the pipeline details (the three-dots button, then Details).

pipeline_log.txt (10.4 KB)

Yes, attached is the log file for your reference @Jeremy_Tederry

Thanks,

@Jeremy_Tederry

To be more specific, this is the error from the log that I am trying to make sense of:

2021-02-25 11:01:14,079 - aiflib.data_manager:info:15 - INFO: Loading data from /data/dataset…
2021-02-25 11:01:14,133 - aiflib.data_manager:info:15 - INFO: File [/data/dataset/train.csv] does not have name [input] in header '['Input', 'Target']', skipping this file. The csv file must contain a header with at least two columns. The column names are set by the <input_column> and <target_column> variables of this run. The default values are "input" and "target". If the file contains other columns, they will be ignored
2021-02-25 11:01:14,135 - aiflib.data_manager:info:15 - INFO: Unable to read any valid data from *.csv files in [/data/dataset]
2021-02-25 11:01:14,135 - aiflib.data_manager:info:15 - INFO: Unable to read any valid data from *.json files in [/data/dataset]
2021-02-25 11:01:14,135 - uipath_core.training_plugin:model_run:140 - ERROR: Training failed for pipeline type: TRAIN_ONLY, error: No valid data to run this pipeline.
2021-02-25 11:01:14,165 - uipath_core.trainer_run:main:81 - ERROR: Training Job failed, error: No valid data to run this pipeline.

Attached is the file, which I divided into train and test CSV files and then added to the respective datasets.

RestaurantFeedback.xls (130.5 KB)

Also, I just found one of your answers in this post: AI Fabric Evaluation Fail - #6 by Jeremy_Tederry

but I am not able to understand what exactly you meant by saying

" Could you try download the file again and upload it without opening it with Excel in meantime? "

@ppr @nisargkadam23 @Pablito @mgope @loginerror: Just thought you might also like to help me out with this.

Thanks,
Varun.

Sharing a screenshot of the Excel file containing the data; this has been converted into CSV and is what is being used. excel111|260x223

You need to rename the columns to input and target, without capitals, I think.

@Jeremy_Tederry no luck :frowning:

@Jeremy_Tederry also the ML Package status is still Undeployed…

@Jeremy_Tederry @nisargkadam23 the issue got resolved when I used two CSV files with a slightly different format from what I was trying earlier :slight_smile:

Thanks for your help, appreciate that!