Data Labeling for Text - Public Preview

We’re excited to announce the launch of Data Labeling within AI Center.

You can now upload raw, unlabeled datasets, annotate the data in the labeling tool, and use the labeled data to train ML models. Once an ML model is deployed, data labeling is also used by human reviewers to re-label incorrect predictions as part of the feedback loop.

In this preview, we enable labeling text data for classification and entity-recognition models. This supports the complete text model-building workflow within AI Center, without the need for third-party tools and integrations.

Here are a few key components:

  • Templates - A template defines the possible labels for a labeling session: the set of classes for a classification label, and the set of attributes for an entity label
  • Action Center - Like other user actions in UiPath products, labeling tasks can be assigned and managed within Action Center
  • Data Labeling Activity - This activity creates a data labeling task in Action Center from a Studio workflow, and can be used to trigger a human review of low-confidence predictions
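The low-confidence review pattern described above can be sketched in a few lines. This is a hypothetical illustration only, not the actual AI Center or Studio API: the threshold value, `route_prediction` function, and queue structure are all assumed names for the sake of the example.

```python
# Hypothetical sketch: route model predictions either straight through
# (high confidence) or into a human-review labeling queue (low confidence).
# None of these names come from the real AI Center / Action Center API.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tune per use case

def route_prediction(text, predicted_label, confidence, review_queue):
    """Accept a confident prediction, or queue it for human re-labeling."""
    if confidence >= CONFIDENCE_THRESHOLD:
        # Confident enough: use the model's label as-is.
        return {"text": text, "label": predicted_label, "source": "model"}
    # Low confidence: create a labeling task for a human reviewer,
    # analogous to a Data Labeling action in Action Center.
    review_queue.append({
        "text": text,
        "predicted": predicted_label,
        "confidence": confidence,
    })
    return None  # no final label yet; awaiting human review

review_queue = []
accepted = route_prediction("Invoice #123 is overdue", "invoice", 0.95, review_queue)
queued = route_prediction("Hello there", "invoice", 0.42, review_queue)
```

After running this, `accepted` holds the model's label directly, while the second prediction sits in `review_queue` waiting for a reviewer to re-label it — the same feedback loop the activity enables.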

More details can be found in the official docs.

The data labeling tool is available in the left menu within a project in AI Center. We look forward to your feedback on Data Labeling.

Note: You need to have the Document Understanding component enabled on the tenant in order to use Data Labeling.


Data Labeling for Text in public preview lets users create, manage, and label textual datasets, enhancing the training of machine learning models through tools for annotating text with relevant labels.

Users can classify text, identify entities, and assign sentiments, all of which contribute to building more accurate, efficient, and robust NLP applications. The public preview phase is an opportunity to explore the tools and features and offer feedback, ensuring they align with user needs before the full release.