Sentiment Analysis - True Negative and False Positive Predictions

Hi Team,

I recently had a hands-on with the Sentiment Analysis ML skill. I tried it on a real-time feedback Excel file, and all rows were predicted correctly except for a few where the score was 10 and the feedback column was empty.
Example:

Score   Feedback                      Prediction
10      (empty)                       Negative
2       I did not like the session    Negative
6       The session was OK.           Neutral

In the first row the person's score was 10 and there were no suggestions, yet the prediction was Negative. How should we deal with such incorrect predictions, where a positive response is falsely flagged as negative?
Can someone explain? Has anyone faced this kind of scenario, and how did you overcome it?

Hi @parvathi_ayanala

You can introduce an intermediate step just before the available data is sent to the model for processing. In this step, check for empty cells in the feedback column and fill them with a word derived from the existing score (a negative word if the score is less than or equal to 5, a positive word otherwise). That way the model receives complete input data and can process it properly.
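The intermediate step above could be sketched roughly as follows. This is a minimal illustration in plain Python, assuming the rows have already been read from the Excel file into dictionaries; the threshold of 5 and the placeholder words "bad"/"good" are assumptions, not part of the actual skill.

```python
def fill_missing_feedback(rows, threshold=5):
    """Fill empty feedback cells with a placeholder word derived from the score.

    rows: list of dicts with 'score' (int) and 'feedback' (str) keys.
    """
    for row in rows:
        if not row["feedback"].strip():
            # Score <= threshold -> assume negative, otherwise positive.
            row["feedback"] = "bad" if row["score"] <= threshold else "good"
    return rows

rows = [
    {"score": 10, "feedback": ""},
    {"score": 2, "feedback": "I did not like the session"},
]
fill_missing_feedback(rows)
# The empty cell in the first row is now "good", so the model should
# no longer flag that row as negative.
```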

Hope this helps,
Best Regards.

@arjunshenoy Thanks for the reply. I will try it.


In sentiment analysis, dealing with cases where the predicted sentiment does not align with the expected sentiment can be challenging. Here are some common strategies to address such scenarios:

  1. Imbalanced Classes:
  • Sentiment analysis models often encounter imbalanced classes, where certain sentiments are more prevalent than others. Ensure that your training data represents the distribution of sentiments in real-world scenarios. If negative sentiments are less common, the model may show a bias towards predicting the majority class.
  2. Review Training Data:
  • Examine the instances where the model made incorrect predictions. Check the training data for similar examples and assess if there are patterns or specific characteristics that the model is struggling with. Consider augmenting the training data to include more diverse examples.
  3. Fine-Tuning the Model:
  • If your sentiment analysis model allows for fine-tuning, consider retraining the model with additional labeled examples, especially those that resemble the misclassified instances. Fine-tuning can help the model adapt to specific nuances in the data.
  4. Adjusting Thresholds:
  • Sentiment analysis models often output a probability or confidence score for each prediction. By adjusting the threshold for classifying sentiments, you can control the balance between precision and recall. Experiment with different threshold values to find a balance that aligns with your use case.
  5. Post-Processing Rules:
  • Implement post-processing rules to handle specific cases. For example, if the score is 10 and there are no suggestions, it might be reasonable to consider it as a neutral sentiment rather than negative. Develop rules that align with the domain knowledge and context of your application.
  6. Handling Negations:
  • Sentences with negations (e.g., “not good”) can be challenging for sentiment analysis models. Ensure that the model is trained on examples with negations and consider using techniques like sentiment lexicons to handle negations effectively.
  7. Ensemble Models:
  • Consider using an ensemble of multiple models with different architectures or training approaches. Combining predictions from diverse models can often lead to more robust performance.
  8. Collecting More Diverse Data:
  • If possible, collect more diverse feedback data that includes a variety of sentiments and expressions. This can help the model generalize better to different user inputs.
  9. Continuous Monitoring and Feedback Loop:
  • Implement a system for continuous monitoring of model performance in production. When misclassifications occur, use them as feedback to iteratively improve the model. This can involve regularly retraining the model with new data and refining the features.
  10. Domain-specific Adjustments:
  • Depending on the specific requirements of your application, you may need to make domain-specific adjustments. For example, in cases where the score is 10 and there are no suggestions, you might consider treating it as a separate category or handling it differently.