Adding Interpretability to Multiclass Text Classification models

Rahul Agarwal
Nov 08, 2019


Explain Like I'm 5.

That is a basic tenet of learning for me: I try to distill any concept into a more palatable form. As Feynman said:

I couldn't do it. I couldn't reduce it to the freshman level. That means we really don't understand it.

So, when I saw the ELI5 library, which aims to help interpret machine learning models, I just had to try it out.

One of the basic problems we face when explaining our complex machine learning classifiers to the business is interpretability.

Sometimes the stakeholders want to understand what is causing a particular result. It may be because the task at hand is critical and we cannot afford a wrong decision. Think of a classifier that takes automated monetary actions based on user reviews.

Or it may be to understand a little more about the business or the problem space.

Or it may be to increase the social acceptance of your model.

This post is about interpreting complex text classification models.
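
To give a flavour of what that looks like, here is a minimal sketch of ELI5 applied to a multiclass text classifier. The dataset (20 newsgroups) and the TF-IDF plus logistic regression pipeline are illustrative assumptions on my part, not necessarily the exact setup used in the rest of the post:

```python
# A minimal, illustrative sketch: ELI5 on a multiclass text classifier.
# The dataset and pipeline below are assumptions for demonstration only.
import eli5
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A small multiclass text dataset: four newsgroup categories.
categories = ['alt.atheism', 'rec.autos', 'sci.space', 'talk.politics.misc']
train = fetch_20newsgroups(subset='train', categories=categories,
                           remove=('headers', 'footers', 'quotes'))

# TF-IDF features plus a linear classifier, a common interpretable baseline.
vec = TfidfVectorizer(min_df=3, stop_words='english')
X_train = vec.fit_transform(train.data)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train.target)

# Global explanation: the top-weighted words for each class.
print(eli5.format_as_text(
    eli5.explain_weights(clf, vec=vec, top=10,
                         target_names=train.target_names)))

# Local explanation: which words pushed one document towards its class.
print(eli5.format_as_text(
    eli5.explain_prediction(clf, train.data[0], vec=vec,
                            target_names=train.target_names)))
```

In a Jupyter notebook, eli5.show_weights and eli5.show_prediction render the same explanations as HTML, with the contributing words highlighted inside the document text.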

