Bot Essentials 18: Machine learning interpretability

To give some context about my background

I come from a programming background. When a program did not give the desired result I could 'debug' it: add breakpoints in the code, watch variables as they changed, stop the program at a breakpoint, inspect its 'state', and figure out whether executing a particular line of code contributed to a wrong result. Once I had spent enough time with a program I knew it like the back of my hand and could predict exactly which part might fail for a new scenario, or where to add code to handle one. In a sense I could 'interpret' how the program worked and explain 'what' it did and 'why' it did it. We'll get to machine learning shortly.
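
For readers who have not done this kind of debugging, here is a minimal sketch in Python; the function and values are invented purely for illustration.

```python
# A made-up toy function, debugged the classic way with Python's built-in debugger.
def compute_discount(price, customer_tier):
    breakpoint()  # pause here, inspect variables, step line by line
    if customer_tier == "gold":
        return price * 0.8
    return price

# At the breakpoint you can print values (`p price`), step (`n`), and
# continue (`c`) to see exactly which line produced a wrong result.
print(compute_discount(100.0, "gold"))
```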

Coming back to the present

Today I'm training ML and DL models and using the power of AI to solve complex problems, and I have grudgingly accepted that we should not expect a model to be 100% accurate. When a model makes a wrong prediction, it is natural for me to ask, 'How do I debug the model?' Imagine a situation where we have a customer churn model and we are presenting a list of customers who are likely to churn to a business user. When the user reviews it and asks how sure we are about the results, we are tempted to say 'the model gave 85% accuracy'. What does 85% accuracy translate to in business terms? If the user contradicts the results based on his or her intuition, it is time to give a detailed explanation of 'what' the model predicted and 'why' it predicted it. That is when I truly appreciated the power of the debuggers available in a programming language.
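
To make that concrete, here is a toy sketch of the kind of per-prediction 'debugging' I would like to have. The churn features, data, and model are all invented for illustration; a linear model is used only because its weight-times-value contributions are easy to read back as a crude 'why'.

```python
# Toy sketch: per-prediction "debugging" for a hypothetical churn model.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["tenure_months", "support_tickets", "monthly_spend"]

# Tiny made-up training set: 1 = churned, 0 = stayed.
X = np.array([[2, 5, 20], [30, 0, 80], [5, 3, 25],
              [40, 1, 90], [3, 4, 30], [36, 0, 70]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# "Why" for one customer: weight x value per feature, a crude first
# approximation of each feature's contribution to the decision score.
customer = np.array([4, 6, 22])
contributions = model.coef_[0] * customer
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
print("churn probability:", model.predict_proba([customer])[0, 1])
```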

The reason being, such advanced debuggers do not yet exist in the AI world

Sure, we can visualize decision tree results, but that is an after-the-fact look at the output. And except for simple models it is not possible to get such visualizations to understand how the model worked. When we talk of deep learning models we are taught to accept them as 'black boxes', which implies we cannot look into the inner workings of the model. Truly, the hidden layers hide everything from you.
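
As a small illustration of that after-the-fact look, here is a sketch using scikit-learn's text export on a tiny tree; the dataset and depth are arbitrary choices made for the example.

```python
# After-the-fact look at a small decision tree: print its learned rules as text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Readable only because the model is tiny; deep trees and neural networks
# do not yield a printout like this.
print(export_text(tree, feature_names=list(data.feature_names)))
```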

This has brought me to a crossroads. It has got me looking at where industry and academia stand with respect to the 'interpretability of models'.

This is a very complex topic. Thousands of researchers and industry practitioners are focusing on explaining how models work.

We need solutions and we need them fast

The reason we need to find a solution sooner rather than later is that algorithms and AI are becoming integrated into the decision-making frameworks of many industries. Consider the implications of a false positive, or worse, a false negative prediction for cancer. The doctor has the right to know how the model works and why it makes a certain prediction, so that they can be confident about the results. Just knowing that the model is about 85% accurate does not help in that situation.

If we are to make AI useful for humans, we have to define machine learning interpretability as 'providing a natural language explanation of the inner workings of a model'. We can always augment that with visualization techniques and reams of data, e.g. the weights, but unless we can explain it in human language the job will only be half done.
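
As a toy sketch of what layering natural language on top of the raw weights could look like (the weights, feature values, and wording below are all invented for illustration):

```python
# Toy sketch: a natural language explanation built on top of model weights.
weights = {"tenure_months": -0.9, "support_tickets": 1.2, "monthly_spend": -0.1}
customer = {"tenure_months": 3, "support_tickets": 6, "monthly_spend": 22}

# Weight x value per feature, then report the strongest signal in plain words.
contributions = {f: weights[f] * customer[f] for f in weights}
top = max(contributions, key=lambda f: abs(contributions[f]))
direction = "towards churn" if contributions[top] > 0 else "away from churn"

print(f"The strongest signal for this customer is '{top}' "
      f"(value {customer[top]}), pushing the prediction {direction}.")
```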

Here are some interesting statistics on the interest in this area

A Google search of the phrase 'machine learning' returned 728 million results.

When I searched for 'machine learning interpretability' I got 0.37 million results. That is roughly 0.05% of the former, which implies almost negligible interest compared to the main topic.

'Deep learning' gave 826 million results; 'deep learning interpretability' gave 0.2 million results, roughly 0.02%, which again implies almost negligible interest.
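
For clarity, the arithmetic behind those percentages, using the search counts quoted above:

```python
# Restating the ratios behind the percentages quoted above.
print(f"{0.37e6 / 728e6:.2%}")  # interpretability vs 'machine learning' -> ~0.05%
print(f"{0.2e6 / 826e6:.2%}")   # interpretability vs 'deep learning'    -> ~0.02%
```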

Below is a trend chart from Google Trends for the year 2018, worldwide.

[Google Trends chart, 2018, worldwide]

The above statistics definitely surprised me. It kind of tells me that I'm worrying about something very few people are worrying about.

But there is hope for building effective machine learning systems

Google recently released a tool called 'What-If'. Here is an excerpt:

‘Building effective machine learning systems means asking a lot of questions. It’s not enough to train a model and walk away. Instead, good practitioners act as detectives, probing to understand their model better.’

This will be a good start towards discovering how a model works. It is definitely on my list of explorations as I look forward to this journey of interpreting machine learning models.
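
Purely as a sketch of what exploring it might look like: the tool ships with a notebook interface in the witwidget package. The data and prediction function below are dummy placeholders, and the exact setup may differ across versions and environments.

```python
# Rough sketch of launching Google's What-If Tool in a Jupyter notebook
# via the `witwidget` package. Data and prediction function are placeholders.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(tenure_months, churned):
    # Build a minimal tf.Example with two made-up features.
    return tf.train.Example(features=tf.train.Features(feature={
        "tenure_months": tf.train.Feature(
            float_list=tf.train.FloatList(value=[tenure_months])),
        "churned": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[churned])),
    }))

examples = [make_example(3.0, 1), make_example(36.0, 0)]

def predict_fn(examples_batch):
    # Placeholder "model": return [P(stay), P(churn)] for every example.
    return [[0.3, 0.7] for _ in examples_batch]

config = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config, height=600)  # renders the interactive tool inside the notebook
```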

Feel free to reach out to learn more.