The balance: Accuracy vs. Interpretability

How many times have you struggled to implement a high-performing algorithm that is as interpretable as a regression model? I have gone through this struggle almost every time I deliver a project to a business client. Everyone wants to know why one customer is more valuable than another, or why one patient is more likely to be diagnosed with a disease than another. As more and more companies start using data science techniques to drive growth, and as the C-suite increasingly relies on such techniques, understanding the trade-off between accuracy and interpretability becomes all the more relevant for analytic success.

So what is this trade-off between accuracy and interpretability? Many deep learning techniques do the best job at prediction, but they are complex and difficult to interpret. For a business user, the complex interactions between the independent variables are hard to understand and might not always make business sense.
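
To make this concrete, here is a minimal scikit-learn sketch (the dataset and models are illustrative choices, not from any client project): a logistic regression exposes one coefficient per feature, so its predictions can be explained directly, while a gradient-boosted ensemble often scores higher but offers no comparably simple explanation.

```python
# Contrast an interpretable linear model with a higher-capacity black box.
# Illustrative only; assumes scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("gradient boosting", black_box)]:
    model.fit(X_train, y_train)
    print(f"{name} accuracy: {model.score(X_test, y_test):.3f}")

# The linear model's coefficients give a direct, per-feature explanation;
# the boosted ensemble has no equivalent single set of numbers to show a client.
coefs = interpretable.named_steps["logisticregression"].coef_[0]
print("largest coefficient magnitude:", abs(coefs).max())
```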

As data science consultants, we cannot expect to build confidence in our algorithmic approaches if they are not well understood by our business clients. Further, having interpretable models justifies the value proposition of data science techniques: business users start appreciating the value of these techniques when they can see how they solve real business use cases.

Also, nowadays buzzwords such as Artificial Intelligence, Neural Networks, and Machine Learning are used so rampantly that clients ask, right at the start of a project, questions such as "Are we going to implement neural network algorithms?" This makes it all the more important to explain the friction between accuracy and interpretability, and why a certain model is best suited to a particular situation.

Below is the representation I typically use to explain to business users the choice of one algorithm over another, and how that choice relates to the use case we are trying to solve and the business objective we want to achieve.

There are plenty of other algorithms that could fit into this spectrum, but these are the ones most relevant to the use cases I typically work on. Also, as far as the y-axis is concerned, individual algorithms could move up or down depending on the problem, but this is the typical trend.
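
If the chart does not render here, the sketch below reproduces the kind of representation described: algorithms plotted on an interpretability (x) vs. accuracy (y) spectrum. The placements are assumed, illustrative positions only; as noted above, where an algorithm sits on the accuracy axis varies by use case.

```python
# A minimal matplotlib sketch of an accuracy-vs-interpretability spectrum.
# Coordinates are rough, assumed placements for illustration, not measurements.
import matplotlib.pyplot as plt

algorithms = {  # name: (interpretability, typical accuracy)
    "Linear/Logistic Regression": (0.9, 0.4),
    "Decision Tree":              (0.8, 0.5),
    "Random Forest":              (0.5, 0.7),
    "Gradient Boosting":          (0.4, 0.8),
    "Neural Network":             (0.2, 0.9),
}

fig, ax = plt.subplots(figsize=(7, 5))
for name, (interp, acc) in algorithms.items():
    ax.scatter(interp, acc)
    ax.annotate(name, (interp, acc), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Interpretability")
ax.set_ylabel("Accuracy (typical trend)")
ax.set_title("The accuracy vs. interpretability spectrum")
plt.tight_layout()
plt.show()
```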

Having said this, there are many techniques we can use to bring interpretability to more complex models. In my next blog, I will explain one such technique that we recently used to interpret an XGBoost model.
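
To give a flavor of what such techniques look like, below is a minimal sketch using SHAP (SHapley Additive exPlanations), one popular option for interpreting gradient-boosted models. It assumes the `xgboost` and `shap` packages are installed, and it is not necessarily the specific technique the follow-up post covers.

```python
# Explain an XGBoost model's predictions with SHAP values.
# Illustrative sketch; assumes `xgboost` and `shap` are installed.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions to each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by their average impact on model output,
# giving business users a global view of what drives the model.
shap.summary_plot(shap_values, X)
```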

This content was originally published on my personal blogging website: http://datascienceninja.com/. Visit the site to view it and subscribe to receive updates on the latest blogs instantly.

Follow www.datascienceninja.com to get access to my latest blogs!

Data Science Consultant @ ZS Associates