Why Should I Trust You?

A challenge with complex machine learning models is developing trust in them. If a model is a black box, some users may not feel comfortable using it. Models need to be interpretable, meaning that users should be able to understand how the outputs (predictions) are generated from the inputs (features).

Different approaches have been suggested. A recent one is a technique called Local Interpretable Model-agnostic Explanations (LIME). LIME approximates a complex model locally with an interpretable model, such as a linear model restricted to a small number of features.
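To make the idea concrete, here is a minimal sketch of the core LIME recipe, written from scratch with scikit-learn rather than the official `lime` package: perturb the instance of interest, query the black-box model at the perturbed points, weight each point by its proximity to the instance, and fit a weighted linear model as the local surrogate. The function name `explain_locally` and all parameter choices (noise scale, kernel width) are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# The black box: a random forest trained on the iris data.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(instance, n_samples=1000, kernel_width=0.75):
    """Fit a weighted linear surrogate around `instance` (the LIME idea).

    Illustrative sketch: noise scale and kernel are simple choices,
    not the settings used in the paper or the lime library.
    """
    rng = np.random.default_rng(0)
    scale = X.std(axis=0)
    # 1. Perturb the instance with per-feature Gaussian noise.
    samples = instance + rng.normal(size=(n_samples, X.shape[1])) * scale
    # 2. Query the black box (here: probability of class 1).
    preds = black_box.predict_proba(samples)[:, 1]
    # 3. Weight perturbed points by proximity (exponential kernel).
    dist = np.linalg.norm((samples - instance) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # 4. Fit the interpretable surrogate: a weighted linear model.
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    return surrogate.coef_  # one local importance weight per feature

coefs = explain_locally(X[0])
print(coefs)
```

The surrogate's coefficients indicate how each feature drives the black-box prediction in the neighborhood of that one instance; they say nothing about the model's global behavior, which is exactly the "local" in LIME.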

A short video introduces the approach.

You can read the paper here.
