Arxiv Sanity

A great website for sorting through research papers in Machine Learning is Arxiv Sanity, developed by Andrej Karpathy, now at Tesla. Have a look at the introductory video:

You can save papers that you like and find similar papers, ranked by tf-idf similarity. What is still missing is a social score, such as which papers are popular at the moment, although the list of top saved papers can serve as a proxy for popularity.
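To make the tf-idf ranking concrete, here is a minimal sketch of how papers could be ranked by tf-idf cosine similarity. The paper titles and texts are made up for illustration; this is the general technique, not arxiv sanity's actual implementation.

```python
import math
from collections import Counter

# Hypothetical toy corpus: paper id -> abstract text.
docs = {
    "paper_a": "neural network image classification deep learning",
    "paper_b": "deep learning neural network language model",
    "paper_c": "support vector machine kernel methods",
}

def tfidf_vectors(corpus):
    """Compute a tf-idf vector (dict of term -> weight) for each document."""
    tokenized = {doc_id: text.split() for doc_id, text in corpus.items()}
    n = len(corpus)
    df = Counter()  # document frequency of each term
    for tokens in tokenized.values():
        df.update(set(tokens))
    vectors = {}
    for doc_id, tokens in tokenized.items():
        tf = Counter(tokens)
        vectors[doc_id] = {t: (tf[t] / len(tokens)) * math.log(n / df[t])
                           for t in tf}
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = tfidf_vectors(docs)
# Rank the other papers by similarity to paper_a.
ranking = sorted((d for d in vecs if d != "paper_a"),
                 key=lambda d: cosine(vecs["paper_a"], vecs[d]),
                 reverse=True)
```

Here `paper_b` ranks above `paper_c` because it shares the terms "deep", "learning", "neural", and "network" with `paper_a`, while `paper_c` shares none.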

Why Should I Trust You?

A challenge with complex machine learning models is developing trust in them. If a model is a black box, some users might not feel comfortable using it. Models need to be interpretable, meaning that users should be able to understand how the outputs (predictions) are generated from the inputs (features).

Different approaches have been suggested. A recent one is a technique called Local Interpretable Model-agnostic Explanations (LIME). LIME approximates a complex model locally with an interpretable one, such as a linear model with a small number of features.
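The core idea can be sketched in a few lines: sample points around the instance to explain, query the black-box model for predictions, weight the samples by proximity, and fit a weighted linear model. The sketch below does this for a one-dimensional black box (a toy quadratic standing in for an opaque model); the function names and kernel settings are illustrative assumptions, not the LIME library's API.

```python
import math
import random

def black_box(x):
    # Stand-in for an opaque model: the explainer only queries predictions.
    return x * x

def lime_1d(f, x0, n_samples=500, spread=1.0, kernel_width=0.5, seed=0):
    """Fit a local linear surrogate g(x) = a + b*x around x0 (LIME sketch)."""
    rng = random.Random(seed)
    # 1. Perturb the instance of interest.
    xs = [x0 + rng.gauss(0.0, spread) for _ in range(n_samples)]
    # 2. Query the black box at the perturbed points.
    ys = [f(x) for x in xs]
    # 3. Weight samples by proximity to x0 (exponential kernel).
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # 4. Weighted least squares for the interpretable surrogate.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    a = my - b * mx
    return a, b

a, b = lime_1d(black_box, x0=2.0)
```

Near x0 = 2 the fitted slope b approximates the local behaviour of the quadratic (its derivative there is 4), even though the explainer never looks inside `black_box` — which is what makes the approach model-agnostic.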

A short video introduces the approach.

You can read the paper here.