“Why Should I Trust You?” - Debugging black-box text classifiers


Classifying text is a common use case for machine learning algorithms, but despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind a prediction is important for assessing trust, which is essential if one plans to act on that prediction. This talk shows how to use eli5 and the LIME algorithm to explain the predictions of text classifiers.
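To make the idea concrete, here is a minimal sketch of the LIME approach using only scikit-learn and NumPy (the talk itself demonstrates eli5, whose `TextExplainer` wraps this algorithm). The toy corpus, the `explain_text` helper, and all parameters are illustrative assumptions, not the talk's code: we perturb a document by randomly dropping words, query the black-box classifier on the perturbed copies, and fit a weighted linear surrogate whose coefficients approximate each word's local contribution.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus for illustration only.
texts = [
    "great movie loved it", "great film fantastic acting",
    "wonderful great story", "terrible boring film",
    "awful terrible plot", "boring terrible waste",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# The "black box": TF-IDF features + logistic regression.
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(texts, labels)

def explain_text(doc, predict_proba, n_samples=500, seed=0):
    """LIME-style explanation: perturb the document by dropping words,
    query the black box, and fit a weighted linear surrogate."""
    rng = np.random.default_rng(seed)
    words = doc.split()
    # Binary masks: 1 = keep the word, 0 = drop it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # include the unperturbed document itself
    perturbed = [" ".join(w for w, m in zip(words, row) if m)
                 for row in masks]
    probs = predict_proba(perturbed)[:, 1]  # P(positive) per sample
    # Weight samples by closeness to the original document
    # (here simply the fraction of words kept).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probs, sample_weight=weights)
    # Surrogate coefficients approximate each word's local contribution.
    return dict(zip(words, surrogate.coef_))

contrib = explain_text("great movie terrible ending", pipe.predict_proba)
print(sorted(contrib.items(), key=lambda kv: -abs(kv[1])))
```

In this sketch, a word with a large positive coefficient pushes the prediction toward the positive class for this particular document; eli5 renders the same kind of per-word weights as highlighted text.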


Talk repository: https://github.com/tsterbak/pydata2018-amsterdam
