Data Skeptic

Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.

Direct download: interpretability-practitioners.mp3
Category:general -- posted at: 9:43am PDT

Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.

Direct download: facial-recognition-auditing.mp3
Category:general -- posted at: 11:34am PDT




Direct download: robust-fit-to-nature.mp3
Category:general -- posted at: 8:56am PDT

Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as “black boxes”.

While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful.

But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist?

Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…

Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition




Direct download: black-boxes-are-not-required.mp3
Category:general -- posted at: 12:59pm PDT
