Data Skeptic

Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

Direct download: interpretable-ai-in-healthcare.mp3
Category:general -- posted at: 8:49am PDT

What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.



Direct download: understanding-neural-networks.mp3
Category:general -- posted at: 10:07am PDT

Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user?

We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.



Direct download: self-explaining-ai.mp3
Category:general -- posted at: 10:23pm PDT

Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!

Direct download: plastic-bag-bans.mp3
Category:general -- posted at: 8:45am PDT




Direct download: self-driving-cars-and-pedestrians.mp3
Category:general -- posted at: 10:58am PDT

Computer Vision is not Perfect

Julia Evans joins us to help answer the question of why neural networks think a panda is a vulture. Kyle talks to Julia about her hands-on work fooling neural networks.

Julia runs Wizard Zines, which publishes works such as Your Linux Toolbox. You can find her on Twitter @b0rk.

Direct download: computer-vision-is-not-perfect.mp3
Category:general -- posted at: 10:53am PDT

Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates.

Homepage: http://users.eecs.northwestern.edu/~jhullman/

Lab: MU Collective

Direct download: uncertainty-representations.mp3
Category:general -- posted at: 8:18am PDT

Announcing Journal Club

I am pleased to announce that Data Skeptic is launching a new spin-off show called "Journal Club," with similar themes but a very different format from the Data Skeptic everyone is used to.

In Journal Club, a regular panel and occasional guest panelists will discuss interesting news items and one featured journal article every week in a roundtable discussion. Each week, I'll be joined by Lan Guo and George Kemp to discuss data science-related news articles and a featured journal or preprint article.

We hope that this podcast will give listeners an introduction to the works we cover and to how people discuss them. Our topics will often coincide with the original Data Skeptic podcast's current Interpretability theme, but we have few rules right now about what we pick. We enjoy discussing these items with each other, and we hope you will too.

In the coming weeks, we will start opening up the guest chair more often to bring new voices to our discussion. After that we'll be looking for ways we can engage with our audience.

Keep reading and thanks for listening!

Kyle

Direct download: AlphaGo_COVID-19_Contact_Tracing_and_New_Data_Set.mp3
Category:general -- posted at: 11:00pm PDT

Direct download: visualizing-uncertainty.mp3
Category:general -- posted at: 8:00am PDT

Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability.

Direct download: interpretability-tooling.mp3
Category:general -- posted at: 8:00am PDT