Data Skeptic

Derek Lim joins us to discuss the paper Expertise and Dynamics within Crowdsourced Musical Knowledge Curation: A Case Study of the Genius Platform.

 

Direct download: crowdsourced-expertise.mp3
Category:general -- posted at: 7:00am PDT

Neil Johnson joins us to discuss the paper The online competition between pro- and anti-vaccination views.

Direct download: the-spread-of-misinformation-online.mp3
Category:general -- posted at: 7:00am PDT



Direct download: consensus-voting.mp3
Category:general -- posted at: 7:00am PDT

Steven Heilman joins us to discuss his paper Designing Stable Elections.

For a general interest article, see: https://theconversation.com/the-electoral-college-is-surprisingly-vulnerable-to-popular-vote-changes-141104

Steven Heilman receives funding from the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Direct download: voting-mechanisms.mp3
Category:general -- posted at: 7:00am PDT

Sami Yousif joins us to discuss the paper The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus. This work empirically explores how individuals evaluate consensus when reviewing online news articles under different experimental conditions.

More from Sami at samiyousif.org

Link to survey mentioned by Daniel Kerrigan: https://forms.gle/TCdGem3WTUYEP31B8

Direct download: false-concensus.mp3
Category:general -- posted at: 3:16pm PDT

In this solo episode, Kyle overviews the field of fraud detection with eCommerce as a use case. He discusses some of the techniques and system architectures companies use to fight fraud, focusing on why the problem needs to be approached from a real-time perspective.
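
As one concrete example of the real-time angle, here is a minimal sketch of a classic streaming signal, a sliding-window velocity check that flags a card making too many purchases in a short span (a toy illustration with invented thresholds, not the episode's actual architecture).

```python
# Hypothetical sliding-window velocity check; thresholds are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_TXNS_PER_WINDOW = 5

recent = defaultdict(deque)  # card_id -> timestamps of recent transactions

def is_suspicious(card_id, now=None):
    """Flag a transaction if this card exceeds the velocity threshold."""
    now = time.time() if now is None else now
    q = recent[card_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # evict transactions that fell out of the window
    q.append(now)
    return len(q) > MAX_TXNS_PER_WINDOW

# Six rapid-fire transactions on one card: only the sixth trips the rule.
print([is_suspicious("card-123", now=float(t)) for t in range(6)])
```

In production, checks like this run inside a stream processor so the decision can be made before the transaction completes, which is what makes the real-time framing essential.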

Direct download: fraud-detection-in-real-time.mp3
Category:general -- posted at: 12:12am PDT

In this episode, Kyle and Linhda review the results of our recent survey. Hear all about the demographic details and how we interpret these results.

Direct download: listener-survey-review.mp3
Category:general -- posted at: 10:01am PDT

Moses Namara from the HATLab joins us to discuss his research at the intersection of privacy and human-computer interaction.

Direct download: human-computer-interaction-and-online-privacy.mp3
Category:general -- posted at: 2:43pm PDT




Direct download: authorship-attribution-of-lennon-mccartney-songs.mp3
Category:general -- posted at: 8:00am PDT

Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a rapid way to try these ideas out for yourself.

Direct download: gans-can-be-interpretable.mp3
Category:general -- posted at: 7:42pm PDT

Direct download: sentiment-preserving-fake-reviews.mp3
Category:general -- posted at: 3:48pm PDT

Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.

Direct download: interpretability-practitioners.mp3
Category:general -- posted at: 9:43am PDT

Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.

Direct download: facial-recognition-auditing.mp3
Category:general -- posted at: 11:34am PDT




Direct download: robust-fit-to-nature.mp3
Category:general -- posted at: 8:56am PDT

Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as “black boxes”.

While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful.

But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist?

Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…

Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition
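
Here is a rough sketch of the sanity check this line of work suggests (our illustration, not code from the paper): fit a small interpretable model alongside a black box and compare. The dataset and models below are stand-ins chosen for convenience.

```python
# Compare a black-box ensemble against a small, human-readable model.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

black_box = RandomForestClassifier(n_estimators=200, random_state=0)
simple = DecisionTreeClassifier(max_depth=3, random_state=0)  # inspectable by eye

for name, clf in [("random forest", black_box), ("depth-3 tree", simple)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
# When the gap is small, Rudin's argument is that the simpler model should win.
```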




Direct download: black-boxes-are-not-required.mp3
Category:general -- posted at: 12:59pm PDT

Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.

Direct download: robustness-to-unforeseen-adversarial-attacks.mp3
Category:general -- posted at: 8:29am PDT



Direct download: estimating-the-size-of-language-acquisition.mp3
Category:general -- posted at: 2:36pm PDT

Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

Direct download: interpretable-ai-in-healthcare.mp3
Category:general -- posted at: 8:49am PDT

What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.



Direct download: understanding-neural-networks.mp3
Category:general -- posted at: 10:07am PDT

Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user?

We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.



Direct download: self-explaining-ai.mp3
Category:general -- posted at: 10:23pm PDT

Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!

Direct download: plastic-bag-bans.mp3
Category:general -- posted at: 8:45am PDT




Direct download: self-driving-cars-and-pedestrians.mp3
Category:general -- posted at: 10:58am PDT

Computer Vision is not Perfect

Julia Evans joins us to help answer the question: why do neural networks think a panda is a vulture? Kyle talks to Julia about her hands-on work fooling neural networks.

Julia runs Wizard Zines, which publishes works such as Your Linux Toolbox. You can find her on Twitter @b0rk.
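
If you want to try fooling a network yourself, below is a minimal sketch of the fast gradient sign method, one common way to construct such adversarial inputs (the tiny model, data, and step size are placeholders, not the setup from Julia's work).

```python
# Minimal FGSM sketch on a toy model; epsilon and architecture are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x = torch.randn(1, 4, requires_grad=True)
label = model(x).argmax(dim=1)  # the model's current prediction

# One signed-gradient step that increases the loss for that prediction.
loss = nn.functional.cross_entropy(model(x), label)
loss.backward()
x_adv = (x + 0.5 * x.grad.sign()).detach()

# With a large enough step, the predicted class usually changes.
print(model(x).argmax(dim=1).item(), "->", model(x_adv).argmax(dim=1).item())
```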

Direct download: computer-vision-is-not-perfect.mp3
Category:general -- posted at: 10:53am PDT

Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates.

Homepage: http://users.eecs.northwestern.edu/~jhullman/

Lab: MU Collective

Direct download: uncertainty-representations.mp3
Category:general -- posted at: 8:18am PDT

Announcing Journal Club

I am pleased to announce Data Skeptic is launching a new spin-off show called "Journal Club" with similar themes but a very different format from the Data Skeptic everyone is used to.

In Journal Club, each week I'll be joined by a regular panel, Lan Guo and George Kemp, plus occasional guest panelists, for a roundtable discussion of interesting data science news items and one featured journal or pre-print article.

We hope this podcast will give listeners an introduction to the works we cover and to how people discuss them. Our topics will often coincide with the original Data Skeptic podcast's current Interpretability theme, but we have few rules right now about what we pick. We enjoy discussing these items with each other, and we hope you will too.

In the coming weeks, we will start opening up the guest chair more often to bring new voices to the discussion. After that, we'll be looking for ways to engage with our audience.

Keep reading and thanks for listening!

Kyle

Direct download: AlphaGo_COVID-19_Contact_Tracing_and_New_Data_Set.mp3
Category:general -- posted at: 11:00pm PDT

Direct download: visualizing-uncertainty.mp3
Category:general -- posted at: 8:00am PDT

Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability.

Direct download: interpretability-tooling.mp3
Category:general -- posted at: 8:00am PDT

Kyle and Linhda discuss how Shapley Values might be a good tool for determining what makes the cut for a home renovation.
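
For intuition, here is a minimal sketch of exact Shapley values on a toy renovation "game" (our illustration; the valuation function and numbers are invented): each project's value is its marginal contribution averaged over every possible ordering.

```python
# Exact Shapley values for a tiny coalition game with invented numbers.
from itertools import permutations

projects = ["kitchen", "bathroom", "deck"]

def value(coalition):
    # Hypothetical valuation: kitchen + bathroom have a synergy bonus.
    v = {"kitchen": 20, "bathroom": 10, "deck": 5}
    total = sum(v[p] for p in coalition)
    if "kitchen" in coalition and "bathroom" in coalition:
        total += 8  # remodeling both at once is worth extra
    return total

def shapley(players, v):
    # Average each player's marginal contribution over all orderings.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = set()
        for p in order:
            phi[p] += v(seen | {p}) - v(seen)
            seen.add(p)
    return {p: phi[p] / len(orders) for p in players}

print(shapley(projects, value))
# The synergy bonus is split evenly between kitchen and bathroom.
```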

Direct download: shapley-values.mp3
Category:general -- posted at: 12:29pm PDT

We welcome back Marco Tulio Ribeiro to discuss research he has done since our original discussion on LIME.

In particular, we ask the question Are Red Roses Red? and discuss how Anchors provide high-precision, model-agnostic explanations.
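
Roughly, an anchor is a set of feature conditions that locks in the model's prediction: holding those features fixed, perturbing everything else rarely changes the output. Below is a self-contained toy sketch of estimating an anchor's precision (our illustration, not Ribeiro's implementation; the model and features are invented).

```python
# Estimate an anchor's precision by perturbing non-anchored features.
import random

def model(x):
    # Toy classifier: predicts 1 when both a and b are large.
    return int(x["a"] > 0.5 and x["b"] > 0.3)

def anchor_precision(x, anchor_keys, model, n_samples=10_000):
    base = model(x)
    hits = 0
    for _ in range(n_samples):
        z = {k: (x[k] if k in anchor_keys else random.random()) for k in x}
        hits += model(z) == base
    return hits / n_samples

x = {"a": 0.9, "b": 0.8, "c": 0.1}
print(anchor_precision(x, {"a", "b"}, model))  # ~1.0: {a, b} anchors the prediction
print(anchor_precision(x, {"a"}, model))       # lower: b still varies
```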


Please take our listener survey.

Direct download: anchors-as-explanations.mp3
Category:general -- posted at: 6:46am PDT

Direct download: mathematical-models-of-ecological-systems.mp3
Category:general -- posted at: 4:10pm PDT

Walt Woods joins us to discuss his paper Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness with co-authors Jack Chen and Christof Teuscher.

Direct download: adversarial-explanations.mp3
Category:general -- posted at: 3:10pm PDT

Andrei Barbu joins us to discuss ObjectNet - a new kind of vision dataset.

In contrast to ImageNet, ObjectNet seeks to provide images that are more representative of the types of images an autonomous machine is likely to encounter in the real world. Collecting a dataset in this way required careful use of Mechanical Turk to get Turkers to provide a corpus of images that removes some of the bias found in ImageNet.

http://0xab.com/

Direct download: objectnet.mp3
Category:general -- posted at: 8:00am PDT

Enrico Bertini joins us to discuss how data visualization can be used to help make machine learning more interpretable and explainable.

Find out more about Enrico at http://enrico.bertini.io/.

More from Enrico with co-host Moritz Stefaner on the Data Stories podcast!

Direct download: visualization-and-interpretability.mp3
Category:general -- posted at: 8:00am PDT

We welcome Su Wang back to Data Skeptic to discuss the paper Distributional modeling on a diet: One-shot word learning from text only.

Direct download: interpretable-one-shot-learning.mp3
Category:general -- posted at: 9:00pm PDT

Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person. Their attack targets the popular YOLO2 pre-trained image recognition model and is thus likely to be widely applicable.

Direct download: fooling-computer-vision.mp3
Category:general -- posted at: 10:38am PDT

This episode includes an interview with Aaron Roth author of The Ethical Algorithm.

Direct download: algorithmic-fairness.mp3
Category:general -- posted at: 6:31pm PDT

Interpretability

Machine learning has rapidly expanded into every sector and industry. As we rely more heavily on models, and as the stakes of their decisions rise, the question of how models actually work becomes increasingly important to ask.

Welcome to Data Skeptic Interpretability.

In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning.

Thanks to our sponsor, the Gartner Data & Analytics Summit going on in Grapevine, TX on March 23 – 26, 2020. Use discount code: dataskeptic.

Music

Our new theme song is #5 by Big D and the Kids Table.

Incidental music by Tanuki Suit Riot.

Direct download: interpretability.mp3
Category:general -- posted at: 12:33am PDT
