Data Skeptic (general)

Hal Ashton, a PhD student at University College London, joins us today to discuss his recent work “Causal Campbell-Goodhart’s law and Reinforcement Learning.”

"Only buy honey from a local producer." - Hal Ashton

 

Works Mentioned:

“Causal Campbell-Goodhart’s law and Reinforcement Learning” by Hal Ashton (paper)

“The Book of Why” by Judea Pearl (book)

Thanks to our sponsor! 

When your business is ready to make that next hire, find the right person with LinkedIn Jobs. Just visit LinkedIn.com/DATASKEPTIC to post a job for free! Terms and conditions apply.
Direct download: goodharts-law-in-reinforcement-learning.mp3
Category:general -- posted at: 5:00am PST

Yuqi Ouyang, in his second year of PhD study at the University of Warwick in England, joins us today to discuss his work “Video Anomaly Detection by Estimating Likelihood of Representations.”

Works Mentioned:


Video Anomaly Detection by Estimating Likelihood of Representations
https://arxiv.org/abs/2012.01468
by: Yuqi Ouyang, Victor Sanchez

Direct download: video-anomaly-detection.mp3
Category:general -- posted at: 6:00am PST

Nirupam Gupta, a computer science postdoctoral researcher at EPFL in Switzerland, joins us today to discuss his work “Byzantine Fault-Tolerance in Peer-to-Peer Distributed Gradient-Descent.”

 

Works Mentioned: 
https://arxiv.org/abs/2101.12316

Byzantine Fault-Tolerance in Peer-to-Peer Distributed Gradient-Descent
by Nirupam Gupta and Nitin H. Vaidya

 

Conference Details:

https://georgetown.zoom.us/meeting/register/tJ0sc-2grDwjEtfnLI0zPnN-GwkDvJdaOxXF

Direct download: fault-tolerant-distributed-gradient-descent.mp3
Category:general -- posted at: 6:30am PST

Mikko Lauri, a postdoctoral researcher at the University of Hamburg, Germany, comes on the show today to discuss the work Information Gathering in Decentralized POMDPs by Policy Graph Improvements.

Follow Mikko: @mikko_lauri

GitHub: https://laurimi.github.io/

Direct download: decentralized-information-gathering.mp3
Category:general -- posted at: 5:30am PST

Balaji Arun, a PhD student in the Systems of Software Research Group at Virginia Tech, joins us today to discuss his research on distributed systems through the paper “Taming the Contention in Consensus-based Distributed Systems.”

Works Mentioned
“Taming the Contention in Consensus-based Distributed Systems” 
by Balaji Arun, Sebastiano Peluso, Roberto Palmieri, Giuliano Losa, and Binoy Ravindran
https://www.ssrg.ece.vt.edu/papers/tdsc20-author-version.pdf

“Fast Paxos”
by Leslie Lamport 
https://link.springer.com/article/10.1007/s00446-006-0005-x

Direct download: leaderless-consensus.mp3
Category:general -- posted at: 9:47am PST

Maartje ter Hoeve, PhD Student at the University of Amsterdam, joins us today to discuss her research in automated summarization through the paper “What Makes a Good Summary? Reconsidering the Focus of Automatic Summarization.” 

Works Mentioned 
“What Makes a Good Summary? Reconsidering the Focus of Automatic Summarization.”
by Maartje ter Hoeve, Julia Kiseleva, and Maarten de Rijke

Contact
Email:
m.a.terhoeve@uva.nl

Twitter:
https://twitter.com/maartjeterhoeve

Website:
https://maartjeth.github.io/#get-in-touch

Direct download: automatic-summarization.mp3
Category:general -- posted at: 8:00am PST

Brian Brubach, Assistant Professor in the Computer Science Department at Wellesley College, joins us today to discuss his work “Meddling Metrics: the Effects of Measuring and Constraining Partisan Gerrymandering on Voter Incentives.”

WORKS MENTIONED:
Meddling Metrics: the Effects of Measuring and Constraining Partisan Gerrymandering on Voter Incentives
by Brian Brubach, Aravind Srinivasan, and Shawn Zhao

Direct download: gerrymandering.mp3
Category:general -- posted at: 8:00am PST

Aside from victory questions like “Can black force a checkmate on white in 5 moves?”, many novel questions can be asked about a game of chess. Some questions are trivial (e.g. “How many pieces does white have?”), while more computationally challenging questions can contribute interesting results in computational complexity theory.

In this episode, Josh Brunner, Master's student in Theoretical Computer Science at MIT, joins us to discuss his recent paper Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard.

Works Mentioned
Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard
by Josh Brunner, Erik D. Demaine, Dylan Hendrickson, and Julian Wellman

1x1 Rush Hour With Fixed Blocks is PSPACE Complete
by Josh Brunner, Lily Chung, Erik D. Demaine, Dylan Hendrickson, Adam Hesterberg, Adam Suhl, Avi Zeff

Direct download: even-cooperative-chess-is-hard.mp3
Category:general -- posted at: 10:02am PST

Eli Goldweber, a graduate student at the University of Michigan, comes on today to share his work in applying formal verification to systems and a modification to the Paxos protocol discussed in the paper On the Significance of Consecutive Ballots in Paxos.

Works Mentioned :
Previous Episode on Paxos 
https://dataskeptic.com/blog/episodes/2020/distributed-consensus

Paper:
On the Significance of Consecutive Ballots in Paxos by Eli Goldweber, Nuda Zhang, and Manos Kapritsos

Thanks to our sponsor:
Nord VPN : 68% off a 2-year plan and one month free! With NordVPN, all the data you send and receive online travels through an encrypted tunnel. This way, no one can get their hands on your private information. Nord VPN is quick and easy to use to protect the privacy and security of your data. Check them out at nordvpn.com/dataskeptic

Direct download: consecutive-votes-in-paxos.mp3
Category:general -- posted at: 6:00am PST

Today on the show we have Adrian Martin, a postdoctoral researcher from Universitat Pompeu Fabra in Barcelona, Spain. He comes on the show today to discuss his research from the paper “Convolutional Neural Networks can be Deceived by Visual Illusions.”

Works Mentioned:
“Convolutional Neural Networks can be Deceived by Visual Illusions” by Alexander Gomez-Villa, Adrian Martin, Javier Vazquez-Corral, and Marcelo Bertalmio

Examples:

Snake Illusions
https://www.illusionsindex.org/i/rotating-snakes

Twitter:
Alex: @alviur

Adrian: @adriMartin13

Thanks to our sponsor!

Keep your home internet connection safe with Nord VPN! Get 68% off plus a free month at nordvpn.com/dataskeptic  (30-day money-back guarantee!)

Direct download: visual-illusions-deceiving-neural-networks.mp3
Category:general -- posted at: 6:00am PST

Have you ever wanted to hear what an earthquake sounds like? Today on the show we have Omkar Ranadive, a Computer Science master's student at Northwestern University, who collaborates with Suzan van der Lee, an Earth and Planetary Sciences professor at Northwestern University, on the crowd-sourcing project Earthquake Detective.

Email Links:
Suzan: suzan@earth.northwestern.edu 
Omkar: omkar.ranadive@u.northwestern.edu

Works Mentioned: 

Paper: Applying Machine Learning to Crowd-sourced Data from Earthquake Detective
https://arxiv.org/abs/2011.04740
by Omkar Ranadive, Suzan van der Lee, Vivan Tang, and Kevin Chao
Github: https://github.com/Omkar-Ranadive/Earthquake-Detective
Earthquake Detective: https://www.zooniverse.org/projects/vivitang/earthquake-detective

Thanks to our sponsors!

Brilliant.org is an awesome platform with interesting courses, like Quantum Computing! There is something for you and surely something for the whole family! Get 20% off Brilliant Premium at http://brilliant.com/dataskeptic

Direct download: earthquake-detection-with-crowd-sourced-data.mp3
Category:general -- posted at: 8:21am PST

Byzantine fault tolerance (BFT) is a desirable property in a distributed computing environment. BFT means the system can continue operating correctly even when some nodes fail or behave arbitrarily, including maliciously. There are many different protocols for achieving BFT, though not all options can scale to large network sizes.

Ted Yin joins us to explain BFT, survey the wide variety of protocols, and share details about HotStuff.
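
For a rough sense of the arithmetic behind this, here is a minimal Python sketch (an illustration, not material from the episode) of the classic quorum bound used by partially synchronous BFT protocols such as PBFT and HotStuff, which tolerate f Byzantine nodes out of n ≥ 3f + 1 total nodes.

```python
# Minimal sketch (not from the episode): the quorum arithmetic used by classic
# partially synchronous BFT protocols (e.g. PBFT, HotStuff), which tolerate
# f Byzantine nodes out of n >= 3f + 1 total nodes.

def max_byzantine_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Votes needed so any two quorums intersect in at least one honest node."""
    f = max_byzantine_faults(n)
    return n - f  # equals 2f + 1 when n = 3f + 1

if __name__ == "__main__":
    for n in (4, 7, 10, 100):
        print(f"n={n}: tolerates f={max_byzantine_faults(n)}, quorum={quorum_size(n)}")
```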

Direct download: byzantine-fault-tolerant-consensus.mp3
Category:general -- posted at: 5:00am PST

Kyle shared some initial reactions to the announcement about AlphaFold 2's celebrated performance in the CASP14 protein structure prediction competition. By many accounts, this exciting result means protein folding is now a solved problem.

Thanks to our sponsors!

  • Brilliant is a great last-minute gift idea! Give access to 60+ interactive courses including Quantum Computing and Group Theory. There's something for everyone at Brilliant. They have award-winning courses taught by teachers, researchers, and professionals from MIT, Caltech, Duke, Microsoft, Google, and many more. Check them out at brilliant.org/dataskeptic to take advantage of 20% off a Premium membership.
  • Betterhelp is an online professional counseling platform. Start communicating with a licensed professional in under 24 hours! It's safe, private and convenient. From online messages to phone and video calls, there is something for everyone. Get 10% off your first month at betterhelp.com/dataskeptic
Direct download: alpha-fold.mp3
Category:general -- posted at: 9:45am PST

Above all, everyone wants voting to be fair. What does fair mean and how can we measure it? Kenneth Arrow posited a simple set of conditions that one would certainly desire in a voting system. For example, unanimity - if everyone picks candidate A, then A should win!

Yet surprisingly, under a few basic assumptions, Arrow's impossibility theorem demonstrates that no ranked voting system exists which can satisfy all the criteria at once.

This episode is a discussion about the structure of the proof and some of its implications.
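
As a toy illustration (not from the episode), the sketch below shows the Condorcet paradox: three perfectly reasonable ranked ballots whose pairwise majority preferences form a cycle, a close cousin of the difficulties Arrow's theorem formalizes. The ballots are hypothetical.

```python
# Toy illustration (not from the episode): the Condorcet paradox, a cycle in
# pairwise majority preferences that hints at why Arrow's conditions cannot
# all be satisfied at once. The three ballots below are hypothetical.
from itertools import combinations

ballots = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """Return True if a majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in combinations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
    else:
        print(f"majority prefers {y} over {x}")
# Output shows A beats B, B beats C, and C beats A -- a cycle with no clear winner.
```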

Thank you to our sponsors!
 
Better Help is much more affordable than traditional offline counseling, and financial aid is available! Get started in less than 24 hours. Data Skeptic listeners get 10% off your first month when you visit: betterhelp.com/dataskeptic
 
Let Springboard School of Data jumpstart your data career! With 100% online and remote schooling, supported by a vast network of professional mentors with a tuition-back guarantee, you can't go wrong. Up to twenty $500 scholarships will be awarded to Data Skeptic listeners. Check them out at springboard.com/dataskeptic and enroll using code: DATASK
Direct download: arrows-impossibility-theorem.mp3
Category:general -- posted at: 8:39am PST

As the COVID-19 pandemic continues, the public (or at least those with Twitter accounts) are sharing their personal opinions about mask-wearing via Twitter. What does this data tell us about public opinion? How does it vary by demographic? What, if anything, can make people change their minds?

Today we speak to Neil Yeung and Jonathan Lai, undergraduate students in the Department of Computer Science at the University of Rochester, and Professor of Computer Science Jiebo Luo, to discuss their recent paper, Face Off: Polarized Public Opinions on Personal Face Mask Usage during the COVID-19 Pandemic.

Works Mentioned
https://arxiv.org/abs/2011.00336

Emails:
Neil Yeung
nyeung@u.rochester.edu

Jonathan Lai
jlai11@u.rochester.edu

Jiebo Luo
jluo@cs.rochester.edu

Thanks to our sponsors!

  • Springboard School of Data offers a comprehensive career program encompassing data science, analytics, engineering, and machine learning. All courses are online and tailored to fit the lifestyle of working professionals. Up to 20 Data Skeptic listeners will receive $500 scholarships. Apply today at springboard.com/dataskeptic
  • Check out Brilliant's group theory course to learn about object-oriented design! Brilliant is great for learning something new or for getting an easy-to-follow review of something you already know. Check them out at Brilliant.org/dataskeptic to get 20% off of a year of Brilliant Premium!
Direct download: face-mask-sentiment-analysis.mp3
Category:general -- posted at: 10:56am PST

Niclas Boehmer, second year PhD student at Berlin Institute of Technology, comes on today to discuss the computational complexity of bribery in elections through the paper “On the Robustness of Winners: Counting Briberies in Elections.”

Links Mentioned:
https://www.akt.tu-berlin.de/menue/team/boehmer_niclas/

Works Mentioned:
“On the Robustness of Winners: Counting Briberies in Elections” by Niclas Boehmer, Robert Bredereck, Piotr Faliszewski, and Rolf Niedermeier

Thanks to our sponsors:

Springboard School of Data: Springboard is a comprehensive end-to-end online data career program. Create a portfolio of projects to spring your career into action. Learn more about how you can be one of twenty $500 scholarship recipients at springboard.com/dataskeptic. This opportunity is exclusive to Data Skeptic listeners. (Enroll with code: DATASK)

Nord VPN: Protect your home internet connection with unlimited bandwidth. Data Skeptic Listeners-- take advantage of their Black Friday offer: purchase a 2-year plan, get 4 additional months free. nordvpn.com/dataskeptic (Use coupon code DATASKEPTIC)

Direct download: counting-briberies-in-elections.mp3
Category:general -- posted at: 8:26am PST

Clement Fung, a Societal Computing PhD student at Carnegie Mellon University, discusses his research in security of machine learning systems and a defense against targeted sybil-based poisoning called FoolsGold.

Works Mentioned:
The Limitations of Federated Learning in Sybil Settings

Twitter:

@clemfung

Website:
https://clementfung.github.io/

Thanks to our sponsors:

Brilliant - Online learning platform. Check out Geometry Fundamentals! Visit Brilliant.org/dataskeptic for 20% off Brilliant Premium!


BetterHelp - Convenient, professional, and affordable online counseling. Take 10% off your first month at betterhelp.com/dataskeptic

Direct download: sybil-attacks-on-federated-learning.mp3
Category:general -- posted at: 10:25am PST

Simson Garfinkel, Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau, discusses his work modernizing the Census Bureau's disclosure avoidance system, moving from private (secret) disclosure avoidance techniques to publicly documented ones based on differential privacy. Some of the discussion revolves around the topics in the paper Randomness Concerns When Deploying Differential Privacy.
 

WORKS MENTIONED:


Check out: https://simson.net/page/Differential_privacy


Thank you to our sponsor, BetterHelp. Professional and confidential in-app counseling for everyone. Save 10% on your first month of services with www.betterhelp.com/dataskeptic

Direct download: differential-privacy-at-the-us-census.mp3
Category:general -- posted at: 8:13am PST

Heidi Howard, a computer science research fellow at the University of Cambridge, discusses Paxos, Raft, and distributed consensus in distributed systems, along with her work “Paxos vs. Raft: Have we reached consensus on distributed consensus?”

She goes into detail about the role of the leader in Paxos and Raft, and how the Raft consensus algorithm actually inspired her to pursue her PhD.

Paxos vs Raft paper: https://arxiv.org/abs/2004.05074

Leslie Lamport's paper “The Part-Time Parliament”
https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf

Leslie Lamport's paper “Paxos Made Simple”
https://lamport.azurewebsites.net/pubs/paxos-simple.pdf

Twitter : @heidiann360

Thank you to our sponsor Monday.com! Their Apps Challenge is still accepting submissions! Find more information at monday.com/dataskeptic

Direct download: distributed-consensus.mp3
Category:general -- posted at: 10:36pm PST

Linhda joins Kyle today to talk through ACID compliance (atomicity, consistency, isolation, and durability). These four properties ensure that database transactions are processed reliably and completely, even in the face of errors or failures. Kyle uses examples such as Google Sheets, bank transactions, and even the game Rummikub.
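
As a small illustration of atomicity (a hedged sketch, not something from the episode), the Python snippet below uses the standard library's sqlite3 module to make a hypothetical bank transfer all-or-nothing: either both account updates commit, or the failed transfer rolls back and leaves the balances untouched.

```python
# A minimal sketch (not from the episode) of atomicity using Python's sqlite3:
# either both legs of a hypothetical bank transfer commit, or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")  # triggers the rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # the failed transfer leaves both balances untouched

transfer(conn, "alice", "bob", 30)   # succeeds
transfer(conn, "alice", "bob", 500)  # fails and rolls back atomically
print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 70, 'bob': 80}
```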
 
Thanks to this week's sponsors:
  • Monday.com - Their Apps Challenge is underway and available at monday.com/dataskeptic

  • Brilliant - Check out their Quantum Computing Course, I highly recommend it! Other interesting topics I’ve seen are Neural Networks and Logic. Check them out at Brilliant.org/dataskeptic
Direct download: acid-compliance.mp3
Category:general -- posted at: 6:00am PST

Patrick Rosenstiel joins us to discuss the National Popular Vote Interstate Compact.

Direct download: national-popular-vote-interstate-compact.mp3
Category:general -- posted at: 8:24am PST

Yudi Pawitan joins us to discuss his paper Defending the P-value.

Direct download: defending-the-p-value.mp3
Category:general -- posted at: 6:00am PST

Ivan Oransky joins us to discuss his work documenting the scientific peer-review process at retractionwatch.com.

 

Direct download: retraction-watch.mp3
Category:general -- posted at: 8:00am PST

Derek Lim joins us to discuss the paper Expertise and Dynamics within Crowdsourced Musical Knowledge Curation: A Case Study of the Genius Platform.

 

Direct download: crowdsourced-expertise.mp3
Category:general -- posted at: 7:00am PST

Neil Johnson joins us to discuss the paper The online competition between pro- and anti-vaccination views.

Direct download: the-spread-of-misinformation-online.mp3
Category:general -- posted at: 7:00am PST



Direct download: consensus-voting.mp3
Category:general -- posted at: 7:00am PST

Steven Heilman joins us to discuss his paper Designing Stable Elections.

For a general interest article, see: https://theconversation.com/the-electoral-college-is-surprisingly-vulnerable-to-popular-vote-changes-141104

Steven Heilman receives funding from the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Direct download: voting-mechanisms.mp3
Category:general -- posted at: 7:00am PST

Sami Yousif joins us to discuss the paper The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus. This work empirically explores how individuals evaluate consensus under different experimental conditions while reviewing online news articles.

More from Sami at samiyousif.org

Link to survey mentioned by Daniel Kerrigan: https://forms.gle/TCdGem3WTUYEP31B8

Direct download: false-concensus.mp3
Category:general -- posted at: 3:16pm PST

In this solo episode, Kyle overviews the field of fraud detection with eCommerce as a use case.  He discusses some of the techniques and system architectures used by companies to fight fraud with a focus on why these things need to be approached from a real-time perspective.

Direct download: fraud-detection-in-real-time.mp3
Category:general -- posted at: 12:12am PST

In this episode, Kyle and Linhda review the results of our recent survey. Hear all about the demographic details and how we interpret these results.

Direct download: listener-survey-review.mp3
Category:general -- posted at: 10:01am PST

Moses Namara from the HATLab joins us to discuss his research into the interaction between privacy and human-computer interaction.

Direct download: human-computer-interaction-and-online-privacy.mp3
Category:general -- posted at: 2:43pm PST




Direct download: authorship-attribution-of-lennon-mccartney-songs.mp3
Category:general -- posted at: 8:00am PST

Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a rapid way to try these ideas out for yourself.

Direct download: gans-can-be-interpretable.mp3
Category:general -- posted at: 7:42pm PST

Direct download: sentiment-preserving-fake-reviews.mp3
Category:general -- posted at: 3:48pm PST

Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.

Direct download: interpretability-practitioners.mp3
Category:general -- posted at: 9:43am PST

Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.

Direct download: facial-recognition-auditing.mp3
Category:general -- posted at: 11:34am PST




Direct download: robust-fit-to-nature.mp3
Category:general -- posted at: 8:56am PST

Deep neural networks are undeniably effective. They rely on such a high number of parameters that they are appropriately described as "black boxes".

While black boxes lack desirable properties like interpretability and explainability, in some cases, their accuracy makes them incredibly useful.

But does achieving "usefulness" require a black box? Can we be sure an equally valid but simpler solution does not exist?

Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…

Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition




Direct download: black-boxes-are-not-required.mp3
Category:general -- posted at: 12:59pm PST

Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.

Direct download: robustness-to-unforeseen-adversarial-attacks.mp3
Category:general -- posted at: 8:29am PST



Direct download: estimating-the-size-of-language-acquisition.mp3
Category:general -- posted at: 2:36pm PST

Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

Direct download: interpretable-ai-in-healthcare.mp3
Category:general -- posted at: 8:49am PST

What does it mean to understand a neural network? That's the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.



Direct download: understanding-neural-networks.mp3
Category:general -- posted at: 10:07am PST

Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back and forth with the user?

We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.



Direct download: self-explaining-ai.mp3
Category:general -- posted at: 10:23pm PST

Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!

Direct download: plastic-bag-bans.mp3
Category:general -- posted at: 8:45am PST




Direct download: self-driving-cars-and-pedestrians.mp3
Category:general -- posted at: 10:58am PST

Computer Vision is not Perfect

Julia Evans joins us to help answer the question: why do neural networks think a panda is a vulture? Kyle talks to Julia about her hands-on work fooling neural networks.

Julia runs Wizard Zines which publishes works such as Your Linux Toolbox. You can find her on Twitter @b0rk

Direct download: computer-vision-is-not-perfect.mp3
Category:general -- posted at: 10:53am PST

Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates.

Homepage: http://users.eecs.northwestern.edu/~jhullman/

Lab: MU Collective

Direct download: uncertainty-representations.mp3
Category:general -- posted at: 8:18am PST

Announcing Journal Club

I am pleased to announce Data Skeptic is launching a new spin-off show called "Journal Club" with similar themes but a very different format from the Data Skeptic everyone is used to.

In Journal Club, we will have a regular panel and occasional guest panelists to discuss interesting news items and one featured journal article every week in a roundtable discussion. Each week, I'll be joined by Lan Guo and George Kemp for a discussion of interesting data science related news articles and a featured journal or pre-print article.

We hope that this podcast will give listeners an introduction to the works we cover and how people discuss these works. Our topics will often coincide with the original Data Skeptic podcast's current Interpretability theme, but we have few rules right now about what we pick. We enjoy discussing these items with each other and we hope you will too.

In the coming weeks, we will start opening up the guest chair more often to bring new voices to our discussion. After that we'll be looking for ways we can engage with our audience.

Keep reading and thanks for listening!

Kyle

Direct download: AlphaGo_COVID-19_Contact_Tracing_and_New_Data_Set.mp3
Category:general -- posted at: 11:00pm PST

Direct download: visualizing-uncertainty.mp3
Category:general -- posted at: 8:00am PST

Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability.

Direct download: interpretability-tooling.mp3
Category:general -- posted at: 8:00am PST

Kyle and Linhda discuss how Shapley Values might be a good tool for determining what makes the cut for a home renovation.
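
For anyone curious what that computation looks like, here is a toy sketch (not from the episode) that computes exact Shapley values by averaging each renovation's marginal contribution over all orderings; the value function and dollar figures are entirely made up.

```python
# Toy sketch (not from the episode): exact Shapley values computed by averaging
# marginal contributions over all orderings. The "renovation" value function
# and the dollar amounts below are hypothetical.
from itertools import permutations

players = ["kitchen", "bathroom", "paint"]

def value(coalition):
    """Hypothetical resale value (in dollars) added by a set of renovations."""
    v = {frozenset(): 0,
         frozenset({"kitchen"}): 20_000,
         frozenset({"bathroom"}): 10_000,
         frozenset({"paint"}): 4_000,
         frozenset({"kitchen", "bathroom"}): 35_000,
         frozenset({"kitchen", "paint"}): 26_000,
         frozenset({"bathroom", "paint"}): 15_000,
         frozenset({"kitchen", "bathroom", "paint"}): 42_000}
    return v[frozenset(coalition)]

def shapley(player):
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        before = set(order[:order.index(player)])   # who "arrived" first
        total += value(before | {player}) - value(before)  # marginal contribution
    return total / len(orders)

for p in players:
    print(p, round(shapley(p)))  # the three values sum to the full 42,000
```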

Direct download: shapley-values.mp3
Category:general -- posted at: 12:29pm PST

We welcome back Marco Tulio Ribeiro to discuss research he has done since our original discussion on LIME.

In particular, we ask the question Are Red Roses Red? and discuss how Anchors provide high precision model-agnostic explanations.


Please take our listener survey.

Direct download: anchors-as-explanations.mp3
Category:general -- posted at: 6:46am PST

Direct download: mathematical-models-of-ecological-systems.mp3
Category:general -- posted at: 4:10pm PST

Walt Woods joins us to discuss his paper Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness with co-authors Jack Chen and Christof Teuscher.

Direct download: adversarial-explanations.mp3
Category:general -- posted at: 3:10pm PST

Andrei Barbu joins us to discuss ObjectNet - a new kind of vision dataset.

In contrast to ImageNet, ObjectNet seeks to provide images that are more representative of the types of images an autonomous machine is likely to encounter in the real world. Collecting a dataset in this way required careful use of Mechanical Turk to get Turkers to provide a corpus of images that removes some of the bias found in ImageNet.

http://0xab.com/

Direct download: objectnet.mp3
Category:general -- posted at: 8:00am PST

Enrico Bertini joins us to discuss how data visualization can be used to help make machine learning more interpretable and explainable.

Find out more about Enrico at http://enrico.bertini.io/.

More from Enrico with co-host Moritz Stefaner on the Data Stories podcast!

Direct download: visualization-and-interpretability.mp3
Category:general -- posted at: 8:00am PST

We welcome Su Wang back to Data Skeptic to discuss the paper Distributional modeling on a diet: One-shot word learning from text only.

Direct download: interpretable-one-shot-learning.mp3
Category:general -- posted at: 9:00pm PST

Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person.  Their attack targets the popular YOLO2 pre-trained image recognition model, and thus, is likely to be widely applicable.

Direct download: fooling-computer-vision.mp3
Category:general -- posted at: 10:38am PST

This episode includes an interview with Aaron Roth author of The Ethical Algorithm.

Direct download: algorithmic-fairness.mp3
Category:general -- posted at: 6:31pm PST

Interpretability

Machine learning has shown a rapid expansion into every sector and industry. With increasing reliance on models and higher stakes riding on their decisions, questions of how models actually work are becoming ever more important to ask.

Welcome to Data Skeptic Interpretability.

In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning.

Thanks to our sponsor, the Gartner Data & Analytics Summit going on in Grapevine, TX on March 23 – 26, 2020. Use discount code: dataskeptic.

Music

Our new theme song is #5 by Big D and the Kids Table.

Incidental music by Tanuki Suit Riot.

Direct download: interpretability.mp3
Category:general -- posted at: 12:33am PST

A year in recap.

Direct download: nlp-in-2019.mp3
Category:general -- posted at: 3:51am PST

We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".

Direct download: the-limits-of-nlp.mp3
Category:general -- posted at: 5:18pm PST

Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts.

Direct download: jumpstart-your-ml-project.mp3
Category:general -- posted at: 9:25am PST

Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline.  This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.

Direct download: serverless-nlp-model-training.mp3
Category:general -- posted at: 6:13pm PST

Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.

 

Direct download: the-team-data-science-process.mp3
Category:general -- posted at: 2:54pm PST

Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient language text.

Direct download: ancient-text-restoration.mp3
Category:general -- posted at: 10:25pm PST

Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.

Direct download: ml-ops.mp3
Category:general -- posted at: 12:18am PST

The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on.  Folk wisdom used to estimate that around 100k documents were required for effective training.  The availability of broadly trained, general-purpose models like BERT has made it possible to do transfer learning to achieve novel results on much smaller corpora.

Thanks to these advancements, an NLP researcher might get value out of fewer examples, since they can use transfer learning to get a head start and focus on learning the nuances of the language specifically relevant to the task at hand.  Thus, small specialized corpora are both useful and practical to create.
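
As a rough sketch of what that head start looks like in practice (an illustration, not code from the paper), the snippet below fine-tunes a pre-trained BERT model on a tiny, made-up labeled set using the Hugging Face transformers library; the model name, texts, and labels are all assumptions chosen for the example.

```python
# A minimal sketch (not from the paper) of transfer learning on a tiny corpus
# using the Hugging Face transformers library; the texts and labels are made up.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

texts = ["great paper, clear results", "the experiments were unconvincing"]
labels = torch.tensor([1, 0])  # hypothetical sentiment labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pre-trained encoder + fresh classifier head

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

model.train()
for _ in range(3):  # a few passes over the (tiny) labeled set
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```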

In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora.

Source code for the paper available here: https://github.com/mega002/annotator_bias

 

Direct download: annotator-bias.mp3
Category:general -- posted at: 1:46pm PST

While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team about how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NLP tools out of box, and how more advanced data scientists can focus more time on the bigger picture problems.

Direct download: nlp-for-developers.mp3
Category:general -- posted at: 7:00pm PST

Manuel Mager joins us to discuss natural language processing for low and under-resourced languages.  We discuss current work in this area and the Naki Project which aggregates research on NLP for native and indigenous languages of the American continent.

Direct download: indigenous-american-language-research.mp3
Category:general -- posted at: 1:40am PST

GPT-2 is yet another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus.

As we have been covering recently, these approaches are showing tremendous promise, but how close are they to an AGI?  Our guest today, Vazgen Davidyants, wondered exactly that, and had conversations with a chatbot running GPT-2.  We discuss his experiences as well as some novel thoughts on artificial intelligence.

Direct download: talking-to-gpt2.mp3
Category:general -- posted at: 12:45pm PST

Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model.  His results exposed some issues with the model.  Kyle and Rajiv discuss the original paper and Rajiv's analysis.

Direct download: reproducing-deep-learning-models.mp3
Category:general -- posted at: 6:15pm PST

Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.

Direct download: what-bert-is-not.mp3
Category:general -- posted at: 2:02pm PST

Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans".

https://arxiv.org/abs/1907.10529

Direct download: spanbert.mp3
Category:general -- posted at: 1:27am PST

Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks such as adversarial attacks, compositional learning, and systematic learning.

Direct download: bert-is-shallow.mp3
Category:general -- posted at: 2:13pm PST

Kyle pontificates on how impressed he is with BERT.

Direct download: bert-is-magic.mp3
Category:general -- posted at: 10:11pm PST

Kyle sits down with Jen Stirrup to inquire about her experiences helping companies deploy data science solutions in a variety of different settings.

Direct download: applied-data-science-in-industry.mp3
Category:general -- posted at: 10:31pm PST

Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self annotating", as hosts explain the actions they are taking on the screen.

This episode is a discussion of the HowTo100m dataset - a project which has assembled a video corpus of 136M video clips with captions covering 23k activities.

Related Links

The paper will be presented at ICCV 2019

@antoine77340

Antoine on Github

Antoine's homepage

Direct download: building-the-howto100m-video-corpus.mp3
Category:general -- posted at: 1:12pm PST

Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects.

Direct download: bert.mp3
Category:general -- posted at: 11:42pm PST

Kyle interviews Prasanth Pulavarthi about the ONNX format for deep neural networks.

Direct download: onyx.mp3
Category:general -- posted at: 12:52am PST

Kyle and Linhda discuss some high-level theory of mind and overview the machine learning concept of catastrophic forgetting.

Direct download: catastrophic-forgetting.mp3
Category:general -- posted at: 1:40am PST

Sebastian Ruder is a research scientist at DeepMind.  In this episode, he joins us to discuss the state of the art in transfer learning and his contributions to it.

Direct download: transfer_learning.mp3
Category:general -- posted at: 9:02pm PST

In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that made them able to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.

Direct download: facebook-language.mp3
Category:general -- posted at: 9:21am PST

Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English.  Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models.  For languages that researchers have not paid as much attention to, these tools are not always available.

Direct download: under-resourced-languages.mp3
Category:general -- posted at: 3:17pm PST

Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER.  NER algorithms take any string as input and return a list of "entities" - specific facts and agents in the text along with a classification of the type (e.g. person, date, place).
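
As a quick illustration of that input/output contract (not from the episode), here is what it looks like with spaCy's pre-trained English model; the example sentence is made up, and the model must be downloaded separately.

```python
# A small sketch (not from the episode) of named entity recognition with spaCy's
# pre-trained English model; run `python -m spacy download en_core_web_sm` first.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ada Lovelace met Charles Babbage in London in 1833.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. PERSON, GPE (place), DATE
```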

Direct download: named-entity-recognition.mp3
Category:general -- posted at: 11:16am PST

USC students from the CAIS++ student organization have created a variety of novel projects under the mission statement of "artificial intelligence for social good". In this episode, Kyle interviews Zane and Leena about the Endangered Languages Project.

Direct download: the-death-of-a-language.mp3
Category:general -- posted at: 2:47pm PST

Kyle and Linh Da discuss the concepts behind the neural Turing machine.

Direct download: neuro-turing-machines.mp3
Category:general -- posted at: 9:05am PST

Kyle chats with Rohan Kumar about hyperscale, data at the edge, and a variety of other trends in data engineering in the cloud.

Direct download: data-infrastructure-in-the-cloud.mp3
Category:general -- posted at: 12:28pm PST

In this episode, Kyle interviews Laura Edell at MS Build 2019.  The conversation covers a number of topics, notably her NCAA Final 4 prediction model.

 

Direct download: ncaa-predictions-on-spark.mp3
Category:general -- posted at: 9:52am PST

Kyle and Linhda discuss attention and the transformer - an encoder/decoder architecture that extends the basic ideas of vector embeddings like word2vec into a more contextual use case.
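
For a concrete sense of the core operation inside those encoder/decoder blocks, here is a minimal NumPy sketch of scaled dot-product attention (an illustration, not code from the episode); the token vectors are random placeholders.

```python
# A minimal sketch (not from the episode) of scaled dot-product attention,
# the core operation inside the transformer's encoder/decoder blocks.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each output row is a weighted mix of the value vectors V,
    with weights given by how well the query matches each key."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of every query to every key
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dimension 8
print(attention(Q, K, V).shape)  # (4, 8): one contextualized vector per token
```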

Direct download: transformer.mp3
Category:general -- posted at: 8:31am PST

When users on Twitter post with geographic tags, it creates the opportunity for a variety of interesting questions to be posed having to do with language, dialects, and location.  In this episode, Kyle interviews Bruno Gonçalves about his work studying language in this way.

 

Direct download: mapping-dialects-with-twitter-data.mp3
Category:general -- posted at: 8:00am PST

This is an interview with Ellen Loeshelle, Director of Product Management at Clarabridge.  We primarily discuss sentiment analysis.

Direct download: sentiment-analysis.mp3
Category:general -- posted at: 6:46pm PST

A gentle introduction to the very high-level idea of "attention" in machine learning, as it will play a major role in some upcoming episodes over the next few weeks.

Direct download: attention-part-1.mp3
Category:general -- posted at: 7:46pm PST

Modern messaging technology has facilitated a trend towards highly compact, short messages sent by users who can presume a great amount of context held between the communicating parties.  The rules of grammar may be discarded and often visible errors are a normal part of the conversation.

>>> Good mornink

>>> morning

Yet such short messages are also important for businesses whose users are unlikely to read a large block of text upon completing an order.  Similarly, a business might want to offer assistance and effective question-and-answer solutions in an automated and, ideally, multilingual way.  In this episode, we discuss techniques for designing solutions like that.

 

Direct download: cross-lingual.mp3
Category:general -- posted at: 6:42am PST

ELMo (Embeddings from Language Models) introduced the idea of deep contextualized word representations. It extends previous ideas like word2vec and GloVe. The ELMo model is a neural network able to map natural language into a vector space. This vector space, out of the box, proved to be incredibly useful in a wide variety of seemingly unrelated NLP tasks like sentiment analysis and named entity recognition.

Direct download: elmo.mp3
Category:general -- posted at: 8:00am PST

Bilingual evaluation understudy (or BLEU) is a metric for evaluating the quality of machine translation using human translation as examples of acceptable quality results. This metric has become a widely used standard in the research literature. But is it the perfect measure of quality of machine translation?
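
As a hedged example of how the metric is typically computed in practice (not from the episode), the snippet below scores a made-up candidate translation against a made-up human reference using NLTK's sentence-level BLEU.

```python
# A hedged example (not from the episode) using NLTK's sentence-level BLEU;
# the reference and candidate sentences are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]  # list of reference token lists
candidate = ["the", "cat", "is", "on", "the", "mat"]     # machine translation output

score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))  # higher means closer n-gram overlap with the human reference
```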

Direct download: bleu.mp3
Category:general -- posted at: 9:16pm PST

While at NeurIPS 2018, Kyle chatted with Liang Huang about his work with Baidu research on simultaneous translation, which was demoed at the conference.

Direct download: simultaneous-translation.mp3
Category:general -- posted at: 8:00am PST

Machine transcription (the process of converting audio recordings of speech into text) has come a long way in recent years. But how do the errors made during machine transcription compare to the errors made by a human transcriber? Find out in this episode!

Direct download: human-vs-machine-transcription-errors.mp3
Category:general -- posted at: 8:00am PST

A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder.

The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Encoding an input to a small number of hidden nodes, which can then be decoded back into a matching string, forces the model to learn an efficient representation of the essence of the strings.

In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
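
To make the encoder/decoder idea concrete, here is a minimal PyTorch sketch (an illustration, not from the episode): a GRU encoder compresses the source sequence into a hidden state, which initializes a GRU decoder that scores target tokens. Vocabulary sizes and dimensions are arbitrary placeholders.

```python
# A minimal sketch (not from the episode) of an encoder/decoder (seq2seq) model
# with GRUs in PyTorch; vocabulary sizes and dimensions are arbitrary.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # The encoder compresses the source sentence into a hidden state ...
        _, state = self.encoder(self.src_emb(src_ids))
        # ... which initializes the decoder that generates the target sentence.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # per-position scores over the target vocabulary

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 7))   # batch of 2 source sentences, length 7
tgt = torch.randint(0, 1000, (2, 5))   # corresponding target prefixes, length 5
print(model(src, tgt).shape)           # torch.Size([2, 5, 1000])
```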

Related Links

Direct download: seq2seq.mp3
Category:general -- posted at: 8:00am PST

Kyle interviews Julia Silge about her path into data science, her book Text Mining with R, and some of the ways in which she's used natural language processing in projects both personal and professional.

Related Links

Direct download: text-mining-in-r.mp3
Category:general -- posted at: 8:00am PST