Mon, 28 June 2021
Shane Ross, Professor of Aerospace and Ocean Engineering at Virginia Tech, comes on today to talk about his work “Beach-level 24-hour forecasts of Florida red tide-induced respiratory irritation.” |
Mon, 21 June 2021
Lior Shamir, Associate Professor of Computer Science at Kansas State University, joins us today to talk about the recent paper Automatic Identification of Outliers in Hubble Space Telescope Galaxy Images. Follow Lior on Twitter @shamir_lior
Direct download: automatic-identification-of-outlier-galaxy-images.mp3
Category:general -- posted at: 12:11pm PDT |
Wed, 16 June 2021
Shereen Elsayed and Daniela Thyssens, both PhD students at the University of Hildesheim in Germany, come on today to talk about the work “Do We Really Need Deep Learning Models for Time Series Forecasting?”
Direct download: do-we-need-deep-learning-in-time-series.mp3
Category:general -- posted at: 9:10am PDT |
Thu, 10 June 2021
Sam Ackerman, Research Data Scientist at IBM Research Labs in Haifa, Israel, joins us today to talk about his work Detection of Data Drift and Outliers Affecting Machine Learning Model Performance Over Time. Check out Sam's IBM statistics/ML blog at: http://www.research.ibm. |
Mon, 31 May 2021
Julien Herzen, PhD graduate from EPFL in Switzerland, comes on today to talk about his work with Unit 8 and the development of the Python Library: Darts. |
Mon, 24 May 2021
Welcome to Timeseries! Today’s episode is an interview with Rob Hyndman, Professor of Statistics at Monash University in Australia, and author of Forecasting: Principles and Practice. |
Fri, 21 May 2021
Today's experimental episode uses sound to describe some basic ideas from time series. This episode includes lag, seasonality, trend, noise, heteroskedasticity, decomposition, smoothing, feature engineering, and deep learning.
|
Fri, 7 May 2021
Today’s show is in two parts. First, Linhda joins us to review the episodes from Data Skeptic: Pilot Season and give her feedback on each of the topics. Second, we introduce our new segment “Orders of Magnitude”. It’s a statistical game show in which participants must identify the true statistic hidden in a list of statistics which are off by at least an order of magnitude. Claudia and Vanessa join as our first contestants. Below are the sources of our questions: Heights, Bird Statistics, and Amounts of Data. Our statistics come from this post.
|
Mon, 3 May 2021
AI has facilitated, is facilitating, and will continue to facilitate the automation of work done by humans. Sometimes this may be an entire role. Other times it may automate a particular part of a role, scaling a person’s effectiveness. Unless progress in AI inexplicably halts, the division of tasks between humans and machines will continue to evolve. Today’s episode is a speculative conversation about what the future may hold. Celestia Ward, co-host of the Squaring the Strange podcast, caricature artist, and academic editor, joins us today! Kyle and Celestia discuss whether or not her jobs as a caricature artist and as an academic editor are under threat from AI automation. Mentions
|
Mon, 26 April 2021
Today on the show we have Derek Driggs, a PhD student at the University of Cambridge. He comes on to discuss the work Common Pitfalls and Recommendations for Using Machine Learning to Detect and Prognosticate for COVID-19 Using Chest Radiographs and CT Scans. Help us vote for the next theme of Data Skeptic! Vote here: https://dataskeptic.com/vote |
Mon, 19 April 2021
Given a document in English, how can you estimate the ease with which someone will find they can read it? Does it require a college level of reading comprehension, or is it something a much younger student could read and understand? While these questions are useful to ask, they don't admit a simple answer. One option is to use one of the two (essentially identical) Flesch Kincaid readability tests. These are simple calculations which provide a rough estimate of reading ease. In this episode, Kyle shares his thoughts on this tool and when it could be appropriate to use as part of your feature engineering pipeline towards a machine learning objective. For empirical validation of these metrics, the plot below compares English language Wikipedia pages with "Simple English" Wikipedia pages. The analysis Kyle describes in this episode yields the intuitively pleasing histogram below. It summarizes the distribution of Flesch reading ease scores for 1000 pages examined from both Wikipedias.
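The Flesch reading ease formula itself is public and simple enough to compute directly. A minimal sketch in Python, using a naive vowel-group heuristic for syllable counting (an assumption on my part; the formal test relies on proper syllabification):

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels.
    # This approximates, but does not equal, true syllabification.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch reading ease:
    # 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

# Higher scores mean easier text; 90+ is "very easy".
print(flesch_reading_ease("The cat sat on the mat."))  # → 116.145 (approximately)
```

Longer sentences and longer words both push the score down, which matches the intuition behind using it as an engineered feature.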
|
Fri, 9 April 2021
Today on the show we have Shubhranshu Shekar, a Ph.D. student at Carnegie Mellon University, who joins us to talk about his work, FAIROD: Fairness-aware Outlier Detection. |
Mon, 5 April 2021
Today on the show Dr. Anders Sandberg, Senior Research Fellow at the Future of Humanity Institute at Oxford University, comes on to share his work “The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare.” |
Mon, 29 March 2021
Mayank Kejriwal, Research Professor at the University of Southern California and Researcher at the Information Sciences Institute, joins us today to discuss his work and his new book Knowledge Graphs: Fundamentals, Techniques, and Applications by Mayank Kejriwal, Craig A. Knoblock, and Pedro Szekely. |
Mon, 22 March 2021
QAnon is a conspiracy theory born in the underbelly of the internet. While easy to disprove, these cryptic ideas captured the minds of many people and (in part) paved the way to the 2021 storming of the US Capitol. This is a contemporary conspiracy which came into existence and grew in a very digital way. This makes it possible for researchers to study the phenomenon in a way not accessible for previous conspiracy theories of similar popularity. This episode is not so much a debunking of this debunked theory, but rather an exploration of the metadata and origins of the conspiracy. This episode is also the first installment of our 2021 Pilot Season, in which we are going to test out a few formats for Data Skeptic to see what our next season should be. In a few weeks, we're going to ask everyone to vote for their favorite theme for our next season.
|
Mon, 15 March 2021
Karthick Shankar, Master's student at Carnegie Mellon University, and Somali Chaterji, Assistant Professor at Purdue University, join us today to discuss the paper "JANUS: Benchmarking Commercial and Open-Source Cloud and Edge Platforms for Object and Anomaly Detection Workloads". Works Mentioned: https://ieeexplore.ieee.org/abstract/document/9284314 by Karthick Shankar, Pengcheng Wang, Ran Xu, Ashraf Mahgoub, and Somali Chaterji. Social Media: Karthick Shankar, Somali Chaterji |
Fri, 5 March 2021
Hal Ashton, a PhD student at University College London, joins us today to discuss the recent work Causal Campbell-Goodhart’s law and Reinforcement Learning. "Only buy honey from a local producer." - Hal Ashton
Works Mentioned: “Causal Campbell-Goodhart’s law and Reinforcement Learning” by Hal Ashton. Thanks to our sponsor! When your business is ready to make that next hire, find the right person with LinkedIn Jobs. Just visit LinkedIn.com/DATASKEPTIC to post a job for free! Terms and conditions apply.
Direct download: goodharts-law-in-reinforcement-learning.mp3
Category:general -- posted at: 5:00am PDT |
Mon, 1 March 2021
Yuqi Ouyang, in his second year of PhD study at the University of Warwick in England, joins us today to discuss his work “Video Anomaly Detection by Estimating Likelihood of Representations.”
|
Mon, 22 February 2021
Nirupam Gupta, a Computer Science postdoctoral researcher at EPFL in Switzerland, joins us today to discuss his work “Byzantine Fault-Tolerance in Peer-to-Peer Distributed Gradient-Descent.”
Works Mentioned: Byzantine Fault-Tolerance in Peer-to-Peer Distributed Gradient-Descent
Conference Details: https://georgetown.zoom.us/meeting/register/tJ0sc-2grDwjEtfnLI0zPnN-GwkDvJdaOxXF
Direct download: fault-tolerant-distributed-gradient-descent.mp3
Category:general -- posted at: 6:30am PDT |
Mon, 15 February 2021
Mikko Lauri, postdoctoral researcher at the University of Hamburg in Germany, comes on the show today to discuss the work Information Gathering in Decentralized POMDPs by Policy Graph Improvements. Follow Mikko: @mikko_lauri |
Fri, 5 February 2021
Balaji Arun, a PhD Student in the Systems of Software Research Group at Virginia Tech, joins us today to discuss his research of distributed systems through the paper “Taming the Contention in Consensus-based Distributed Systems.” Works Mentioned “Fast Paxos” |
Fri, 29 January 2021
Maartje ter Hoeve, PhD Student at the University of Amsterdam, joins us today to discuss her research in automated summarization through the paper “What Makes a Good Summary? Reconsidering the Focus of Automatic Summarization.” Works Mentioned Contact Twitter: |
Fri, 22 January 2021
Brian Brubach, Assistant Professor in the Computer Science Department at Wellesley College, joins us today to discuss his work “Meddling Metrics: the Effects of Measuring and Constraining Partisan Gerrymandering on Voter Incentives.” |
Fri, 15 January 2021
Aside from victory questions like “can black force a checkmate on white in 5 moves?”, many novel questions can be asked about a game of chess. Some questions are trivial (e.g. “How many pieces does white have?”), while more computationally challenging questions can contribute interesting results in computational complexity theory. In this episode, Josh Brunner, Master's student in Theoretical Computer Science at MIT, joins us to discuss his recent paper Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard. Works Mentioned: 1x1 Rush Hour With Fixed Blocks is PSPACE Complete |
Mon, 11 January 2021
Eli Goldweber, a graduate student at the University of Michigan, comes on today to share his work applying formal verification to systems and a modification to the Paxos protocol discussed in the paper Significance on Consecutive Ballots in Paxos. |
Fri, 1 January 2021
Today on the show we have Adrian Martin, a postdoctoral researcher from Pompeu Fabra University in Barcelona, Spain. He comes on the show today to discuss his research from the paper “Convolutional Neural Networks can be Deceived by Visual Illusions.” Examples: snake illusions. Twitter: Adrian: @adriMartin13. Thanks to our sponsor! Keep your home internet connection safe with Nord VPN! Get 68% off plus a free month at nordvpn.com/dataskeptic (30-day money-back guarantee!)
Direct download: visual-illusions-deceiving-neural-networks.mp3
Category:general -- posted at: 6:00am PDT |
Fri, 25 December 2020
Have you ever wanted to hear what an earthquake sounds like? Today on the show we have Omkar Ranadive, a Computer Science Master's student at Northwestern University, who collaborates with Suzan van der Lee, an Earth and Planetary Sciences professor at Northwestern University, on the crowd-sourcing project Earthquake Detective. Works Mentioned: Applying Machine Learning to Crowd-sourced Data from Earthquake Detective. Thanks to our sponsors! Brilliant.org is an awesome platform with interesting courses, like Quantum Computing! There is something for you and surely something for the whole family! Get 20% off Brilliant Premium at http://brilliant.com/dataskeptic
Direct download: earthquake-detection-with-crowd-sourced-data.mp3
Category:general -- posted at: 8:21am PDT |
Tue, 22 December 2020
Byzantine fault tolerance (BFT) is a desirable property in a distributed computing environment. BFT means the system can survive the loss of nodes and nodes becoming unreliable. There are many different protocols for achieving BFT, though not all options can scale to large network sizes. Ted Yin joins us to explain BFT, survey the wide variety of protocols, and share details about HotStuff. |
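As a rough sketch of why scaling is hard: protocols in this family (HotStuff included) classically require at least 3f + 1 nodes to tolerate f Byzantine nodes, with quorums of 2f + 1. The helpers below just work out those bounds; the function names are my own:

```python
def max_byzantine_faults(n):
    # Classic BFT requirement: n >= 3f + 1, so f = floor((n - 1) / 3).
    return (n - 1) // 3

def quorum_size(n):
    # Any two quorums of size 2f + 1 drawn from 3f + 1 nodes intersect
    # in at least f + 1 nodes, guaranteeing an honest node in common.
    return 2 * max_byzantine_faults(n) + 1

for n in (4, 7, 100):
    print(f"n={n}: tolerates f={max_byzantine_faults(n)}, quorum={quorum_size(n)}")
```

So a 4-node network survives one Byzantine node, and tolerance grows only one node for every three added, which is part of why large-scale BFT is costly.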
Fri, 11 December 2020
Kyle shared some initial reactions to the announcement of AlphaFold 2's celebrated performance in the CASP14 protein structure prediction competition. By many accounts, this exciting result means protein folding is now a solved problem. Thanks to our sponsors!
|
Fri, 4 December 2020
Above all, everyone wants voting to be fair. What does fair mean and how can we measure it? Kenneth Arrow posited a simple set of conditions that one would certainly desire in a voting system. For example, unanimity - if everyone picks candidate A, then A should win! Yet surprisingly, Arrow's impossibility theorem demonstrates that, under a few basic assumptions, no voting system exists which can satisfy all the criteria. This episode is a discussion about the structure of the proof and some of its implications. Works Mentioned Thank you to our sponsors! Better Help is much more affordable than traditional offline counseling, and financial aid is available! Get started in less than 24 hours. Data Skeptic listeners get 10% off your first month when you visit: betterhelp.com/dataskeptic Let Springboard School of Data jumpstart your data career! With 100% online and remote schooling, supported by a vast network of professional mentors with a tuition-back guarantee, you can't go wrong. Up to twenty $500 scholarships will be awarded to Data Skeptic listeners. Check them out at springboard.com/dataskeptic and enroll using code: DATASK
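A concrete taste of the trouble comes from the Condorcet paradox: with just three (hypothetical) ballots, pairwise majority preference can form a cycle, leaving no majority-consistent winner to pick:

```python
# Three voters ranking candidates A, B, C (most preferred first).
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    # True if a strict majority of ballots rank x above y.
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Each preference holds by a 2-to-1 majority, yet together they form a cycle.
print(majority_prefers("A", "B"))  # → True
print(majority_prefers("B", "C"))  # → True
print(majority_prefers("C", "A"))  # → True
```

Majorities prefer A to B, B to C, and C to A all at once, which is the kind of structural obstacle Arrow's proof builds on.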
|
Fri, 27 November 2020
As the COVID-19 pandemic continues, the public (or at least those with Twitter accounts) are sharing their personal opinions about mask-wearing via Twitter. What does this data tell us about public opinion? How does it vary by demographic? What, if anything, can make people change their minds? Today we speak to Neil Yeung and Jonathan Lai, undergraduate students in the Department of Computer Science at the University of Rochester, and Jiebo Luo, Professor of Computer Science, to discuss their recent paper, Face Off: Polarized Public Opinions on Personal Face Mask Usage during the COVID-19 Pandemic. Emails: Jonathan Lai, Jiebo Luo. Thanks to our sponsors!
|
Fri, 20 November 2020
Niclas Boehmer, a second-year PhD student at Berlin Institute of Technology, comes on today to discuss the computational complexity of bribery in elections through the paper “On the Robustness of Winners: Counting Briberies in Elections.” Thanks to our sponsors: Springboard School of Data: Springboard is a comprehensive end-to-end online data career program. Create a portfolio of projects to spring your career into action. Learn more about how you can be one of twenty $500 scholarship recipients at springboard.com/dataskeptic. This opportunity is exclusive to Data Skeptic listeners. (Enroll with code: DATASK) Nord VPN: Protect your home internet connection with unlimited bandwidth. Data Skeptic Listeners-- take advantage of their Black Friday offer: purchase a 2-year plan, get 4 additional months free. nordvpn.com/dataskeptic (Use coupon code DATASKEPTIC) |
Fri, 13 November 2020
Clement Fung, a Societal Computing PhD student at Carnegie Mellon University, discusses his research in security of machine learning systems and a defense against targeted sybil-based poisoning called FoolsGold. Works Mentioned: Twitter: @clemfung Website: Thanks to our sponsors: Brilliant - Online learning platform. Check out Geometry Fundamentals! Visit Brilliant.org/dataskeptic for 20% off Brilliant Premium!
|
Fri, 6 November 2020
Simson Garfinkel, Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau, discusses his work modernizing the Census Bureau disclosure avoidance system from private to public disclosure avoidance techniques using differential privacy. Some of the discussion revolves around the topics in the paper Randomness Concerns When Deploying Differential Privacy. WORKS MENTIONED:
Direct download: differential-privacy-at-the-us-census.mp3
Category:general -- posted at: 8:13am PDT |
Thu, 29 October 2020
Heidi Howard, a Computer Science research fellow at Cambridge University, discusses Paxos, Raft, and distributed consensus in distributed systems alongside her work “Paxos vs. Raft: Have we reached consensus on distributed consensus?” She goes into detail about the leaders in Paxos and Raft and how the Raft consensus algorithm actually inspired her to pursue her PhD. Thank you to our sponsor Monday.com! Their apps challenge is still accepting submissions! Find more information at monday.com/dataskeptic |
Fri, 23 October 2020
Linhda joins Kyle today to talk through A.C.I.D. compliance (atomicity, consistency, isolation, and durability). The presence of these four properties helps ensure that a database’s transactions are processed reliably. Kyle uses examples such as Google Sheets, bank transactions, and even the game Rummikub. Thanks to this week's sponsors:
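The atomicity piece is easy to see in miniature with SQLite, which ships in Python's standard library. In this hypothetical transfer (the account names and amounts are made up), an exception raised mid-transaction rolls the whole thing back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # the connection context manager commits, or rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 150 "
                     "WHERE name = 'alice'")
        (balance,) = conn.execute("SELECT balance FROM accounts "
                                  "WHERE name = 'alice'").fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + 150 "
                     "WHERE name = 'bob'")
except ValueError:
    pass  # the partial debit was rolled back, not committed

# Atomicity: neither half of the failed transfer is visible afterwards.
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

The debit happened inside the transaction, but because the transfer could not complete, the database never exposes the half-finished state.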
|
Fri, 16 October 2020
Patrick Rosenstiel joins us to discuss the National Popular Vote Interstate Compact.
Direct download: national-popular-vote-interstate-compact.mp3
Category:general -- posted at: 8:24am PDT |
Mon, 12 October 2020
Yudi Pawitan joins us to discuss his paper Defending the P-value. |
Mon, 5 October 2020
Ivan Oransky joins us to discuss his work documenting the scientific peer-review process at retractionwatch.com.
|
Mon, 21 September 2020
Derek Lim joins us to discuss the paper Expertise and Dynamics within Crowdsourced Musical Knowledge Curation: A Case Study of the Genius Platform.
|
Mon, 14 September 2020
Neil Johnson joins us to discuss the paper The online competition between pro- and anti-vaccination views. |
Mon, 7 September 2020
Mashbat Suzuki joins us to discuss the paper How Many Freemasons Are There? The Consensus Voting Mechanism in Metric Spaces. Check out Mashbat’s and many other great talks at the 13th Symposium on Algorithmic Game Theory (SAGT 2020)
|
Mon, 31 August 2020
Steven Heilman joins us to discuss his paper Designing Stable Elections. For a general interest article, see: https://theconversation.com/the-electoral-college-is-surprisingly-vulnerable-to-popular-vote-changes-141104 Steven Heilman receives funding from the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. |
Mon, 24 August 2020
Sami Yousif joins us to discuss the paper The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus. This work empirically explores how individuals evaluate consensus under different experimental conditions reviewing online news articles. More from Sami at samiyousif.org Link to survey mentioned by Daniel Kerrigan: https://forms.gle/TCdGem3WTUYEP31B8 |
Tue, 18 August 2020
In this solo episode, Kyle overviews the field of fraud detection with eCommerce as a use case. He discusses some of the techniques and system architectures used by companies to fight fraud with a focus on why these things need to be approached from a real-time perspective. |
Tue, 11 August 2020
In this episode, Kyle and Linhda review the results of our recent survey. Hear all about the demographic details and how we interpret these results. |
Mon, 27 July 2020
Moses Namara from the HATLab joins us to discuss his research into the interaction between privacy and human-computer interaction.
Direct download: human-computer-interaction-and-online-privacy.mp3
Category:general -- posted at: 2:43pm PDT |
Mon, 20 July 2020
Mark Glickman joins us to discuss the paper Data in the Life: Authorship Attribution in Lennon-McCartney Songs.
Direct download: authorship-attribution-of-lennon-mccartney-songs.mp3
Category:general -- posted at: 8:00am PDT |
Fri, 10 July 2020
Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a rapid way to try these ideas out for yourself. |
Mon, 6 July 2020
|
Fri, 26 June 2020
Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs. |
Fri, 19 June 2020
Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing. |
Fri, 12 June 2020
Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.
|
Fri, 5 June 2020
Deep neural networks are undeniably effective. They rely on such a high number of parameters that they are appropriately described as “black boxes”. While black boxes lack desirable properties like interpretability and explainability, in some cases, their accuracy makes them incredibly useful. But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…
|
Sat, 30 May 2020
Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.
Direct download: robustness-to-unforeseen-adversarial-attacks.mp3
Category:general -- posted at: 8:29am PDT |
Fri, 22 May 2020
Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition
Direct download: estimating-the-size-of-language-acquisition.mp3
Category:general -- posted at: 2:36pm PDT |
Fri, 15 May 2020
Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models. |
Fri, 8 May 2020
What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
|
Fri, 1 May 2020
Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user? We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.
|
Fri, 24 April 2020
Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out! |
Sat, 18 April 2020
|
Fri, 10 April 2020
Computer Vision is not Perfect. Julia Evans joins us to help answer the question: why do neural networks think a panda is a vulture? Kyle talks to Julia about her hands-on work fooling neural networks. Julia runs Wizard Zines, which publishes works such as Your Linux Toolbox. You can find her on Twitter @b0rk |
Sat, 4 April 2020
Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates. Homepage: http://users.eecs.northwestern.edu/~jhullman/ Lab: MU Collective |
Fri, 27 March 2020
Announcing Journal Club. I am pleased to announce Data Skeptic is launching a new spin-off show called "Journal Club" with similar themes but a very different format from the Data Skeptic everyone is used to. In Journal Club, we will have a regular panel and occasional guest panelists to discuss interesting news items and one featured journal article every week in a roundtable discussion. Each week, I'll be joined by Lan Guo and George Kemp for a discussion of interesting data science related news articles and a featured journal or pre-print article. We hope that this podcast will give listeners an introduction to the works we cover and how people discuss these works. Our topics will often coincide with the original Data Skeptic podcast's current Interpretability theme, but we have few rules right now on what we pick. We enjoy discussing these items with each other, and we hope you will too. In the coming weeks, we will start opening up the guest chair more often to bring new voices to our discussion. After that, we'll be looking for ways we can engage with our audience. Keep reading and thanks for listening! Kyle
Direct download: AlphaGo_COVID-19_Contact_Tracing_and_New_Data_Set.mp3
Category:general -- posted at: 11:00pm PDT |
Fri, 20 March 2020
|
Fri, 13 March 2020
Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability. |
Fri, 6 March 2020
Kyle and Linhda discuss how Shapley Values might be a good tool for determining what makes the cut for a home renovation. |
Fri, 28 February 2020
We welcome back Marco Tulio Ribeiro to discuss research he has done since our original discussion on LIME. In particular, we ask the question Are Red Roses Red? and discuss how Anchors provide high precision model-agnostic explanations. Please take our listener survey. |
Fri, 21 February 2020
Direct download: mathematical-models-of-ecological-systems.mp3
Category:general -- posted at: 4:10pm PDT |
Fri, 14 February 2020
Walt Woods joins us to discuss his paper Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness with co-authors Jack Chen and Christof Teuscher. |
Fri, 7 February 2020
Andrei Barbu joins us to discuss ObjectNet - a new kind of vision dataset. In contrast to ImageNet, ObjectNet seeks to provide images that are more representative of the types of images an autonomous machine is likely to encounter in the real world. Collecting a dataset in this way required careful use of Mechanical Turk to get Turkers to provide a corpus of images that removes some of the bias found in ImageNet. |
Fri, 31 January 2020
Enrico Bertini joins us to discuss how data visualization can be used to help make machine learning more interpretable and explainable. Find out more about Enrico at http://enrico.bertini.io/. More from Enrico with co-host Moritz Stefaner on the Data Stories podcast! |
Sat, 25 January 2020
We welcome Su Wang back to Data Skeptic to discuss the paper Distributional modeling on a diet: One-shot word learning from text only. |
Wed, 22 January 2020
Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person. Their attack targets the popular YOLO2 pre-trained image recognition model, and thus, is likely to be widely applicable. |
Mon, 13 January 2020
This episode includes an interview with Aaron Roth author of The Ethical Algorithm. |
Tue, 7 January 2020
Interpretability. Machine learning has seen a rapid expansion into every sector and industry. With increasing reliance on models and increasing stakes for the decisions of models, questions of how models actually work are becoming increasingly important to ask. Welcome to Data Skeptic Interpretability. In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning. Thanks to our sponsor, the Gartner Data & Analytics Summit going on in Grapevine, TX on March 23 – 26, 2020. Use discount code: dataskeptic. Music: Our new theme song is #5 by Big D and the Kids Table. Incidental music by Tanuki Suit Riot. |
Tue, 31 December 2019
A year in recap. |
Mon, 23 December 2019
We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". |
Sun, 15 December 2019
Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts. |
Mon, 9 December 2019
Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made. |
Tue, 3 December 2019
Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.
|
Sat, 30 November 2019
Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient language text. |
Wed, 27 November 2019
Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations. |
Sat, 23 November 2019
The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on. Folk wisdom used to estimate that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to do transfer learning to achieve novel results on much smaller corpora. Thanks to these advancements, an NLP researcher might get value out of fewer examples, since they can use transfer learning to get a head start and focus on learning the nuances of the language specifically relevant to the task at hand. Thus, small specialized corpora are both useful and practical to create. In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora. Source code for the paper available here: https://github.com/mega002/annotator_bias
|
Tue, 19 November 2019
While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team about how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NLP tools out of box, and how more advanced data scientists can focus more time on the bigger picture problems. |
Wed, 13 November 2019
Manuel Mager joins us to discuss natural language processing for low and under-resourced languages. We discuss current work in this area and the Naki Project which aggregates research on NLP for native and indigenous languages of the American continent.
Direct download: indigenous-american-language-research.mp3
Category:general -- posted at: 1:40am PDT |
Thu, 31 October 2019
GPT-2 is yet another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering recently, these approaches are showing tremendous promise, but how close are they to an AGI? Our guest today, Vazgen Davidyants, wondered exactly that, and had conversations with a chatbot running GPT-2. We discuss his experiences as well as some novel thoughts on artificial intelligence. |
Tue, 22 October 2019
Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model. His results exposed some issues with the model. Kyle and Rajiv discuss the original paper and Rajiv's analysis. |
Mon, 14 October 2019
Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations. |
Tue, 8 October 2019
Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans". |
Mon, 23 September 2019
Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks such as adversarial attacks, compositional learning, and systematic learning. |
Sun, 15 September 2019
Kyle pontificates on how impressed he is with BERT. |
Thu, 5 September 2019
Kyle sits down with Jen Stirrup to inquire about her experiences helping companies deploy data science solutions in a variety of different settings. |
Mon, 19 August 2019
Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self annotating", as hosts explain the actions they are taking on the screen. This episode is a discussion of the HowTo100m dataset - a project which has assembled a video corpus of 136M video clips with captions covering 23k activities. Related Links: The paper will be presented at ICCV 2019 |
Sun, 28 July 2019
Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects. |
Mon, 22 July 2019
Kyle interviews Prasanth Pulavarthi about the Onnx format for deep neural networks. |
Mon, 15 July 2019
Kyle and Linhda discuss some high level theory of mind and overview the concept machine learning concept of catastrophic forgetting. |
Sun, 7 July 2019
Sebastian Ruder is a research scientist at DeepMind. In this episode, he joins us to discuss the state of the art in transfer learning and his contributions to it. |
Fri, 21 June 2019
In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that made them able to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research. |
Sat, 15 June 2019
Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.