Fri, 5 March 2021
Hal Ashton, a PhD student at University College London, joins us today to discuss his recent work, Causal Campbell-Goodhart's law and Reinforcement Learning. "Only buy honey from a local producer." - Hal Ashton
Works Mentioned: “Causal Campbell-Goodhart’s law and Reinforcement Learning” by Hal Ashton. Thanks to our sponsor! When your business is ready to make that next hire, find the right person with LinkedIn Jobs. Just visit LinkedIn.com/DATASKEPTIC to post a job for free! Terms and conditions apply
Direct download: goodharts-law-in-reinforcement-learning.mp3
Category:general -- posted at: 5:00am PST
Mon, 1 March 2021
Yuqi Ouyang, in his second year of PhD study at the University of Warwick in England, joins us today to discuss his work “Video Anomaly Detection by Estimating Likelihood of Representations.”
Mon, 22 February 2021
Nirupam Gupta, a computer science postdoctoral researcher at EPFL in Switzerland, joins us today to discuss his work “Byzantine Fault-Tolerance in Peer-to-Peer Distributed Gradient-Descent.”
Works Mentioned: Byzantine Fault-Tolerance in Peer-to-Peer Distributed Gradient-Descent
Conference Details: https://georgetown.zoom.us/meeting/register/tJ0sc-2grDwjEtfnLI0zPnN-GwkDvJdaOxXF
Direct download: fault-tolerant-distributed-gradient-descent.mp3
Category:general -- posted at: 6:30am PST
Mon, 15 February 2021
Mikko Lauri, a postdoctoral researcher at the University of Hamburg, Germany, comes on the show today to discuss the work Information Gathering in Decentralized POMDPs by Policy Graph Improvements. Follow Mikko: @mikko_lauri
Fri, 5 February 2021
Balaji Arun, a PhD student in the Systems Software Research Group at Virginia Tech, joins us today to discuss his research on distributed systems through the paper “Taming the Contention in Consensus-based Distributed Systems.” Works Mentioned: “Fast Paxos”
Fri, 29 January 2021
Maartje ter Hoeve, a PhD student at the University of Amsterdam, joins us today to discuss her research in automated summarization through the paper “What Makes a Good Summary? Reconsidering the Focus of Automatic Summarization.”
Fri, 22 January 2021
Brian Brubach, an assistant professor in the Computer Science Department at Wellesley College, joins us today to discuss his work “Meddling Metrics: the Effects of Measuring and Constraining Partisan Gerrymandering on Voter Incentives.”
Fri, 15 January 2021
Aside from victory questions like “can black force a checkmate on white in 5 moves?” many novel questions can be asked about a game of chess. Some questions are trivial (e.g. “How many pieces does white have?”) while more computationally challenging questions can contribute interesting results in computational complexity theory. In this episode, Josh Brunner, Master's student in Theoretical Computer Science at MIT, joins us to discuss his recent paper Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard. Works Mentioned: 1x1 Rush Hour With Fixed Blocks is PSPACE-Complete
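To give a flavor of the easy end of this spectrum, here is a small sketch of ours (not from the paper) using the python-chess library to answer both a trivial counting question and a mate-in-one question:

```python
# Our illustration (not from the paper): answering simple questions about
# a chess position with the python-chess library.
import chess

# Position after 1. f3 e5 2. g4 -- Black to move can mate in one (Qh4#).
board = chess.Board()
for san in ["f3", "e5", "g4"]:
    board.push_san(san)

# Trivial question: how many pieces does White have?
white_pieces = sum(
    1 for piece in board.piece_map().values() if piece.color == chess.WHITE
)
print(f"White has {white_pieces} pieces")  # 16

# Slightly harder: can Black force checkmate in one move?
for move in board.legal_moves:
    board.push(move)
    if board.is_checkmate():
        print(f"Mate in one: {board.peek()}")  # d8h4 (Qh4#)
    board.pop()
```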
Mon, 11 January 2021
Eli Goldweber, a graduate student at the University of Michigan, comes on today to share his work applying formal verification to systems and a modification to the Paxos protocol discussed in the paper Significance on Consecutive Ballots in Paxos.
Fri, 1 January 2021
Today on the show we have Adrian Martin, a postdoctoral researcher from Pompeu Fabra University in Barcelona, Spain. He comes on the show today to discuss his research from the paper “Convolutional Neural Networks can be Deceived by Visual Illusions.” Examples: Snake Illusions. Twitter: @adriMartin13. Thanks to our sponsor! Keep your home internet connection safe with NordVPN! Get 68% off plus a free month at nordvpn.com/dataskeptic (30-day money-back guarantee!)
Direct download: visual-illusions-deceiving-neural-networks.mp3
Category:general -- posted at: 6:00am PST
Fri, 25 December 2020
Have you ever wanted to hear what an earthquake sounds like? Today on the show we have Omkar Ranadive, a computer science Master's student at Northwestern University, who collaborates with Suzan van der Lee, an Earth and Planetary Sciences professor at Northwestern University, on the crowd-sourcing project Earthquake Detective. Works Mentioned: Applying Machine Learning to Crowd-sourced Data from Earthquake Detective. Thanks to our sponsors! Brilliant.org is an awesome platform with interesting courses, like Quantum Computing! There is something for you and surely something for the whole family! Get 20% off Brilliant Premium at http://brilliant.com/dataskeptic
Direct download: earthquake-detection-with-crowd-sourced-data.mp3
Category:general -- posted at: 8:21am PST
Tue, 22 December 2020
Byzantine fault tolerance (BFT) is a desirable property in a distributed computing environment. BFT means the system can survive the loss of nodes and nodes becoming unreliable. There are many different protocols for achieving BFT, though not all options can scale to large network sizes. Ted Yin joins us to explain BFT, survey the wide variety of protocols, and share details about HotStuff.
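As a rough sketch of the arithmetic behind BFT (our illustration, not HotStuff itself), the classic result is that a system of n nodes can tolerate f Byzantine nodes only when n >= 3f + 1:

```python
# Our illustration (not HotStuff): the classic Byzantine fault tolerance
# bound. With n >= 3f + 1 nodes, quorums of size n - f are large enough
# that any two quorums overlap in at least one honest node.
def max_faulty(n: int) -> int:
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Votes required before a node can safely commit a decision."""
    return n - max_faulty(n)

for n in [4, 7, 10, 100]:
    print(f"n={n}: tolerates f={max_faulty(n)}, quorum={quorum_size(n)}")
```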
Fri, 11 December 2020
Kyle shared some initial reactions to the announcement about AlphaFold 2's celebrated performance in the CASP14 protein structure prediction competition. By many accounts, this exciting result means protein folding is now a solved problem.
Fri, 4 December 2020
Above all, everyone wants voting to be fair. What does fair mean and how can we measure it? Kenneth Arrow posited a simple set of conditions that one would certainly desire in a voting system. For example, unanimity: if everyone picks candidate A, then A should win! Yet, surprisingly, Arrow's impossibility theorem demonstrates that under a few basic assumptions, no voting system exists which can satisfy all the criteria. This episode is a discussion about the structure of the proof and some of its implications. Thank you to our sponsors! BetterHelp is much more affordable than traditional offline counseling, and financial aid is available! Get started in less than 24 hours. Data Skeptic listeners get 10% off your first month when you visit: betterhelp.com/dataskeptic Let Springboard School of Data jumpstart your data career! With 100% online and remote schooling, supported by a vast network of professional mentors with a tuition-back guarantee, you can't go wrong. Up to twenty $500 scholarships will be awarded to Data Skeptic listeners. Check them out at springboard.com/dataskeptic and enroll using code: DATASK
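To see the flavor of the problem, here is a quick sketch of ours (not from the episode) of the classic Condorcet cycle, where simple majority rule fails to produce a coherent collective ranking:

```python
# Our illustration (not from the episode): the Condorcet paradox. With
# three voters, pairwise majority rule can produce a cycle -- the kind of
# pathology that Arrow's impossibility theorem generalizes.
from itertools import combinations

# Each ballot ranks candidates from most to least preferred.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def prefers(ballot, x, y):
    """True if this ballot ranks x above y."""
    return ballot.index(x) < ballot.index(y)

for x, y in combinations("ABC", 2):
    votes_for_x = sum(prefers(b, x, y) for b in ballots)
    winner, loser = (x, y) if votes_for_x * 2 > len(ballots) else (y, x)
    print(f"Majority prefers {winner} over {loser}")
# Prints: A beats B, C beats A, B beats C -- a cycle with no clear winner.
```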
Fri, 27 November 2020
As the COVID-19 pandemic continues, the public (or at least those with Twitter accounts) are sharing their personal opinions about mask-wearing via Twitter. What does this data tell us about public opinion? How does it vary by demographic? What, if anything, can make people change their minds? Today we speak to Neil Yeung and Jonathan Lai, undergraduate students in the Department of Computer Science at the University of Rochester, and Jiebo Luo, Professor of Computer Science, to discuss their recent paper, Face Off: Polarized Public Opinions on Personal Face Mask Usage during the COVID-19 Pandemic.
Fri, 20 November 2020
Niclas Boehmer, a second-year PhD student at the Berlin Institute of Technology, comes on today to discuss the computational complexity of bribery in elections through the paper “On the Robustness of Winners: Counting Briberies in Elections.” Thanks to our sponsors: Springboard School of Data: Springboard is a comprehensive end-to-end online data career program. Create a portfolio of projects to spring your career into action. Learn more about how you can be one of twenty $500 scholarship recipients at springboard.com/dataskeptic. This opportunity is exclusive to Data Skeptic listeners. (Enroll with code: DATASK) NordVPN: Protect your home internet connection with unlimited bandwidth. Data Skeptic listeners: take advantage of their Black Friday offer: purchase a 2-year plan, get 4 additional months free. nordvpn.com/dataskeptic (Use coupon code DATASKEPTIC)
Fri, 13 November 2020
Clement Fung, a Societal Computing PhD student at Carnegie Mellon University, discusses his research in the security of machine learning systems and a defense against targeted sybil-based poisoning called FoolsGold. Twitter: @clemfung Thanks to our sponsors: Brilliant - Online learning platform. Check out Geometry Fundamentals! Visit Brilliant.org/dataskeptic for 20% off Brilliant Premium!
Fri, 6 November 2020
Simson Garfinkel, Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau, discusses his work modernizing the Census Bureau's disclosure avoidance system using differential privacy. Some of the discussion revolves around the topics in the paper Randomness Concerns When Deploying Differential Privacy.
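For listeners new to differential privacy, the core mechanism fits in a few lines. Below is a simplified sketch of the Laplace mechanism (our illustration, not the Census Bureau's production system):

```python
# Our simplified illustration (not the Census Bureau's system): the Laplace
# mechanism. Adding noise scaled to sensitivity / epsilon makes a counting
# query epsilon-differentially private.
import numpy as np

def private_count(true_count: int, epsilon: float) -> float:
    # A counting query changes by at most 1 when any one person is added
    # or removed, so its sensitivity is 1.
    sensitivity = 1.0
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

noisy_answers = [private_count(1234, epsilon=0.5) for _ in range(3)]
print(noisy_answers)  # Three noisy answers near 1234; smaller epsilon = noisier.
```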
Direct download: differential-privacy-at-the-us-census.mp3
Category:general -- posted at: 8:13am PST
Thu, 29 October 2020
Heidi Howard, a computer science research fellow at the University of Cambridge, discusses Paxos, Raft, and distributed consensus in distributed systems, alongside her work “Paxos vs. Raft: Have we reached consensus on distributed consensus?” She goes into detail about the leaders in Paxos and Raft and how the Raft consensus algorithm actually inspired her to pursue her PhD. Thank you to our sponsor Monday.com! Their apps challenge is still accepting submissions! Find more information at monday.com/dataskeptic
Fri, 23 October 2020
Linhda joins Kyle today to talk through ACID compliance (atomicity, consistency, isolation, and durability). Together, these four properties ensure that a database transaction is processed reliably. Kyle uses examples such as Google Sheets, bank transactions, and even the game Rummikub.
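As a small illustration of atomicity (ours, not from the episode), here is a bank transfer using Python's built-in sqlite3 module: either both updates commit, or neither does.

```python
# Our illustration of atomicity: a transfer either fully commits or fully
# rolls back, so the database never shows money created or destroyed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 80 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 80 WHERE name = 'bob'")
    conn.commit()  # Both updates become visible together.
except sqlite3.Error:
    conn.rollback()  # On any failure, undo the partial transfer entirely.

print(conn.execute("SELECT name, balance FROM accounts").fetchall())
# [('alice', 20), ('bob', 130)]
```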
Fri, 16 October 2020
Patrick Rosenstiel joins us to discuss the National Popular Vote.
Direct download: national-popular-vote-interstate-compact.mp3
Category:general -- posted at: 8:24am PST
Mon, 12 October 2020
Yudi Pawitan joins us to discuss his paper Defending the P-value.
Mon, 5 October 2020
Ivan Oransky joins us to discuss his work documenting the scientific peer-review process at retractionwatch.com.
Mon, 21 September 2020
Derek Lim joins us to discuss the paper Expertise and Dynamics within Crowdsourced Musical Knowledge Curation: A Case Study of the Genius Platform.
Mon, 14 September 2020
Neil Johnson joins us to discuss the paper The online competition between pro- and anti-vaccination views.
Mon, 7 September 2020
Mashbat Suzuki joins us to discuss the paper How Many Freemasons Are There? The Consensus Voting Mechanism in Metric Spaces. Check out Mashbat’s and many other great talks at the 13th Symposium on Algorithmic Game Theory (SAGT 2020)
Mon, 31 August 2020
Steven Heilman joins us to discuss his paper Designing Stable Elections. For a general interest article, see: https://theconversation.com/the-electoral-college-is-surprisingly-vulnerable-to-popular-vote-changes-141104 Steven Heilman receives funding from the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
Mon, 24 August 2020
Sami Yousif joins us to discuss the paper The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus. This work empirically explores how individuals evaluate consensus under different experimental conditions reviewing online news articles. More from Sami at samiyousif.org Link to survey mentioned by Daniel Kerrigan: https://forms.gle/TCdGem3WTUYEP31B8
Tue, 18 August 2020
In this solo episode, Kyle overviews the field of fraud detection with eCommerce as a use case. He discusses some of the techniques and system architectures used by companies to fight fraud with a focus on why these things need to be approached from a real-time perspective.
Tue, 11 August 2020
In this episode, Kyle and Linhda review the results of our recent survey. Hear all about the demographic details and how we interpret these results.
Mon, 27 July 2020
Moses Namara from the HATLab joins us to discuss his research into the interaction between privacy and human-computer interaction.
Direct download: human-computer-interaction-and-online-privacy.mp3
Category:general -- posted at: 2:43pm PST
Mon, 20 July 2020
Mark Glickman joins us to discuss the paper Data in the Life: Authorship Attribution in Lennon-McCartney Songs.
Direct download: authorship-attribution-of-lennon-mccartney-songs.mp3
Category:general -- posted at: 8:00am PST
Fri, 10 July 2020
Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a rapid way to try these ideas out for yourself.
Mon, 6 July 2020
Fri, 26 June 2020
Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.
Fri, 19 June 2020
Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.
Fri, 12 June 2020
Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.
Fri, 5 June 2020
Deep neural networks are undeniably effective. They rely on such a high number of parameters that they are appropriately described as “black boxes.” While black boxes lack desirable properties like interpretability and explainability, in some cases, their accuracy makes them incredibly useful. But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…
Sat, 30 May 2020
Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.
Direct download: robustness-to-unforeseen-adversarial-attacks.mp3
Category:general -- posted at: 8:29am PST
Fri, 22 May 2020
Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition.
Direct download: estimating-the-size-of-language-acquisition.mp3
Category:general -- posted at: 2:36pm PST
Fri, 15 May 2020
Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.
Fri, 8 May 2020
What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
Fri, 1 May 2020
Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user? We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.
Fri, 24 April 2020
Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!
Sat, 18 April 2020
Fri, 10 April 2020
Computer Vision is not Perfect. Julia Evans joins us to help answer the question: why do neural networks think a panda is a vulture? Kyle talks to Julia about her hands-on work fooling neural networks. Julia runs Wizard Zines, which publishes works such as Your Linux Toolbox. You can find her on Twitter @b0rk
Sat, 4 April 2020
Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates. Homepage: http://users.eecs.northwestern.edu/~jhullman/ Lab: MU Collective
Fri, 27 March 2020
Announcing Journal Club. I am pleased to announce Data Skeptic is launching a new spin-off show called "Journal Club" with similar themes but a very different format from the Data Skeptic everyone is used to. In Journal Club, we will have a regular panel and occasional guest panelists to discuss interesting news items and one featured journal article every week in a roundtable discussion. Each week, I'll be joined by Lan Guo and George Kemp for a discussion of interesting data science related news articles and a featured journal or pre-print article. We hope that this podcast will give listeners an introduction to the works we cover and how people discuss these works. Our topics will often coincide with the original Data Skeptic podcast's current Interpretability theme, but we have few rules right now on what we pick. We enjoy discussing these items with each other and we hope you will too. In the coming weeks, we will start opening up the guest chair more often to bring new voices to our discussion. After that, we'll be looking for ways we can engage with our audience. Keep reading and thanks for listening! Kyle
Direct download: AlphaGo_COVID-19_Contact_Tracing_and_New_Data_Set.mp3
Category:general -- posted at: 11:00pm PST
Fri, 20 March 2020
Fri, 13 March 2020
Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability.
Fri, 6 March 2020
Kyle and Linhda discuss how Shapley Values might be a good tool for determining what makes the cut for a home renovation.
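For the curious, a player's Shapley value is their average marginal contribution across all orderings of the players. A tiny brute-force sketch with made-up renovation numbers (ours, purely illustrative):

```python
# Our illustration with made-up numbers: each "player" is a renovation
# item, and value[S] is what a buyer would pay extra for a home with that
# subset of renovations completed.
from itertools import permutations
from math import factorial

players = ["kitchen", "bathroom", "paint"]
value = {
    frozenset(): 0, frozenset({"kitchen"}): 40, frozenset({"bathroom"}): 30,
    frozenset({"paint"}): 10, frozenset({"kitchen", "bathroom"}): 80,
    frozenset({"kitchen", "paint"}): 55, frozenset({"bathroom", "paint"}): 45,
    frozenset({"kitchen", "bathroom", "paint"}): 100,
}

def shapley(player: str) -> float:
    total = 0.0
    for order in permutations(players):
        before = frozenset(order[: order.index(player)])
        total += value[before | {player}] - value[before]  # marginal contribution
    return total / factorial(len(players))

for p in players:
    print(p, shapley(p))  # kitchen 47.5, bathroom 37.5, paint 15.0 -- sums to 100
```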
Fri, 28 February 2020
We welcome back Marco Tulio Ribeiro to discuss research he has done since our original discussion on LIME. In particular, we ask the question Are Red Roses Red? and discuss how Anchors provide high precision model-agnostic explanations. Please take our listener survey.
Fri, 21 February 2020
Direct download: mathematical-models-of-ecological-systems.mp3
Category:general -- posted at: 4:10pm PST
Fri, 14 February 2020
Walt Woods joins us to discuss his paper Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness with co-authors Jack Chen and Christof Teuscher.
Fri, 7 February 2020
Andrei Barbu joins us to discuss ObjectNet - a new kind of vision dataset. In contrast to ImageNet, ObjectNet seeks to provide images that are more representative of the types of images an autonomous machine is likely to encounter in the real world. Collecting a dataset in this way required careful use of Mechanical Turk to get Turkers to provide a corpus of images that removes some of the bias found in ImageNet.
Fri, 31 January 2020
Enrico Bertini joins us to discuss how data visualization can be used to help make machine learning more interpretable and explainable. Find out more about Enrico at http://enrico.bertini.io/. More from Enrico with co-host Moritz Stefaner on the Data Stories podcast!
Sat, 25 January 2020
We welcome Su Wang back to Data Skeptic to discuss the paper Distributional modeling on a diet: One-shot word learning from text only.
Wed, 22 January 2020
Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person. Their attack targets the popular YOLO2 pre-trained image recognition model and is thus likely to be widely applicable.
Mon, 13 January 2020
This episode includes an interview with Aaron Roth, author of The Ethical Algorithm.
Tue, 7 January 2020
Interpretability. Machine learning has shown a rapid expansion into every sector and industry. With increasing reliance on models and increasing stakes for the decisions of models, questions of how models actually work are becoming increasingly important to ask. Welcome to Data Skeptic Interpretability. In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning. Thanks to our sponsor, the Gartner Data & Analytics Summit going on in Grapevine, TX on March 23 – 26, 2020. Use discount code: dataskeptic. Music: Our new theme song is #5 by Big D and the Kids Table. Incidental music by Tanuki Suit Riot.
Tue, 31 December 2019
A year in recap.
Mon, 23 December 2019
We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
Sun, 15 December 2019
Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts.
Mon, 9 December 2019
Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.
Tue, 3 December 2019
Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.
Sat, 30 November 2019
Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient language text.
Wed, 27 November 2019
Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.
Sat, 23 November 2019
The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on. Folk wisdom used to estimate that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to do transfer learning and achieve novel results on much smaller corpora. Thanks to these advancements, an NLP researcher can get value out of fewer examples, using transfer learning to get a head start and focusing on learning the nuances of the language specifically relevant to the task at hand. Thus, small specialized corpora are both useful and practical to create. In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora. Source code for the paper is available here: https://github.com/mega002/annotator_bias
Tue, 19 November 2019
While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team about how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NLP tools out of the box, and how more advanced data scientists can focus more time on the bigger-picture problems.
Wed, 13 November 2019
Manuel Mager joins us to discuss natural language processing for low- and under-resourced languages. We discuss current work in this area and the Naki Project, which aggregates research on NLP for native and indigenous languages of the American continent.
Direct download: indigenous-american-language-research.mp3
Category:general -- posted at: 1:40am PST
Thu, 31 October 2019
GPT-2 is yet another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering recently, these approaches are showing tremendous promise, but how close are they to an AGI? Our guest today, Vazgen Davidyants, wondered exactly that, and had conversations with a chatbot running GPT-2. We discuss his experiences as well as some novel thoughts on artificial intelligence.
Tue, 22 October 2019
Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model. His results exposed some issues with the model. Kyle and Rajiv discuss the original paper and Rajiv's analysis.
Mon, 14 October 2019
Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.
Tue, 8 October 2019
Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans".
Mon, 23 September 2019
Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks such as adversarial attacks, compositional learning, and systematic learning.
Sun, 15 September 2019
Kyle pontificates on how impressed he is with BERT.
Thu, 5 September 2019
Kyle sits down with Jen Stirrup to inquire about her experiences helping companies deploy data science solutions in a variety of different settings.
Mon, 19 August 2019
Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self annotating", as hosts explain the actions they are taking on the screen. This episode is a discussion of the HowTo100M dataset - a project which has assembled a video corpus of 136M video clips with captions covering 23k activities. Related Links: The paper will be presented at ICCV 2019
Sun, 28 July 2019
Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects.
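To see why BERT generates so much excitement, masked-word prediction takes only a few lines with the Hugging Face transformers library (our sketch, not from the episode):

```python
# Our sketch (not from the episode): BERT's bidirectional context lets it
# rank plausible fillers for a masked word, using Hugging Face transformers.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The podcast was so [MASK] that I subscribed."):
    print(prediction["token_str"], round(prediction["score"], 3))
# Prints the top candidate words (e.g. "good", "interesting") with scores.
```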
Mon, 22 July 2019
Kyle interviews Prasanth Pulavarthi about the ONNX format for deep neural networks.
Mon, 15 July 2019
Kyle and Linhda discuss some high-level theory of mind and overview the machine learning concept of catastrophic forgetting.
Sun, 7 July 2019
Sebastian Ruder is a research scientist at DeepMind. In this episode, he joins us to discuss the state of the art in transfer learning and his contributions to it.
Fri, 21 June 2019
In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that made them able to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.
Sat, 15 June 2019
Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.
Sat, 8 June 2019
Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER. NER algorithms take any string as input and return a list of "entities" - specific facts and agents in the text along with a classification of the type (e.g. person, date, place).
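A minimal example of NER in practice, using spaCy (our illustration; it assumes the small English model has been installed):

```python
# Our illustration of NER with spaCy. Assumes the model was installed via:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Kyle interviewed Linh Da in Los Angeles on June 8, 2019.")

for ent in doc.ents:
    print(ent.text, ent.label_)
# Expected output along the lines of:
#   Linh Da       PERSON
#   Los Angeles   GPE
#   June 8, 2019  DATE
```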
Sat, 1 June 2019
USC students from the CAIS++ student organization have created a variety of novel projects under the mission statement of "artificial intelligence for social good". In this episode, Kyle interviews Zane and Leena about the Endangered Languages Project.
Sat, 25 May 2019
Kyle and Linh Da discuss the concepts behind the neural Turing machine.
Sat, 18 May 2019
Kyle chats with Rohan Kumar about hyperscale, data at the edge, and a variety of other trends in data engineering in the cloud.
Sat, 11 May 2019
In this episode, Kyle interviews Laura Edell at MS Build 2019. The conversation covers a number of topics, notably her NCAA Final 4 prediction model.
Fri, 3 May 2019
Kyle and Linhda discuss attention and the transformer - an encoder/decoder architecture that extends the basic ideas of vector embeddings like word2vec into a more contextual use case.
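The core computation behind attention is compact enough to show in full. Here is a sketch of ours of scaled dot-product attention, the building block of the transformer:

```python
# Our sketch of scaled dot-product attention, the heart of the transformer:
#   attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how well each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # each output is a weighted average of the values

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one context vector per query
```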
Fri, 26 April 2019
When users on Twitter post with geographic tags, it creates the opportunity for a variety of interesting questions to be posed having to do with language, dialects, and location. In this episode, Kyle interviews Bruno Gonçalves about his work studying language in this way.
Fri, 19 April 2019
This is an interview with Ellen Loeshelle, Director of Product Management at Clarabridge. We primarily discuss sentiment analysis.
Fri, 12 April 2019
A gentle introduction to the very high-level idea of "attention" in machine learning, as it will play a major role in some upcoming episodes over the next few weeks.
Fri, 5 April 2019
Modern messaging technology has facilitated a trend towards highly compact, short messages sent by users who can presume a great amount of context held between the communicating parties. The rules of grammar may be discarded, and visible errors are often a normal part of the conversation. >>> Good mornink >>> morning Yet such short messages are also important for businesses whose users are unlikely to read a large block of text upon completing an order. Similarly, a business might want to offer assistance and effective question-and-answer solutions in an automated and ideally multilingual way. In this episode, we discuss techniques for designing solutions like that.
Fri, 29 March 2019
ELMo (Embeddings from Language Models) introduced the idea of deep contextualized word representations. It extends previous ideas like word2vec and GloVe. The ELMo model is a neural network able to map natural language into a vector space. This vector space, out of the box, proved to be incredibly useful in a wide variety of seemingly unrelated NLP tasks like sentiment analysis and named entity recognition.
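To make "mapping language into a vector space" concrete, here is a sketch using AllenNLP's ElmoEmbedder as it existed around the 0.9 release (an assumption on our part; consult the current documentation):

```python
# A sketch assuming AllenNLP's 0.9-era API (check current docs): ELMo
# assigns each token a vector that depends on its surrounding context.
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()  # downloads pretrained weights on first use

# The same word "play" receives different vectors in different contexts.
vectors_a = elmo.embed_sentence(["The", "kids", "play", "outside"])
vectors_b = elmo.embed_sentence(["We", "saw", "a", "Broadway", "play"])
print(vectors_a.shape)  # (3 layers, 4 tokens, 1024 dimensions)
```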
Fri, 22 March 2019
Bilingual evaluation understudy (or BLEU) is a metric for evaluating the quality of machine translation using human translation as examples of acceptable quality results. This metric has become a widely used standard in the research literature. But is it the perfect measure of quality of machine translation?
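For a hands-on feel, NLTK ships an implementation of sentence-level BLEU (our example, not from the episode):

```python
# Our example: scoring candidate translations against a human reference
# with NLTK's sentence-level BLEU (here using bigram weights).
from nltk.translate.bleu_score import sentence_bleu

reference = ["the", "cat", "is", "on", "the", "mat"]
good = ["the", "cat", "sat", "on", "the", "mat"]
bad = ["mat", "the", "on", "cat"]

# sentence_bleu accepts a list of references, since several different
# translations can be equally acceptable.
print(sentence_bleu([reference], good, weights=(0.5, 0.5)))  # ~0.71
print(sentence_bleu([reference], bad, weights=(0.5, 0.5)))   # 0.0 -- order destroyed
```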
Fri, 15 March 2019
While at NeurIPS 2018, Kyle chatted with Liang Huang about his work with Baidu research on simultaneous translation, which was demoed at the conference.
Fri, 8 March 2019
Machine transcription (the process of translating audio recordings of language to text) has come a long way in recent years. But how do the errors made during machine transcription compare to the errors made by a human transcriber? Find out in this episode!
Direct download: human-vs-machine-transcription-errors.mp3
Category:general -- posted at: 8:00am PST
Fri, 1 March 2019
A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder. The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Encoding an input to a small number of hidden nodes, which can effectively be decoded to a matching string, requires machine learning to learn an efficient representation of the essence of the strings. In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
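Here is a minimal encoder/decoder sketch of ours in PyTorch, illustrating the general architecture rather than any production translation model:

```python
# Our minimal seq2seq sketch: the encoder compresses a source sequence into
# a hidden state; the decoder unrolls that state into the target sequence.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):              # src: (batch, src_len)
        _, hidden = self.rnn(self.embed(src))
        return hidden                    # the "essence" of the source

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):      # tgt: (batch, tgt_len)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden  # logits for each target position

encoder, decoder = Encoder(vocab_size=1000), Decoder(vocab_size=1000)
src = torch.randint(0, 1000, (2, 7))     # a batch of 2 source sentences
tgt = torch.randint(0, 1000, (2, 5))     # teacher-forced target inputs
logits, _ = decoder(tgt, encoder(src))
print(logits.shape)                      # torch.Size([2, 5, 1000])
```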
Fri, 22 February 2019
Kyle interviews Julia Silge about her path into data science, her book Text Mining with R, and some of the ways in which she's used natural language processing in projects both personal and professional.