Data Skeptic

A year in recap.

Direct download: nlp-in-2019.mp3
Category:general -- posted at: 3:51am PDT

We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".

Direct download: the-limits-of-nlp.mp3
Category:general -- posted at: 5:18pm PDT

Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts.

Direct download: jumpstart-your-ml-project.mp3
Category:general -- posted at: 9:25am PDT

Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline.  This is a technical deep dive into architecting solutions and a discussion of some of the design choices made.

Direct download: serverless-nlp-model-training.mp3
Category:general -- posted at: 6:13pm PDT

Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.

Direct download: the-team-data-science-process.mp3
Category:general -- posted at: 2:54pm PDT

Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient texts.

Direct download: ancient-text-restoration.mp3
Category:general -- posted at: 10:25pm PDT

Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.

Direct download: ml-ops.mp3
Category:general -- posted at: 12:18am PDT

The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on.  Folk wisdom used to estimate that around 100k documents were required for effective training.  The availability of broadly trained, general-purpose models like BERT has made it possible to do transfer learning to achieve novel results on much smaller corpora.

Thanks to these advancements, an NLP researcher might get value out of far fewer examples, since they can use transfer learning to get a head start and then focus on learning the nuances of the language specifically relevant to the task at hand.  Thus, small specialized corpora are both useful and practical to create.
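
To make that concrete, here is a minimal sketch (not from the episode) of what transfer learning with a pre-trained BERT model can look like using the Hugging Face transformers library; the toy texts, labels, and hyperparameters are hypothetical placeholders.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Start from a broadly trained, general-purpose model.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    # A small specialized corpus: just a few labeled examples (toy data).
    texts = ["the reactor vented safely", "the reactor overheated"]
    labels = torch.tensor([0, 1])

    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Fine-tune briefly; the pre-trained weights do most of the work.
    model.train()
    for _ in range(3):
        optimizer.zero_grad()
        out = model(**batch, labels=labels)
        out.loss.backward()
        optimizer.step()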

In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora.

Source code for the paper is available here: https://github.com/mega002/annotator_bias

Direct download: annotator-bias.mp3
Category:general -- posted at: 1:46pm PDT

While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team to discuss how tools like Cognitive Services and Cognitive Search enable non-data scientists to access relatively advanced NLP tools out of the box, and how more advanced data scientists can spend more time on bigger-picture problems.

Direct download: nlp-for-developers.mp3
Category:general -- posted at: 7:00pm PDT

Manuel Mager joins us to discuss natural language processing for low-resource and under-resourced languages.  We discuss current work in this area and the Naki Project, which aggregates research on NLP for native and indigenous languages of the Americas.

Direct download: indigenous-american-language-research.mp3
Category:general -- posted at: 1:40am PDT

GPT-2 is yet another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus.

As we have been covering recently, these approaches are showing tremendous promise, but how close are they to an AGI?  Our guest today, Vazgen Davidyants, wondered exactly that, and had conversations with a chatbot running GPT-2.  We discuss his experiences as well as some novel thoughts on artificial intelligence.

Direct download: talking-to-gpt2.mp3
Category:general -- posted at: 12:45pm PDT

Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model.  His results exposed some issues with the model.  Kyle and Rajiv discuss the original paper and Rajiv's analysis.

Direct download: reproducing-deep-learning-models.mp3
Category:general -- posted at: 6:15pm PDT

Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.

Direct download: what-bert-is-not.mp3
Category:general -- posted at: 2:02pm PDT

Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans".

https://arxiv.org/abs/1907.10529

Direct download: spanbert.mp3
Category:general -- posted at: 1:27am PDT

Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks such as adversarial attacks, compositional learning, and systematic learning.

Direct download: bert-is-shallow.mp3
Category:general -- posted at: 2:13pm PDT

Kyle pontificates on how impressed he is with BERT.

Direct download: bert-is-magic.mp3
Category:general -- posted at: 10:11pm PDT

Kyle sits down with Jen Stirrup to inquire about her experiences helping companies deploy data science solutions in a variety of different settings.

Direct download: applied-data-science-in-industry.mp3
Category:general -- posted at: 10:31pm PDT

Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self-annotating", as hosts explain the actions they are taking on the screen.

This episode is a discussion of the HowTo100M dataset - a project which has assembled a video corpus of 136M video clips with captions covering 23k activities.

Related Links

The paper will be presented at ICCV 2019

@antoine77340

Antoine on GitHub

Antoine's homepage

Direct download: building-the-howto100m-video-corpus.mp3
Category:general -- posted at: 1:12pm PDT

Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects.

Direct download: bert.mp3
Category:general -- posted at: 11:42pm PDT

Kyle interviews Prasanth Pulavarthi about the ONNX format for deep neural networks.

Direct download: onyx.mp3
Category:general -- posted at: 12:52am PDT

Kyle and Linhda discuss some high-level theory of mind and give an overview of the machine learning concept of catastrophic forgetting.

Direct download: catastrophic-forgetting.mp3
Category:general -- posted at: 1:40am PDT

Sebastian Ruder is a research scientist at DeepMind.  In this episode, he joins us to discuss the state of the art in transfer learning and his contributions to it.

Direct download: transfer_learning.mp3
Category:general -- posted at: 9:02pm PDT

In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that made them able to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.

Direct download: facebook-language.mp3
Category:general -- posted at: 9:21am PDT

Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English.  Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models.  For languages that researchers have not paid as much attention to, these tools are not always available.

Direct download: under-resourced-languages.mp3
Category:general -- posted at: 3:17pm PDT

Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER.  NER algorithms take any string as input and return a list of "entities" - specific facts and agents in the text along with a classification of the type (e.g. person, date, place).
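
As an illustration, here is a minimal NER sketch using the spaCy library (not discussed in the episode; the example sentence is invented, and the en_core_web_sm model must be downloaded separately with `python -m spacy download en_core_web_sm`).

    import spacy

    # Load a small pre-trained English pipeline that includes an NER component.
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Kyle visited Paris with Linh Da on March 3rd.")

    # Each entity carries its text span and a type classification.
    for ent in doc.ents:
        print(ent.text, ent.label_)   # e.g. Kyle PERSON, Paris GPE, March 3rd DATE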

Direct download: named-entity-recognition.mp3
Category:general -- posted at: 11:16am PDT

USC students from the CAIS++ student organization have created a variety of novel projects under the mission statement of "artificial intelligence for social good". In this episode, Kyle interviews Zane and Leena about the Endangered Languages Project.

Direct download: the-death-of-a-language.mp3
Category:general -- posted at: 2:47pm PDT

Kyle and Linh Da discuss the concepts behind the neural Turing machine.

Direct download: neuro-turing-machines.mp3
Category:general -- posted at: 9:05am PDT

Kyle chats with Rohan Kumar about hyperscale, data at the edge, and a variety of other trends in data engineering in the cloud.

Direct download: data-infrastructure-in-the-cloud.mp3
Category:general -- posted at: 12:28pm PDT

In this episode, Kyle interviews Laura Edell at MS Build 2019.  The conversation covers a number of topics, notably her NCAA Final Four prediction model.

Direct download: ncaa-predictions-on-spark.mp3
Category:general -- posted at: 9:52am PDT

Kyle and Linhda discuss attention and the transformer - an encoder/decoder architecture that extends the basic ideas of vector embeddings like word2vec into a more contextual use case.
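
For readers who want the core idea in code, here is a small sketch of scaled dot-product attention, the operation at the heart of the transformer (a simplified single-head version for illustration, not material from the episode).

    import numpy as np

    def attention(Q, K, V):
        """Each row of Q attends over the rows of K, producing a
        weighted average of the rows of V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V

    # Five token embeddings of dimension 8 attending to each other
    # (self-attention).
    x = np.random.randn(5, 8)
    print(attention(x, x, x).shape)  # (5, 8)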

Direct download: transformer.mp3
Category:general -- posted at: 8:31am PDT

When users on Twitter post with geographic tags, it creates the opportunity for a variety of interesting questions to be posed having to do with language, dialects, and location.  In this episode, Kyle interviews Bruno Gonçalves about his work studying language in this way.

Direct download: mapping-dialects-with-twitter-data.mp3
Category:general -- posted at: 8:00am PDT

This is an interview with Ellen Loeshelle, Director of Product Management at Clarabridge.  We primarily discuss sentiment analysis.

Direct download: sentiment-analysis.mp3
Category:general -- posted at: 6:46pm PDT

A gentle introduction to the very high-level idea of "attention" in machine learning, as it will play a major role in some upcoming episodes over the next few weeks.

Direct download: attention-part-1.mp3
Category:general -- posted at: 7:46pm PDT

Modern messaging technology has facilitated a trend toward highly compact, short messages sent by users who can presume a great amount of shared context between the communicating parties.  The rules of grammar may be discarded, and visible errors are often a normal part of the conversation.

>>> Good mornink

>>> morning

Yet such short messages are also important for businesses whose users are unlikely to read a large block of text upon completing an order.  Similarly, a business might want to offer assistance and effective question-answering solutions in an automated and ideally multilingual way.  In this episode, we discuss techniques for designing solutions like that.

Direct download: cross-lingual.mp3
Category:general -- posted at: 6:42am PDT

ELMo (Embeddings from Language Models) introduced the idea of deep contextualized word representations. It extends previous ideas like word2vec and GloVe. The ELMo model is a neural network able to map natural language into a vector space. This vector space, out of the box, proved to be incredibly useful in a wide variety of seemingly unrelated NLP tasks like sentiment analysis and named entity recognition.

Direct download: elmo.mp3
Category:general -- posted at: 8:00am PDT

Bilingual evaluation understudy (or BLEU) is a metric for evaluating the quality of machine translation, using human translations as examples of acceptable results. This metric has become a widely used standard in the research literature. But is it the perfect measure of machine translation quality?
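
As a quick illustration, here is a sketch of computing a sentence-level BLEU score with NLTK (the reference and candidate sentences are toy examples).

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # One human reference translation and one machine translation, tokenized.
    reference = [["the", "cat", "sat", "on", "the", "mat"]]
    candidate = ["the", "cat", "is", "on", "the", "mat"]

    # Smoothing avoids a zero score when a higher-order n-gram has no match.
    score = sentence_bleu(reference, candidate,
                          smoothing_function=SmoothingFunction().method1)
    print(round(score, 3))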

Direct download: bleu.mp3
Category:general -- posted at: 9:16pm PDT

While at NeurIPS 2018, Kyle chatted with Liang Huang about his work with Baidu research on simultaneous translation, which was demoed at the conference.

Direct download: simultaneous-translation.mp3
Category:general -- posted at: 8:00am PDT

Machine transcription (the process of converting audio recordings of speech to text) has come a long way in recent years. But how do the errors made during machine transcription compare to the errors made by a human transcriber? Find out in this episode!

Direct download: human-vs-machine-transcription-errors.mp3
Category:general -- posted at: 8:00am PDT

A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder.

The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Encoding an input to a small number of hidden nodes, which can then effectively be decoded to a matching string, requires the model to learn an efficient representation of the essence of the strings.

In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
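
Here is a minimal sketch of the encoder/decoder idea in PyTorch; the vocabulary sizes, dimensions, and random toy data are illustrative, not taken from any paper discussed here.

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        def __init__(self, src_vocab=1000, tgt_vocab=1000, hidden=128):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, hidden)
            self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src, tgt):
            # Encode the whole source sequence into a fixed hidden state.
            _, state = self.encoder(self.src_emb(src))
            # Decode the target sequence conditioned on that state.
            dec, _ = self.decoder(self.tgt_emb(tgt), state)
            return self.out(dec)  # per-step vocabulary logits

    model = Seq2Seq()
    src = torch.randint(0, 1000, (2, 7))   # batch of 2 source sequences
    tgt = torch.randint(0, 1000, (2, 5))   # teacher-forced target inputs
    print(model(src, tgt).shape)           # torch.Size([2, 5, 1000])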

Direct download: seq2seq.mp3
Category:general -- posted at: 8:00am PDT

Kyle interviews Julia Silge about her path into data science, her book Text Mining with R, and some of the ways in which she's used natural language processing in projects both personal and professional.

Direct download: text-mining-in-r.mp3
Category:general -- posted at: 8:00am PDT

One of the most challenging NLP tasks is natural language understanding and reasoning. How can we construct algorithms that are able to achieve human-level understanding of text and be able to answer general questions about it?

This is truly an open problem, and one that the bAbI dataset has been constructed to facilitate progress on. bAbI presents a variety of different language understanding and reasoning tasks and exists as a benchmark for comparing approaches.

In this episode, Kyle talks to Rasmus Berg Palm about his recent paper Recurrent Relational Networks.

Direct download: recurrent-relational-networks.mp3
Category:general -- posted at: 7:47am PDT

In the first half of this episode, Kyle speaks with Marc-Alexandre Côté and Wendy Tay about TextWorld.  TextWorld is an engine that simulates text adventure games.  Developers are encouraged to try out their reinforcement learning skills by building agents that can programmatically interact with the generated text adventure games.

In the second half of this episode, Kyle interviews Kevin Patel about his paper Towards Lower Bounds on Number of Dimensions for Word Embeddings.  In this research, they explore the important question of how many hidden nodes to use when creating a word embedding.

Direct download: text-world-and-word-embedding-lower-bounds.mp3
Category:general -- posted at: 8:00am PDT

Word2vec is an unsupervised machine learning model which is able to capture semantic information from the text it is trained on. The model is based on neural networks. Several large organizations like Google and Facebook have trained word embeddings (the result of word2vec) on large corpora and shared them for others to use.

One of the key algorithmic ideas involved in word2vec is the continuous bag of words (CBOW) model. In this episode, Kyle uses excerpts from the 1983 cinematic masterpiece WarGames and challenges Linhda to guess a word Kyle leaves out of the transcript. This is similar to how word2vec is trained: it trains a neural network to predict a hidden word based on the words that appear before and after the missing location.
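
Here is a sketch of training CBOW embeddings with gensim's Word2Vec, assuming gensim 4.x (sg=0 selects the CBOW variant; the tiny toy corpus stands in for a real one).

    from gensim.models import Word2Vec

    # Toy corpus: pre-tokenized sentences (a nod to WarGames).
    sentences = [
        ["shall", "we", "play", "a", "game"],
        ["how", "about", "a", "nice", "game", "of", "chess"],
    ]

    # sg=0 selects CBOW: predict a word from its surrounding context window.
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

    print(model.wv["game"].shape)                 # (50,) - a learned embedding
    print(model.wv.most_similar("game", topn=2))  # nearest words in the space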

Direct download: word2vec.mp3
Category:general -- posted at: 8:00am PDT

In a recent paper, Leveraging Discourse Information Effectively for Authorship Attribution, authors Su Wang, Elisa Ferracane, and Raymond J. Mooney describe a deep learning methodology for predicting which of a collection of authors wrote a given document.

Direct download: authorship-attribution.mp3
Category:general -- posted at: 8:44am PDT

The earliest efforts to apply machine learning to natural language tended to convert every token (every word, more or less) into a unique feature. While techniques like stemming may have cut the number of unique tokens down, researchers always faced a high-dimensional problem. The Naive Bayes algorithm was celebrated in NLP applications because of its ability to efficiently process high-dimensional data.
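
A sketch of that classic setup using scikit-learn, where every token becomes a feature and Multinomial Naive Bayes handles the sparse, high-dimensional counts (the texts and labels are toy placeholders).

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy sentiment data; each unique token becomes a count feature.
    texts = ["great movie", "terrible film", "great acting", "terrible plot"]
    labels = ["pos", "neg", "pos", "neg"]

    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(texts, labels)
    print(clf.predict(["great plot"]))  # ['pos']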

Of course, other algorithms were applied to natural language tasks as well. While different algorithms had different strengths and weaknesses on different NLP problems, an early paper titled Scaling to Very Very Large Corpora for Natural Language Disambiguation popularized one somewhat surprising idea: for many NLP tasks, simply providing a large corpus of examples not only improved accuracy, but, asymptotically, some algorithms yielded more improvement than others when trained on very, very large corpora.

Although not explicitly about NLP, the noteworthy paper The Unreasonable Effectiveness of Data emphasizes this point further while paying homage to the classic treatise The Unreasonable Effectiveness of Mathematics in the Natural Sciences.

In this episode, Kyle shares a few thoughts along these lines with Linh Da.

The discussion winds up with a brief introduction to Zipf's law. When applied to natural language, Zipf's law states that the frequency of any given word in a corpus (regardless of language) will be inversely proportional to its rank in the frequency table.
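
A quick way to see Zipf's law in practice is to multiply each word's frequency by its rank: the product stays roughly constant. A sketch, assuming a hypothetical large plain-text file named corpus.txt:

    from collections import Counter

    # Count word frequencies in any large plain-text corpus.
    with open("corpus.txt") as f:
        counts = Counter(f.read().lower().split())

    # Under Zipf's law, frequency falls off roughly as 1/rank,
    # so the last column should stay roughly constant.
    for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
        print(rank, word, freq, freq * rank)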

Direct download: extremely-large-corpora.mp3
Category:general -- posted at: 8:00am PDT

GitHub is many things besides source control. It's a social network, even though not everyone realizes it. It's a vast repository of code. It's a ticketing and project management system. And of course, it has search as well.

In this episode, Kyle interviews Hamel Husain about his research into semantic code search.

Direct download: semantic-search-at-github.mp3
Category:general -- posted at: 8:00am PDT

This episode reboots our podcast with the theme of Natural Language Processing for the next few months.

We begin with introductions of Yoshi and Linh Da and then get into a broad discussion about natural language processing: what it is, what some of the classic problems are, and just a bit on approaches.

Finishing out the show is an interview with Lucy Park about her work on the KoNLPy library for Korean NLP in Python.

If you want to share your NLP project, please join our Slack channel.  We're eager to see what listeners are working on!

http://konlpy.org/en/latest/

Direct download: natural-language-processing.mp3
Category:general -- posted at: 8:15am PDT
