Data Skeptic (general)

Logic is fundamental to mathematical systems. Its roots are the values true and false, and its power is in what its rules allow you to prove. Propositional logic provides its users with variables. This episode gets into First Order Logic, an extension to propositional logic.
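To make the distinction concrete (an illustration of ours, not from the episode): a propositional formula is evaluated over truth-valued variables, while a first order claim quantifies over a domain of objects. Over a finite domain, a quantified claim can be checked by brute force:

```python
from itertools import product

# Propositional: evaluate (p AND q) OR (NOT r) for every truth assignment.
def formula(p, q, r):
    return (p and q) or (not r)

print([formula(*v) for v in product([False, True], repeat=3)])

# First order (restricted to a finite domain): "for all x there exists y > x".
domain = range(5)
print(all(any(y > x for y in domain) for x in domain))  # False: nothing exceeds 4
```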

Direct download: first-order-logic.mp3
Category:general -- posted at: 8:00am PDT

In this week’s episode, our host Kyle interviews Gokula Krishnan from ETH Zurich, about his recent contributions to defenses against adversarial attacks. The discussion centers around his latest paper, titled “Defending Against Adversarial Attacks by Leveraging an Entire GAN,” and his proposed algorithm, aptly named ‘Cowboy.’

Direct download: defending-against-adversarial-attacks.mp3
Category:general -- posted at: 8:00am PDT

On a long car ride, Linhda and Kyle record a short episode. This discussion is about transfer learning, a technique used in machine learning to leverage training from one domain to get a head start on learning in another domain.

Transfer learning has some obvious appealing features. Take the example of an image recognition problem. There are now many widely available models that do general image recognition. Detecting that an image contains a "sofa" is an impressive feat. However, for a furniture company interested in more specific details, this classifier is absurdly general. Should the furniture company build a massive corpus of tagged photos, effectively starting from scratch? Or is there a way they can transfer the learning from the general task to the specific one?

A general definition of transfer learning in machine learning is taking some or all aspects of a pre-trained model as the basis for training a new model with a specific and potentially limited dataset.
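As a hedged sketch of that definition (our own toy setup, not from the episode; the model choice and the 12-class furniture task are illustrative assumptions), the recipe in Keras often looks like freezing a pre-trained base and training only a small new head:

```python
import tensorflow as tf

# Illustrative assumption: a furniture classifier with 12 target classes.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the general-purpose features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(furniture_images, furniture_labels)  # small, specific dataset
```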

Direct download: transfer-learning.mp3
Category:general -- posted at: 8:00am PDT

Thanks to our sponsor Galvanize

A Kalman Filter is a technique for taking a sequence of observations about an object or variable and determining the most likely current state of that object. In this episode, we discuss it in the context of tracking our lilac-crowned Amazon parrot, Yoshi.

Kalman filters have many applications but the one of particular interest under our current theme of artificial intelligence is to efficiently update one's beliefs in light of new information.

The Kalman filter is based upon the Gaussian distribution. This distribution is described by two parameters: $\mu$ (the mean) and $\sigma$ (the standard deviation). The procedure for updating these values in light of new information has a closed form. This means that it can be described with straightforward formulae and computed very efficiently.
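For the one-dimensional case, the closed-form update is short enough to write out directly. A minimal sketch (variable names and numbers are ours):

```python
def kalman_update(mu, var, z, r):
    """Fuse a prior N(mu, var) with a measurement z of variance r."""
    new_mu = (var * z + r * mu) / (var + r)
    new_var = 1.0 / (1.0 / var + 1.0 / r)
    return new_mu, new_var

def kalman_predict(mu, var, motion, q):
    """Shift the belief by the expected motion, adding process noise q."""
    return mu + motion, var + q

mu, var = 0.0, 1000.0          # vague prior about Yoshi's position
for z in [4.9, 5.1, 5.0]:      # noisy observations
    mu, var = kalman_update(mu, var, z, r=1.0)
print(mu, var)                 # the belief tightens around 5
```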

You may gain a greater appreciation for Kalman filters by considering what would happen if you could not rely on the Gaussian distribution to describe your posterior beliefs. If determining the probability distribution over the variables describing some object cannot be efficiently computed, then by definition, maintaining the most up to date posterior beliefs can be a significant challenge.

Kyle will be giving a talk at SkeptiCal 2018 in Berkeley, CA on June 10.

Direct download: kalman-filters.mp3
Category:general -- posted at: 12:47am PDT

Today's interview is with the authors of the textbook Artificial Intelligence and Games.

Direct download: ai-in-games-master.mp3
Category:general -- posted at: 6:00am PDT

Thanks to our sponsor The Great Courses.

This week's episode is a short primer on game theory.

For tickets to the free Data Skeptic meetup in Chicago on Tuesday, May 15 at the Mendoza College of Business (224 South Michigan Avenue, Suite 350), click here.

Direct download: game-theory.mp3
Category:general -- posted at: 11:16am PDT

This week on Data Skeptic, we begin with a skit to introduce the topic of this show: The Imitation Game. We open with a scene in the distant future. The year is 2027, and a company called Shamony is announcing their new product, Ada, the most advanced artificial intelligence agent. To prove its superiority, the lead scientist announces that it will use the Turing Test that Alan Turing proposed in 1950. During this we introduce Turing’s “objections” outlined in his famous paper, “Computing Machinery and Intelligence.”

Following that, we talk with improv coach Holly Laurent on the art of improvisation and Peter Clark from the Allen Institute for Artificial Intelligence about question and answering algorithms.

Direct download: the-imitation-game.mp3
Category:general -- posted at: 8:00am PDT

In this episode, Kyle shares his perspective on the chatbot Eugene Goostman which (some claim) "passed" the Turing Test. As a second topic, Kyle also gives an intro to the Winograd Schema Challenge.

Direct download: eugene-goostman.mp3
Category:general -- posted at: 8:00am PDT

In this episode, Kyle and Linhda discuss the theory of formal languages. Any language can (theoretically) be a formal language. The requirement is that the language can be rigorously described as a set of strings which are considered part of the language. Each such string is a combination of characters from the given language's alphabet.
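A small sketch of the definition (our example, not from the episode): over the alphabet {a, b}, take the language of one or more a's followed by one or more b's. Membership is then just a set question:

```python
import re

# L = { a^m b^n : m, n >= 1 } over the alphabet {a, b}
in_language = lambda s: re.fullmatch(r"a+b+", s) is not None

for s in ["ab", "aaabb", "ba", "abab", ""]:
    print(s or "(empty)", in_language(s))
```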


Direct download: the-theory-of-formal-languages.mp3
Category:general -- posted at: 8:00am PDT

The Loebner Prize is a competition in the spirit of the Turing Test.  Participants are welcome to submit conversational agent software to be judged by a panel of humans.  This episode includes interviews with Charlie Maloney, a judge in the Loebner Prize, and Bruce Wilcox, a winner of the Loebner Prize.

Direct download: the-loebner-prize.mp3
Category:general -- posted at: 8:00am PDT

In this episode, Kyle chats with Vince from iv.ai and Heather Shapiro who works on the Microsoft Bot Framework. We solicit their advice on building a good chatbot both creatively and technically.

Our sponsor today is Warby Parker.

Direct download: chatbots.mp3
Category:general -- posted at: 8:00am PDT

In this week’s episode, Kyle Polich interviews Pedro Domingos about his book, The Master Algorithm: How the quest for the ultimate learning machine will remake our world. In the book, Domingos describes what machine learning is doing for humanity, how it works, and what it could do in the future. He also hints at the possibility of an ultimate learning algorithm, with which a machine would be able to derive all knowledge: past, present, and future.

Direct download: the-master-algorithm.mp3
Category:general -- posted at: 8:00am PDT

What's the best machine learning algorithm to use? I hear that XGBoost wins most of the Kaggle competitions that aren't won with deep learning. Should I just use XGBoost all the time? That might work out most of the time in practice, but a proof exists which tells us that there cannot be one true algorithm to rule them all.

Direct download: no-free-lunch-theorems.mp3
Category:general -- posted at: 8:00am PDT

For a long time, physicians have recognized that the tools they have aren't powerful enough to treat complex diseases, like cancer. In addition to data science and models, clinicians also needed actual products — tools that physicians and researchers can draw upon to answer questions they regularly confront, such as “what clinical trials are available for this patient that I'm seeing right now?” In this episode, our host Kyle interviews guests Alex Grigorenko and Iker Huerga from Memorial Sloan Kettering Cancer Center to talk about how data and technology can be used to prevent, control and ultimately cure cancer.

Direct download: ml-at-sloan-kettering-cancer-center.mp3
Category:general -- posted at: 8:00am PDT

In a previous episode, we discussed Markov Decision Processes (MDPs), a framework for decision making and planning. This episode explores their generalization, Partially Observable MDPs (POMDPs), an incredibly general framework that describes almost every agent-based system.

Direct download: optimal-decision-making-with-pomdps.mp3
Category:general -- posted at: 8:00am PDT

In many real world situations, a person or agent doesn't necessarily know their own objectives or the mechanics of the world they're interacting with. However, if the agent receives rewards which are correlated with both their actions and the state of the world, then reinforcement learning can be used to discover behaviors that maximize the reward earned.
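A minimal sketch of that idea (a toy two-state world of our own, not from the episode) is tabular Q-learning, which nudges action-value estimates toward the rewards actually received:

```python
import random

# Toy world: only taking action 1 while in state 1 pays off.
def step(s, a):
    if s == 1 and a == 1:
        return 0, 1.0                  # (next state, reward)
    return (1 if a == 1 else 0), 0.0

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma = 0.1, 0.9
s = 0
for _ in range(5000):
    a = random.choice([0, 1])          # explore randomly
    s2, r = step(s, a)
    best_next = max(Q[(s2, 0)], Q[(s2, 1)])
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # TD update
    s = s2
print(Q)  # Q learns to move toward state 1, then take action 1
```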

Direct download: reinforcement-learning.mp3
Category:general -- posted at: 8:00am PDT

Formally, an MDP is defined as the tuple containing states, actions, the transition function, and the reward function. This podcast examines each of these and presents them in the context of simple examples.  Despite MDPs suffering from the curse of dimensionality, they're a useful formalism and a basic concept we will expand on in future episodes.
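A rough sketch of that tuple in code (toy numbers of ours, not from the episode): value iteration repeatedly backs rewards up through the transition function via the Bellman equation until the state values converge.

```python
# MDP tuple: states S, actions A, transition function T, reward function R.
S, A, gamma = [0, 1], [0, 1], 0.9
T = {0: {0: [(1.0, 0)], 1: [(0.8, 1), (0.2, 0)]},   # T[s][a] = [(prob, next_state)]
     1: {0: [(1.0, 0)], 1: [(1.0, 1)]}}
R = {0: 0.0, 1: 1.0}

V = {s: 0.0 for s in S}
for _ in range(100):   # Bellman backup: V(s) = R(s) + gamma * max_a E[V(s')]
    V = {s: R[s] + gamma * max(sum(p * V[s2] for p, s2 in T[s][a]) for a in A)
         for s in S}
print(V)   # state 1 is worth more; the optimal policy steers toward it
```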

Direct download: markov-decision-process.mp3
Category:general -- posted at: 8:00am PDT

Last week on Data Skeptic, we visited the Laboratory of Neuroimaging, or LONI, at USC and learned about their data-driven platform that enables scientists from all over the world to share, transform, store, manage and analyze their data to understand neurological diseases better. We talked about how neuroscientists measure the brain using data from MRI scans, and how that data is processed and analyzed to understand the brain. This week, we'll continue the second half of our two-part episode on LONI.

Direct download: neuroscience-frontiers.mp3
Category:general -- posted at: 8:00am PDT

In artificial intelligence, the term 'agent' is used to mean an autonomous, thinking entity with the ability to interact with its environment. An agent could be a person or a piece of software. In either case, we can describe aspects of the agent in a standard framework.

Direct download: the-agent-model-of-intelligence.mp3
Category:general -- posted at: 8:00am PDT

This episode kicks off the next theme on Data Skeptic: artificial intelligence.  Kyle discusses what's to come for the show in 2018, why this topic is relevant, and how we intend to cover it.

Direct download: artificial-intelligence-a-podcast-approach.mp3
Category:general -- posted at: 8:00am PDT

We break format from our regular programming today and bring you an excerpt from Max Tegmark's book "Life 3.0".  The first chapter is a short story titled "The Tale of the Omega Team".  Audio excerpted courtesy of Penguin Random House Audio from LIFE 3.0 by Max Tegmark, narrated by Rob Shapiro.  You can find "Life 3.0" at your favorite bookstore and the audio edition via penguinrandomhouseaudio.com.

Kyle will be giving a talk at the Monterey County SkeptiCamp 2018.

Direct download: the-tale-of-the-omega-team.mp3
Category:general -- posted at: 8:00am PDT

This episode features an interview with Rigel Smiroldo recorded at NIPS 2017 in Long Beach, California.  We discuss data privacy, machine learning use cases, model deployment, and end-to-end machine learning.

Direct download: mercedes-benz-machine-learning-research.mp3
Category:general -- posted at: 11:07pm PDT

When computers became commodity hardware and storage became incredibly cheap, we entered the era of so-called "big" data. Most definitions of big data will include something about not being able to process all the data on a single machine. Distributed computing is required for such large datasets.

Getting an algorithm to run on data spread out over a variety of different machines introduced new challenges for designing large-scale systems. First, there are concerns about the best strategy for spreading that data over many machines in an orderly fashion. Resolving ambiguity or disagreements across sources is sometimes required.

This episode discusses how such algorithms relate to the complexity class NC.
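For problems that do decompose cleanly, the pattern is often a map over partitions followed by a combine step. A single-machine sketch of that shape (illustrative only; a real distributed system must also handle the data-placement and disagreement issues above):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]         # spread the data over 4 workers
    with Pool(4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine the partial results
    print(total)
```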

Direct download: parallel-algorithms.mp3
Category:general -- posted at: 8:00am PDT

In this week's episode, Scott Aaronson, a professor at the University of Texas at Austin, explains what a quantum computer is, various possible applications, the types of problems they are good at solving and much more. Kyle and Scott have a lively discussion about the capabilities and limits of quantum computers and computational complexity.

Direct download: quantum-computing.mp3
Category:general -- posted at: 8:00am PDT

I sat down with Ali Ghodsi, CEO and founder of Databricks, and John Chirapurath, GM for Data Platform Marketing at Microsoft, to discuss the recent announcement of Azure Databricks.

When I heard about the announcement, my first thoughts were two-fold.  First, the possibility of optimized integrations with existing Azure services.  This would be a big benefit to heavy Azure users who also want to use Spark.  Second, the benefits of active directory to control Databricks access for large enterprise.

Hear Ali and JG's thoughts and comments on what makes Azure Databricks a novel offering.

 

Direct download: azure-databricks.mp3
Category:general -- posted at: 8:00am PDT

In this episode we discuss the complexity class EXP-Time, which contains problems requiring $O(2^{p(n)})$ time to run.  In other words, the worst case runtime is exponential in some polynomial of the input size.  Problems in this class are even more difficult than problems in NP since you can't even verify a solution in polynomial time.

We mostly discuss Generalized Chess as an intuitive example of a problem in EXP-Time.  Another well-known problem is determining if a given algorithm will halt in k steps.  That extra condition of restricting it to k steps makes this problem distinct from Turing's original definition of the halting problem which is known to be intractable.

Direct download: exp-time.mp3
Category:general -- posted at: 8:00am PDT

Algorithms with similar runtimes are said to be in the same complexity class. That runtime is measured in how many steps an algorithm takes relative to the input size.

The class P contains all problems which can be solved in polynomial time (basically, a nested for loop iterating over the input).  NP contains problems whose solutions seem to require brute force.  Brute force search cannot be done in polynomial time, so it seems that problems in NP are more difficult than problems in P.  I say it "seems" this way because, while most people believe it to be true, it has not been proven.  This is the famous P vs. NP conjecture.  It will be discussed in more detail in a future episode.

Given a solution to a particular problem, if it can be verified/checked in polynomial time, that problem might be in NP.  If someone hands you a completed Sudoku puzzle, it's not difficult to see if they made any mistakes.  The effort of developing the solution to the Sudoku game seems to be intrinsically more difficult.  In fact, as far as anyone knows, in the general case of all possible examples of the game, it seems no strategy can do better on average than just random guessing.
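A sketch of a polynomial-time Sudoku verifier (ours) makes the asymmetry concrete: checking a completed grid is a few passes over the data, while finding a solution has no known polynomial-time algorithm.

```python
def valid_sudoku(grid):
    """Check a completed 9x9 grid (lists of digits 1-9) in polynomial time."""
    def ok(group):
        return sorted(group) == list(range(1, 10))
    rows = all(ok(row) for row in grid)
    cols = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
    boxes = all(ok([grid[r + dr][c + dc] for dr in range(3) for dc in range(3)])
                for r in (0, 3, 6) for c in (0, 3, 6))
    return rows and cols and boxes
```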

This notion of randomly guessing the solution is where the N in NP comes from: Non-deterministic.  Imagine a machine with a random input already written in its memory.  Given enough such machines, one of them will have the right answer.  If they all ran in parallel, one of them could verify its input in polynomial time.  This guess / provided input is often called a witness string.

NP is an important concept for many reasons.  To me, the most important reason to know about NP is a practical one.  Depending on your goals or the goals of your employer, there are many challenging problems you may attempt to solve.  If a problem you are trying to solve happens to be in NP, then you should consider the implications very carefully.  Perhaps you'll be lucky and discover that your particular instance of the problem is easy.  Sudoku is pretty easy if only 2 remaining squares need to be filled in.  The traveling salesman problem is easy to solve if you live in a country where all roads form a ring with exactly one road in and out.

If the problem you wish to solve is not trivial, or if you will face many instances of the problem and expect some will not be trivial, then it's unlikely you'll be able to find the exact solution.  Sure, maybe you can grab a bunch of commodity servers and try to scale the heck out of your attempt.  Depending on the problem you're solving, that might just work.  If you can out-purchase your problem in computing power, then problems in NP will surrender to you.  But if your input size ever grows, it's unlikely you'll be able to keep up.

If your problem is intractable in this way, all is not lost.  You might be able to find an approximate solution to your problem.  Good enough is better than no solution at all, right?  Most of the time, probably.  However, some tremendous work has also been done studying topics like this.  Are there problems which are not even approximable in polynomial time?  What approximation techniques work best?  Alas, those answers lie elsewhere.

This episode avoids a discussion of a few key points in order to keep the material accessible.  If you find this interesting, you should next familiarize yourself with the notions of NP-Complete, NP-Hard, and co-NP.  These are topics we won't necessarily get to in future episodes.  Michael Sipser's Introduction to the Theory of Computation is a good resource.

 

Direct download: sudoku-in-np.mp3
Category:general -- posted at: 8:00am PDT

In this episode, Professor Michael Kearns from the University of Pennsylvania joins host Kyle Polich to talk about the computational complexity of machine learning, complexity in game theory, and algorithmic fairness. Michael's doctoral thesis gave an early broad overview of computational learning theory, in which he emphasizes the mathematical study of efficient learning algorithms by machines or computational systems.

When we look at machine learning algorithms they are almost like meta-algorithms in some sense. For example, given a machine learning algorithm, it will look at some data and build some model, and it’s going to behave presumably very differently under different inputs. But does that mean we need new analytical tools? Or is a machine learning algorithm just the same thing as any deterministic algorithm, but just a little bit more tricky to figure out anything complexity-wise? In other words, is there some overlap between the good old-fashioned analysis of algorithms with the analysis of machine learning algorithms from a complexity viewpoint? And what is the difference between strategies for determining the complexity bounds on samples versus algorithms?

A big area of machine learning (and in the analysis of learning algorithms in general) Michael and Kyle discuss is the topic known as complexity regularization. Complexity regularization asks: How should one measure the goodness of fit and the complexity of a given model? And how should one balance those two, and how can one execute that in a scalable, efficient way algorithmically? From this, Michael and Kyle discuss the broader picture of why one should care whether a learning algorithm is efficiently learnable, that is, learnable in polynomial time.

Another interesting topic of discussion is the difference between sample complexity and computational complexity. An active area of research is how one should regularize a model so that it balances complexity against goodness of fit for a large training sample.

As mentioned, a good resource for getting started with correlated equilibria is: https://www.cs.cornell.edu/courses/cs684/2004sp/feb20.pdf

Thanks to our sponsors:

Mendoza College of Business - Get your Master of Science in Business Analytics from Notre Dame.

brilliant.org - A fun, affordable, online learning tool.  Check out their Computer Science Algorithms course.

Direct download: the-computational-complexity-of-machine-learning.mp3
Category:general -- posted at: 8:00am PDT

TMs are a model of computation at the heart of algorithmic analysis.  A Turing Machine has two components: an infinitely long piece of tape (memory) with re-writable squares, and a read/write head which is programmed to change its state as it processes the input.  This exceptionally simple mechanical computer can compute anything that is intuitively computable; so says the Church-Turing Thesis.

Attempts to make a "better" Turing Machine by adding things like additional tapes can make the programs easier to describe, but they can't make the "better" machine more capable.  It won't be able to solve any problems the basic Turing Machine can't, even if it perhaps solves them faster.

An important concept we didn't get to in this episode is that of a Universal Turing Machine.  Without the prefix, a TM is a particular algorithm.  A Universal TM is a machine that takes, as input, a description of a TM and an input to that machine, and subsequently, simulates the inputted machine running on the given input.
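A sketch of that idea in code (our own encoding, not from the episode): the "description" is just a transition table, and a single simulator can run any such table on any input.

```python
def run_tm(delta, tape, state="start", head=0, blank="_", accept="halt"):
    """Simulate the TM described by the transition table delta on the given tape."""
    tape = list(tape)
    while state != accept:
        if head == len(tape):
            tape.append(blank)        # the tape is unbounded to the right
        state, write, move = delta[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Description of a tiny TM that flips every bit, then halts at the blank.
flip = {("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R")}
print(run_tm(flip, "10110"))  # -> 01001_
```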

Turing Machines are a central idea in computer science.  They are central to algorithmic analysis and the theory of computation.

Direct download: turing-machines.mp3
Category:general -- posted at: 8:00am PDT

How long an algorithm takes to run depends on many factors including implementation details and hardware.  However, the formal analysis of algorithms focuses on how they will perform in the worst case as the input size grows.  We refer to an algorithm's runtime as its "O" which is a function of its input size "n".  For example, O(n) represents a linear algorithm - one that takes roughly twice as long to run if you double the input size.  In this episode, we discuss a few everyday examples of algorithmic analysis including sorting, searching a shuffled deck of cards, and verifying if a grocery list was successfully completed.
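A quick sketch of the doubling intuition (ours): count the worst-case steps of a linear, O(n), search as the input doubles.

```python
def linear_search_steps(items, target):
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

for n in [1000, 2000, 4000]:
    # Worst case: the target is the last element.
    print(n, linear_search_steps(list(range(n)), n - 1))
# steps grow linearly with n: 1000, 2000, 4000
```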

Thanks to our sponsor Brilliant.org, who right now is featuring a related problem as their Brilliant Problem of the Week.

Direct download: big-oh-analysis.mp3
Category:general -- posted at: 8:00am PDT

One Shot Learning is the class of machine learning procedures that focuses on learning from a small number of examples.  This is in contrast to "traditional" machine learning which typically requires a very large training set to build a reasonable model.

In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the newly created symbols are likely to be the same symbol, despite having extremely few examples of each.  Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large training data?  We discuss some of the reasons why and approaches to One Shot Learning.

Direct download: one-shot-learning.mp3
Category:general -- posted at: 8:00am PDT

Recommender systems play an important role in providing personalized content to online users. Yet, typical data mining techniques are not well suited for the unique challenges that recommender systems face. In this episode, host Kyle Polich joins Dr. Joseph Konstan from the University of Minnesota at a live recording at FARCON 2017 in Minneapolis to discuss recommender systems and how machine learning can create better user experiences. 

Direct download: recommender-systems-live-from-farcon.mp3
Category:general -- posted at: 8:00am PDT

Thanks to our sponsor brilliant.org/dataskeptics

A Long Short Term Memory (LSTM) is a neural unit, often used in Recurrent Neural Networks (RNNs), which attempts to provide the network the capacity to store information for longer periods of time. An LSTM unit remembers values for either long or short time periods. The key to this ability is that it uses no activation function within its recurrent components. Thus, the stored value is not iteratively modified and the gradient does not tend to vanish when trained with backpropagation through time.

Direct download: long-short-term-memory.mp3
Category:general -- posted at: 8:00am PDT

Zillow is a leading real estate information and home-related marketplace. We interviewed Andrew Martin, a data science Research Manager at Zillow, to learn more about how Zillow uses data science and big data to make real estate predictions.

Direct download: zillow-zestimate.mp3
Category:general -- posted at: 8:00am PDT

Our guest Pranav Rajpurkar and his coauthors recently published Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks, a paper in which they demonstrate the use of Convolutional Neural Networks which outperform board-certified cardiologists in detecting a wide range of heart arrhythmias from ECG data.

Direct download: cardiologist-level-arrhythmia-detection-with-cnns.mp3
Category:general -- posted at: 8:00am PDT

RNNs are a class of deep learning models designed to capture sequential behavior.  An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network.  This directed cycle allows the training phase to find solutions which rely on the state at a previous time, thus giving the network a form of memory.  RNNs have been used effectively in language analysis, translation, speech recognition, and many other tasks.
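A minimal numpy sketch of that directed cycle (toy sizes of our own): the same weights are reused at every time step, and the hidden state h carries memory forward.

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 3))   # input -> hidden weights
W_h = rng.normal(size=(4, 4))   # hidden -> hidden weights (the recurrent cycle)
b = np.zeros(4)

h = np.zeros(4)                 # initial hidden state
for x_t in rng.normal(size=(5, 3)):        # a sequence of 5 inputs
    h = np.tanh(W_x @ x_t + W_h @ h + b)   # the new state depends on the old state
print(h)
```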

Direct download: recurrent-neural-networks.mp3
Category:general -- posted at: 8:00am PDT

Thanks to our sponsor Springboard.

In this week's episode, guest Andre Natal from Mozilla joins our host, Kyle Polich, to discuss a couple exciting new developments in open source speech recognition systems, which include Project Common Voice.

In June 2017, Mozilla launched a new open source project, Common Voice, a novel complementary project to the TensorFlow-based DeepSpeech implementation. DeepSpeech is a deep learning-based voice recognition system that was designed by Baidu, which they describe in greater detail in their research paper. DeepSpeech is a speech-to-text engine, and Mozilla hopes that, in the future, they can use Common Voice data to train their DeepSpeech engine.

Direct download: project-common-voice.mp3
Category:general -- posted at: 8:00am PDT

A Bayesian Belief Network is a directed acyclic graph composed of nodes that represent random variables and edges that imply a conditional dependence between them. It's an intuitive way of encoding your statistical knowledge about a system, and it allows belief updates to propagate efficiently throughout the network when new information is added.
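A small sketch using the classic rain/sprinkler/wet-grass network (textbook-style numbers of our own, not from the episode): encode the conditional probability tables, then answer a query by enumerating the joint distribution.

```python
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {(True, True): 0.99, (True, False): 0.9,   # (rain, sprinkler) -> P(wet)
         (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1 - pw)

# P(rain | grass is wet), summing out the hidden sprinkler variable
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)   # roughly 0.74 with these tables
```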

Direct download: bayesian-belief-networks.mp3
Category:general -- posted at: 11:58pm PDT

In this episode, Tony Beltramelli of UIzard Technologies joins our host, Kyle Polich, to talk about the ideas behind his latest app that can transform graphic design into functioning code, as well as his previous work on spying with wearables.

Direct download: pix2code.mp3
Category:general -- posted at: 8:00am PDT

In statistics, two random variables might depend on one another (for example, interest rates and new home purchases). We call this conditional dependence. An important related concept exists called conditional independence. This phrase describes situations in which two variables are independent of one another given some other variable.

For example, the probability that a vendor will pay their bill on time could depend on many factors such as the company's market cap. Thus, a statistical analysis would reveal many relationships between observable details about the company and their propensity for paying on time. However, if you know that the company has filed for bankruptcy, then we might assume their chances of paying on time have dropped to near 0, and the result is now independent of all other factors in light of this new information.

We discuss a few real world analogies to this idea in the context of some chance meetings on our recent trip to New York City.

Direct download: conditional-independence.mp3
Category:general -- posted at: 8:00am PDT

Animals can't tell us when they're experiencing pain, so we have to rely on other cues to help treat their discomfort. But it is often difficult to tell how much an animal is suffering. The sheep, for instance, is the most inscrutable of animals. However, scientists have figured out a way to understand sheep facial expressions using artificial intelligence.

On this week's episode, Dr. Marwa Mahmoud from the University of Cambridge joins us to discuss her recent study, "Estimating Sheep Pain Level Using Facial Action Unit Detection." Marwa and her colleagues at Cambridge's Computer Laboratory developed an automated system using machine learning algorithms to detect and assess when a sheep is in pain. We discuss some details of her work, how she became interested in studying sheep facial expression to measure pain, and her future goals for this project.

If you're able to be in Minneapolis, MN on August 23rd or 24th, consider attending Farcon. Get your tickets today via https://farcon2017.eventbrite.com.

Direct download: estimating-sheep-pain-with-facial-recognition.mp3
Category:general -- posted at: 8:00am PDT

This episode collects interviews from my recent trip to Microsoft Build where I had the opportunity to speak with Dharma Shukla and Syam Nair about the recently announced CosmosDB. CosmosDB is a globally consistent, distributed datastore that supports all the popular persistent storage formats (relational, key/value pair, document database, and graph) under a single streamlined API. The system provides tunable consistency, allowing the user to make choices about how consistency trade-offs are managed under the hood, if a consumer wants to go beyond the selected defaults.

Direct download: cosmosdb.mp3
Category:general -- posted at: 8:00am PDT

This episode discusses the vanishing gradient - a problem that arises when training deep neural networks in which nearly all the gradients are very close to zero by the time back-propagation has reached the first hidden layer. This makes learning virtually impossible without some clever trick or improved methodology to help earlier layers begin to learn.
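A quick numerical sketch of the effect (ours): the sigmoid's derivative never exceeds 0.25, so chaining many sigmoid layers multiplies the gradient toward zero.

```python
import numpy as np

def sigmoid_grad(x):
    s = 1 / (1 + np.exp(-x))
    return s * (1 - s)           # maximum value is 0.25, at x = 0

grad = 1.0
for layer in range(20):          # backpropagate through 20 sigmoid layers
    grad *= sigmoid_grad(0.5)    # even near the best case...
print(grad)                      # ...the signal is ~1e-13 by the first layer
```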

Direct download: the-vanishing-gradient.mp3
Category:general -- posted at: 8:00am PDT

When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study titled Doctor AI: Predicting Clinical Events via Recurrent Neural Networks, shares his thoughts. Edward presents his team’s efforts in developing a temporal model that can learn from human doctors based on their collective knowledge, i.e. the large amount of Electronic Health Record (EHR) data.

Direct download: doctor-ai.mp3
Category:general -- posted at: 8:00am PDT

In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation which can only scale the data. However, other transformations, like a step function, allow for non-linear properties to be introduced.

Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" the area of interest in the data. Extreme values are placed close together, while values near the function's point of inflection change more quickly with respect to small changes in the input. Similarly, these functions can take any real number and map all of them to a finite range such as [0, 1], which can have many advantages for downstream calculation.

In this episode, we overview the concept and discuss a few reasons why you might select one function versus another.
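A few common choices side by side (a sketch of ours, not an exhaustive list):

```python
import numpy as np

step = lambda x: (x > 0).astype(float)     # non-linear, but not differentiable
sigmoid = lambda x: 1 / (1 + np.exp(-x))   # squashes any real number into (0, 1)
relu = lambda x: np.maximum(0.0, x)        # a cheap, widely used modern default

x = np.array([-5.0, -0.5, 0.0, 0.5, 5.0])
print(sigmoid(x))  # extremes land near 0 and 1; values near the inflection move fastest
```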

Direct download: activation-functions.mp3
Category:general -- posted at: 8:00am PDT

This episode recaps the Microsoft Build Conference.  Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence.  The episode includes interviews with Rohan Kumar and David Carmona.

 

Direct download: ms-build-recap.mp3
Category:general -- posted at: 8:00am PDT

Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling.
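A numpy sketch of 2x2 max-pooling (ours): each window of four values is reduced to its single largest value.

```python
import numpy as np

x = np.arange(16).reshape(4, 4)
# Non-overlapping 2x2 windows: reshape, then take the max within each window.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[ 5  7]
                #  [13 15]]
```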

Direct download: max-pooling.mp3
Category:general -- posted at: 8:00am PDT

This episode is an interview with Tinghui Zhou.  In the recent paper "Unsupervised Learning of Depth and Ego-motion from Video", Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos.  We discuss details of this project and its applications.

Direct download: unsupervised-depth-perception.mp3
Category:general -- posted at: 8:00am PDT

CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel.  In image recognition, this kernel is repeated over the entire image.  In this way, CNNs may achieve the property of translational invariance - once trained to recognize certain things, changing the position of that thing in an image should not disrupt the CNN's ability to recognize it.  In this episode, we discuss a few high-level details of this important architecture.

Direct download: cnns.mp3
Category:general -- posted at: 8:00am PDT

Despite the success of GANs in imaging, one of their major drawbacks is the problem of 'mode collapse,' where the generator learns to produce samples with extremely low variety.

To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first involves tweaking the generator's objective function with a diversity enforcing term that would assess similarities between the different samples generated by different generators. The second comprises modifying the discriminator objective function, pushing generations corresponding to different generators towards different identifiable modes.

Direct download: mutli-agent-diverse-generative-adversarial-networks.mp3
Category:general -- posted at: 8:00am PDT

GANs are an unsupervised learning method involving two neural networks iteratively competing. The discriminator is a typical learning system. It attempts to develop the ability to recognize members of a certain class, such as all photos which have birds in them. The generator attempts to create false examples which the discriminator incorrectly classifies. In successive training rounds, the networks examine each other and play a mini-max game of trying to harm the performance of the other.

In addition to being a useful way of training networks in the absence of a large body of labeled data, there are additional benefits. The discriminator may end up learning more about edge cases than it otherwise would be given typical examples. Also, the generator's false images can be novel and interesting on their own.

The concept was first introduced in the paper Generative Adversarial Networks.
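A skeletal sketch of that mini-max loop (toy 1-D data and tiny PyTorch networks of our own, not a faithful reproduction of the paper):

```python
import torch
import torch.nn as nn

# Tiny illustrative networks; real GANs for images are far larger.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 1) * 0.5 + 2.0          # stand-in "real" data
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)
for step in range(1000):
    # Discriminator round: label real samples 1 and generated samples 0.
    fake = G(torch.randn(64, 8)).detach()      # detach: don't update G here
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator round: try to make D call generated samples real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```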

Direct download: generative-adversarial-networks.mp3
Category:general -- posted at: 8:00am PDT

Recently, we've seen opinion polls come under some skepticism.  But is that skepticism truly justified?  The recent Brexit referendum and US 2016 Presidential Election are examples where some claim the polls "got it wrong".  This episode explores this idea.

Direct download: polling.mp3
Category:general -- posted at: 8:00am PDT

No reliable, complete database cataloging home sales data at a transaction level is available for the average person to access. A data scientist interested in studying this data has their hands completely tied. Opportunities like testing sociological theories, exploring economic impacts, studying market forces, or simply researching the value of an investment when buying a home are all blocked by the lack of easy access to this dataset. OpenHouse seeks to correct that by centralizing and standardizing all publicly available home sales transactional data. In this episode, we discuss the achievements of OpenHouse to date, and what plans exist for the future.

Check out the OpenHouse gallery.

I also encourage everyone to check out the project Zareen mentioned which was her Harry Potter word2vec webapp and Joy's project doing data visualization on Jawbone data.

Guests

Thanks again to @iamzareenf, @blueplastic, and @joytafty for coming on the show. Thanks to the numerous other volunteers who have helped with the project as well!

Announcements and details

Sponsor

Thanks to our sponsor for this episode Periscope Data. The blog post demoing their maps option is on our blog titled Periscope Data Maps.

Periscope Data

To start a free trial of their dashboarding tool, visit http://periscopedata.com/skeptics

Kyle recently did a YouTube video exploring the Data Skeptic podcast download numbers using Periscope Data. Check it out at https://youtu.be/aglpJrMp0M4.

Supplemental music is Lee Rosevere's Let's Start at the Beginning.

 

Direct download: openhouse.mp3
Category:general -- posted at: 8:30am PDT

There's more than one type of computer processor. The central processing unit (CPU) is typically what one means when they say "processor". GPUs were introduced to be highly optimized for doing floating point computations in parallel. These types of operations were very useful for high end video games, but as it turns out, those same processors are extremely useful for machine learning. In this mini-episode we discuss why.

Direct download: gpu-cpu.mp3
Category:general -- posted at: 8:00am PDT

Backpropagation is a common algorithm for training a neural network.  It works by computing the gradient of each weight with respect to the overall error, and using stochastic gradient descent to iteratively fine tune the weights of the network.  In this episode, we compare this concept to finding a location on a map, marble maze games, and golf.
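The map/marble/golf intuition in one dimension (a sketch of ours): repeatedly step opposite the gradient of the error. Backpropagation does the same thing, but uses the chain rule to compute this gradient for every weight in the network.

```python
# Minimize error(w) = (w - 3)^2; its gradient is 2 * (w - 3).
w = 10.0                 # random starting position on the "map"
lr = 0.1                 # step size
for _ in range(50):
    grad = 2 * (w - 3)   # slope of the error at the current weight
    w -= lr * grad       # step downhill
print(w)                 # converges toward 3, the minimum
```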

Direct download: backpropagation.mp3
Category:general -- posted at: 8:00am PDT

 

In this week's episode of Data Skeptic, host Kyle Polich talks with guest Maura Church, Patreon's data science manager. Patreon is a fast-growing crowdfunding platform that allows artists and creators of all kinds to build their own subscription content service. The platform allows fans to become patrons of their favorite artists, an idea similar to Renaissance times, when musicians would rely on benefactors to become their patrons so they could make more art. At Patreon, Maura's data science team strives to provide creators with insight, information, and tools, so that creators can focus on what they do best: making art.

On the show, Maura talks about some of her projects with the data science team at Patreon. Topics discussed during the episode include: optical music recognition (OMR) to translate musical scores to electronic format, network analysis to understand the connection between creators and patrons, growth forecasting and modeling in a new market, and churn modeling to determine predictors of long-term support.

A more detailed explanation of Patreon's A/B testing framework can be found here

Other useful links to topics mentioned during the show:

OMR research

Patreon blog

Patreon HQ blog

Amanda Palmer

Fran Meneses

Direct download: data-science-at-patreon.mp3
Category:general -- posted at: 8:00am PDT

Feed Forward Neural Networks

In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table

Input 1 Input 2 Output
0 0 0
0 1 0
1 0 0
1 1 1

OR Truth Table

Input 1 Input 2 Output
0 0 0
0 1 1
1 0 1
1 1 1

XOR Truth Table

Input 1 Input 2 Output
0 0 0
0 1 1
1 0 1
1 1 0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider the perceptron described below. First we see the visual representation, then the Activation function A, followed by the formula for calculating the output.

[Figure: a perceptron diagram and its activation function A]

$Output = A(w_0 \cdot Bias + w_1 \cdot Input_1 + w_2 \cdot Input_2)$

Can this perceptron learn the AND function?

Sure. Let w_0 = -0.6 and w_1 = w_2 = 0.5

What about OR?

Yup. Let w_0 = 0 and w_1 = w_2 = 0.5

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important. Without it, the AND function could not be represented.
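We can check those weight choices directly (a small sketch; we assume A is a step function that outputs 1 when its argument is positive):

```python
def perceptron(w0, w1, w2, i1, i2):
    return 1 if (w0 * 1 + w1 * i1 + w2 * i2) > 0 else 0  # the bias input is 1

for i1, i2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(i1, i2,
          perceptron(-0.6, 0.5, 0.5, i1, i2),   # AND weights from above
          perceptron(0.0, 0.5, 0.5, i1, i2))    # OR weights from above
```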

How about XOR?

No. It is not possible to represent XOR with a single layer. It requires two layers. The image below shows how it could be done with two layers.

[Figure: a two-layer network computing XOR]

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If a single input is 1, this node will not activate.

The universal approximation theorem tells us that any continuous function can be tightly approximated using a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.

Other types of neural networks have less strict structural definitions. The various ways one might relax this constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

 

Periscope Data

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics

Direct download: feed-forward-neural-networks.mp3
Category:general -- posted at: 8:00am PDT

In this Data Skeptic episode, Kyle is joined by guest Ruggiero Cavallo to discuss his latest efforts to mitigate the problems presented in this new world of online advertising. Working with his collaborators, Ruggiero reconsiders the search ad allocation and pricing problems from the ground up and redesigns a search ad selling system. He discusses a mechanism that optimizes an entire page of ads globally based on efficiency-maximizing search allocation and a novel technical approach to computing prices.

Direct download: reinventing-sponsored-search-auctions.mp3
Category:general -- posted at: 8:00am PDT

Today's episode overviews the perceptron algorithm. This rather simple approach is characterized by a few particular features. It updates its weights after seeing every example, rather than as a batch. It uses a step function as an activation function. It's only appropriate for linearly separable data, and it will converge to a solution if the data meets these criteria. Being a fairly simple algorithm, it can run very efficiently. Although we don't discuss it in this episode, multi-layer perceptron networks are what makes this technique most attractive.
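A sketch of the learning rule (toy OR-style data of our own): predict with the step function, then nudge the weights by the error after every single example.

```python
import numpy as np

# Linearly separable toy data: the label is 1 whenever either input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])

w, b = np.zeros(2), 0.0
for epoch in range(10):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0   # step activation function
        error = target - pred
        w += 0.1 * error * xi               # update after every example
        b += 0.1 * error
print(w, b)   # converges, since the data is linearly separable
```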

Direct download: the-perceptron.mp3
Category:general -- posted at: 8:00am PDT

DataRefuge is a public collaborative, grassroots effort around the United States in which scientists, researchers, computer scientists, librarians and other volunteers are working to download, save, and re-upload government data. The DataRefuge Project, which is led by the UPenn Program in Environmental Humanities and the Penn Libraries group at University of Pennsylvania, aims to foster resilience in an era of anthropogenic global climate change and raise awareness of how social and political events affect transparency.

 

Direct download: data-refuge.mp3
Category:general -- posted at: 8:00am PDT

If a CEO wants to know the state of their business, they ask their highest ranking executives. These executives, in turn, should know the state of the business through reports from their subordinates. This structure is roughly analogous to a process observed in deep learning, where each layer of the business reports up different types of observations, KPIs, and reports to be interpreted by the next layer of the business. In deep learning, this process can be thought of as automated feature engineering. DNNs built to recognize objects in images may learn structures that behave like edge detectors in the first hidden layer. Proceeding layers learn to compose more abstract features from lower level outputs. This episode explores that analogy in the context of automated feature engineering.

Linh Da and Kyle discuss a particular image in this episode. The image included below in the show notes is drawn from the work of Lee, Grosse, Ranganath, and Ng in their paper Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations.

 

Direct download: automated-feature-engineering.mp3
Category:general -- posted at: 8:00am PDT

In this episode, I speak with Raghu Ramakrishnan, CTO for Data at Microsoft.  We discuss services, tools, and developments in the big data sphere as well as the underlying needs that drove these innovations.

Direct download: big-data-tools-and-trends.mp3
Category:general -- posted at: 9:28am PDT

In this episode, we talk about a high-level description of deep learning.  Kyle presents a simple game (pictured below), which is more of a puzzle really, to try and give  Linh Da the basic concept.

 

 

Thanks to our sponsor for this week, the Data Science Association. Please check out their upcoming Dallas conference at dallasdatascience.eventbrite.com

Direct download: deep-learning-primer.mp3
Category:general -- posted at: 8:00am PDT

Versioning isn't just for source code. Being able to track changes to data is critical for answering questions about data provenance, quality, and reproducibility. Daniel Whitenack joins me this week to talk about these concepts and share his work on Pachyderm. Pachyderm is an open source containerized data lake.

During the show, Daniel mentioned the Gopher Data Science github repo as a great resource for any data scientists interested in the Go language. Although we didn't mention it, Daniel also did an interesting analysis on the 2016 world chess championship that complements our recent episode on chess well. You can find that post here

Supplemental music is Lee Rosevere's Let's Start at the Beginning.

 

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics

Periscope Data

 

 

 

Direct download: data-provenance-and-reproducibility-with-pachyderm.mp3
Category:general -- posted at: 9:09am PDT

Logistic Regression is a popular classification algorithm. In this episode, we discuss how it can be used to determine if an audio clip represents one of two given speakers. It assumes the log-odds of an output variable (isLinhda) is a linear combination of the available features, which are spectral bands in the discussion on this episode.
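A hedged sketch of the setup (random stand-in features of our own; the real project uses spectral bands extracted from audio clips):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in "spectral band" features for two speakers, 100 clips each.
linhda = rng.normal(loc=1.0, size=(100, 8))
kyle = rng.normal(loc=-1.0, size=(100, 8))
X = np.vstack([linhda, kyle])
y = np.array([1] * 100 + [0] * 100)   # isLinhda

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:2]))     # probability of each speaker per clip
```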

 

Keep an eye on the dataskeptic.com blog this week as we post more details about this project.

 

Thanks to our sponsor this week, the Data Science Association.  Please check out their upcoming conference in Dallas on Saturday, February 18th, 2017 via the link below.

 

dallasdatascience.eventbrite.com

 

Direct download: logistic-regression.mp3
Category:general -- posted at: 9:08am PDT

Prior work has shown that people's response to competition is in part predicted by their gender. Understanding why and when this occurs is important in areas such as labor market outcomes. A well structured study is challenging due to numerous confounding factors. Peter Backus and his colleagues have identified competitive chess as an ideal arena to study the topic. Find out why and what conclusions they reached.

Our discussion centers around Gender, Competition and Performance: Evidence from Real Tournaments from Backus, Cubel, Guid, Sanchez-Pages, and Mañas. A summary of their paper can also be found here.

 

Direct download: studying-competition-and-gender-through-chess.mp3
Category:general -- posted at: 8:00am PDT

Deep learning can be prone to overfit a given problem. This is especially frustrating given how much time and computational resources are often required to converge. One technique for fighting overfitting is to use dropout. Dropout is the method of randomly selecting some neurons in one's network to set to zero during iterations of learning. The core idea is that each particular input in a given layer is not always available and therefore not a signal that can be relied on too heavily.
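A sketch of the mechanics (ours, in the common "inverted dropout" form): zero a random subset of activations during training and rescale the survivors so the expected value is unchanged.

```python
import numpy as np

def dropout(activations, p_drop, rng):
    mask = rng.random(activations.shape) >= p_drop   # keep with probability 1 - p_drop
    return activations * mask / (1 - p_drop)         # rescale the surviving units

rng = np.random.default_rng(0)
h = np.ones(10)
print(dropout(h, p_drop=0.5, rng=rng))   # roughly half the units are zeroed
```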

 

Direct download: dropout.mp3
Category:general -- posted at: 7:56am PDT

In this episode I speak with Clarence Wardell and Kelly Jin about their mutual service as part of the White House's Police Data Initiative and Data Driven Justice Initiative respectively.

The Police Data Initiative was organized to use open data to increase transparency and community trust as well as to help police agencies use data for internal accountability. The PDI emerged from recommendations made by the Task Force on 21st Century Policing.

The Data Driven Justice Initiative was organized to help city, county, and state governments use data-driven strategies to help low-level offenders with mental illness get directed to the right services rather than into the criminal justice system.

Direct download: police-data-initiative-and-data-driven-justice-initiative.mp3
Category:general -- posted at: 8:10am PDT

We close out 2016 with a discussion of a basic interview question which might get asked when applying for a data science job. Specifically, how a library might build a model to predict if a book will be returned late or not.

 
Direct download: the-library-problem.mp3
Category:general -- posted at: 8:07am PDT

Today's episode is a reading of Isaac Asimov's Franchise.  As mentioned on the show, this is just a work of fiction to be enjoyed and not in any way some obfuscated political statement.  Enjoy, and happy holidays!

Direct download: 2016-holiday-special.mp3
Category:general -- posted at: 8:00am PDT

Classically, entropy is a measure of disorder in a system. From a statistical perspective, it is more useful to say it's a measure of the unpredictability of the system. In this episode we discuss how information reduces the entropy in deciding whether or not Yoshi the parrot will like a new chew toy. A few other everyday examples help us examine why entropy is a nice metric for constructing a decision tree.
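The metric itself is a one-liner (a sketch): the more unpredictable the outcome, the higher the entropy.

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a maximally unpredictable coin flip
print(entropy([0.9, 0.1]))   # ~0.47 bits: mostly predictable
print(entropy([1.0]))        # 0.0: no uncertainty about Yoshi's verdict
```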

Direct download: entropy.mp3
Category:general -- posted at: 12:23am PDT

Cloud services are now ubiquitous in data science and more broadly in technology as well. This week, I speak to Mark Souza, Tobias Ternström, and Corey Sanders about various aspects of data at scale. We discuss the embedding of R into SQL Server, SQL Server on Linux, open source, and a few other cloud topics.

Direct download: ms-connect-conference.mp3
Category:general -- posted at: 8:24am PDT

Today's episode is all about Causal Impact, a technique for estimating the impact of a particular event on a time series. We talk to William Martin about his research into the impact releases have on apps, and we also chat with Karen Blakemore about a project she helped us build to explore the impact of a Saturday Night Live appearance on a musician's career.

Martin's work culminated in a paper Causal Impact for App Store Analysis. A shorter summary version can be found here. His company helping app developers do this sort of analysis can be found at crestweb.cs.ucl.ac.uk/appredict/.

Direct download: causal-impact.mp3
Category:general -- posted at: 7:56am PDT

The Bootstrap is a method of resampling a dataset (with replacement) to estimate the variability of a statistic and produce useful metrics on the result. The bootstrap is a useful statistical technique and is leveraged in Bagging (bootstrap aggregation) algorithms such as Random Forest. We discuss this technique related to polling and surveys.
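A sketch of the polling use case (made-up numbers of ours): resample the survey with replacement many times to get an interval around the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
poll = rng.binomial(1, 0.52, size=500)   # stand-in survey: 1 = supports candidate

means = [rng.choice(poll, size=poll.size, replace=True).mean()
         for _ in range(10_000)]          # bootstrap resamples
lo, hi = np.percentile(means, [2.5, 97.5])
print(poll.mean(), (lo, hi))              # point estimate and a 95% interval
```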

Direct download: the-bootstrap.mp3
Category:general -- posted at: 8:00am PDT

The Gini Coefficient (as it relates to decision trees) is one approach to determining the optimal decision for splitting your dataset as part of a decision tree. To pick the right feature to split on, it considers the frequency of the values of that feature and how well the values correlate with the specific outcomes you are trying to predict.
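A sketch of the computation (ours, with generic class labels): candidate splits are scored by the weighted Gini impurity of the groups they produce, and lower is better.

```python
def gini(labels):
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_score(left, right):
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# A split that isolates one class scores lower (better) than a mixed split.
print(split_score(["a", "a", "a"], ["b", "b", "a"]))   # ~0.22
print(split_score(["a", "b", "a"], ["b", "a", "b"]))   # ~0.44
```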

Direct download: gini-index.mp3
Category:general -- posted at: 8:00am PDT

Financial analysis techniques for studying numeric, well structured data are very mature. While using unstructured data in finance is not necessarily a new idea, the area is still very greenfield. On this episode, Delia Rusu shares her thoughts on the potential of unstructured data and discusses her work analyzing Wikipedia to help inform financial decisions.

Delia's talk at PyData Berlin can be watched on Youtube (Estimating stock price correlations using Wikipedia). The slides can be found here and all related code is available on github.

Direct download: unstructured-data-for-finance.mp3
Category:general -- posted at: 8:00am PDT

AdaBoost is a canonical example of the class of AnyBoost algorithms that create ensembles of weak learners. We discuss how a complex problem like predicting restaurant failure (which is surely caused by different problems in different situations) might benefit from this technique.

Direct download: adaboost.mp3
Category:general -- posted at: 8:00am PDT

Platform as a service is a growing trend in data science where services like fraud analysis and face detection can be provided via APIs. Such services turn the actual model into a black box to the consumer. But can the model be reverse engineered?

Florian Tramèr shares his work in this episode showing that it can. The paper Stealing Machine Learning Models via Prediction APIs is definitely worth your time to read if you enjoy this episode. Related source code can be found in https://github.com/ftramer/Steal-ML.

Direct download: stealing-models-from-the-cloud.mp3
Category:general -- posted at: 7:54am PDT

For machine learning models created with the random forest algorithm, there is no obvious diagnostic to inform you which features are more important in the output of the model. Some straightforward but useful techniques exist revolving around removing a feature and measuring the decrease in accuracy or Gini values in the leaves. We broadly discuss these techniques in this episode.
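A sketch of the remove-and-measure idea (ours, using scikit-learn; shuffling a column serves the same purpose as removing it, without retraining):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # feature 0 carries most signal

model = RandomForestClassifier(random_state=0).fit(X, y)
base = model.score(X, y)
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])         # destroy feature j's signal
    print(f"feature {j}: accuracy drop = {base - model.score(Xp, y):.3f}")
```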

Direct download: feature-importance.mp3
Category:general -- posted at: 9:24am PDT

As cities provide bike sharing services, they must also plan for how to redistribute bicycles as they inevitably build up at more popular destination stations. In this episode, Hui Xiong talks about the solution he and his colleagues developed to rebalance bike sharing systems.

Direct download: nyc-bike-share-rebalancing.mp3
Category:general -- posted at: 8:00am PDT

Random forest is a popular ensemble learning algorithm which leverages bagging both for sampling and feature selection. In this episode we make an analogy to the process of running a bookstore.

Direct download: random-forest.mp3
Category:general -- posted at: 8:00am PDT

Jo Hardin joins us this week to discuss the ASA's Election Prediction Contest. This is a competition aimed at forecasting the results of the upcoming US presidential election. More details are available in Jo's blog post found here.

You can find some useful R code for getting started automatically gathering data from 538 via Jo's github and official contest details are available here. During the interview we also mention Daily Kos and 538.

Direct download: asa-election-prediction-with-jo-hardin.mp3
Category:general -- posted at: 8:00am PDT

The F1 score is a model diagnostic that combines precision and recall to provide a singular evaluation for model comparison.  In this episode we discuss how it applies to selecting an interior designer.
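The computation itself (a sketch): F1 is the harmonic mean of precision and recall, so a designer who proposes many pieces you reject scores worse than one who is usually right.

```python
def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1(tp=8, fp=12, fn=2))   # ~0.53: many false positives hurt
print(f1(tp=8, fp=2, fn=2))    # 0.80: balanced precision and recall
```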

Direct download: f1-score.mp3
Category:general -- posted at: 9:49am PDT

Urban congestion affects every person living in a city of any reasonable size. Lewis Lehe joins us in this episode to share his work on downtown congestion pricing. We explore topics of how different pricing mechanisms affect congestion as well as how data visualization can inform choices.

You can find examples of Lewis's work at setosa.io. His paper which we discussed during the interview is Distance-dependent congestion pricing for downtown zones.

On this episode, we discuss State of California data which can be found at pems.dot.ca.gov.

Direct download: urban-congestion.mp3
Category:general -- posted at: 8:00am PDT

Heteroskedasticity is a term used to describe a relationship between two variables which has unequal variance over the range.  For example, the variance in the length of a cat's tail almost certainly changes (grows) with age.  On the other hand, the average amount of chewing gum a person consumes probably has a consistent variance over a wide range of human heights.

We also discuss some issues with a visualization that circulated on Twitter, embedded below.

[Image: claimed relationship between income and tickets issued]

Direct download: heteroskedasticity.mp3
Category:general -- posted at: 8:00am PDT

Our guest today is Michael Cuthbert, an associate professor of music at MIT and principal investigator of the Music21 project, which we focus our discussion on today.

Music21 is a Python library that makes analysis of music accessible and fun. It supports integration with popular formats such as MIDI, MusicXML, LilyPond, and others. It's also well integrated with the ELVIS Project, enabling users to import large volumes of music for easy analysis. Music21 is a great platform for musicologists and machine learning researchers alike to explore patterns and structure in music.
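For a taste, here's a short session using music21's bundled corpus (assuming a recent version of the library; the specific chorale is just an example):

    from collections import Counter
    from music21 import corpus

    # Parse a Bach chorale shipped with music21's corpus.
    chorale = corpus.parse('bach/bwv66.6')
    print(chorale.analyze('key'))   # estimated key of the piece

    # Count pitch classes across the piece: a simple starting point for analysis.
    counts = Counter(p.name for n in chorale.flatten().notes for p in n.pitches)
    print(counts.most_common(5))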

Direct download: music21.mp3
Category:general -- posted at: 8:00am PDT

Paxos is a protocol for arriving at consensus in a distributed computing system that accounts for the unreliability of its nodes. We discuss how this might be used in the real world in the event of a massive disaster.
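To give a flavor of the protocol, here is a heavily simplified single-decree acceptor in Python. Real Paxos needs networking, durable storage, and a full proposer; none of that is modeled here, and the message names are only suggestive.

    class Acceptor:
        def __init__(self):
            self.promised = -1      # highest proposal number promised
            self.accepted_n = -1    # proposal number of the accepted value, if any
            self.accepted_v = None

        def prepare(self, n):
            # Phase 1: promise to ignore proposals numbered below n.
            if n > self.promised:
                self.promised = n
                return ("promise", self.accepted_n, self.accepted_v)
            return ("reject",)

        def accept(self, n, v):
            # Phase 2: accept value v unless a higher number was promised.
            if n >= self.promised:
                self.promised = n
                self.accepted_n, self.accepted_v = n, v
                return ("accepted", n, v)
            return ("reject",)

    # A proposer needs promises from a majority before sending accept requests;
    # if any promise carries a previously accepted value, it must propose that value.
    acceptors = [Acceptor() for _ in range(3)]
    replies = [a.prepare(1) for a in acceptors]
    if sum(r[0] == "promise" for r in replies) > len(acceptors) // 2:
        print([a.accept(1, "rally point: city hall") for a in acceptors])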

Direct download: paxos.mp3
Category:general -- posted at: 8:00am PDT

Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to cover only simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a simple surrogate model trained on perturbed neighbors of that example is likely to reveal which features in the local input space drive the model's conclusion.

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-Agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
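A minimal usage sketch with the open-source lime package (the model and dataset are placeholders, chosen only so the example runs end to end):

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data, feature_names=data.feature_names, class_names=data.target_names)
    # LIME perturbs the instance's neighborhood and fits a simple local model
    # whose weights indicate which features drove this particular prediction.
    exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=2)
    print(exp.as_list())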

Direct download: trust-in-ml.mp3
Category:general -- posted at: 8:00am PDT

Analysis of variance is a method used to evaluate differences between two or more groups. It works by breaking down the total variance of the system into the between-group variance and the within-group variance. We discuss this method in the context of wait times getting coffee at Starbucks.
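As a sketch, here's a one-way ANOVA on made-up wait times using scipy:

    from scipy import stats

    # Hypothetical wait times (minutes) at three Starbucks locations.
    downtown = [4.1, 5.3, 6.0, 4.8, 5.5]
    suburb   = [3.2, 2.9, 4.0, 3.5, 3.1]
    airport  = [7.8, 8.4, 6.9, 7.2, 8.0]

    # One-way ANOVA partitions total variance into between-group and
    # within-group components; a large F (small p) suggests the means differ.
    f_stat, p_value = stats.f_oneway(downtown, suburb, airport)
    print(f_stat, p_value)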

Direct download: anova.mp3
Category:general -- posted at: 8:00am PDT

When humans describe images, they have a reporting bias, in that they report only what they consider important. Thus, in addition to considering whether something is present in an image, one should consider whether it is also relevant to the image before labeling it.

Ishan Misra joins us this week to discuss his recent paper Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels which explores a novel architecture for learning to distinguish presence and relevance. This work enables web-scale datasets to be useful for training, not just well-groomed, hand-labeled corpora.

Direct download: ishan.mp3
Category:general -- posted at: 8:00am PDT

Survival analysis techniques are useful for studying the longevity of groups of elements or individuals, taking into account time considerations and right censorship. This episode explores how survival analysis can describe marriages, in particular using the semi-parametric Cox proportional hazards model.

This episode discusses some good summaries of survey data on marriage and divorce which can be found here.

The Python lifelines library is a good place to get started for people who want to do some hands-on work.
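A starter sketch with lifelines; the durations, censoring flags, and covariate below are invented purely for illustration.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    df = pd.DataFrame({
        "years":    [5, 12, 30, 8, 22, 3, 15],   # marriage duration observed so far
        "divorced": [1,  0,  0, 1,  1, 1,  0],   # 0 = still married (right-censored)
        "age_at_marriage": [22, 28, 25, 20, 31, 19, 27],
    })

    # Non-parametric Kaplan-Meier curve as a first look at the data.
    kmf = KaplanMeierFitter().fit(df["years"], event_observed=df["divorced"])
    print(kmf.median_survival_time_)

    # The Cox proportional hazards model from the episode, with one covariate.
    cph = CoxPHFitter().fit(df, duration_col="years", event_col="divorced")
    cph.print_summary()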

Direct download: survival-analysis.mp3
Category:general -- posted at: 8:00am PDT

This week features an insightful discussion with Claudia Perlich about some situations in machine learning where models can be built, perhaps by well-intentioned practitioners, to appear highly predictive despite being trained on random data. Our discussion covers some novel observations about ROC and AUC, as well as an informative discussion of leakage.

Much of our discussion is inspired by two excellent papers Claudia authored: Leakage in Data Mining: Formulation, Detection, and Avoidance and On Cross Validation and Stacking: Building Seemingly Predictive Models on Random Data. Both are highly recommended reading!

Direct download: Predictive_Models_on_Random_Data.mp3
Category:general -- posted at: 8:00am PDT

An ROC curve is a plot that shows the trade-off between true positives and false positives of a binary classifier under different thresholds. The area under the curve (AUC) is useful in determining how discriminating a model is. Together, ROC and AUC are very useful diagnostics for understanding the power of one's model and how to tune it.
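Here's a minimal computation with scikit-learn; the labels and scores are made up.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7])

    # Each threshold yields one (false positive rate, true positive rate) point.
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    print(list(zip(fpr, tpr)))
    print("AUC:", roc_auc_score(y_true, y_score))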

Direct download: roc-auc.mp3
Category:general -- posted at: 8:00am PDT

I'm joined by Chris Stucchio this week to discuss how statistical practitioners, whether deliberate or merely uninformed, can derive spurious and arbitrary results via multiple comparisons. We discuss p-hacking and a variety of other important lessons and tips for proper analysis.

You can enjoy Chris's writing on his blog at chrisstucchio.com, and you may also like his recent talk Multiple Comparisons: Make Your Boss Happy with False Positives, Guaranteed.
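A small simulation shows the problem: test enough hypotheses where nothing is going on and some will look significant anyway.

    import numpy as np
    from scipy import stats

    # Both groups are drawn from the same distribution, so the null is true by
    # construction and every "significant" result is a false positive.
    rng = np.random.default_rng(0)
    pvals = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(100)]
    print(sum(p < 0.05 for p in pvals))               # expect ~5 from pure noise

    # A Bonferroni correction divides the threshold by the number of comparisons.
    print(sum(p < 0.05 / len(pvals) for p in pvals))  # now ~0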

Direct download: multiple-comparisons.mp3
Category:general -- posted at: 8:00am PDT

If you'd like to make a good prediction, your best bet is to invent a time machine, visit the future, observe the value, and return to the past. For those without access to time travel technology, we need to avoid including information about the future in our training data when building machine learning models. More generally, any feature whose value would not actually be available at the moment you'd want the model to make a prediction can introduce leakage into your model.
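A toy example of the effect: below, a hypothetical collections_called flag is only set after a default occurs, so including it makes the model look deceptively accurate. All names and data here are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 1000
    income = rng.normal(50, 15, n)
    defaulted = (rng.random(n) < 1 / (1 + np.exp((income - 45) / 5))).astype(int)
    # 'collections_called' happens only after a default -- a fact from the future.
    collections_called = defaulted * (rng.random(n) < 0.9)

    clean = np.column_stack([income])
    leaky = np.column_stack([income, collections_called])
    for name, X in [("clean", clean), ("leaky", leaky)]:
        score = cross_val_score(LogisticRegression(max_iter=1000), X, defaulted, cv=5)
        print(name, score.mean())
    # The leaky model looks far more accurate, but only because it peeks at the future.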

Direct download: leakage.mp3
Category:general -- posted at: 8:00am PDT

Kristian Lum (@KLdivergence) joins me this week to discuss her work at @hrdag on predictive policing. We also discuss Multiple Systems Estimation, a technique for estimating the size of a population from several separate, incomplete sources of observation.
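In its simplest two-list form, this is the classic Lincoln-Petersen capture-recapture estimator. Real MSE work, such as HRDAG's, uses more lists and models dependence between them; the counts below are hypothetical.

    # Estimate a hidden population size from two independent incomplete lists.
    def lincoln_petersen(n1, n2, overlap):
        # n1, n2: sizes of the two lists; overlap: records appearing on both.
        return n1 * n2 / overlap

    # Hypothetical: two NGOs independently document 400 and 300 incidents,
    # with 60 incidents appearing in both datasets.
    print(lincoln_petersen(400, 300, 60))  # estimated total: 2000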

If you enjoy this discussion, check out the panel Tyranny of the Algorithm? Predictive Analytics & Human Rights which was mentioned in the episode.

Direct download: predictive-policing.mp3
Category:general -- posted at: 8:00am PDT

A distributed computing system cannot simultaneously guarantee consistency, availability, and partition tolerance. System architects need to think carefully about how to balance the needs of their application across these competing objectives. Linh Da and Kyle discuss the CAP Theorem using the analogy of a phone tree for alerting people about a school snow day.

Direct download: cap-theorem.mp3
Category:general -- posted at: 8:00am PDT

A startup is claiming that they can detect terrorists purely through facial recognition. In this solo episode, Kyle explores the plausibility of these claims.

Direct download: detecting-terrorists.mp3
Category:general -- posted at: 8:00am PDT

Goodhart's law states that "When a measure becomes a target, it ceases to be a good measure". In this mini-episode we discuss how this affects SEO, call centers, and Scrum.

Direct download: goodharts-law.mp3
Category:general -- posted at: 8:00am PDT

I'm joined this week by Jon Morra, director of data science at eHarmony, to discuss a variety of ways in which machine learning and data science are being applied to help connect people for successful long-term relationships.

Interesting open source projects mentioned in the interview include Face-parts, a web service for detecting faces and extracting a robust set of fiducial markers (features) from the image, and Aloha, a Scala based machine learning library. You can learn more about these and other interesting projects at the eHarmony github page.

In the wrap up, Jon mentioned the LA Machine Learning meetup, which he runs. It's a great resource for LA residents, separate from and complementary to the datascience.la groups, so consider signing up for all of the above; I hope to see you there in the future.

Direct download: data-science-at-eharmony.mp3
Category:general -- posted at: 8:00am PDT