Data Skeptic

Max-pooling is an operation used in neural networks which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also help prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder for the network to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling.
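As a quick illustration (not from the episode), here is a minimal NumPy sketch of 2x2 max-pooling; the function `max_pool_2d` and the toy input are made up for this example:

```python
import numpy as np

def max_pool_2d(feature_map, size=2, stride=2):
    """Downsample a 2D feature map by keeping the max of each window."""
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    pooled = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + size,
                                 j * stride:j * stride + size]
            pooled[i, j] = window.max()  # admit only the strongest activation
    return pooled

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 8, 3],
              [0, 1, 4, 9]], dtype=float)
print(max_pool_2d(x))  # [[6. 4.]
                       #  [7. 9.]]  -- a 4x4 input reduced to 2x2
```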

Direct download: max-pooling.mp3
Category:general -- posted at: 8:00am PDT

This episode is an interview with Tinghui Zhou.  In the recent paper "Unsupervised Learning of Depth and Ego-motion from Video", Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos.  We discuss details of this project and its applications.

Direct download: unsupervised-depth-perception.mp3
Category:general -- posted at: 8:00am PDT

CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel.  In image recognition, this kernel is repeated over the entire image.  In this way, CNNs may achieve the property of translational invariance - once trained to recognize something, changing its position in an image should not disrupt the CNN's ability to recognize it.  In this episode, we discuss a few high-level details of this important architecture.
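To make the sliding-kernel idea concrete, here is a minimal NumPy sketch (not from the episode) of a kernel being reused at every position of an image; the edge-detecting kernel and toy image are hypothetical:

```python
import numpy as np

def convolve_2d(image, kernel):
    """Slide (cross-correlate) a kernel over an image. The same weights
    are reused at every position, so a learned pattern is detected no
    matter where it appears."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 5x5 "image" with a vertical edge, and a vertical-edge kernel.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
print(convolve_2d(image, kernel))  # strongest responses sit along the edge
```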

Direct download: cnns.mp3
Category:general -- posted at: 8:00am PDT

Despite the success of GANs in image generation, one of their major drawbacks is the problem of 'mode collapse,' in which the generator learns to produce samples with extremely low variety.

To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first tweaks the generator's objective function with a diversity-enforcing term that assesses similarities between the samples produced by different generators. The second modifies the discriminator's objective function, pushing generations corresponding to different generators towards different identifiable modes.
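As a loose illustration of the first idea only, here is a hypothetical diversity-enforcing penalty across multiple generators, written in PyTorch; this is not the authors' exact MAD-GAN formulation, and the function name `diversity_penalty` is made up:

```python
import torch
import torch.nn.functional as F

def diversity_penalty(samples_per_generator):
    """samples_per_generator: list of (batch, dim) tensors, one batch per
    generator. Returns a scalar that grows when the average outputs of
    different generators look alike; adding it to the generators' loss
    encourages them to spread out over different modes."""
    means = [s.mean(dim=0) for s in samples_per_generator]
    penalty = torch.tensor(0.0)
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            penalty = penalty + F.cosine_similarity(means[i], means[j], dim=0)
    return penalty

# Two hypothetical generators' sample batches:
g1, g2 = torch.randn(32, 4), torch.randn(32, 4)
print(diversity_penalty([g1, g2]))
```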

Direct download: mutli-agent-diverse-generative-adversarial-networks.mp3
Category:general -- posted at: 8:00am PDT

GANs are an unsupervised learning method involving two neural networks iteratively competing. The discriminator is a typical learning system. It attempts to develop the ability to recognize members of a certain class, such as all photos which have birds in them. The generator attempts to create false examples which the discriminator incorrectly classifies. In successive training rounds, the networks examine each other and play a minimax game of trying to harm the performance of the other.
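For intuition, here is that minimax game as a minimal PyTorch training loop on toy one-dimensional data; this is an illustrative sketch, not any particular published setup, and the network sizes and learning rates are arbitrary:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0     # "real" data drawn from N(3, 0.5)
    noise = torch.randn(64, 8)

    # Discriminator round: label real samples 1, generated samples 0.
    fake = G(noise).detach()                  # don't backprop into G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator round: try to make the discriminator call its fakes real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).flatten())  # samples should drift toward the real mean
```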

In addition to being a useful way of training networks in the absence of a large body of labeled data, there are additional benefits. The discriminator may end up learning more about edge cases than it otherwise would given only typical examples. Also, the generator's false images can be novel and interesting on their own.

The concept was first introduced in the paper Generative Adversarial Networks.

Direct download: generative-adversarial-networks.mp3
Category:general -- posted at: 8:00am PDT

Recently, we've seen opinion polls come under some skepticism.  But is that skepticism truly justified?  The recent Brexit referendum and US 2016 Presidential Election are examples where some claim the polls "got it wrong".  This episode explores this idea.

Direct download: polling.mp3
Category:general -- posted at: 8:00am PDT

No reliable, complete database cataloging home sales data at a transaction level is available for the average person to access. For a data scientist interested in studying this data, our hands are completely tied. Opportunities like testing sociological theories, exploring economic impacts, studying market forces, or simply researching the value of an investment when buying a home are all blocked by the lack of easy access to this dataset. OpenHouse seeks to correct that by centralizing and standardizing all publicly available home sales transactional data. In this episode, we discuss the achievements of OpenHouse to date, and what plans exist for the future.

Check out the OpenHouse gallery.

I also encourage everyone to check out the project Zareen mentioned, her Harry Potter word2vec webapp, as well as Joy's project doing data visualization on Jawbone data.

Guests

Thanks again to @iamzareenf, @blueplastic, and @joytafty for coming on the show. Thanks to the numerous other volunteers who have helped with the project as well!

Announcements and details

Sponsor

Thanks to our sponsor for this episode, Periscope Data. The blog post demoing their maps option is on our blog, titled Periscope Data Maps.

Periscope Data

To start a free trial of their dashboarding tool, visit http://periscopedata.com/skeptics

Kyle recently did a YouTube video exploring the Data Skeptic podcast download numbers using Periscope Data. Check it out at https://youtu.be/aglpJrMp0M4.

Supplemental music is Lee Rosevere's Let's Start at the Beginning.


Direct download: openhouse.mp3
Category:general -- posted at: 8:30am PDT

There's more than one type of computer processor. The central processing unit (CPU) is typically what one means when they say "processor". GPUs were introduced to be highly optimized for doing floating point computations in parallel. These types of operations were very useful for high-end video games, but as it turns out, those same processors are extremely useful for machine learning. In this mini-episode we discuss why.
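As a rough illustration (assuming PyTorch and, optionally, a CUDA-capable GPU), a large matrix multiply consists of many independent multiply-adds that a GPU can execute in parallel:

```python
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
c = a @ b                          # runs on the CPU
print(f"CPU matmul: {time.time() - t0:.3f}s")

if torch.cuda.is_available():      # only if a CUDA GPU is present
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()       # make sure the transfer has finished
    t0 = time.time()
    c_gpu = a_gpu @ b_gpu          # the same math, run massively in parallel
    torch.cuda.synchronize()       # wait for the GPU kernel to complete
    print(f"GPU matmul: {time.time() - t0:.3f}s")
```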

Direct download: gpu-cpu.mp3
Category:general -- posted at: 8:00am PDT

Backpropagation is a common algorithm for training a neural network.  It works by computing the gradient of the overall error with respect to each weight, and using stochastic gradient descent to iteratively fine-tune the weights of the network.  In this episode, we compare this concept to finding a location on a map, marble maze games, and golf.
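As an illustration (not from the episode), here is a tiny two-layer network trained by backpropagation in NumPy; the data, layer sizes, and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = X[:, :1] * 3.0 - X[:, 1:] * 2.0            # target: a simple linear map

W1 = rng.normal(size=(2, 8)) * 0.5             # hidden-layer weights
W2 = rng.normal(size=(8, 1)) * 0.5             # output-layer weights
lr = 0.05

for step in range(500):
    # Forward pass: compute the prediction and the error.
    h = np.tanh(X @ W1)
    pred = h @ W2
    err = pred - y
    loss = (err ** 2).mean()

    # Backward pass: the chain rule, applied layer by layer.
    d_pred = 2 * err / len(X)
    d_W2 = h.T @ d_pred
    d_h = d_pred @ W2.T
    d_W1 = X.T @ (d_h * (1 - h ** 2))           # tanh'(a) = 1 - tanh(a)^2

    # Gradient-descent step: nudge each weight downhill.
    W2 -= lr * d_W2
    W1 -= lr * d_W1

print(loss)  # far smaller than the starting loss
```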

Direct download: backpropagation.mp3
Category:general -- posted at: 8:00am PDT


In this week's episode of Data Skeptic, host Kyle Polich talks with guest Maura Church, Patreon's data science manager. Patreon is a fast-growing crowdfunding platform that allows artists and creators of all kinds to build their own subscription content service. The platform allows fans to become patrons of their favorite artists, an idea similar to Renaissance times, when musicians would rely on benefactors to become their patrons so they could make more art. At Patreon, Maura's data science team strives to provide creators with insight, information, and tools, so that creators can focus on what they do best: making art.

On the show, Maura talks about some of her projects with the data science team at Patreon. Topics discussed during the episode include: optical music recognition (OMR) to translate musical scores to electronic format, network analysis to understand the connection between creators and patrons, growth forecasting and modeling in a new market, and churn modeling to determine predictors of long-term support.

A more detailed explanation of Patreon's A/B testing framework can be found here.

Other useful links to topics mentioned during the show:

OMR research

Patreon blog

Patreon HQ blog

Amanda Palmer

Fran Meneses

Direct download: data-science-at-patreon.mp3
Category:general -- posted at: 8:00am PDT