Data Skeptic (general)

Deepjazz is a project from Ji-Sung Kim, a computer science student at Princeton University. Built with Theano, Keras, music21, and Evan Chow's project jazzml, it creates original jazz compositions using recurrent neural networks trained on Pat Metheny's "And Then I Knew". You can hear some of deepjazz's original compositions on SoundCloud.

Direct download: deepjazz.mp3
Category:general -- posted at: 8:00am PDT

When working with time series data, there are a number of important diagnostics one should consider to help understand more about the data. The auto-correlative function (ACF), plotted as a correlogram, helps explain how a given observation relates to recent preceding observations. A very random process (like lottery numbers) would show very low values, while temperature (our topic in this episode) does correlate highly with recent days.

Below is a time series of the weather in Chapel Hill, NC every morning over a few years. You can clearly see an annual cyclic pattern, which should be no surprise to anyone. Yet, you can also see a fair amount of day-to-day variance. Even after de-trending the annual cycle, yesterday's temperature would not be enough to perfectly predict today's weather.


Below is a correlogram of the ACF (auto-correlative function). For very low values of lag (comparing the most recent temperature measurement to the values of previous days), we can see a quick drop-off. This tells us that weather correlates highly, though decreasingly so, with recent days.
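The idea is easy to sketch in a few lines of Python. The series below are synthetic stand-ins (not the Chapel Hill measurements): the sample autocorrelation of pure noise dies off immediately after lag 0, while a seasonal, temperature-like series stays high at short lags.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of x for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                 # work with deviations from the mean
    n = len(x)
    denom = np.dot(x, x)             # lag-0 variance term; acf(0) is always 1
    return np.array([np.dot(x[:n - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)    # memoryless process, like lottery numbers
t = np.arange(1000)
temps = np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(1000)  # annual cycle + noise

print(acf(noise, 3))   # near zero beyond lag 0
print(acf(temps, 3))   # high at short lags
```

Plotting these values against lag gives exactly the correlogram discussed here.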

Interestingly, we also see it correlate negatively with days far in the past. This is because of the "opposite" seasons we experience about half a year apart. The planet's annual cycle is also visible in the weaker correlations with previous years.

This highlights one limit of the ACF: it compares the current weather to every previous day and reports those correlations independently. If we know today's and yesterday's weather, the weather from two days ago might not add much information.


For that reason, we also want to review the PACF (partial auto-correlative function), which, at each lag, removes the correlation already explained by shorter lags, so that we get an estimate of what each of those days actually contributes to the most recent observation. In the plots below of the same data, we see all the seasonal and annual correlations disappear. We expect this because most of the information about how the weather depends on the past is already contained in the most recent few days.


The boundaries shown in the above plots represent a measure of statistical significance. Any points outside this range are considered statistically significant; those within it are not.
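As a rough sketch of how a PACF can be computed, one common approach fits successively larger autoregressions and keeps the coefficient on the longest lag. The AR(1) series below is hypothetical (not the weather data), and the ±1.96/√n band is the usual significance boundary drawn on these plots.

```python
import numpy as np

def pacf(x, max_lag):
    """Lag-k partial autocorrelation = coefficient on lag k in an AR(k) fit."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    out = [1.0]
    for k in range(1, max_lag + 1):
        y = x[k:]                                          # targets: x_t for t >= k
        lags = np.column_stack([x[k - j:n - j]             # columns: x_{t-1}..x_{t-k}
                                for j in range(1, k + 1)])
        coef, *_ = np.linalg.lstsq(lags, y, rcond=None)    # ordinary least squares
        out.append(coef[-1])                               # keep the lag-k coefficient
    return np.array(out)

# Hypothetical AR(1) series: each value is 0.8 times the previous plus noise,
# loosely mimicking how one day's temperature depends mainly on the day before.
rng = np.random.default_rng(1)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.8 * x[t - 1] + e[t]

band = 1.96 / np.sqrt(len(x))   # significance boundary drawn on the correlogram
p = pacf(x, 5)
print(p)                        # large at lag 1; later lags near zero
```

For a series like this, the ACF decays gradually (roughly 0.8^k) while the PACF cuts off after lag 1, which is exactly the "most recent few days carry the information" effect described above.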

As many listeners know, Kyle and Linh Da are looking to buy a house. In fact, they've made an offer on a house in the zip code 90008. Thanks to the Trulia API, we can get a time series of the median listing price of homes in that zip code and see if it gives us any insight into the viability of this investment's future!

The plot below shows the time series of the median listing price (note, that's not the same as the sale price) on a daily basis over the past few years.


Let's first take a look at its ACF below. For price, we see (no surprise) that recent listing prices are pretty good predictors of current listing prices. Unless some catastrophe or major event (like the discovery of oil or a large gold vein) changed things overnight, short-term home prices should be relatively stable, and therefore highly auto-correlative.

 
As we did previously, we now want to look at the PACF (below), which shows us that the two most recent days carry the most useful information. Although it's not surprising, I wondered whether we might find some interesting effects related to houses being listed on weekdays vs. weekends, or at specific times of the month. However, it seems that when dealing with such large amounts of money, people have a bit more patience. Perhaps selling a car or a smaller item might show some periodic lags, but home prices do not.

Direct download: acf.mp3
Category:general -- posted at: 8:00am PDT

This week I spoke with Elham Shaabani and Paulo Shakarian (@PauloShakASU) about their recent paper Early Identification of Violent Criminal Gang Members (also available on arXiv). In this paper, they use social network analysis techniques and machine learning to provide early detection of known criminal offenders who are in a high-risk group for committing violent crimes in the future. Their techniques outperform existing techniques used by the police. Elham and Paulo are part of the Cyber-Socio Intelligent Systems (CySIS) Lab.

Direct download: predicting-violent-offenders.mp3
Category:general -- posted at: 8:00am PDT

A dinner party at Data Skeptic HQ helps teach the uses of fractional factorial design for studying 2-way interactions.
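As a minimal sketch of the idea (the factor names here are hypothetical stand-ins, not the episode's actual dinner-party variables): a 2^(3-1) fractional factorial studies three two-level factors in four runs instead of eight by aliasing the third factor with the A×B interaction, at the cost of confounding main effects with two-way interactions.

```python
import itertools

# Full 2^2 design in factors A and B, coded as low (-1) / high (+1).
ab = list(itertools.product([-1, 1], repeat=2))

# The half-fraction sets C = A*B (defining relation I = ABC),
# so three factors fit into the four runs of the two-factor design.
design = [(a, b, a * b) for a, b in ab]

for run in design:
    print(run)
```

Each column is balanced (equal numbers of high and low settings), which is what lets each factor's main effect be estimated from so few runs.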

Direct download: Fractional_factorial_design.mp3
Category:general -- posted at: 8:00am PDT

Cheng-tao Chu (@chengtao_chu) joins us this week to discuss his perspective on common mistakes and pitfalls that are made when doing machine learning. This episode is filled with sage advice for beginners and intermediate users of machine learning, and possibly some good reminders for experts as well. Our discussion parallels his recent blog post Machine Learning Done Wrong.

Cheng-tao Chu is an entrepreneur who has worked at many well-known Silicon Valley companies. His paper Map-Reduce for Machine Learning on Multicore is the basis for Apache Mahout. His most recent endeavor has just emerged from stealth, so please check out OneInterview.io.

Direct download: machine_learning_done_wrong.mp3
Category:general -- posted at: 8:00am PDT

Co-host Linh Da was in a biking accident after hitting a pothole. She sustained an injury that required stitches. This is the story of our quest to file a 311 complaint and track it through the City of Los Angeles's open data portal.

My guests this episode are Chelsea Ursaner (LA City Open Data Team), Ben Berkowitz (CEO and founder of SeeClickFix), and Russ Klettke (Editor of pothole.info).

Direct download: potholes.mp3
Category:general -- posted at: 8:24am PDT

Certain data mining algorithms (including k-means clustering and k-nearest neighbors) require a user-defined parameter k. A user of these algorithms must select this value, which raises the question: what is the "best" value of k to select for a given problem?

This mini-episode explores the appropriate value of k to use when trying to estimate the cost of a house in Los Angeles based on the closest sales in its area.
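One common way to pick k is the elbow method: compute a cost for several candidate values of k and choose the point where the cost stops dropping sharply. The episode frames this around k-nearest-neighbor price estimates; the sketch below shows the same idea in its usual k-means form, using hypothetical clustered price data (not the Los Angeles sales) and a minimal 1-D k-means.

```python
import numpy as np

# Hypothetical sale prices (in $1000s) drawn from three distinct price tiers.
rng = np.random.default_rng(2)
prices = np.concatenate([rng.normal(300, 10, 50),
                         rng.normal(500, 10, 50),
                         rng.normal(800, 10, 50)])

def kmeans_wcss(x, k, iters=50):
    """Simple 1-D k-means (quantile init); returns within-cluster sum of squares."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial centers out
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                            for j in range(k)])
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return float(((x - centers[labels]) ** 2).sum())

wcss = {k: kmeans_wcss(prices, k) for k in range(1, 7)}
for k, cost in wcss.items():
    print(k, round(cost))   # the sharp drop flattens after k=3: that bend is the "elbow"
```

With three true clusters, the cost plunges up to k=3 and then barely improves, so the elbow points at k=3.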

Direct download: the-elbow-method.mp3
Category:general -- posted at: 8:00am PDT

Today on Data Skeptic, Lachlan Gunn joins us to discuss his recent paper Too Good to be True. This paper highlights a somewhat counterintuitive fact: unanimity is unexpected in cases where perfect measurements cannot be taken. With large enough data, some amount of error is expected.

The "Too Good to be True" paper highlights three interesting examples which we discuss in the podcast. You can also watch a lecture from Lachlan on this topic via YouTube here.

Direct download: too_good_to_be_true.mp3
Category:general -- posted at: 8:00am PDT

How well does your model explain your data? R-squared is a useful statistic for answering this question. In this episode we explore how it applies to the problem of valuing a house. Aspects like the number of bedrooms go a long way in explaining why different houses have different prices. There's some amount of variance that can be explained by a model, and some amount that cannot be directly measured. R-squared is the ratio of the explained variance to the total variance. It's not a measure of accuracy; it's a measure of the power of one's model.
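The computation itself is short. This sketch uses hypothetical bedroom/price data (not figures from the episode) to fit a one-variable linear model and take the ratio of explained to total variance described above.

```python
import numpy as np

# Hypothetical data: price (in $1000s) driven mostly by bedroom count plus noise.
rng = np.random.default_rng(3)
bedrooms = rng.integers(1, 6, 200)
price = 150 + 100 * bedrooms + rng.normal(0, 40, 200)

# Fit price ~ bedrooms with ordinary least squares.
slope, intercept = np.polyfit(bedrooms, price, 1)
pred = slope * bedrooms + intercept

ss_res = ((price - pred) ** 2).sum()          # variance the model leaves unexplained
ss_tot = ((price - price.mean()) ** 2).sum()  # total variance in prices
r2 = 1 - ss_res / ss_tot                      # explained share of the variance
print(round(r2, 2))
```

Because bedrooms drive most of the (synthetic) price here, R-squared comes out high; shrink the bedroom effect or grow the noise and it falls, even though the fitting procedure is identical.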

Direct download: r-squared.mp3
Category:general -- posted at: 8:00am PDT

 
Direct download: think_again.mp3
Category:general -- posted at: 7:30am PDT