Data Skeptic (miniepisode)


Fri, 2 October 2015

The multi-armed bandit problem is named with reference to slot machines (one-armed bandits). Given the chance to play from a pool of slot machines, all with unknown payout frequencies, how can you maximize your reward? If you knew in advance which machine was best, you would play that machine exclusively. Any lesser strategy will, on average, earn a smaller payout, and the difference can be called the "regret". You can try each slot machine to learn about it, which we refer to as exploration. When you've spent enough time to be convinced you've identified the best machine, you can double down and exploit that knowledge. But how do you best balance exploration and exploitation to minimize the regret of your play? This mini-episode explores a few examples, including restaurant selection and A/B testing, to discuss the nature of this problem. In the end we touch briefly on Thompson sampling as a solution.

Direct download: multi-armed-bandit.mp3
Category: miniepisode -- posted at: 12:00am PST

Thu, 23 July 2015

This episode explores the k-nearest neighbors algorithm, a supervised, non-parametric method that can be used for both classification and regression. The basic concept is that it leverages some distance function on your dataset to find the $k$ closest other observations in the dataset and averages them to impute an unknown value or label an unlabelled data point.

Direct download: MINI_knn.mp3
Category: miniepisode -- posted at: 10:51pm PST

Thu, 9 July 2015

This mini-episode is a high-level explanation of the basic idea behind MapReduce, which is a fundamental concept in big data. The idea originates from a Google paper titled MapReduce: Simplified Data Processing on Large Clusters. This episode makes an analogy to tabulating paper voting ballots as a means of helping to explain how and why MapReduce is an important concept.
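The ballot-tabulation analogy can be sketched in miniature. The code below is a toy illustration of the map/shuffle/reduce pattern, not Google's implementation; the function and variable names (map_ballots, shuffle, reduce_counts, the precinct data) are invented for this example.

```python
from collections import defaultdict
from itertools import chain

def map_ballots(ballots):
    # Map: each "precinct" independently emits a (candidate, 1) pair per ballot.
    return [(candidate, 1) for candidate in ballots]

def shuffle(pairs):
    # Shuffle: group all emitted pairs by key (candidate).
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_counts(grouped):
    # Reduce: sum the per-candidate counts into final totals.
    return {candidate: sum(counts) for candidate, counts in grouped.items()}

precincts = [
    ["alice", "bob", "alice"],
    ["bob", "bob", "alice"],
]
pairs = chain.from_iterable(map_ballots(p) for p in precincts)
totals = reduce_counts(shuffle(pairs))
print(totals)  # {'alice': 3, 'bob': 3}
```

The point of the pattern is that the map step runs on each stack of ballots independently, so it parallelizes across machines; only the shuffle and reduce need to see data grouped by key.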
Direct download: MINI_Map_Reduce.mp3
Category: miniepisode -- posted at: 10:17pm PST

Fri, 26 June 2015

More features are not always better! With an increasing number of features to consider, machine learning algorithms suffer from the curse of dimensionality, as they face a wider and often sparser coverage of examples. This episode explores a real-life example of this as Kyle and Linh Da discuss their thoughts on purchasing a home. The curse of dimensionality was defined by Richard Bellman and applies in several slightly nuanced cases. This mini-episode discusses how it applies to machine learning. This episode does not, however, discuss a slightly different version of the curse of dimensionality which appears in decision-theoretic situations. Consider the game of chess. One must think ahead several moves in order to execute a successful strategy. However, thinking ahead another move requires considering every possible move of every piece one controls, and every possible response one's opponent may take. The space of possible future states of the board grows exponentially with the horizon one wants to look ahead to. This is present in the notably useful Bellman equation.

Direct download: MINI_The_Curse_of_Dimensionality.mp3
Category: miniepisode -- posted at: 12:01am PST

Fri, 12 June 2015

This mini-episode discusses Anscombe's Quartet, a series of four datasets which look clearly very different when plotted but share similar statistical properties. For example, each of the four datasets has the same mean and variance on both axes, as well as the same correlation coefficient and the same linear regression line. The episode adds some context by imagining each of these datasets as data about a sports team, and explains why it can be important to look beyond basic summary statistics when exploring your dataset.
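The shared statistics are easy to verify directly. A minimal sketch using two of Anscombe's four published datasets (the x and y values below are the canonical ones; the pearson helper is written from first principles rather than relying on any library):

```python
from statistics import mean, variance

# Two of Anscombe's four datasets: same x values, very different shapes.
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

for y in (y1, y2):
    # Each line prints roughly: mean 7.5, variance 4.13, correlation 0.816
    print(round(mean(y), 2), round(variance(y), 2), round(pearson(x, y), 3))
```

Despite the matching summaries, y1 is roughly linear with noise while y2 is a smooth curve, which is exactly why plotting the data matters.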
Direct download: MINI_Anscombes_Quartet.mp3
Category: miniepisode -- posted at: 1:00am PST

Fri, 29 May 2015

Linh Da and Kyle review a New York Times article titled How Your Hometown Affects Your Chances of Marriage. This article explores research about what correlates, by county, with the likelihood of being married by age 26. Kyle and Linh Da discuss some of the fine points of this research and the process of identifying factors for consideration.

Direct download: marriage-analysis.mp3
Category: miniepisode -- posted at: 12:07am PST

Thu, 14 May 2015

This week's episode discusses z-scores, also known as standard scores. This score describes the distance (in standard deviations) that an observation lies from the mean of the population. A closely related topic is the 68-95-99.7 rule, which tells us that (approximately) 68% of a normally distributed population lies within one standard deviation of the mean, 95% within two, and 99.7% within three. Kyle and Linh Da discuss z-scores in the context of human height. They further discuss how a z-score can also describe the likelihood that some statistical result is due to chance. Thus, if the significance of a finding can be said to be 3σ, it is roughly 99.7% likely not to be due to chance, or only about 0.3% likely to be due to chance.

Direct download: z-scores.mp3
Category: miniepisode -- posted at: 10:08pm PST

Fri, 24 April 2015

For our 50th episode we indulge a bit by cooking Linh Da's previously mentioned "healthy" cornbread. This leads to a discussion of the statistical topic of overdispersion, in which the variance of some distribution is larger than what one's underlying model will account for.
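Overdispersion can be demonstrated in a few lines. A minimal sketch, assuming a Poisson count model: when the rate is fixed, variance tracks the mean; when the rate itself varies between observations (here a gamma-Poisson mixture), the counts are overdispersed. The poisson sampler below is Knuth's classic algorithm, included because the standard library has no Poisson generator; the parameter choices (rate 5.0, gamma shape 2.0 and scale 2.5) are arbitrary illustrations.

```python
import math
import random
from statistics import mean, variance

random.seed(0)

def poisson(lam):
    # Knuth's algorithm for sampling a Poisson-distributed count.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Pure Poisson: variance is close to the mean (equidispersion).
pure = [poisson(5.0) for _ in range(10000)]

# Gamma-Poisson mixture: the rate itself differs per observation,
# so the counts show more variance than a single Poisson explains.
mixed = [poisson(random.gammavariate(2.0, 2.5)) for _ in range(10000)]

print(mean(pure), variance(pure))    # both near 5
print(mean(mixed), variance(mixed))  # mean near 5, variance much larger
```

The mixture has the same average count but a variance of roughly mean plus the variance of the rate, which is the signature a Poisson model fails to account for.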
Direct download: MINI_Cornbread_and_Overdispersion.mp3
Category: miniepisode -- posted at: 12:19am PST

Thu, 16 April 2015

This episode overviews some of the fundamental concepts of natural language processing, including stemming, n-grams, part-of-speech tagging, and the bag-of-words approach.

Direct download: nlp.mp3
Category: miniepisode -- posted at: 11:44pm PST

Thu, 2 April 2015

This episode explores how going wine tasting could teach us about using Markov chain Monte Carlo (MCMC).

Direct download: MINI_mcmc.mp3
Category: miniepisode -- posted at: 11:24pm PST
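The core MCMC idea can be sketched with a random-walk Metropolis sampler: wander the landscape, propose a nearby point, and accept moves toward denser regions always and moves away only sometimes. This is a minimal sketch, not the episode's example; the target here is an unnormalized standard normal, and the step size and names are chosen for illustration.

```python
import math
import random
from statistics import mean, stdev

random.seed(42)

def target(x):
    # Unnormalized target density: a standard normal "landscape".
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, step=1.0):
    # Random-walk Metropolis: propose a nearby point, accept with
    # probability min(1, target(proposal) / target(current)).
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)
        if random.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(50000)
print(round(mean(draws), 2), round(stdev(draws), 2))  # near 0 and 1
```

Notice that only ratios of the target density are used, so the normalizing constant never needs to be known, which is what makes MCMC practical for Bayesian posteriors.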