Data Skeptic

This episode explores the root concept of what it is to be Bayesian: describing knowledge of a system probabilistically, having an appropriate prior probability, knowing how to weigh new evidence, and following Bayes's rule to compute the revised distribution.

We present this concept in a few different contexts but primarily focus on how our bird Yoshi sends signals about her food preferences.

Like many animals, Yoshi is a complex creature whose preferences cannot easily be summarized by a straightforward utility function the way they might in a textbook reinforcement learning problem. Her preferences are sequential, conditional, and evolving. We may not always know what our bird is thinking, but we have some good indicators that give us clues.
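
As a minimal sketch of the update step (the foods and probabilities below are invented for illustration, not taken from the episode), Bayes's rule combines a prior belief about Yoshi's preference with the likelihood of an observed signal:

```python
# Hypothetical illustration of a Bayesian update about Yoshi's food preference.
# The foods and probabilities are made up for demonstration purposes.

priors = {"seeds": 0.5, "fruit": 0.3, "pellets": 0.2}

# Likelihood of the observed signal (say, an excited chirp) under each hypothesis.
likelihoods = {"seeds": 0.8, "fruit": 0.4, "pellets": 0.1}

# Bayes's rule: posterior is proportional to prior times likelihood, normalized.
evidence = sum(priors[f] * likelihoods[f] for f in priors)
posteriors = {f: priors[f] * likelihoods[f] / evidence for f in priors}

print(posteriors)  # approximately {'seeds': 0.74, 'fruit': 0.22, 'pellets': 0.04}
```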

Direct download: bayesian-redux.mp3
Category:general -- posted at: 8:00am PDT

This is our interview with Dorje Brody about his recent paper with David Meier, How to model fake news. This paper uses the tools of communication theory and a sub-topic called filtering theory to describe the mathematical basis for an information channel which can contain fake news.

 

Thanks to our sponsor Gartner.

Direct download: modeling-fake-news.mp3
Category:general -- posted at: 8:00am PDT

Without getting into definitions, we have an intuitive sense of what a "community" is. The Louvain Method for Community Detection is one of the best known mathematical techniques designed to detect communities.

This method requires typical graph data in which people are nodes and edges are their connections. It's easy to imagine this data in the context of Facebook or LinkedIn but the technique applies just as well to any other dataset like cellular phone calling records or pen-pals.

The Louvain Method provides a means of measuring the strength of any proposed community based on a concept known as Modularity. Modularity is a value in the range [-1, 1] that measures the density of links internal to a community against the density of links external to the community. The quite palatable assumption here is that a genuine community would have members that are strongly interconnected.

A community is not necessarily the same thing as a clique; it is not required that all community members know each other. Rather, we simply define a community as a graph structure whose nodes are more connected to each other than to nodes outside the community.

It's only natural that any person in a community has many connections to people outside that community. The more internal connections a community has relative to external connections, the stronger that community is considered to be. The Louvain Method elegantly captures this intuitively desirable quality.
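
As a rough sketch of how this looks in practice (the toy graph is invented, and this assumes a recent networkx release that ships a Louvain implementation), you can partition a small graph and score the result with modularity:

```python
# Minimal sketch: Louvain community detection and modularity with networkx.
# Assumes networkx >= 3.0; the tiny "pen-pal" graph is invented for illustration.
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

G = nx.Graph()
G.add_edges_from([
    ("ann", "bob"), ("bob", "cat"), ("cat", "ann"),   # group one, densely connected
    ("dan", "eve"), ("eve", "fay"), ("fay", "dan"),   # group two, densely connected
    ("cat", "dan"),                                   # a single link between the groups
])

communities = louvain_communities(G, seed=42)
print(communities)                 # e.g., [{'ann', 'bob', 'cat'}, {'dan', 'eve', 'fay'}]
print(modularity(G, communities))  # a value in [-1, 1]; higher means stronger communities
```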

Direct download: louvain-community-detection.mp3
Category:general -- posted at: 8:22am PDT

In this episode, our guest is Dan Kahan, who discusses his research into how people consume and interpret science news.

In an era of fake news, motivated reasoning, and alternative facts, important questions need to be asked about how people understand new information.

Dan is a member of the Cultural Cognition Project at Yale University, a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs.

In a paper titled Cultural cognition of scientific consensus, Dan and co-authors Hank Jenkins‐Smith and Donald Braman discuss the "cultural cognition of risk" and establish experimentally that individuals tend to update their beliefs about scientific information in the context of their pre-existing cultural beliefs. In this way, topics such as climate change, nuclear power, and concealed-carry handgun permits often result in people reaching conclusions that align with their cultural group rather than with the evidence alone.

The findings of this and other studies tell us that on topics such as these, even when people are given proper information about a scientific consensus, individuals still interpret those results through the lens of their pre-existing cultural beliefs.

The ‘cultural cognition of risk’ refers to the tendency of individuals to form risk perceptions that are congenial to their values. The study presents both correlational and experimental evidence confirming that cultural cognition shapes individuals’ beliefs about the existence of scientific consensus, and the process by which they form such beliefs, relating to climate change, the disposal of nuclear wastes, and the effect of permitting concealed possession of handguns. The implications of this dynamic for science communication and public policy‐making are discussed.

Direct download: cultural-cognition.mp3
Category:general -- posted at: 8:24am PDT

Controlling the false discovery rate (FDR) is a methodology that can be useful when struggling with the problem of multiple comparisons.

In any experiment, if the experimenter checks more than one dependent variable, then they are making multiple comparisons. Naturally, if you make enough comparisons, you will eventually find a seemingly significant result purely by chance.

Classically, people applied the Bonferroni Correction. In essence, this procedure dictates that you should lower your significance threshold (raise your standard of evidence) by a specific amount depending on the number of variables you're considering. While effective, this methodology is strict about preventing false positives (Type I errors). You aren't likely to find evidence for a hypothesis that is actually false using Bonferroni. However, your exuberance to avoid Type I errors may introduce some Type II errors: there could be hypotheses that are actually true which you fail to detect.

This episode covers an alternative known as false discovery rates. The essence of this method is to make more specific adjustments to your expectation of what p-value is sufficient evidence. 
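
As a rough sketch (the p-values are invented, and the method shown is the Benjamini-Hochberg procedure, the most common way to control the false discovery rate), here is how the two corrections compare:

```python
# Hypothetical comparison of Bonferroni vs. Benjamini-Hochberg on made-up p-values.
alpha = 0.05
p_values = [0.001, 0.008, 0.020, 0.030, 0.300, 0.620]
m = len(p_values)

# Bonferroni: reject only when p <= alpha / m (strict; very few Type I errors).
bonferroni_rejections = [p for p in p_values if p <= alpha / m]

# Benjamini-Hochberg: sort the p-values and find the largest k such that
# p_(k) <= (k / m) * alpha; reject every hypothesis at or below that cutoff.
cutoff = 0.0
for k, p in enumerate(sorted(p_values), start=1):
    if p <= (k / m) * alpha:
        cutoff = p
bh_rejections = [p for p in p_values if p <= cutoff]

print(bonferroni_rejections)  # [0.001, 0.008] -> only the strongest results survive
print(bh_rejections)          # [0.001, 0.008, 0.02, 0.03] -> more discoveries allowed
```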

Direct download: false-discovery-rates.mp3
Category:general -- posted at: 8:09am PDT

Digital videos can be described as sequences of still images and associated audio. Audio is easy to fake. What about video?

A video can easily be broken down into a sequence of still images replayed rapidly in sequence. In this context, videos are simply very high dimensional sequences of observations, ripe for input into a machine learning algorithm.
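
As a minimal sketch of that decomposition (the file name is hypothetical), OpenCV can turn a video into the frame-by-frame observations a model would consume:

```python
# Minimal sketch: decompose a video into still-image frames with OpenCV.
# "clip.mp4" is a hypothetical file name used only for illustration.
import cv2

capture = cv2.VideoCapture("clip.mp4")
frames = []
while True:
    success, frame = capture.read()  # each frame is a height x width x 3 pixel array
    if not success:
        break
    frames.append(frame)
capture.release()

print(f"Extracted {len(frames)} frames, each of shape {frames[0].shape}")
```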

The availability of commodity hardware, clever algorithms, and well-designed software to implement those algorithms at scale make it possible to do machine learning on video, but to what end? There are many answers, one interesting approach being the technology called "DeepFakes".

The Deep of DeepFakes refers to Deep Learning, and the fake refers to the function of the software: to take a real video of a human being and digitally alter their face to match someone else's face.

This software produces curiously convincing fake videos. Yet, there's something slightly off about them. Surely machine learning can be used to distinguish real from fake... right? Siwei Lyu and his collaborators certainly thought so and demonstrated this idea by identifying a novel, detectable feature that was commonly missing from videos produced by the DeepFakes software.

In this episode, we discuss this use case for deep learning, detecting fake videos, and the threat of fake videos in the future.

Direct download: deepfakes.mp3
Category:general -- posted at: 8:00am PDT

In this episode, Kyle reviews what we've learned so far in our series on Fake News and talks briefly about where we're going next.

Direct download: fake-news-midterm.mp3
Category:general -- posted at: 8:00am PDT

Two weeks ago we discussed click through rates or CTRs and their usefulness and limits as a metric. Today, we discuss a related metric known as quality score.

While that phrase has probably been used to mean dozens of different things in different contexts, our discussion focuses around the idea of quality score encountered in Search Engine Marketing (SEM). SEM is the practice of purchasing keyword targeted ads shown to customers using a search engine.

Most SEM is managed via an auction mechanism - the advertiser states the price they are willing to pay, and in real time, the search engine will serve users advertisements and charge the advertiser.

But how do search engines decide which ads to show and what price to charge? This is a complicated question requiring a multi-part answer to address completely. In this episode, we focus on one part of that equation, which is the quality score the search engine assigns to the ad in context. This quality score is calculated via several factors, including crawling the destination page (also called the landing page) and predicting how applicable the content found there is to the ad itself.
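
As a simplified illustration (the numbers are invented, and ranking by bid times quality score with a second-price-style charge is a common textbook description, not any search engine's actual formula), the interaction looks roughly like this:

```python
# Simplified, hypothetical ad auction combining bids with quality scores.
# Not any real search engine's formula; a common textbook approximation.
ads = [
    {"advertiser": "A", "bid": 2.00, "quality_score": 4.0},
    {"advertiser": "B", "bid": 3.00, "quality_score": 2.0},
    {"advertiser": "C", "bid": 1.50, "quality_score": 8.0},
]

# Rank by bid x quality score, so a relevant ad can beat a higher bid.
for ad in ads:
    ad["ad_rank"] = ad["bid"] * ad["quality_score"]
ads.sort(key=lambda ad: ad["ad_rank"], reverse=True)

winner, runner_up = ads[0], ads[1]
# The winner pays just enough to beat the runner-up's rank, scaled by its own quality.
price = runner_up["ad_rank"] / winner["quality_score"]
print(winner["advertiser"], round(price, 2))  # C wins and pays 1.00 despite the lowest bid
```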

Direct download: quality_score.mp3
Category:general -- posted at: 10:28pm PDT

Kyle interviews Steven Sloman, Professor in the school of Cognitive, Linguistic, and Psychological Sciences at Brown University. Steven is co-author of The Knowledge Illusion: Why We Never Think Alone and Causal Models: How People Think about the World and Its Alternatives. Steven shares his perspective and research into how people process information and what this teaches us about the existence of and belief in fake news.

Direct download: the-knowledge-illusion.mp3
Category:general -- posted at: 8:00am PDT

A Click Through Rate (CTR) is the proportion of clicks to impressions of some item of content shared online. This terminology is most commonly used in digital advertising but applies just as well to content websites might choose to feature on their homepage or in search results.
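
Concretely (with invented numbers), the metric is just a ratio of clicks to impressions:

```python
# Minimal illustration: CTR is clicks divided by impressions (numbers invented).
def click_through_rate(clicks: int, impressions: int) -> float:
    return clicks / impressions if impressions else 0.0

print(click_through_rate(50, 1_000))  # 0.05 -> a 5% CTR
print(click_through_rate(3, 20))      # 0.15 -> looks higher, but with far less data behind it
```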

A CTR is intuitively appealing as a metric for optimization. After all, if users are uninterested in some content, under normal circumstances, it's reasonable to assume they would ignore the content rather than clicking on it. On the other hand, the best content is likely to elicit a high CTR as users signal their interest by following the hyperlink.

In the advertising world, a website could charge per impression, per click, or per action. Both impression- and action-based pricing have asymmetrical results for the publisher and advertiser. However, paying per click (CPC-based advertising) seems to strike a nice balance. For this and other reasons, many digital advertising mechanisms (such as Google AdWords) use CPC as the payment mechanism.

When charging per click, an advertising platform will value a high CTR when selecting which ad to show. As we learned in our episode on Goodhart's Law, once a measure is turned into a target, it ceases to be a good measure. While CTR alone does not entirely drive most online advertising algorithms, it does play an important role. Thus, advertisers are incentivized to adopt strategies that maximize CTR.

On the surface, this sounds like a great idea: provide internet users what they are looking for, and be rewarded with their attention and lower advertising costs. However, one possible unintended consequence of this type of optimization is the creation of ads designed solely to generate clicks, regardless of whether users are happy with the page they visit after clicking a link.

So, at least in part, websites that optimize for higher CTRs are going to favor content that does a good job getting viewers to click it. Getting a user to view a page is not totally synonymous with getting a user to appreciate the content of a page. The gap between the algorithmic goal and the user experience could be one of the factors that has promoted the creation of fake news.

Direct download: ctrs.mp3
Category:general -- posted at: 8:00am PDT