Data Skeptic

I sat down with Ali Ghodsi, CEO and co-founder of Databricks, and John Chirapurath, GM for Data Platform Marketing at Microsoft, to discuss the recent announcement of Azure Databricks.

When I heard about the announcement, my first thoughts were two-fold.  First, the possibility of optimized integrations with existing Azure services, which would be a big benefit to heavy Azure users who also want to use Spark.  Second, the ability to use Active Directory to control Databricks access in large enterprises.

Hear Ali and JG's thoughts on what makes Azure Databricks a novel offering.


Direct download: azure-databricks.mp3
Category:general -- posted at: 8:00am PDT

In this episode we discuss the complexity class EXP-Time, which contains problems solvable in $O(2^{p(n)})$ time, where $p(n)$ is some polynomial of the input size $n$.  In other words, the worst-case runtime is exponential in a polynomial of the input size.  The hardest problems in this class are even more difficult than problems in NP, since for them you can't even verify a proposed solution in polynomial time.

We mostly discuss Generalized Chess as an intuitive example of a problem in EXP-Time.  Another well-known problem is determining whether a given algorithm will halt within k steps.  That extra condition of restricting the run to k steps makes this problem decidable, distinguishing it from Turing's original halting problem, which is known to be undecidable.
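
To make the bounded version concrete, here's a quick Python sketch (my own illustration, not something from the episode) that decides whether a computation halts within k steps by simply simulating it.  The hypothetical halts_within function takes the step budget k as part of the input; since k written in binary occupies only about log2(k) bits, the simulation can take time exponential in the input size, which is exactly what pushes this problem toward EXP-Time.

```python
def halts_within(program, k):
    """Simulate `program` (a Python generator that yields once per
    simulated step and returns when it halts) for at most k steps.
    Returns True if the computation halts within the budget.

    Note: k is part of the input.  Written in binary it takes only
    ~log2(k) bits, so this loop can run for a number of steps
    exponential in the input size, the hallmark of EXP-Time.
    """
    it = iter(program)
    for _ in range(k):
        try:
            next(it)          # advance the simulated machine one step
        except StopIteration:
            return True       # halted within the step budget
    return False              # still running after k steps

def count_down(n):
    """A toy 'program' that takes exactly n steps and then halts."""
    while n > 0:
        n -= 1
        yield

print(halts_within(count_down(5), 10))   # True: halts after 5 <= 10 steps
print(halts_within(count_down(50), 10))  # False: needs more than 10 steps
```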

Direct download: exp-time.mp3
Category:general -- posted at: 8:00am PDT

In this week's episode, host Kyle Polich interviews author Lance Fortnow about whether P will ever be proven equal to NP, and what that would mean for all of life's problems. Fortnow begins the discussion with an example question: are there 100 people on Facebook who are all friends with each other? Even for a Facebook employee with access to all the company's data, answering this question naively would require checking more possibilities than any computer, now or in the future, could ever examine. The P vs. NP question asks whether there exists a cleverer, faster algorithm that can answer this problem and others like it.
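
To see the explosion concretely, here's what the naive check looks like in Python (my own sketch, not code from the episode).  With n users there are n-choose-100 candidate groups, and even listing them is hopeless at Facebook's scale.

```python
from itertools import combinations
from math import comb

def has_clique_of_100(friends, users):
    """Naive check: does some group of 100 users form a clique,
    i.e., is everyone in the group friends with everyone else?
    `friends` is a set of frozensets {u, v}, one per friendship."""
    for group in combinations(users, 100):
        if all(frozenset((u, v)) in friends
               for u, v in combinations(group, 2)):
            return True
    return False

# The number of candidate groups alone, for two billion users,
# is a number with over 700 digits:
print(comb(2_000_000_000, 100))
```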

Direct download: p-vs-np.mp3
Category:data science -- posted at: 8:00am PDT

Algorithms with similar runtimes are said to be in the same complexity class. That runtime is measured by how many steps an algorithm takes relative to the size of its input.

The class P contains all problems solvable by algorithms which run in polynomial time (basically, a nested for loop iterating over the input).  NP contains problems which seem to require brute-force search to solve, even though a proposed solution can be checked quickly.  Brute-force search cannot be done in polynomial time, so it seems that problems in NP are more difficult than problems in P.  I say it "seems" this way because, while most people believe it to be true, it has not been proven.  This is the famous P vs. NP question, which will be discussed in more detail in a future episode.
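
As a rough sketch of the contrast (my own example, not from the episode): two-sum, a problem in P, yields to a nested for loop, while subset-sum seems to demand examining all 2^n subsets.

```python
from itertools import combinations

def two_sum(nums, target):
    """In P: a nested for loop over the input, O(n^2) steps."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def subset_sum(nums, target):
    """Brute force: try all 2^n subsets of the input."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return True
    return False

print(two_sum([3, 9, 12, 20], 21))     # True (9 + 12), ~n^2 checks
print(subset_sum([3, 9, 12, 20], 24))  # True (3 + 9 + 12), up to 2^n checks
```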

Given a proposed solution to a particular problem, if it can be verified/checked in polynomial time, that problem is in NP.  If someone hands you a completed Sudoku puzzle, it's not difficult to see if they made any mistakes.  Developing the solution to the Sudoku puzzle in the first place seems to be intrinsically more difficult.  In fact, as far as anyone knows, in the general case of all possible instances of the game, no strategy does fundamentally better on average than brute-force guessing.
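
Checking a finished grid really is easy.  Here's a minimal verifier (my own sketch) that runs in time polynomial in the grid size:

```python
def is_valid_sudoku(grid):
    """Verify a completed 9x9 Sudoku in polynomial time.
    `grid` is a list of 9 rows of 9 ints.  Each row, column,
    and 3x3 box must contain the digits 1..9 exactly once."""
    digits = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + r][3 * bc + c]
              for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return all(set(unit) == digits for unit in rows + cols + boxes)
```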

This notion of guessing the solution is where the N in NP comes from: Non-deterministic.  Imagine a machine with a guessed input already written in its memory.  Given enough such machines, one of them will have the right answer.  If they all ran in parallel, the lucky one could verify its input in polynomial time.  This guess / provided input is often called a witness string.
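
In code, a witness is just an extra argument handed to a polynomial-time verifier.  Reusing the subset-sum example from above (again, my own sketch):

```python
def verify_subset_sum(nums, target, witness):
    """Polynomial-time verifier.  `witness` is a proposed subset,
    given as a list of indices into `nums`.  Checking the guess is
    easy; *finding* it is what seems to require brute force."""
    if len(set(witness)) != len(witness):   # no index used twice
        return False
    return sum(nums[i] for i in witness) == target

print(verify_subset_sum([3, 9, 12, 20], 24, [0, 1, 2]))  # True, in O(n)
```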

NP is an important concept for many reasons.  To me, the most important reason to know about NP is a practical one.  Depending on your goals or the goals of your employer, there are many challenging problems you may attempt to solve.  If a problem you are trying to solve happens to be in NP, then you should consider the implications very carefully.  Perhaps you'll be lucky and discover that your particular instance of the problem is easy.  Sudoku is pretty easy if only 2 remaining squares need to be filled in.  The traveling salesman problem is easy to solve if you live in a country where all the roads form a ring, with exactly one road in and out of each city.

If the problem you wish to solve is not trivial, or if you will face many instances of the problem and expect some will not be trivial, then it's unlikely you'll be able to find exact solutions in a reasonable amount of time.  Sure, maybe you can grab a bunch of commodity servers and try to scale the heck out of your attempt.  Depending on the problem you're solving, that might just work.  If you can out-purchase your problem in computing power, then problems in NP will surrender to you.  But because the cost grows exponentially, if your input size ever grows, it's unlikely you'll be able to keep up.

If your problem is intractable in this way, all is not lost.  You might be able to find an approximate solution to your problem.  Good enough is better than no solution at all, right?  Most of the time, probably.  However, tremendous work has been done studying exactly these questions.  Are there problems which cannot even be approximated well in polynomial time?  Which approximation techniques work best?  Alas, those answers lie elsewhere.
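
One classic taste of that literature (a standard textbook result, not something covered in the episode): minimum vertex cover is NP-hard, yet a simple greedy pass produces a cover at most twice the optimal size, in polynomial time.

```python
def vertex_cover_2_approx(edges):
    """Classic 2-approximation for minimum vertex cover: repeatedly
    take an uncovered edge and add *both* of its endpoints.  The
    result is at most twice the size of an optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(vertex_cover_2_approx(edges))  # a valid cover; optimal here is {'b', 'd'}
```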

This episode avoids a discussion of a few key points in order to keep the material accessible.  If you find this interesting, you should next familiarize yourself with the notions of NP-Complete, NP-Hard, and co-NP.  These are topics we won't necessarily get to in future episodes.  Michael Sipser's Introduction to the Theory of Computation is a good resource.


Direct download: sudoku-in-np.mp3
Category:general -- posted at: 8:00am PDT

In this episode, Professor Michael Kearns from the University of Pennsylvania joins host Kyle Polich to talk about the computational complexity of machine learning, complexity in game theory, and algorithmic fairness. Michael's doctoral thesis gave an early, broad overview of computational learning theory, emphasizing the mathematical study of efficient learning algorithms by machines or computational systems.

When we look at machine learning algorithms, they are almost like meta-algorithms in some sense: given some data, a learning algorithm builds a model, and it will presumably behave very differently under different inputs. But does that mean we need new analytical tools? Or is a machine learning algorithm just the same as any deterministic algorithm, only a little trickier to analyze complexity-wise? In other words, is there overlap between the good old-fashioned analysis of algorithms and the analysis of machine learning algorithms from a complexity viewpoint? And how do the strategies differ for establishing complexity bounds on samples versus bounds on computation?

A big area of machine learning (and of the analysis of learning algorithms in general) that Michael and Kyle discuss is complexity regularization. Complexity regularization asks: how should one measure the goodness of fit and the complexity of a given model? How should one balance the two, and how can one do that balancing in a scalable, algorithmically efficient way? From there, Michael and Kyle discuss the broader picture of why one should care whether a learning problem is efficiently learnable, that is, learnable in polynomial time.
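
As a tiny numeric sketch of the idea (mine, not from Kearns's work; the penalty form is deliberately crude): pick the model that minimizes training error plus a term that grows with model complexity, here measured by polynomial degree.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

def penalized_score(degree, lam=0.05):
    """Complexity regularization: goodness of fit plus a penalty.
    Fit is the mean squared error of a degree-`degree` polynomial;
    `lam * degree` is a crude stand-in for model complexity."""
    coeffs = np.polyfit(x, y, degree)
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse + lam * degree

best = min(range(1, 10), key=penalized_score)
print("chosen degree:", best)  # balances fit against complexity
```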

Another interesting topic of discussion is the difference between sample complexity and computational complexity. An active area of research is how one should regularize a model so that its complexity is balanced against its goodness of fit on a large training sample.

As mentioned, a good resource for getting started with correlated equilibria is: https://www.cs.cornell.edu/courses/cs684/2004sp/feb20.pdf

Thanks to our sponsors:

Mendoza College of Business - Get your Master of Science in Business Analytics from Notre Dame.

brilliant.org - A fun, affordable, online learning tool.  Check out their Computer Science Algorithms course.

Direct download: the-computational-complexity-of-machine-learning.mp3
Category:general -- posted at: 8:00am PDT
