Fri, 26 January 2018
Formally, an MDP (Markov decision process) is defined as the tuple ⟨S, A, P, R⟩: a set of states, a set of actions, a transition function, and a reward function. This episode examines each of these components and presents them in the context of simple examples. Although MDPs suffer from the curse of dimensionality, they are a useful formalism and a foundational concept we will build on in future episodes.
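The tuple described above can be sketched in code. The two-state "weather" example below is not from the episode; the states, actions, and reward values are invented purely to illustrate the four components of an MDP:

```python
import random

# Minimal sketch of an MDP as the tuple (S, A, P, R).
states = ["sunny", "rainy"]            # S: set of states
actions = ["umbrella", "no_umbrella"]  # A: set of actions

# P(s' | s, a): transition function. In this toy example the weather
# evolves on its own and ignores the chosen action.
def transition(state, action):
    if state == "sunny":
        return {"sunny": 0.8, "rainy": 0.2}
    return {"sunny": 0.4, "rainy": 0.6}

# R(s, a): reward function. Carrying an umbrella pays off only in the rain.
def reward(state, action):
    if state == "rainy":
        return 1.0 if action == "umbrella" else -1.0
    return 0.0 if action == "no_umbrella" else -0.1

def step(state, action):
    """Sample the next state from P(. | s, a) and collect the reward."""
    probs = transition(state, action)
    next_state = random.choices(list(probs), weights=list(probs.values()))[0]
    return next_state, reward(state, action)

# Simulate a few steps under a random policy.
state = "sunny"
for _ in range(3):
    action = random.choice(actions)
    state, r = step(state, action)
    print(state, r)
```

The curse of dimensionality mentioned above shows up here as soon as the state grows: if the state tracked, say, ten binary weather features instead of one, S would contain 2^10 entries and the transition table would grow accordingly.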
Direct download: markov-decision-process.mp3
Category:general -- posted at: 8:00am PDT