In this lecture we analyze the well-posedness of Markov decision process (MDP) optimality problems in the framework of Dontchev and Zolezzi, with respect to both Hadamard's and Tikhonov's definitions. Markov decision processes are well-known and important dynamic decision models for discrete-time stochastic processes, studied at least since R. Bellman's 1957 paper "A Markovian Decision Process," with a wide range of applications in science and technology. Their stability has been studied, in a sense close to well-posedness, by E. Gordienko since at least the late 1980s. We compare the two approaches and, drawing on the extensive literature on well-posedness of optimal control problems, discuss their relationship explicitly.