cs234 / background lecture 0 - markov decision processes & value iteration

Note: This was from stanford cs221 and is not part of cs234. I used it to gain some prerequisite knowledge for cs234.

Resources:

Introduction

MDPs model the uncertainty / randomness of the real world, where the same state + action combination can lead to different next states.

Applications:

Markov Decision Processes

Transition State Diagram:

Nodes represent states and state-action pairs. Edge labels represent the probability of that transition occurring from a state-action node.

Transition Function: Gives the probability of going from one state to another, given an action.

Reward Function: Gives the reward for the transition from one state to another, given an action.

Markov: Future states only depend on the current state and action taken.

Policy ($\pi$): A mapping from each state $s \in \text{states}$ to an action $a \in \text{Actions}(s)$

Discount Factor ($\gamma$): The value of future rewards relative to current rewards
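
To make these definitions concrete, here is a minimal sketch of one way to represent a finite MDP in Python. The `SimpleMDP` class, its method names, and the toy two-state example are my own illustration (not from the lecture); the point is only that an MDP needs states, actions, a transition/reward function, and a discount factor.

```python
# Minimal sketch of a finite MDP (illustrative; not from the lecture).
# Transitions are returned as (next_state, probability, reward) triples,
# i.e. T(s, a, s') and R(s, a, s') bundled together.
class SimpleMDP:
    def __init__(self):
        self.states = ["in", "end"]
        self.gamma = 0.9  # discount factor

    def is_end(self, s):
        return s == "end"

    def actions(self, s):
        return ["stay", "quit"]

    def transitions(self, s, a):
        if a == "stay":
            # From "in", staying keeps you in with prob 0.7, ends with prob 0.3.
            return [("in", 0.7, 4.0), ("end", 0.3, 4.0)]
        else:  # "quit" ends the episode deterministically
            return [("end", 1.0, 10.0)]
```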

Policy Evaluation

Utility: The discounted sum of rewards over a random path that is yielded by a specific policy.

Value: The expected utility of a specific policy

Value of a Policy: Expected utility received by following policy $\pi$ from state $s$ - denoted by $V_\pi(s)$

Q-Value of a Policy: Expected utility from a state-action node - denoted by $Q_\pi(s, a)$

Recurrence Using $V_\pi(s)$ and $Q_\pi(s, a)$:

$$ V_\pi(s) = \begin{cases} 0 & \text{if IsEnd}(s) \\ Q_\pi(s, \pi(s)) & \text{otherwise} \end{cases} $$

$$ Q_\pi(s, a) = \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V_\pi(s') \right] $$

Iterative Algorithm: Start with arbitrary policy values and apply the recurrence until convergence

Implementation Details: We want to iterate until convergence, but exact convergence might take a while, so we use the following stopping criterion:

$$ \max_{s \in \text{states}} \left| V_\pi^{t}(s) - V_\pi^{t-1}(s) \right| \leq \epsilon $$
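
As a sketch of the iterative algorithm with this stopping criterion, here is one possible policy evaluation loop in Python. It assumes the hypothetical `SimpleMDP` representation above and a `policy` given as a dict mapping each non-end state to an action.

```python
def policy_evaluation(mdp, policy, epsilon=1e-6):
    """Iteratively apply the recurrence for V_pi until values stop changing."""
    V = {s: 0.0 for s in mdp.states}  # arbitrary initial values
    while True:
        new_V = {}
        for s in mdp.states:
            if mdp.is_end(s):
                new_V[s] = 0.0
            else:
                a = policy[s]
                # Q_pi(s, a) = sum_{s'} T(s, a, s') * [R(s, a, s') + gamma * V_pi(s')]
                new_V[s] = sum(p * (r + mdp.gamma * V[s2])
                               for s2, p, r in mdp.transitions(s, a))
        # Stopping criterion: the largest change across states is at most epsilon.
        if max(abs(new_V[s] - V[s]) for s in mdp.states) <= epsilon:
            return new_V
        V = new_V
```

For example, `policy_evaluation(SimpleMDP(), {"in": "stay"})` evaluates the policy that always chooses "stay".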

Value Iteration

Optimal Value: The maximum value obtained by any policy - denoted by $V_{opt}(s)$

$$ Q_{\text{opt}}(s, a) = \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V_{\text{opt}}(s') \right] $$

$$ V_{\text{opt}}(s) = \begin{cases} 0 & \text{if IsEnd}(s) \\ \max_{a \in \text{Actions}(s)} Q_{\text{opt}}(s, a) & \text{otherwise} \end{cases} $$

$$ \pi_{\text{opt}}(s) = \arg\max_{a \in \text{Actions}(s)} Q_{\text{opt}}(s, a) $$

Iterative Algorithm: Start with arbitrary values and apply the recurrence until convergence, using the same stopping criterion as in policy evaluation. A sketch is given below.
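
Under the same assumptions as the sketches above (the hypothetical `SimpleMDP` class), value iteration repeatedly applies the $V_{\text{opt}}$ / $Q_{\text{opt}}$ recurrence and reads off the optimal policy with the argmax at the end:

```python
def value_iteration(mdp, epsilon=1e-6):
    """Compute V_opt and pi_opt by iterating the optimality recurrence."""
    V = {s: 0.0 for s in mdp.states}  # arbitrary initial values

    def Q(s, a, V):
        # Q_opt(s, a) = sum_{s'} T(s, a, s') * [R(s, a, s') + gamma * V_opt(s')]
        return sum(p * (r + mdp.gamma * V[s2]) for s2, p, r in mdp.transitions(s, a))

    while True:
        new_V = {s: 0.0 if mdp.is_end(s)
                 else max(Q(s, a, V) for a in mdp.actions(s))
                 for s in mdp.states}
        converged = max(abs(new_V[s] - V[s]) for s in mdp.states) <= epsilon
        V = new_V
        if converged:
            break

    # pi_opt(s) = argmax over actions of Q_opt(s, a)
    pi = {s: max(mdp.actions(s), key=lambda a: Q(s, a, V))
          for s in mdp.states if not mdp.is_end(s)}
    return V, pi
```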