Contact Person: Erik Madsen - email@example.com
Abstract: We provide an axiomatic analysis of dynamic random utility, characterizing the stochastic choice behavior of agents who solve dynamic decision problems by maximizing some stochastic process (Ut) of utilities. We show first that even when (Ut) is arbitrary, dynamic random utility imposes new testable restrictions on how behavior across periods is related, over and above period-by-period analogs of the static random utility axioms: An important feature of dynamic random utility is that behavior may appear history dependent, because past choices reveal information about agents' past utilities and (Ut) may be serially correlated; however, our key new axioms highlight that the model entails specific limits on the form of history dependence that can arise. Second, we show that when agents' choices today influence their menu tomorrow (e.g., in consumption-savings or stopping problems), imposing natural Bayesian rationality axioms restricts the form of randomness that (Ut) can display. By contrast, a specification of utility shocks that is widely used in empirical work violates these restrictions, leading to behavior that may display a negative option value and can produce biased parameter estimates. Finally, dynamic stochastic choice data allows us to characterize important special cases of random utility (in particular, learning and taste persistence) that on static domains are indistinguishable from the general model.
Paper Link: Dynamic Random Utility by Mira Frick, Ryota Iijima, & Tomasz Strzalecki
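The mechanism behind the apparent history dependence described in the abstract can be illustrated with a minimal simulation (not from the paper; all names and parameters below are illustrative assumptions). Each agent's utility advantage of one option over another follows an AR(1) process across two periods; although the agent simply maximizes the current period's utility each time, period-2 choice frequencies differ sharply depending on the period-1 choice, because the period-1 choice reveals information about the serially correlated utility.

```python
# Illustrative sketch: serially correlated utilities make stochastic choice
# look history dependent, even though each period's choice only maximizes
# that period's utility. Parameters (rho, n_agents) are arbitrary choices.
import random

random.seed(0)

def simulate(n_agents=100_000, rho=0.9):
    """Return P(choose a in period 2 | period-1 choice), for choices a and b."""
    # counts[c1] = [times a was chosen in period 2, total agents with c1]
    counts = {"a": [0, 0], "b": [0, 0]}
    for _ in range(n_agents):
        # u_t = utility advantage of option a over option b, AR(1) across periods
        u1 = random.gauss(0, 1)
        u2 = rho * u1 + (1 - rho**2) ** 0.5 * random.gauss(0, 1)
        c1 = "a" if u1 > 0 else "b"
        counts[c1][1] += 1
        if u2 > 0:
            counts[c1][0] += 1
    return {c: chose_a / total for c, (chose_a, total) in counts.items()}

freqs = simulate()
print(freqs)  # agents who chose a in period 1 choose a again far more often
```

Conditioning period-2 frequencies on the period-1 choice is exactly the kind of dynamic stochastic choice data the abstract refers to; the paper's axioms then pin down which patterns of such history dependence are consistent with dynamic random utility.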