Review of Utility Theory, Expected Utility

Utility theory is the foundation of neoclassical economic demand theory. According to this theory, consumption of goods and services provides satisfaction, or utility, to consumers. Given a limited budget (wealth or income), each consumer's problem is how to allocate purchases out of that budget in such a way that utility is maximized. In most of undergraduate economics, this maximization problem is presented as static: the budget is exhausted entirely on current purchases in order to maximize current utility, so there is no sense of forward-looking decisions or of saving out of a budget to provide future utility. This static framework is limiting if you want to talk about decisions today which generate or affect utility in the future. This survey will address the theory of static utility first, and then extend it to decisions involving future or uncertain utility, using the model of expected utility.

Since utility is an abstract concept, we do not attempt to measure it directly. Its primary use is as a tool for describing consumer preferences, so as long as certain properties are satisfied, we can use a mathematical utility function, U(X, Y), to denote the utility from consumption of X units of 'good x' and Y units of 'good y.' This can be expanded to any number of goods, but to keep things simple we will assume only two. As usual in economics, we will make the 'all other things equal' (ceteris paribus) assumption regarding anything we don't explicitly include in the model.

The aforementioned properties (also called axioms of preference) are: completeness, transitivity, and non-satiation. Completeness means that people are able to rank any two options presented to them (e.g., I prefer 4 beers and 6 Cokes per week to 5 beers and 4 Cokes). Transitivity means that these rankings are 'consistent.'
If one prefers A to B and B to C, then she must also prefer A over C. Finally, non-satiation just means that more is always preferred to less, ceteris paribus. This is to say that X and Y are both economic goods in the sense that consumers will receive some positive utility from consuming one more unit, however small, no matter how much they currently have or consume. These properties of preferences are actually very general and need not be tied to a utility function. The theory of revealed preference, developed by Paul Samuelson and later extended by Hal Varian, demonstrated that so long as the choices individuals make satisfy the completeness and transitivity axioms, they are consistent with rationality as defined in neoclassical theory.

Marginal Utility

A fundamental concept in demand theory is that of marginal utility. Marginal utility is the additional satisfaction one receives by consuming one more unit of a good or service. This is key for making purchase decisions at the margin. Do I want one more beer or one more burrito? Mathematically, the marginal utility of X is the change in total utility, U(X, Y), for a one-unit (or smaller) change in X, holding the consumption of Y constant. Using calculus, this is the partial derivative of the utility function with respect to X: MUx = ∂U/∂X. In general, utility from consuming a good follows a pattern of diminishing marginal utility: as more and more of the good is consumed, each new unit gives some utility, but less than the previous unit. A comparison of marginal utilities, then, gives an idea of which of two possible purchase decisions will be more favorable, but only a partial idea, because it considers the benefit without factoring in the cost. We need to pay attention to the cost of another unit in order to compare the relative value of a purchase. We do this by considering the marginal utility per dollar of one more unit of a
good, or MUx/Px. Only when the marginal utility per dollar is equal for the last unit of each good purchased, MUx/Px = MUy/Py, is it possible to have maximized utility. This should make logical sense. Let's take a counterexample to illustrate. Say a consumer, let's call him Floyd, is spending all of his weekly income on beer (good X) and burritos (good Y). The price of beer is $1 and the price of a burrito is $2. Suppose the last beer he bought gives him MUx = 20 and the last burrito gives MUy = 30. Just comparing marginal utilities, you might think he would be better off buying another burrito and one less beer. However, looking at the marginal utility per dollar, MUx/Px = 20/1 = 20 > 30/2 = 15 = MUy/Py. So actually Floyd would increase his utility by buying fewer burritos and more beer (of course). Because of diminishing marginal utility, this will decrease the MU of beer and increase the MU of burritos. Ideally, this will adjust until the MU/P is equal at some value in between, say 17.5. The utility numbers don't really matter; it's the idea. Since the last dollar spent on burritos in our example bought less utility than if it had been spent on beer, his original spending can't be utility maximizing.

This brings us to a term you may see in readings which use utility theory: the marginal rate of substitution (MRS). The MRS is a measure of the willingness of consumers to trade less of one good for more of the other, keeping their level of satisfaction constant. Graphically, the MRS is the slope of a contour line of the utility function graph. This is analogous to a topographic map of a mountain: think of the utility function as the mountain, where altitude is measured in utility, and the contour line as a curve showing constant 'elevation' of utility on the function. These contour lines are called indifference curves because consumers are indifferent between the different consumption bundles on a given curve.
They would, however, prefer to be on a higher curve, because that gives them more utility. An example of an indifference curve map is shown below:

[Figure: indifference curve map, with good Y on the vertical axis and good X on the horizontal axis]

If an individual is consuming such that her utility is maximized, two conditions must be satisfied. She must be spending all of her budget (otherwise, she could get more utility by buying more), and her MRS must equal the ratio of prices, Px/Py. The price ratio represents the opportunity cost of one more unit of X in terms of Y. So, in words, this last condition means that she must be willing to give up Y to get X at the same rate as the opportunity cost of X. This is just another way to say her marginal utility per dollar is equal for both goods.
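Both conditions can be checked numerically. The sketch below uses Floyd's prices and marginal utilities from the text; the Cobb-Douglas utility function U(X, Y) = sqrt(X*Y), the income level, and the demand formulas are illustrative assumptions, not anything the text specifies.

```python
from math import sqrt

def mu(u, x, y, good, h=1e-6):
    """Approximate marginal utility as a numerical partial derivative."""
    if good == "x":
        return (u(x + h, y) - u(x, y)) / h
    return (u(x, y + h) - u(x, y)) / h

# 1) Floyd's marginal-utility-per-dollar comparison from the example:
mux, muy, px, py = 20.0, 30.0, 1.0, 2.0  # $1 beer, $2 burrito
print(mux / px, muy / py)  # 20.0 15.0 -> the last dollar buys more utility as beer

# 2) Budget exhaustion and MRS = Px/Py at the optimum of a hypothetical
# Cobb-Douglas utility. For U = sqrt(X * Y), half of income is spent on
# each good, so X* = I / (2 * Px) and Y* = I / (2 * Py).
u = lambda x, y: sqrt(x * y)
income = 100.0
x_star, y_star = income / (2 * px), income / (2 * py)  # 50 beers, 25 burritos

mrs = mu(u, x_star, y_star, "x") / mu(u, x_star, y_star, "y")
print(round(px * x_star + py * y_star, 6))  # 100.0 -> whole budget spent
print(round(mrs, 4), px / py)               # 0.5 0.5 -> MRS equals price ratio
```

Note that the MRS here is computed from numerical derivatives rather than a closed form, so the same check works for any differentiable utility function you substitute in.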
MRS = MUx/MUy = Px/Py

You can see this from the equation above by multiplying both sides by Px and dividing both sides by MUy. Graphically, the optimum looks like this:

[Figure: consumer optimum, with good Y on the vertical axis and good X on the horizontal axis]

So neoclassical theory predicts that consumer demand will represent this optimizing behavior for all goods and services. It is fairly straightforward to derive a demand curve mathematically from a 'well-behaved' utility function. Few economists would argue that each purchase decision involves this explicit mental optimization. (What does your utility function look like?) Rather, the idea is that this model does a good job of representing the incentives facing consumers and the adjustments they make in response to changes in prices, income, etc.

Expected Utility

In cases where there is some uncertainty or risk involved in a consumer decision, we need to adapt this idea to incorporate the probabilistic nature of a range of possible outcomes. Expected utility, sometimes also called von Neumann-Morgenstern utility after its founders, is found by taking the utility from each of the possible outcomes and calculating a weighted average: multiply each outcome's utility by its probability and sum them up. A very simple example will illustrate nicely. Say an individual is facing a decision where one choice, call it option A, has 10 possible outcomes, each equally likely (each having a 1/10 probability of coming to pass). These outcomes have utility values to the individual from 1 to 10 in one-unit increments. To find the expected utility of this decision we find the weighted average. Let's work it out to illustrate:

1*(1/10) + 2*(1/10) + 3*(1/10) + 4*(1/10) + 5*(1/10) + 6*(1/10) + 7*(1/10) + 8*(1/10) + 9*(1/10) + 10*(1/10) = 55/10 = 5.5

So the expected utility of option A is 5.5. A few things to note from this example. First, since it is an average and not the value of a specific event, the expected utility is often impossible to obtain in reality. There is no outcome of
option A which yields exactly 5.5 units of utility. So what is the deal with expected utility? If option A were repeated over and over again, say 100 times, the individual would expect to get an average of 5.5 units of utility over those 100 trials. Thus the idea of expected value. Try the following to solidify your understanding. Take a quarter (or any coin) and flip it 20 times. If it comes up heads, that flip is valued at 15; if it comes up tails, it is valued at 5. Write down the result of each of the flips and its value. Then find the average value (not the expected value) of a flip in your experiment by totaling up all of the values and dividing by 20. Now calculate the expected value of this coin-flip experiment using the above definition. How close was your average to the expected value? Try flipping another 10 times and include those 10 flips in your average.

Secondly, we used what is called a uniform probability distribution (equal probability for all possible outcomes) in our examples. We also used a discrete example with a fixed number of possible outcomes. There are many possible probability distributions, the most common being the normal distribution (think of the dreaded 'bell curve'). In many of these, the bulk of observations will occur near the average, or mean, and as you move farther from the mean the probability of an event gets smaller and smaller. Expected values can be calculated using any probability distribution, but in Econ 281 we will almost exclusively use simple, discrete probability distributions such as those above.

So in order to make comparisons based on expected utility, individuals must consider the relative value of each outcome in terms of utility (note that the 10 units of utility in the example above have no real meaning apart from the fact that they represent a value 10 times higher than the 1-unit outcome) and the probability of that outcome occurring. Expected utility is used primarily in two ways in economics.
In situations where there is some risk or uncertainty, expected value gives a measure of the relative value of specific choices. In situations where decisions involve payoffs which occur in the future, and are therefore inherently uncertain, the present expected utility value is the relevant 'best guess' for evaluating choices. The latter is the basis of Robert Lucas's rational expectations framework for looking at the forward-looking behavior and decisions of agents in economic situations. Rational expectations (which has its skeptics, including your professor) holds that agents will make utility-maximizing choices on average. The basic argument for rational expectations is that if agents were to make systematic errors which reduce their utility, over time they would perceive this and make corrections. (This brief sketch does not do the theory justice, but you get the idea.)

In experimental and behavioral economics, expected utility is an important consideration, since many of the situations we look at involve some measure of risk. Whether we are explicitly studying a question of decisions under risk, or our experiment is a game in which the subjects do not know what others will do, the concept of risk must be addressed. The former is a whole field of study of its own (see Holt Ch. 4). The latter is essentially a question of making sure you correctly induce values. For this we must briefly consider individuals' attitudes toward risk.

Risk preferences are an important consideration for the experimenter. Individual subjects may be risk neutral, risk averse, or risk loving. These attitudes toward risk are best illustrated with an example. Consider a choice of lotteries. A lottery is just an event with a list of probabilistic outcomes, each with some associated payoff. In both lotteries below, the payoffs are determined by a coin flip. In lottery A, heads earns $100 and tails loses $50 (a payoff of -$50). In lottery B, heads earns $10 and tails earns $1.
If given the choice, which lottery would you choose? First, let's calculate the expected value of the payoffs (be careful here; this is not necessarily the expected utility):
E(A) = 100*0.5 + (-50)*0.5 = 25
E(B) = 10*0.5 + 1*0.5 = 5.5

Notice that A is inherently more risky, since there is a 50% chance of losing money in lottery A and no chance of losing money in B. (A riskier lottery could instead just risk lower minimum payoffs, but we'll stick with losses for now.) However, lottery A has a much higher expected payoff. A risk-neutral individual will only consider the expected value of the payoffs and not the comparative risk, so if you are risk neutral, you would choose lottery A. However, if someone were to choose the 'safe' option B, we would say that she is risk averse. Risk-loving individuals are a rare breed who will choose riskier options even with a lower expected payoff. We don't have an example of that here, but say we introduced a lottery C which earned $1,000 if heads came up 10 times in a row but lost $100 if even one tail came up (expected value of about -$98.93). (I say rare, but judging by the number of state lottery tickets sold, perhaps not so much.)

To put this in terms of expected utility, risk-neutral individuals receive higher expected utility from choices which have a higher expected payoff, but risk-averse individuals place a lower utility value on choices which involve potential losses. The reasoning is that, for risk-averse individuals, losses loom larger in utility terms than gains of the same size. So the potential loss of $50 costs more utility, for an individual who chooses lottery B, than the potential gain of $100 adds. This is often attributed to diminishing marginal utility of money.

Much of the research on bounded rationality has focused on the degree to which rational choice, as represented by utility or revealed preference theory, reflects the actual decisions made in important economic settings. Also, some theories of bounded self-interest use alternate formulations of utility functions which incorporate the payoffs of others.
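The lottery calculations above can be sketched in a few lines. The expected values match the text; the initial wealth of $60 and the square-root utility used to illustrate risk aversion at the end are hypothetical assumptions, chosen only so that a concave-utility agent actually prefers the safe lottery.

```python
from math import sqrt

def expected_value(lottery):
    """Probability-weighted average of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in lottery)

A = [(100, 0.5), (-50, 0.5)]
B = [(10, 0.5), (1, 0.5)]
print(expected_value(A))  # 25.0
print(expected_value(B))  # 5.5

# The earlier coin-flip exercise: heads worth 15, tails worth 5.
print(expected_value([(15, 0.5), (5, 0.5)]))  # 10.0

# Lottery C: $1,000 only if heads comes up 10 times in a row (prob 1/1024),
# otherwise a $100 loss.
p_win = 0.5 ** 10
C = [(1000, p_win), (-100, 1 - p_win)]
print(round(expected_value(C), 2))  # -98.93

# Risk attitudes compare expected *utility*, not expected payoff. Assume
# (hypothetically) initial wealth of $60 and concave utility u(w) = sqrt(w)
# for a risk-averse agent.
def expected_utility(lottery, wealth, u):
    return sum(u(wealth + payoff) * prob for payoff, prob in lottery)

w = 60.0
# A risk-neutral (linear-utility) agent ranks by expected value and picks A,
# but this risk-averse agent prefers the safe lottery B:
print(expected_utility(A, w, sqrt) < expected_utility(B, w, sqrt))  # True
```

Raising the assumed wealth weakens the effect of the curvature, so at high enough wealth even the sqrt-utility agent switches back to lottery A, which is one way to see the link between risk aversion and diminishing marginal utility of money.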