# Prospect Theory: A Framework for Understanding Cognitive Biases

**Related to:** Shane Legg on Prospect Theory and Computational Finance

This post is on prospect theory partly because it fits the theme of replacing simple utility functions with complicated reward functions, but mostly because somehow Less Wrong doesn’t have any posts on prospect theory yet and that needs to change.

Kahneman and Tversky, the first researchers to identify and rigorously study cognitive biases, proved that a simple version of expected utility theory did not accurately describe human behavior. Their response was to develop prospect theory, a model of how people really make decisions. Although the math is less elegant than that of expected utility, and the shapes of the curves have to be experimentally derived, it is worth a look because it successfully predicts many of the standard biases.

*[Graphs: the prospect theory value function (left) and the probability weighting function (right). Source: Wikipedia.]*

A prospect theory agent tasked with a decision first sets it within a frame with a convenient zero point, allowing em to classify the results of the decision as either losses or gains. Ey then computes a subjective expected utility, where the subjective expected utility equals the subjective value times the subjective probability. The subjective value is calculated from the real value using a value function similar to the one on the left-hand graph, and the subjective probability is calculated from the real probability using a weighting function similar to the one on the right-hand graph.

Clear as mud? Let’s fill some numbers into the functions—the exact assignments don’t really matter as long as we capture the spirit of where things change steeply versus slowly—and run through an example.

Imagine a prospect theory agent—let’s call him Prospero—trying to decide whether or not to buy a hurricane insurance policy costing $5000/year. Prospero owns assets worth $10,000, and estimates a 50%/year chance of a hurricane destroying his assets; to make things simple, he will be moving in one year and so need not consider the future. Under expected utility theory (with utility linear in money), he should feel neutral about the policy.

Under prospect theory, he first sets a frame in which to consider the decision; his current state is a natural frame, so we’ll go with that.

We see on the left-hand graph that an objective $10,000 loss feels like a $5,000 loss, and an objective $5000 loss feels like a $4000 loss. And we see on the right-hand graph that a 50% probability feels like a 40% probability.

Now Prospero’s choice is a certain $4000 loss if he buys the insurance, versus a 40% chance of a $5000 loss if he doesn’t. Buying has a subjective expected utility of -$4000; not buying has a subjective expected utility of -$2000. So Prospero decisively rejects the insurance.

But suppose Prospero is fatalistic; he views his assets as already having been blown away. Here he might choose a different frame: the frame in which he starts with zero assets, and anything beyond that is viewed as a gain.

Since the gain half of the value function levels off more quickly than the loss half, $5000 is now subjectively worth $3000, and $10000 is now subjectively worth $3500.

Here he must choose between a certain gain of $5000 and a 50% chance of gaining $10000. Expected utility gives the same result as before, obviously. In prospect theory, he chooses between a certain subjective gain of $3000 and a 40% chance of gaining $3500. The insurance gives him subjective expected utility of $3000, and rejecting it gives him subjective expected utility of $1400.

All of a sudden Prospero wants the insurance.

We notice the opposite effect if there is only a 1% chance of a hurricane. The insurance salesman lowers his price to $100, so that under expected utility the insurance remains exactly neutral.

But the subjective probability curve rises very quickly near zero, so a 1% chance may correspond to a subjective 10% chance. Now in the first frame, Prospero must decide between an objective loss of $100 with certainty (corresponding to -$300 subjective, since the value function is steepest closest to zero) or an objective loss of $10,000 with objective probability 1% (subjective 10%). Now the expected subjective utilities are -$300 if he buys, versus -$500 if he rejects. And so he buys the insurance. When we change the risk of hurricane from 50% to 1%, even though we reduce the price of the insurance in exact proportion, keeping it neutral in expected value, Prospero’s decision switches from not buying to buying.
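The three decisions above can be run through in code. This is a minimal sketch using the post’s illustrative subjective-value and subjective-probability numbers as lookup tables; they are the example’s assumed mappings, not fitted prospect theory curves.

```python
# Assumed subjective mappings from the post's example (illustrative numbers,
# not fitted prospect-theory curves).
VALUE = {0: 0, -100: -300, -5000: -4000, -10000: -5000,   # loss frame
         5000: 3000, 10000: 3500}                          # gain frame
WEIGHT = {1.0: 1.0, 0.5: 0.4, 0.01: 0.1}

def subjective_eu(prospects):
    """Sum of subjective probability times subjective value.

    prospects: list of (objective outcome, objective probability) pairs.
    """
    return sum(WEIGHT[p] * VALUE[x] for x, p in prospects)

# Frame 1 (status quo as zero), 50% hurricane, $5000 premium:
buy      = subjective_eu([(-5000, 1.0)])    # certain loss of the premium
dont_buy = subjective_eu([(-10000, 0.5)])   # 50% chance of losing everything
# buy = -4000, dont_buy = -2000: Prospero rejects the insurance.

# Frame 2 (assets written off; anything kept is a gain), same odds:
buy2      = subjective_eu([(5000, 1.0)])    # certain "gain" of the insured amount
dont_buy2 = subjective_eu([(10000, 0.5)])   # 50% chance of keeping everything
# buy2 = 3000, dont_buy2 = 1400: Prospero wants the insurance.

# Frame 1 again, but a 1% hurricane and a $100 premium:
buy3      = subjective_eu([(-100, 1.0)])
dont_buy3 = subjective_eu([(-10000, 0.01)])
# buy3 = -300, dont_buy3 = -500: Prospero buys the cheap policy.
```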

Let’s see how many previously discussed biases we can fit into this model.

Prospero’s change from rejecting the insurance when framed as gains, to buying it when framed as losses, directly mirrors the change in preferred survival strategies mentioned in Circular Altruism.

The necessity of frame-shifting between different perceptions of losses also produces the Sunk Cost Fallacy.

The greater steepness of the value function with losses as opposed to gains is not even an explanation for, but merely a mathematical representation of, loss aversion.

The leveling off of the value function that turned the huge objective difference between +$5000 and +$10000 into the teensy little subjective difference between +$3000 and +$3500 mirrors the scope insensitivity under which people show about the same level of interest in proposals to save endangered birds whether a thousand, ten thousand, or a hundred thousand birds are involved.

It may not be an official bias, but the “but there’s still a chance, right” outlook looks a lot like the sharply rising curve of the subjective probability function near zero.

And although it is not immediately obvious from the theory, some people want to link the idea of a frame to priming and anchoring-adjustment, on the grounds that when a suitable reference frame doesn’t exist any primed stimulus can help establish one.

And now, the twist: prospect theory probably isn’t exactly true. Although it holds up well in experiments where subjects are asked to make hypothetical choices, it may fare less well in the rare experiments where researchers can afford to offer subjects choices for real money (this isn’t the best paper out there, but it’s one I could find freely available).

Nevertheless, prospect theory seems fundamentally closer to the mark than simple expected utility theory, and if any model is ever created that can explain both hypothetical and real choices, I would be very surprised if at least part of it did not involve something looking a lot like Kahneman and Tversky’s model.


If a person objects to singular they, I’m having a hard time seeing them not objecting to this. So why not just use singular they? It’d make this a lot more readable.

I’ve been meaning to make a post about this small procedural note. Singular they has a long history in English as a gender-neutral third person singular pronoun. Languages tend to resist the introduction of new pronouns, as they’re “closed class”—part of the language’s grammar. It’s especially problematic that nobody can even agree on which invented pronoun to get behind!

Can’t we all just use singular they? It’s much nicer.

Okay, okay, I’ll use singular they if you all promise that the first time someone pompously chides me for using “they” in the singular, you’ll give them at least as much trouble as you’re giving me for using gender-neutral third person pronouns.

Indeed I shall so chide. It’s not so much that “ey” and the like bother me, it’s mostly that Less Wrong might become one of the first communities where people can use singular they without flinching due to vague anticipation of undue contempt. Such trivial inconveniences add up very quickly for a certain kind of mind, like mine.

Well, I don’t anticipate undue contempt when using the singular *they* on Language Log, either. :-)

Deal. I’ll even pull rank with my formal qualifications on English grammar, should they care about that.

Question for the formally qualified grammarian: When using singular “they”, which is correct?

“When a person is biased, they make mistakes”

“When a person is biased, they makes mistakes”

The second sounds absolutely horrible, but if singular “they” is really being used as a singular in the same sense as “he” or “she”, it sounds like it ought to be correct.

Consider:

When you, Yvain, are biased, you make mistakes

Clearly in the 2nd person singular, the verb displays “plural” agreement. It’s the same for “they”.

Have a gander at Language Log, where the “singular they” has been extensively discussed—mostly, apparently, because it’s something of a litmus test for whether someone is a descriptivist or a prescriptivist grammarian, with the LL crowd falling squarely in the descriptivist camp.

The short answer is that it’s grammatically plural; it’s a “plural of indeterminacy of number” primarily, and has taken on under social pressure an aspect of “plural of indeterminacy of gender”. Number one is correct.

ETA: background info.

I like singular they, but I also think ze is better than ey because it looks less like a cut-off other word.

I agree. Singular they is so awesome.

I agree that known biases can be explained by curves like those, plus the choice of a “frame”. But how do we know we’re not overfitting?

In other words: does prospect theory pay rent?

I’d want to at least see that we’re identifying some real differences between people when we fit their curves from a bunch of measurements of their behavior—I’d expect their personally fit model to describe their (held-out from fitting) future actions better than one fit over the whole population, etc.

It seems like the additional degree of freedom “well, it depends on how they chose their frame in this instance” needs to be nailed down as part of testing the model’s fit on future actions.

I am not entirely qualified to answer this objection, and I hope that one day someone who is more mathematical will make a post on the exact math involved.

Until then, I would say that the important part of prospect theory is not fitting numbers to the curves or determining the exact curve for each different person, but the discovery that the curves have the same basic shape in everyone. For example, that the slope of the losses curve is always greater than the slope of the gains curve; that the slope of both curves is steepest near zero but eventually levels out; that gains are always concave and losses are always convex. That subjective probability is steepest near zero, and also steep near one, but flatter in the middle. That decisions depend on frames, which can be changed and scaled depending on presentation.

I’m describing these visually because that’s how I think; in the paper I linked to on top, Kahneman and Tversky describe the same information in the terms of mathematical equations which expected utility follows. None of these are intuitively predictable without having done the experiment, and all of them are pretty constant across different decisions.
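For what it’s worth, the shape properties listed above can be checked numerically. The parametric family below is the one commonly fitted in the literature (Tversky & Kahneman’s 1992 follow-up paper); the particular exponents are typical fitted values, not anything derived in this thread, and the shape claims hold across the whole family.

```python
# Typical fitted parameters (Tversky & Kahneman, 1992); the shape claims
# hold for the whole family, not just these particular values.
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def v(x):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def w(p):
    """Probability weighting: inverse-S shape, steep near 0 and near 1."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Losses loom larger than equal gains (loss aversion):
assert abs(v(-100)) > v(100)
# Gains concave, losses convex (diminishing sensitivity in both directions):
assert v(200) < 2 * v(100)
assert v(-200) > 2 * v(-100)
# Small probabilities overweighted, moderate-to-large ones underweighted:
assert w(0.01) > 0.01 and w(0.5) < 0.5 and w(0.9) < 0.9
```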

I’m not sure what the status of research on applied prospect theory—figuring out the exact equations you can plug a frame and an amount of money into and predict the decision—is, but it must have had some success to win a Nobel Prize.

We already knew that losses weigh roughly 2-3x (I forget which) as heavily as gains.

It’s interesting but not surprising that people can re-orient losses and gains by framing.

It does make sense that the subjective value of monetary gains and losses should be more steeply sloped around 0, to the extent that emotional pain/reward needs to be strong enough in order to guide decisions even for small amounts of money (as in everyday transactions), but the dynamic range of the physical systems that register these feelings is limited. So we expect the magnitude of the slope to decrease as the quantities grow larger.

I wonder what happens to people who invest and manage to reframe their losses and gains as being percentage-of-total-wealth? We shouldn’t accept that the only allowed frames are those that shift the origin.

It is interesting to point out that people act by weighting outcomes with a subjective probability that consistently differs from the actual information available to them. I’d like to understand the evidence for that better, but it’s plausible—I can imagine it following from some fact about our brain architecture.

I’d be more impressed with the theory if it could really identify a characteristic of a person, even in just the domain of monetary loss/gain, such that it will predict future decisions even when that person is substantially poorer or richer than when the parameters were fit to them.

Well, in two pictures it sums up loss aversion, scope insensitivity, overestimation of high probabilities, underestimation of low probabilities, and the framing effect. There’s no information on there that corresponds to non-testable predictions, and the framing effect is a very real thing; you can often pick it for people.

It doesn’t seem to simplify anything either, since the curves have to be justified by experiment instead of some simple theory, but it is a conveniently compact way of quantitatively representing what we know. How would you make quantitative statements about how loss aversion works without something equivalent to prospect theory?

I agree that the left curve (subjective value of monetary loss/gain) shows loss aversion and maybe scope insensitivity (there’s only so much pain/reinforcement our brain can physically represent, and most of that dynamic range is reserved for routine quantities, not extreme ones), at least for money.

I’m not sure how the right curve, which I presume is used to explain the (objectively wrong under expected utility maximization) decisions/preferences people actually take when given actual probabilities, shows over- or under-estimation of probabilities. If you asked them to estimate the probability, maybe they’d report accurately—I presumed that’s what the x axis was. If I use another interpretation, the graph may show under-estimation of low probabilities, but ALSO shows under-estimation of high probabilities (not over-estimation). Could you explain your interpretation?

Otherwise, I agree. These curves take these shapes because they’re fit to real data.

I’m curious if the curves derived for an objective value like money, are actually predictive for other types of values (which may be difficult to test, if the mapping from circumstance to value is as personally idiosyncratic as utility).

Ten years too late, but I’m certain he has it mixed up. The graph clearly shows overestimation of extremely low probabilities (i.e., 1% feels like 10%).

Strongly agree. This feels like post hoc description along the lines of psychoanalysis.

I have a paper in press at the Journal of Applied Psychology that used both hypothetical scenarios and real money in prospect theory experiments. We looked at whether people shifted their reference points post hoc—after they had learned the outcomes of their decisions. Our results showed that people shifted their reference points to either maintain positive moods or repair negative moods.

If you are interested, you can see the paper here:

http://faculty.washington.edu/mdj3/Johnson,%20Ilies,%20&%20Boles%20%28in%20press%29.pdf

It can be interesting to look at prospect theory curves that are based on experimental data. Here are the best fit curves for 10 subjects in one study, Gonzalez & Wu (1999), for the value function for gains (v) and the probability weighting function (w). Each subject in the study made 165 (hypothetical) decisions about gambles with various possible outcomes and probabilities, in the domain of gains only (no losses).

Especially when you compare subject 9 to everyone else.

The images are not working anymore.

The wayback machine has a stored copy here.

(RETRACTED) This is an official bias, known as the certainty effect. (/RETRACTED)

EDIT (thanks, Vaniver): This is closely related to the certainty effect, which describes the sharp change in weighting near p=1 when an outcome switches from a sure thing to merely a likely possibility. The sharp change in weighting near p=0 is similar, as an outcome switches from an impossibility to merely an unlikely possibility, but I don’t think it has a handy name.

That looks like something else, actually- that’s the sharply falling weight near 1, as uncertain things aren’t as valuable as certain things. Yvain is discussing when people model a tiny chance of winning as much larger- as vividly displayed by the lottery, for example.

You’re right; comment retracted/edited. I’d thought that it referred to the sharp changes in weight near 1 and 0, but a little bit of googling confirms that the term is only applied to the change near 1.


This post misuses the term “utility”. Expected utility theory does not treat utility as linear in money, as you suggest.

See http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem, or perhaps also

http://lesswrong.com/lw/244/vnm_expected_utility_theory_uses_abuses_and/

The main descriptive difference between prospect theory and EU theory is that for monetary decisions, EU theory uses one curve (utility function), whereas prospect theory uses two curves (a value function and weight function) as well as a framing variable… it’s about three times as suspect for overfitting, so I think I’ll wait until it pays a little more rent :)

The other big difference is that the prospect theory value function is defined relative to a reference point (which typically represents the status quo) while the EU theory utility function is defined based on total wealth. So (as jimmy said) the nonlinearity of the prospect theory curve has a big effect on pretty much any decision (since any change from the current state is taking you through the curviest part of the curve), but the nonlinearity of EU theory curve is relatively minor unless the stakes are large relative to your total wealth. Under those conditions, EU theory (based on the utility of total wealth) is essentially equivalent to expected value.

Let’s say that you have $30,000 in total wealth and you’re given a choice of getting $10 for sure or getting $21 with p=.5. On the EU curve, the relationship between U($30,000), U($30,010), and U($30,021) should be nearly linear, so with any reasonable curve EU theory predicts that you prefer the 50% chance at $21 (indeed, you’d even prefer a 50% chance at $20.01 to $10 for sure as long as your curve is something like the square root function or even the natural log function). But on the prospect theory curve, V($0), V($10), and V($21) are very nonlinear, so even if we just treat probabilities as probabilities (rather than using the probability weighting function) prospect theory predicts that you’ll prefer the certain $10 (at least, it will if the V(x) curve is the square root function, or x^.88 as is commonly used).
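A quick numerical check of this comparison, assuming U(W) = sqrt(W) as the utility-of-wealth curve and the commonly used v(x) = x^0.88 for the prospect theory value function, with probabilities taken at face value as the comment does:

```python
import math

WEALTH = 30_000

# Expected-utility agent with (assumed) utility of total wealth U(W) = sqrt(W):
U = math.sqrt
eu_sure   = U(WEALTH + 10)
eu_gamble = 0.5 * U(WEALTH) + 0.5 * U(WEALTH + 21)
assert eu_gamble > eu_sure   # near-linearity at this wealth: take the gamble

# Prospect-theory agent with v(x) = x**0.88 on gains from the $0 reference
# point (no probability weighting applied, matching the comment):
v = lambda x: x ** 0.88
pt_sure   = v(10)
pt_gamble = 0.5 * v(21)
assert pt_sure > pt_gamble   # curvature near the reference point: take the sure $10
```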

When people are actually given choices like $10 for sure vs. $21 w. p=.5, they tend to choose $10 for sure just as prospect theory predicts (and EU theory does not). That’s paying rent in anticipated experiences. Prospect theory was developed by asking people a bunch of questions like that one, seeing what they did, and fitting curves to the data so that predictions about hundreds of similar decisions could be made based on a model with only a few parameters. That research produced a lot of data which was inconsistent with expected value (which, for these types of gambles, implies that it was also inconsistent with EU theory based on utility-of-wealth) and so Kahneman & Tversky developed a relatively simple model that did fit the empirical data, prospect theory.

That’s what Yvain and I are calling framing.

What you’re calling EU theory is a very restricted version of EU theory, where you require utility to be a function of total monetary wealth, or total material wealth. You might call it “Expected Utility of Wealth” theory. EU theory is actually much more general, and assigns utility to *outcomes* rather than amounts of money or even lists of possessions. This is all discussed in http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem and

http://lesswrong.com/lw/244/vnm_expected_utility_theory_uses_abuses_and/

But for predictive purposes, EU theory is so ridiculously general (there are so many situational parameters) that, as far as anyone knows, it has almost no predictive power. So for the purposes of prediction, I think you’re justified in talking about “EUW” theory, because without a highly restrictive assumption like utility being a function of wealth, EU theory has no chance of making predictions.

Nonetheless, I want to encourage you, and anyone else, to make explicit the assumption “utility is a function of wealth” when you’re making it. My reason is that, in toy decision-theory problems, EU theory is usually part of the framework, and it’s a reasonable framework provided we don’t impose the restrictions that make it predictively meaningful and false.

Utility is generally accepted to be differentiable in money, which means that it’s approximately linear over amounts that are insignificant compared to your lifetime earnings. If you use a non-linear utility to explain risk aversion for a small amount of money, and extend this to large amounts of money, it results in absurdly huge utility falloff. I remember someone posted an article on this. I can’t seem to find it at the moment.

Unless you have a good estimate of your future earnings and can borrow up to that at low interest rates, I think “amounts that are insignificant compared to your current liquidity” might be a slightly more rational metric. Note also that any explanation of human risk aversion (as opposed to rational risk aversion) is trying to explain behaviors that evolved during a time when “borrowing at low interest rates” wasn’t really an option. If a failed risk means you starve to death next year, it doesn’t matter how copious a quantity of food you otherwise would have acquired in subsequent years.

http://lesswrong.com/lw/9oe/risk_aversion_vs_concave_utility_function/5svv

Are you looking for this?

I recommend Kahneman & Tversky’s 1984 paper Choices, Values, and Frames (republished as chp 1 in their book of the same name) as a more readable (and shorter) introduction to prospect theory than their 1979 paper which Yvain has linked. It contains several examples demonstrating consequences of the shape of the functions and exploring the possibilities for framing effects.

I’m learning about utility theory just now, but I hadn’t heard about prospect theory before. Thanks for posting it.

I know the main point of the post was to introduce prospect theory, but I wanted to add a comment about standard utility theory. In the text you write that standard utility theory predicts Prospero should be indifferent between a certain $5,000 and a 50-50 chance of either $0 or $10,000. This isn’t quite right: maximising expected utility isn’t the same as maximising expected wealth.

In standard utility theory you have a utility function U(W), so Prospero has the choice between U(5,000) and a 50-50 chance of U(0) or U(10,000). The expected utility need not be the same for both cases. In fact, most investors are assumed to have a utility function such that each additional dollar adds less utility than the previous one (diminishing marginal utility of wealth). E.g. $10 adds less utility to a millionaire than it would to the same person if he were broke and homeless. An investor with diminishing marginal utility of wealth would always take the insurance since, taking the certain $5,000 as the base case, the 50% chance of losing that $5,000 would cost more utility than the 50% chance of the gain of an extra $5,000 would add.

In this case, what is the difference between standard theory and prospect theory? Taking the first graph, you could regard this as a plot of a standard utility function with wealth on the x axis and utility on the y axis. The differences seem to be:

- in the second plot, it is shown that a prospect theory agent seems to behave as if small probabilities are larger than they actually are, and as if large probabilities are smaller than they actually are;
- the fact that Prospero’s utility function is different depending on how the question is framed;
- the shape of the utility function has the form shown in the first graph, whereas in standard utility theory it can take a wider variety of possible shapes.

One difference is that in standard utility theory, while utility doesn’t have to be linear in money, if you ‘zoom in’ enough it is very close.

In prospect theory the shape doesn’t change. It generally makes sense to be risk averse when you’re risking amounts near your total wealth, but prospect theory says that you’ll be risk averse at the $1 level too.

An excellent introduction, and I love how you’ve tied it in with LW discussion on cognitive biases.

Also check out temporal motivation theory (2006), which tries to integrate (cumulative) prospect theory with other theories of human behavior.

Construal level theory is on that fringe, for example. Or as it’s more commonly known, Near/Far. Unfortunately I didn’t find anything in that area to be particularly compelling, but it’s probably fertile ground for using Bayes to go where science can’t. I vaguely remember using those tools to cast an interesting light on some aspects of moral psychology, even if the papers themselves were meh. That said I could easily have missed the best papers or best insights.

One thing I’m a bit confused about: How would weighted probabilities work when there are more than two possible outcomes? “sum Probability(x) = 1” does not imply “sum Weighted Probability(x) = 1”, and furthermore, you can get a different weighted probability distribution by grouping similar outcomes and applying the weighting stepwise, first to groups of similar outcomes, and then to specific outcomes within the groups.

I think there’s probably an interesting point in there but I can’t quite parse the text. Can you give an example?

Suppose there is a 90% chance of maintaining what the prospect theory agent perceives as the status quo, which means a 10% probability of something different happening, which looks like it might correspond to a weighted probability of around 25% according to the graph. But now suppose that there are 10 equally likely (1%) possible outcomes other than status quo. Each of the 10 possibilities considered in isolation will have a weighted probability of 10% according to the graph, even though the weighted probability of anything other than the status quo happening is only 25%.

You’re getting into advanced questions; prospect theory was initially formulated to only deal with gambles with 2 (or fewer) possible outcomes so that it didn’t have to deal with this sort of stuff. Eventually Tversky & Kahneman (1992) came out with a more complicated version of the theory, Cumulative Prospect Theory, which addressed this problem by being rank-dependent. Looking at the graph of w(p), basically what you do is rank the outcomes in order of their value, line them up along the probability axis in order giving each one a width equal to its probability, and weight each one by the change in w(p) over its width. So if the 10 outcomes each with probability .01 are all losses, then the largest loss gets the weight w(.01), the next-largest loss gets the weight w(.02)-w(.01), the next gets the weight w(.03)-w(.02), … and the last one gets w(.10)-w(.09). So the total weight given to the 10 outcomes is still only w(.10), just as it would be if they were all combined into one outcome.
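The rank-dependent bookkeeping described above can be sketched as follows. The weighting function and its exponent here are the Tversky & Kahneman (1992) form with a typical fitted loss-side value, but the telescoping point holds for any inverse-S-shaped w:

```python
# Cumulative weighting for ten equally likely losses, ranked from the largest
# loss down. DELTA is a typical fitted loss-side exponent (Tversky &
# Kahneman, 1992).
DELTA = 0.69

def w(p):
    return p ** DELTA / (p ** DELTA + (1 - p) ** DELTA) ** (1 / DELTA)

probs = [0.01] * 10        # ten outcomes, each with probability 1%
weights, cum = [], 0.0
for p in probs:
    weights.append(w(cum + p) - w(cum))   # marginal change in w at this rank
    cum += p

# The ten weights telescope to w(0.10), the weight a single combined
# 10% outcome would get...
assert abs(sum(weights) - w(0.10)) < 1e-9
# ...whereas naively weighting each outcome separately would overcount:
assert 10 * w(0.01) > w(0.10)
```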

For more of the nitty gritty (like separating gains & losses), you can see the Tversky & Kahneman (1992) paper, or I found the explanation in this Fennema & Wakker (1997) paper easier to understand.

Tversky, A. & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5: 297–323.

Did you mean?

I’d upvote this twice if I could.

The weighted probability curve reminded me of some other research I first heard of a couple of years ago, to do with human choices being made by comparing them to their neighbouring choices, rather than on an absolute scale of utility. The result of this being that people find it hard to appraise things on more than five levels of gradation (“worse”, “this”, “better”, and intervals between them). This provides a plausible explanation for why we rank so many things out of five.

I looked for the research in question, and found Decision by Sampling. Having now had a look at the actual paper, it actually references prospect theory twice. I really should follow these things up more.

I think the simpler explanation for why we rank things out of five is that 5 is half of 10.

I’ve seen some psych research using 7 options—does anyone know if there’s a reason for that? Do they know what they’re doing more than the people who rank things using 5?

Fascinating. I’m amazed that nobody has brought this up here before—this is something I should have read about years ago.

What happens if you graph subjective expected value against probability and outcome? It looks like a lot of it could cancel, giving something close to true expected value.

I think the distinction between decisions (as an end result) and other brain processes can be useful in fields like behavioral economics in the short term, as it reaches results quite fast. But the complexity of decisions makes me think of the examples of unifications in physics. Perhaps if all decisions (not only the final output) are treated as the same phenomenon, aspects like framing can be understood as altering sub-decisions via their constant value functions, leading to a different decision later in time (which just happens to be the output decision). The idea is that understanding the building blocks of decisions (on a level smaller than final outputs and bigger than single neuron firings) might provide a better model for decision making.