Right, it seems kind of strange to declare that you're considering only states of the world in your decisions, but then to treat judgments of right and wrong as a deontological layer on top of that, where you consider whether the consequentialist rule was followed correctly. But that does seem to be a mainstream version of consequentialism. As far as I can tell, it mostly leads to convoluted, confused-sounding arguments like the above and the linked talk by Neiladri Sinhababu, but maybe I'm missing something important.
I think it leads to very confusing and technical arguments if free will is assumed. If not, there's basically no reason to morally judge others (other than the learning potential for future decisions).
I think the mainstream version of consequentialism, if I understand what you are saying correctly, can still be followed for personal decisions as they happen. Or, when making a decision, you personally do your best to optimize for the future. That seems quite reasonable to me, it's just really hard to understand and criticize from an outside perspective.
You come to what is more or less the right consequentialist answer in the end, but it seems to me that your path is needlessly convoluted. Why are we judging past actions? Generally, the reason is to give us insight into and perhaps influence future decisions. So we don't judge the lottery purchase to have been good, because it wouldn't be a good idea to imitate it (we have no way to successfully imitate "buy a winning lottery ticket" behavior, and imitating "buy a lottery ticket" behavior has poor expected utility, and similarly for many broader or narrower classes of similar actions), and so we want to discourage people from imitating it, not encourage them. If we're being good consequentialists, what other means could it possibly be appropriate to use in deciding how to judge other than basing it on the consequences of judging in that way?
your path is needlessly convoluted
Agreed. This really wasn't my best piece; I figured it would be better to publish it than not, and was hoping it would turn out better. If the response is good I may rewrite it. However, I do feel it is a complicated issue, so it could require quite a bit of text to explain no matter how good the writing style.
Why are we judging past actions?
The first reason that comes to my mind is to say things like "X is a bad person", or "Y cheated on this test, which was bad", etc. If we are to evaluate them consequentially, I'm making the argument that seeing things from their point of view is exceedingly difficult. It's thus very difficult to ask if another person is acting in a 'utilitarian' way, especially if that person claims to be.
So we don't judge the lottery purchase to have been good,
In regard to the lottery purchase, the question is what does 'good' mean in the first place. I'm saying it is strongly coupled to a specific reference frame, and it's hard to make it an 'objective good' of any kind. However, it can be used to more clearly talk about specific kinds of 'good'. For instance, perhaps in this case if we used the 'reference frame' of our audience, we could explain the situation to them well, discouraging them (assuming a realistic audience).
If we're being good consequentialists, what other means could it possibly be appropriate to use in deciding how to judge other than basing it on the consequences of judging in that way?
I guess here the question is what it means to 'judge'. If 'judging' just means saying what happened (there was a person, he did this, this happened), then yes. If it is attempting to understand the decision making of the person in order to understand how 'morally good' that person is, or can be expected to be, those are different questions.
Say the player thought that they were likely to win the lottery, and that it was therefore a good purchase. This may seem insane to someone familiar with probability and the lottery system, but not everyone is familiar with these things.
I would say this person made a good decision with bad information.
Perhaps we should attempt to stop placing so much emphasis on individualism and just try to do the best we can while not judging others nor other decisions much.
There are lots of times when it's important to judge people e.g. for hiring or performance reviews.
I would say this person made a good decision with bad information.
I would agree that they made a good decision, a good decision being defined as 'a decision which optimizes expected value without information about the outcome'. My point was to clarify what 'good decision' meant.
There are lots of times when it's important to judge people e.g. for hiring or performance reviews.
In this case I was attempting to look at a very simple example (the lottery) so we could make moral claims about individuals. This is different from general performance. On that note though, the question of trying to separate what in an individual's history they were or were not responsible for would be interesting for hiring or performance reviews, but it definitely is a tricky question.
Reference Frames for Expected Value
Puzzle 1: George mortgages his house to invest in lottery tickets. He wins and becomes a millionaire. Did he make a good choice?
Puzzle 2: The U.S. president questions whether he should bluff a nuclear war or concede to the USSR. He bluffs and it just barely works. Although there were several close calls with nuclear catastrophe, everything works out okay. Was this ethical?
One interpretation of consequentialism is that decisions that produce good outcomes are good decisions, rather than decisions that produce good expected outcomes.12 One would be ethical if their actions end up with positive outcomes, disregarding the intentions of those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plan would have done a very ‘morally good’ action.3 This general view seems to be surprisingly common.4
This seems intuitively strange to many; it certainly does to me. Instead, ‘expected value’ seems a better basis both for making decisions and for judging the decisions made by others. However, while ‘expected value’ can be useful for individual decision making, I make the case that it is very difficult to use for judging other people’s decisions in a meaningful way.5 This is because ‘expected value’ is typically defined in reference to a specific set of information and intelligence rather than an objective truth about the world.
Two questions to help guide this:
- Should we judge previous actions based on ‘expected’ or ‘actual’ value?
- Should we make future decisions to optimize ‘expected’ or ‘actual’ value?
I believe these are in a sense quite simple, but require some consideration of definitions.6
Optimizing Future Decisions: Actual vs. Expected Value
The second question is the easier of the two, so I’ll begin with it. The simple answer is that this is a question of defining ‘expected value’; once we do so, the question mostly goes away.
There is nothing fundamentally different between expected value and actual value. A fairer comparison may be between ‘expected value from the perspective of the decision maker’ and ‘expected value from a later, more accurate perspective’.
Expected value converges on actual value with lots of information. Said differently, actual value is expected value with complete information.
In the case of an individual purchasing lottery tickets successfully (Puzzle 1), the ‘actual value’ is still not exact from our point of view. While we may know how much money was won, or what profit was made, we don’t know what the counterfactual would have been. It is still theoretically possible that in the worlds where George didn’t purchase the lottery tickets, he would have been substantially better off. While the fact that we have imperfect information doesn’t matter too much, I think it demonstrates that presenting a description of the outcome as ‘actual value’ is incomplete. ‘Actual value’ exists only theoretically, even after the fact.7
So the question becomes: ‘should one make a decision to optimize value using the information and knowledge available to them, or using perfect knowledge and information?’ Obviously, ‘perfect knowledge’ is inaccessible to them (otherwise ‘expected value’ and ‘actual value’ would be the same thing). It should be quite apparent that the best one can do (and should do) is make the best decision using their available information.
This question is similar to asking ‘should you drive your car as quickly as your car can drive, or much faster than your car can drive?’ Obviously you may like to drive faster, but that’s by definition not an option. Another question: ‘should you do well in life or should you become an all-powerful dragon king?’
Judging Previous Decisions: Actual vs. Expected Value
Judging previous decisions can get tricky.
Let’s study the lottery example again. A person purchases a lottery ticket and wins. For simplicity, let’s say the decision to purchase the ticket was done only to optimize money.
The question is, what is the expected value of purchasing the lottery ticket? How does this change depending on information and knowledge?
In general, purchasing a lottery ticket can be expected to be a net loss in earnings, and thus a bad decision. However, if one were sure they would win, it would be a pretty good idea. Given the knowledge that the player won, the player made a good decision: winning the lottery is clearly better than having sat that drawing out.
More interesting is considering the limitation not in information about the outcome but in knowledge of probability. Say the player thought that they were likely to win the lottery, and that it was therefore a good purchase. This may seem insane to someone familiar with probability and the lottery system, but not everyone is familiar with these things.
From the point of view of the player, the lottery ticket purchase had net-positive expected utility. From the point of view of a person with knowledge of the lottery and/or statistics, the purchase had net-negative expected utility. From the point of view of either group, once they know that the ticket won, it was a net-positive decision.
| | No Knowledge of Outcome | Knowledge of Outcome |
|---|---|---|
| ‘Intelligent’ Person with Knowledge of Probability | Negative | Positive |
| Lottery Player | Positive | Positive |
Expected Value of purchasing a Lottery Ticket from different Reference Points
To make things a bit more interesting, imagine that there’s a genius out there with a computer simulation of our exact universe. This person can tell which lottery ticket will win in advance because they can run the simulations. To this ‘genius’ it’s obvious that the purchase is a net-positive outcome.
| | No Knowledge of Outcome | Knowledge of Outcome |
|---|---|---|
| Genius | Positive | Positive |
| ‘Intelligent’ Person with Knowledge of Probability | Negative | Positive |
| Lottery Player | Positive | Positive |
Expected Value of purchasing a Lottery Ticket from different Reference Points
So what is the expected value of purchasing the lottery ticket? The answer is that the ‘expected value’ is completely dependent on the ‘reference frame’, or a specific set of information and intelligence. From the reference frame of the ‘intelligent person’ this was low in expected value, so was a bad decision. From that of the genius, it was a good decision. And from the player, a good decision.
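The reference-frame dependence above can be sketched numerically. In this toy model (all prices, jackpots, and probabilities are made up for illustration), each frame is simply a different believed probability of winning:

```python
# Sketch: expected value of a lottery ticket from different "reference
# frames" (sets of information and beliefs). All numbers are invented
# for illustration, not taken from any real lottery.

TICKET_PRICE = 2.0
JACKPOT = 1_000_000.0

def expected_value(p_win):
    """EV of buying one ticket, given a believed probability of winning."""
    return p_win * JACKPOT - TICKET_PRICE

# Each reference frame amounts to a different believed win probability.
frames = {
    "intelligent person (true odds)": 1e-8,  # roughly 1 in 100 million
    "lottery player (overconfident)": 0.5,   # believes a win is likely
    "genius with a simulation":       1.0,   # knows this ticket wins
}

for name, p_win in frames.items():
    ev = expected_value(p_win)
    verdict = "good" if ev > 0 else "bad"
    print(f"{name}: EV = {ev:+,.2f} -> {verdict} decision")
```

The same purchase comes out as a bad decision in one frame and a good decision in the other two, matching the tables above.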
Judging
So how do we judge this poor (well, soon rich) lottery player? They made a good decision relative to the results, relative to the genius’s frame, and relative to their own knowledge. Should we say ‘oh, this person should have had slightly more knowledge, but not too much knowledge, and thus they made a bad choice’? What does that even mean?
Perhaps we could judge the player for not reading into lottery facts before playing. Wasn’t it irresponsible of them to fall for such a simple fallacy? Or perhaps the person was ‘lazy’ for not learning probability in the first place.
Well, things like these seem like intuitions to me. We may have the intuition that the lottery is a poor choice. We may find facts that prove this intuition accurate. But the gambler may not have these intuitions. It seems unfair to consider any intuitions ‘obvious’ to those who do not share them.
One might also say that the gambler probably knew it was a bad idea, but let his or her ‘inner irrationalities’ control the decision process. Perhaps they were trying to take an ‘easy way out’ of some sort. However, these seem quite judgmental as well. If a person experiences strong emotional responses (fear, anger, laziness), those inner struggles would change their expected value calculation. It might be a really bad, heuristically driven ‘calculation’, but it would be the best they had at that time.
Free Will Bounded Expected Value
We are getting to the question of free will and determinism. After all, if there is any sort of free will, perhaps we have the ability to make decisions that are sub-optimal by our expected value functions. Perhaps we commonly do so (else it wouldn’t be much in the sense of ‘free’ will.)
This would be interesting because it would imply an ‘expected result’ that the person should have calculated, even if they didn’t actually do so. We need to understand the person’s actions and understanding, not in terms of what we know, or what they knew, but what they should have figured out given their knowledge.
This would require a very well specified Free Will Boundary of some sort: a line around a few thought processes, parts of the brain, and resource constraints, which together could produce an optimal expected value calculation. Anything less than this ‘optimal given the Free Will Boundary’ expected value calculation would be fair game for judging.
Conclusion: Should we Even Judge People or Decisions Anyway?
So, deciding to make future decisions based on expected value seems reasonable. The main question in this essay, the harder question, is whether we can judge previous decisions based on their respective expected values, and how to possibly come up with the relevant expected values to do so.
I think that we naturally judge people. We have old and modern heroes and villains. Judging people is simply something that humans do. However, I believe that on close inspection this is very challenging if not impossible to do reasonably and precisely.
Perhaps we should attempt to stop placing so much emphasis on individualism and just try to do the best we can while not judging others nor other decisions much. Considerations of judging may be interesting, but the main takeaway may be the complexity itself, indicating that judgments are very subjective and incredibly messy.
That said, it can still be useful to analyze previous decisions or individuals. That seems like one of the best ways to update our priors of the world. We just need to remember not to treat it personally.
1. Dorsey, Dale. “Consequentialism, Metaphysical Realism, and the Argument from Cluelessness.” University of Kansas Department of Philosophy. http://people.ku.edu/~ddorsey/cluelessness.pdf ↩
2. Sinhababu, Neiladri. “Moral Luck.” TEDx presentation. http://www.youtube.com/watch?v=RQ7j7TD8PWc ↩
3. This is assuming the terrorists are trying to produce ‘disutility’ or a value separate from ‘utility’. I feel like from their perspective, maximizing an intrinsic value dissimilar from our notion of utility would be maximizing ‘expected value’. But analyzing the morality of people with alternative value systems is a very different matter. ↩
4. These people tend not to like consequentialism much. ↩
5. I don’t want to impose what I deem to be a false individualistic appeal, so consider this to mean that one would have a difficult time judging anyone at any time except for their spontaneous consciousness. ↩
6. I bring them up because they are what I considered and have talked to others about before understanding what makes them frustrating to answer. Basically, they are nice starting points for getting towards answering the questions that were meant to be asked instead. ↩
7. This is true for essentially all physical activities. Thought experiments or very simple simulations may be exempt. ↩
When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn't make sense for me to search Audible for titles I wanted to read: the answer was basically always "no." So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to.
These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible.
Some books that I really wanted to listen to are available in ebook but not audiobook, so I used this process to turn them into audiobooks. That only barely works, sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it's harder for my brain to stay engaged as I'm listening, especially when I'm tired. I might give up on that process, I'm not sure.
Most but not all of the books are selected because I expect them to have lots of case studies in "how the world works," specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.
I've definitely found something similar. I've come to believe that most 'popular science', 'popular history', etc. books are on Audible, but almost anything with equations or code is not.
The 'Great Courses' series has been quite fantastic for me for learning about the social sciences. I found out about it recently.
Occasionally I try podcasts for very niche topics (recent Rails updates, for instance), but have found them to be rather uninteresting in comparison to full books and courses.
Conservation of credit/blame?
If we give Brin and Page full credit for everything Google did for three years, then we must give zero credit to everyone else for those outcomes.
Not exactly. Counterfactual credit on individuals isn't additive. For instance, we can say that almost every piece of the car is counterfactually essential: remove the piece, and you lose the entire value of the car.
That said, in this case, one would have to wonder what these employees would have done otherwise. I imagine the counterfactual value of the founders to Google is roughly the value of Google minus the value that the resources Google used (money and lots of talent) would have produced had they been used elsewhere.
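To illustrate the non-additivity: in a toy model where a car is worthless unless every part is present, each part's counterfactual credit equals the whole car's value, so the credits sum to far more than the total. One standard additive alternative (my addition, not mentioned above) is the Shapley value, which averages marginal contributions over all orderings. A sketch with made-up parts and values:

```python
# Sketch: counterfactual credit vs. Shapley value for a toy "car" that
# only has value when every part is present. Parts and values are invented.
from itertools import combinations
from math import factorial

parts = ["engine", "wheels", "frame"]

def value(coalition):
    # The car works (value 100) only if every part is present.
    return 100.0 if set(coalition) == set(parts) else 0.0

# Counterfactual credit: value lost if one part is removed from the full set.
# Each part gets credit 100, so the credits sum to 300, not 100.
counterfactual = {
    p: value(parts) - value([q for q in parts if q != p]) for p in parts
}

def shapley(part):
    """Shapley value: weighted average marginal contribution of `part`."""
    n = len(parts)
    others = [p for p in parts if p != part]
    total = 0.0
    for k in range(len(others) + 1):
        for coal in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(list(coal) + [part]) - value(list(coal)))
    return total

# Shapley credits split the 100 evenly (100/3 each) and sum to the total.
```

Counterfactual credits here sum to three times the car's value, while the Shapley credits are additive by construction; which notion is appropriate depends on what the judgment is for.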
Why not just use parenthetical probabilities where it's useful? I'm pretty sure (80%) that this is more likely to catch on than your proposal.
I feel like probabilities are longer (80 as opposed to 3) and in some ways contain more information (1 out of 100) than we can really state ~i1. In effect, we have few significant figures.
That said, the understandability of percentages may outweigh this benefit (-~i1).
You want the translation from numbers to certainty to be:
0->1/2=50%
1->2/3=67%
2->4/5=80%
3->8/9=89%
4->16/17=94%
5->32/33=97%
6->64/65=98.5%
7->128/129=99.2%
8->256/257=99.6%
9->512/513=99.8%
10->1024/1025=99.9%
n->2^n/(2^n+1)
Here, the percent p is given the number n, such that it would take n more bits of information to convince you that you are wrong than it would take to convince you that you are correct. These numbers are very natural, and for some purposes, it would be better to use these numbers than to use the percents.
Notice that this luckily fits with your number system (maybe you planned this), except that 99.9% is really, really certain compared to what you would expect to come one number after 95%. I know that it would be very hard to get that kind of detail in the intermediate values between 4 and 5, but even if you only ever say 0, 1, 2, 3, 4, 10, I still think it is worth it to emphasize the big difference between 95% and 99.9%.
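The mapping above can be written directly; n is just the odds expressed in bits (the function names here are mine):

```python
import math

def certainty(n):
    """Probability for confidence level n: p = 2^n / (2^n + 1).
    Equivalently, n = log2(p / (1 - p)), the odds in bits."""
    return 2**n / (2**n + 1)

def level(p):
    """Inverse mapping: bits of confidence implied by probability p."""
    return math.log2(p / (1 - p))

# certainty(0) = 0.5, certainty(2) = 0.8, certainty(10) ≈ 0.999,
# matching the table above.
```

This also makes the in-between values trivial: non-integer n works fine, e.g. level(0.95) is about 4.25.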
Very good point. I like how this could go higher or have in-between values quite easily. In retrospect an equation like this makes much more sense than an intuitive guess, what I wrote down was mostly to use as a start.
I'm not sure if this is exactly the perfect equation for this, given that I think I'd probably want them to be a bit more spaced out if they went to 10 (going further in confidence past 99.9% perhaps) ~i1.
While this sounds like it would be useful, it would also turn a lot of people off to the site ~i4
I'm not suggesting that we make this a mandated LessWrong policy, but I think it may be fun to play with personally ~i4. I imagine that something more sensical would be created after some thought and work ~i2.
Also it could make a lot of sense for an organization to accept a standard like this rather than a blog community. For instance, many startups or EA orgs already use a lot of internal jargon ~e3i2, so something like this doesn't seem like a clear negative in that light ~i2.
Concerning the appearance aspect, it may be possible to hide the phrases (similar to the markdown mention earlier) with a small symbol that one could "inspect" when interested.
Not sure what you mean here. Future is never actual, only expected (or, more often, unexpected).
This just has to do with a question that was poorly posed to begin with. When one makes decisions, should they optimize for 'expected value' or 'actual value'? The answer is that 'actual value' is obviously unknowable, so it's a moot question. That said, I've discussed this with people who weren't sure, so I wanted to make it clear.
I call these "future decisions" to contrast them with 'past decisions' which can't really be made but judged, as they have already occurred.