This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.
Welcome to the Rationality reading group. This fortnight we discuss Part V: Value Theory (pp. 1359-1450). This post summarizes each article of the sequence, linking to the original LessWrong post where available.
V. Value Theory
264. Where Recursive Justification Hits Bottom - Ultimately, when you reflect on how your mind operates, and consider questions like "why does Occam's Razor work?" and "why do I expect the future to be like the past?", you have no other option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.
265. My Kind of Reflection - A few key differences between Eliezer Yudkowsky's ideas on reflection and the ideas of other philosophers.
266. No Universally Compelling Arguments - Because minds are physical processes, it is theoretically possible to specify a mind which draws any conclusion in response to any argument. There is no argument that will convince every possible mind.
267. Created Already in Motion - There is no computer program so persuasive that you can run it on a rock. A mind, in order to be a mind, needs some sort of dynamic rules of inference or action. A mind has to be created already in motion.
268. Sorting Pebbles into Correct Heaps - A parable about an imaginary society that has arbitrary, alien values.
269. 2-Place and 1-Place Words - It is possible to talk about "sexiness" as a property of an observer and a subject. It is equally possible to talk about "sexiness" as a property of a subject alone, as long as each observer can have a different process for determining how sexy someone is. Failing to do either of these will cause you trouble. (A short code sketch after this list illustrates the distinction.)
270. What Would You Do Without Morality? - If your own theory of morality was disproved, and you were persuaded that there was no morality, that everything was permissible and nothing was forbidden, what would you do? Would you still tip cabdrivers?
271. Changing Your Metaethics - Discusses the various lines of retreat that have been set up in the discussion on metaethics.
272. Could Anything Be Right? - You do know quite a bit about morality. It's not perfect information, surely, or absolutely reliable, but you have someplace to start. If you didn't, you'd have a much harder time thinking about morality than you do.
273. Morality as Fixed Computation - A clarification about Yudkowsky's metaethics.
274. Magical Categories - We underestimate the complexity of our own unnatural categories. Relying on those intuitive categories doesn't work when you're trying to build an FAI.
275. The True Prisoner's Dilemma - The standard visualization for the Prisoner's Dilemma doesn't really work on humans. We can't pretend we're completely selfish.
276. Sympathetic Minds - Mirror neurons are neurons that fire both when performing an action oneself and when watching someone else perform the same action - for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like "boredom", the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Most such agents would regard any other agents in their environment as a special case of complex systems to be modeled or optimized; they would not feel what those agents feel.
277. High Challenge - Life should not always be made easier for the same reason that video games should not always be made easier. Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge. One needs games that are fun to play and not just fun to win. Life's utility function is over 4D trajectories, not just 3D outcomes. Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved - the traveller, the destination and the journey.
278. Serious Stories - Stories and lives are optimized according to rather different criteria. Advice on how to write fiction will tell you that "stories are about people's pain" and "every scene must end in disaster". I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming. Stories in which nothing ever goes wrong are painful to read; would a life of endless success have the same painful quality? If so, should we simply eliminate that revulsion via neural rewiring? Pleasure probably does retain its meaning in the absence of pain to contrast with it; they are different neural systems. The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure. One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain. Another approach would be to eliminate pain entirely. I feel like I prefer the former approach, but I don't know if it can last in the long run.
279. Value is Fragile - If things go right, the future will be an interesting universe, one that would be incomprehensible to the universe of today. There are many things humans value such that, if you got everything else right when building an AI but left out that one thing, the future would wind up looking dull, flat, pointless, or empty. Any future not shaped by a goal system with detailed, reliable inheritance from human morals and metamorals will contain almost nothing of worth.
280. The Gift We Give to Tomorrow - How did love ever come into the universe? How did that happen, and how special was it, really?
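As a side note on entry 269, here is a minimal sketch (not from the book) of the 2-place vs. 1-place distinction in code; all names in it are illustrative assumptions.

```python
# "Sexiness" as a 2-place word (observer, subject) versus a curried 1-place word
# (subject only, with a particular observer baked in). All names are made up.

def sexiness_2place(observer, subject):
    """2-place: a property of the (observer, subject) pair."""
    return observer.rate_attractiveness(subject)   # hypothetical method

def make_sexiness_1place(observer):
    """Fix a particular observer, yielding a 1-place property of the subject alone."""
    return lambda subject: sexiness_2place(observer, subject)

# sexiness_fred = make_sexiness_1place(fred)
# sexiness_fred(alice) == sexiness_2place(fred, alice)   # same value, different framing
```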
This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
The next reading will cover Part W: Quantified Humanism (pp. 1453-1514) and Interlude: The Twelve Virtues of Rationality (pp. 1516-1521). The discussion will go live on Wednesday, 23 March 2016, right here on the discussion forum of LessWrong.
I assume that when you write '10 +/- 5', you mean that Option A's utility could be anywhere in the interval from 0 to 10.
You can transform this into a decision problem under risk. Assuming that, say, for Option A you are not treating values near the middle of the interval (such as 6) as more probable than values at its edge (such as 10), since your problem statement did not indicate anything like that, you can assign an expected utility to each option by placing a uniform prior over its set of possible utilities. For example, if Option A's possible utilities are the eleven integers from 0 to 10, each gets probability 1/11, and the expected utilities work out as follows:
A = (0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10) / 11 = 55/11 = 5
B = 0
C = 0.3
D = -61
The expected utility formalism prescribes A. Choosing any other option violates the von Neumann-Morgenstern axioms.
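For concreteness, here is a minimal sketch of that calculation, assuming the uniform prior over the integers 0 through 10 for Option A and taking the values stated above for B, C, and D as given (their underlying distributions are not shown here).

```python
# Expected utility under a uniform prior over Option A's eleven possible utilities.

def expected_utility(outcomes, probabilities):
    """Probability-weighted sum of outcome utilities."""
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(u * p for u, p in zip(outcomes, probabilities))

outcomes_a = list(range(11))       # possible utilities 0, 1, ..., 10
probs_a = [1 / 11] * 11            # uniform prior over those 11 values

options = {
    "A": expected_utility(outcomes_a, probs_a),  # 55/11 = 5
    "B": 0.0,
    "C": 0.3,
    "D": -61.0,
}

print(max(options, key=options.get))   # 'A' has the highest expected utility
```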
However, my guess is that your utilities are secretly dollar values and that you have an implicit utility function over outcomes. You can represent this by introducing a utility function u into the expected-utility calculation that weights each outcome by how desirable it really is to you. This matters in the real world because of things like gambler's ruin: in an idealized setting with an unlimited bankroll it makes sense to maximize expected dollar value, but in the real world you can run out of money, so evolution might have made you loss-averse to compensate. This was the original motivation for formulating the notion of expected utility (some quantitative measure of desirability weighted by probability), as opposed to the earlier notion of expected value (dollar value weighted by probability).
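To make that distinction concrete, here is a minimal sketch with made-up numbers: a gamble whose expected dollar value is positive but which a concave utility function (log of wealth, used here purely as an illustrative stand-in for diminishing marginal utility) would decline.

```python
import math

# A gamble with positive expected dollar value that a log-utility agent declines.
# The wealth level and payoffs below are made up purely for illustration.

wealth = 100.0
gamble = [(0.5, +100.0), (0.5, -90.0)]   # (probability, change in wealth)

expected_value = sum(p * dx for p, dx in gamble)              # +5.0 dollars
eu_take = sum(p * math.log(wealth + dx) for p, dx in gamble)  # ~3.80
eu_decline = math.log(wealth)                                 # ~4.61

print(expected_value > 0)      # True: expected value says take the gamble
print(eu_take > eu_decline)    # False: expected (log) utility says decline it
```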
Your analysis misses the point that you may play the game many times and change your estimates as you go.
For the record, 10 ± 5 means an interval from 5 to 15, not 0 to 10, and in any case I intended it as a shorthand for a bell-like distribution with a mean of 10 and a standard deviation of 5.
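Under that intended reading, and assuming a normal distribution as the bell-like shape purely for illustration, a quick Monte Carlo sketch gives Option A an expected utility of roughly 10:

```python
import random

# Monte Carlo sketch: Option A's utility drawn from a bell-like distribution with
# mean 10 and standard deviation 5 (a normal distribution is assumed here only
# for illustration).

random.seed(0)
samples = [random.gauss(10, 5) for _ in range(100_000)]
print(round(sum(samples) / len(samples), 2))   # close to 10.0
```

If the 10 ± 5 figures are already utilities, the spread does not change the expected-utility ranking; it only matters if, as suggested above, they are really dollar-like quantities passed through a further, possibly nonlinear, utility function.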