
Comment author: ike 16 December 2017 10:00:40PM 0 points [-]

Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts and have some disagreement with them?

Comment author: PhilGoetz 17 December 2017 06:26:40PM *  0 points [-]

I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.

You don't have to believe in free will to incorporate it into a model of how humans act. We're all nominalists here; we don't believe that the concepts in our theories actually exist somewhere in Form-space.

When someone asks the question, "Should you one-box?", they're using a model which uses the concept of free will. You can't object to that by saying "You don't really have free will." You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can't be that one.

People in the LW community don't usually do that. I see sloppy statements claiming that humans "should" one-box, based on a presumption that they have no free will. That's making a claim within a paradigm while rejecting the paradigm. It makes no sense.

Consider what Eliezer says about coin flips:

We've previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.

The mind projection fallacy is treating the word "probability" not in a nominalist way, but in a philosophical realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don't project them onto the external world. That doesn't make "coin.probability == 0.5" a "false" statement. It correctly specifies the distribution of possibilities given the information available to the mind making the probability assessment. I think that is what Eliezer is trying to say there.
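
A minimal sketch of the point, assuming a fair coin and a 90%-reliable peek (both numbers invented for illustration): the same already-landed coin gets different probabilities from observers with different information, and each assignment correctly describes that observer's state of knowledge.

    # One coin that has already landed; two observers with different information.
    from fractions import Fraction

    # Observer A knows only that the coin is fair: P(heads) = 1/2.
    p_a = Fraction(1, 2)

    # Observer B gets a glance that reports the true face 90% of the time,
    # and the glance says "heads".  Bayes' rule:
    prior = Fraction(1, 2)
    p_glance_heads_if_heads = Fraction(9, 10)
    p_glance_heads_if_tails = Fraction(1, 10)
    p_glance_heads = prior * p_glance_heads_if_heads + (1 - prior) * p_glance_heads_if_tails
    p_b = prior * p_glance_heads_if_heads / p_glance_heads

    print(p_a, p_b)  # 1/2 vs 9/10 -- same coin, different states of knowledge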

"Free will" is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains; you can't fully simulate your own brain within your own brain; you can't demand that we use the territory as our map.

Comment author: PhilGoetz 17 December 2017 04:28:54PM *  1 point [-]

Yep, nice list. One I didn't see: defining a word in a way that is less useful (that conveys less information) and rejecting a definition that is more useful (that conveys more information). Always choose the definition that conveys more information, and eliminate words that convey zero information. It's common for people to define words so that they convey zero information--"Buddha nature" is an example: if everything has the Buddha nature, then nothing empirical can be said about what having it means, and the phrase conveys no information.

Along similar lines, always define words so that no other word conveys too much mutual information about them. For instance, many people have argued with me that I should use the word "totalitarian" to mean "the fascist nations of the 20th century". Well, we already have a word for that, which is "fascist", so to define "totalitarian" as a synonym makes it a useless word.
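
A toy sketch of that mutual-information criterion, with invented labelings: if "totalitarian" is defined to apply to exactly the things already labeled "fascist", the mutual information between the two labels equals the entropy of "fascist", and the second word adds nothing.

    # Mutual information between two labelings of the same set of regimes.
    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * log2(c / n) for c in Counter(labels).values())

    def mutual_information(xs, ys):
        return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

    fascist       = [1, 1, 0, 0, 0, 0]
    pure_synonym  = [1, 1, 0, 0, 0, 0]   # "totalitarian" defined as a synonym
    broader_sense = [1, 1, 1, 1, 0, 0]   # a partly independent definition

    print(mutual_information(fascist, pure_synonym))   # equals entropy(fascist): fully redundant
    print(mutual_information(fascist, broader_sense))  # smaller: the word carries its own information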

The word "fascist" raises the question of when to use extensional vs. intensional definitions. It's conventionally defined extensionally, to mean the Axis powers in World War 2. This is not a useful definition, as we already have a label for that. Worse, people define it extensionally but pretend they've defined it intensionally. They call people today "fascist", conveying connotations in a way that can't be easily disputed, because there is no intensional definition to evaluate the claim.

Sometimes you want to switch back and forth between extensional and intensional definitions. In art history, we have a term for each period or "movement", like "neo-classical" and "Romantic". The exemplars of the category are defined both intensionally and extensionally, as those artworks having certain properties and produced in certain geographic locations during a certain time period. It is appropriate to use the intensional definition alone if describing a contemporary work of art (you can call it "Romantic" if it looks Romantic), but inappropriate to use examples that fit the intension but not the extension as exemplars, or to deduce things about the category from them. This keeps the categories stable.

A little ways back I talked about defining the phrase "Buddha nature". Phrases also have definitions--words are not atoms of meaning. Analyzing a phrase as if our theories of grammar worked, ignoring knowledge about idioms, is an error rationalists sometimes commit.

Pretending words don't have connotations is another error rationalists commit regularly--often in sneaky ways, deliberately using the connotations, while pretending they're being objective. Marxist literary criticism, for instance, loads a lot into the word "bourgeois".

Another category missing here is gostoks and doshes. This is when a word's connotations and tribal affiliation-signalling displace its semantic content entirely, and no one notices that it has no meaning. This is extremely common in Marxism and in "theory"; "capitalism" and "bourgeois" are the most common examples. "Bourgeoisie" originally meant people like Rockefeller and the Borges, but as soon as artists began using the word, they used it to mean "people who don't like my scribbles," and now it has no meaning at all, only demonic connotations. "Capitalism" has no meaning that can single out post-feudal societies in the way Marxists pretend it does; every definition of it that I've seen includes things that Marxists don't want it to, like the Soviet Union, absolute monarchies, or even hunter-gatherer tribes. What they really object to should be called simply "free markets"--a far more accurate name for the economic systems they oppose--but they don't want to admit that the essence of their ideology is opposition to freedom.

Avoid words with connotations that you haven't justified. Don't say "cheap" if you mean "inexpensive" or "shoddy". Especially avoid words which have a synonym with the opposite connotation: "frugal" and "miserly". Be aware of your etymological payloads: "awesome" and "awful" (full of awe), "incredible" (not credible), "wonderful" (thought-provoking).

Another category is when 2 subcultures have different sets of definitions for the same words, and don't realize it. For instance, in the humanities, "rational" literally means ratio-based reasoning, which rejects the use of real numbers, continuous equations, empirical measurements, or continuous changes over time. This is the basis of the Romantic/Modernist hatred of "science" (by which they mean Aristotelian rationality), and of many post-modern arguments that rationality doesn't work. Many people in the humanities are genuinely unaware that science is different than it was 2400 years ago, and most were 100% ignorant of science until perhaps the mid-20th century. A "classical education" excludes all empiricism.

Another problem is meaning drift. When you use writings from different centuries, you need to be aware of how the meanings of words and phrases have changed over time. For instance, the official academic line nowadays is that alchemy and astrology are legitimate sciences; this is justified in part by using the word "science" as if it meant the same as the Latin "scientia".

A problem in translation is decollapsing definitions. Medieval Latin conflated some important concepts because its neo-Platonist metaphysics said that all good things sort of went together. So, for instance, it had a single word, "pulchrum", which meant "beautiful", "sexy", "appropriate to its purpose", "good", and "noble". Translators will translate it into English based on the context, but that doesn't convey the original mindset. This comes up most often when ancient writers made puns--like Plato's puns in the Crito, or the Greek puns attributed to Jesus in the opening chapters of John--which are destroyed in translation, leaving the reader with a false impression of the speaker's intent.

I disagree that saying "X is Y by definition" is usually wrong, but I should probably leave my comment on that post instead of here.

Comment author: Psychohistorian2 06 March 2008 06:35:32AM 18 points [-]

This summary is quite useful. Eliezer, it would be very nice if you added forward links to your posts. I often find myself wanting to recommend a series you've written to a friend, but in order to read it they would need to start at the end and link their way back to the beginning. If links to follow-ups were provided at the top or bottom of prior posts, it would be a lot easier to follow what you write on a particular topic, since I could recommend one post and my friend could hopefully figure out the rest.

Comment author: PhilGoetz 17 December 2017 04:17:28PM *  0 points [-]

[moved to top level of replies]

Comment author: MugaSofer 10 April 2013 02:55:00PM 4 points [-]

"You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often. This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound "simpler"" Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"? How is either of those sentences wrong? Sure one is longer than the other, but just because somebody doesn't know the word god or wants to explicitly define it doesn't mean they are wrong.

The point is that the longer sentence sounds less plausible. Using shorthand ("God" for "A supernatural universe-creating entity" and "miracle" for "temporarily suspended the laws of physics") makes the concept sound less improbable. Thus it is "wrong", in that it is a bad idea (supposedly).

Comment author: PhilGoetz 17 December 2017 04:02:04PM 0 points [-]

But you're arguing against Eliezer: "God" and "miracle" were (and still are) commonly used words, so Eliezer is saying they are good, short words for those concepts.

Comment author: PhilGoetz 17 December 2017 02:10:13AM 0 points [-]

Great post! There is also the non-discrete aspect of compression: information loss. English has, according to some dictionaries, over a million words. It's unlikely we store most of our information in English. Probably there is some sort of dimensionality reduction, like PCA, and in any case the compression is probably lossy. This means people with different histories will use different frequency tables for their compression, and will throw out different information when encoding a verbal statement. I think you would almost certainly find that if you measured word-use frequency for different people and then clustered the distributions, some clusters would correspond to ideologies. The interesting question is which comes first: the ideology, or the word-usage frequencies (caused by different life experiences).
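
A rough sketch of that clustering experiment, assuming scikit-learn and using invented speakers and texts: build a word-frequency vector per speaker, cluster the vectors, and check whether the clusters track ideology.

    # Cluster speakers by their word-frequency profiles.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    speakers = {
        "a": "the market rewards work and thrift and work",
        "b": "free markets reward effort and saving",
        "c": "the workers are exploited by the owners of capital",
        "d": "capital extracts surplus value from exploited workers",
    }

    # One row per speaker: a TF-IDF weighted word-frequency vector.
    X = TfidfVectorizer().fit_transform(speakers.values())

    # Cluster the frequency profiles; with real data, compare the cluster
    # assignments against each speaker's self-reported ideology.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(dict(zip(speakers.keys(), labels)))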

Comment author: Vaniver 16 December 2017 03:00:14AM 0 points [-]

I don't think this gets Parfit's Hitchhiker right. You need a decision theory that, once you are safely returned to the city, pays the rescuer even though you have no external obligation to do so. Otherwise the rescuer won't have rescued you in the first place.

Comment author: PhilGoetz 16 December 2017 02:44:20PM *  1 point [-]

I don't think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here--as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, by sneaking in different metaphysics, or by just doing bad logic (in this case, usually allowing contradictory beliefs about free will in different parts of the analysis).

Your comment implies you're talking about policy, which must be modelled as an iterated game. I don't deny that one-boxing is good in the iterated game.

My concern in this post is that there's been a lack of distinction in the community between "one-boxing is the best policy" and "one-boxing is the best decision at one point in time in a decision-theoretic analysis, which assumes complete freedom of choice at that moment." This lack of distinction has led many people into wishful or magical rather than rational thinking.

Comment author: Vaniver 15 December 2017 10:18:36PM 1 point [-]

The argument for one-boxing is that you aren't entirely sure you understand physics, but you know Omega has a really good track record--so good that it is more likely that your understanding of physics is false than that you can falsify Omega's prediction. This is a strict reliance on empirical observations as opposed to abstract reason: count up how often Omega has been right and compute a prior.

Isn't it rather that you aren't entirely sure you understand psychology, or that you do understand psychology well enough to think you're predictable? My understanding is that many people have run Newcomb's Problem-style experiments at philosophy departments (and other places), and the predictors achieve high enough accuracy that it makes sense to one-box at such events, even against fallible human predictors.
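
For concreteness, a sketch of the arithmetic behind "sufficiently high accuracy" on the straightforward expected-value reading, assuming the standard $1,000,000 / $1,000 payoffs:

    # Expected payoffs against a predictor of accuracy p (standard payoffs assumed).
    def expected_value(p):
        one_box = p * 1_000_000                    # big box is full iff one-boxing was predicted
        two_box = (1 - p) * 1_000_000 + 1_000      # big box is full only if the predictor erred
        return one_box, two_box

    for p in (0.5, 0.51, 0.9, 0.99):
        print(p, expected_value(p))
    # On this reading, one-boxing wins whenever p > 0.5005, so even a fallible
    # human predictor is accurate enough.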

Comment author: PhilGoetz 16 December 2017 01:03:00AM *  0 points [-]

I can believe that it would make sense to commit ahead of time to one-box at such an event. Doing so would affect your behavior in a way that the predictor might pick up on.

Hmm. Thinking about this convinces me that there's a big problem here in how we talk about the problem, because if we allow people who already knew about Newcomb's Problem to play, there are really 4 possible actions, not 2:

  • intended to one-box, one-boxed
  • intended to one-box, two-boxed
  • intended to two-box, one-boxed
  • intended to two-box, two-boxed

I don't know if the usual statement of Newcomb's problem specifies whether the subject learns the rules of the game before or after the predictor makes its prediction. It seems to me that's a critical factor. If the subject is told the rules of the game before the predictor observes the subject and makes a prediction, then we're just saying Omega is a very good lie detector, and the problem is not even about decision theory, but about psychology: do you have a good enough poker face to lie to Omega? If not, pre-commit to one-boxing.

We shouldn't ask, "Should you two-box?", but, "Should you two-box now, given how you would have acted earlier?" The various probabilities in the present depend on what you thought in the past. Under the proposition that Omega is perfect at predicting, the person inclined to 2-box should still 2-box, 'coz that $1M probably ain't there.

So Newcomb's problem isn't a paradox. If we're talking just about the final decision, the one made by the subject after Omega's prediction, then the subject should probably two-box (as argued in the post). If we're talking about two decisions, one before and one after the box-opening, then all we're asking is whether you can convince Omega that you're going to one-box when you aren't. Then it would not be terribly hard to say that a predictor might be so good (say, an Amazing Kreskin-level cold-reader of humans, or, if you are an AI, a predictor that can simulate you) that your only hope is to precommit to one-boxing.

Comment author: ike 15 December 2017 07:37:17PM 0 points [-]

If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless.

This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.

Re QM: sometimes I've seen it stipulated that the world in which the scenario happens is deterministic. It's entirely possible that the amount of noise generated by QM isn't enough to affect your choice (aside from the very unlikely case where your brain has a couple of bits changed randomly in exactly the right way to change your choice, but that should be too unlikely, by many orders of magnitude, to matter in any expected utility calculation).

Comment author: PhilGoetz 15 December 2017 07:48:31PM *  0 points [-]

This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.

It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as if you could by force of will violate your programming.

To ask what choice a deterministic entity should make presupposes both that it does, and does not, have choice. Presupposing a contradiction means STOP, your reasoning has crashed and you can prove any conclusion if you continue.

Comment author: PhilGoetz 15 December 2017 07:41:12PM *  2 points [-]

I think that first you should elaborate on what you mean by "the goals of humanity". Do you mean majority opinion? In that case, one goal of humanity is to have a single world religious State, although there is disagreement on what that religion should be. Other goals of humanity include eliminating homosexuality and enforcing traditional patriarchal family structures.

Okay, I admit it--what I really think is that "goals of humanity" is a nonsensical phrase, especially when spoken by an American academic. It would be a little better to talk about values instead of goals, but not much better. The phrase still implies the unspoken belief that everyone would think like the person who speaks it, if only they were smarter.

Comment author: ike 15 December 2017 07:04:37PM 0 points [-]

What part of physics implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions?

Comment author: PhilGoetz 15 December 2017 07:12:03PM *  0 points [-]

The part of physics that implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions is quantum mechanics. But I don't think invoking it is the best response to your question. Though it does make me wonder how Eliezer reconciles his thoughts on one-boxing with his many-worlds interpretation of QM. Doesn't many-worlds imply that every game with Omega creates worlds in which Omega is wrong?

If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless. If you believe you should one-box if Omega can perfectly predict your actions, but two-box otherwise, then you are better off trying to two-box: you've already agreed that you should two-box if Omega can't perfectly predict your actions, and if Omega can, you won't be able to two-box unless Omega already predicted that you would, so it won't hurt to try.
