Vladimir_Nesov comments on Abnormal Cryonics - Less Wrong

56 Post author: Will_Newsome 26 May 2010 07:43AM




Comment author: Vladimir_Nesov 27 May 2010 11:17:28PM *  0 points [-]

I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory. Did I misunderstand you?

You haven't misunderstood me, but you need to pay attention to this question, because it's more or less the consensus on Less Wrong that the position you express in the above quote is wrong. You might ask around for clarification of this point if discussion with me doesn't change your mind.

You might try the metaethics sequence, and in particular these posts:

That preference is computed in the mind doesn't make it any less a part of the territory than anything else. It is just a piece of territory that happens, at present, to be located in human minds. (Well, not quite, but to a first approximation.)

Your map may easily change even if the territory stays the same. This changes your belief, but the change doesn't influence what's true about the territory. Likewise, your estimate of how good situation X is may change once you process new arguments or revise your understanding of the situation, for example by observing new data; but that change in your belief doesn't influence how good X actually is. Morality is not a matter of interpretation.

Comment author: Will_Newsome 27 May 2010 11:41:14PM 0 points [-]

Before I spend a lot of effort trying to figure out where I went wrong (which I'm completely willing to do, because I read all of those posts and the metaethics sequence and figured I understood them), can you confirm that you read my EDIT above, and that the misunderstanding addressed there does not encompass the problem?

Comment author: Vladimir_Nesov 27 May 2010 11:52:56PM *  0 points [-]

Now I have read the edit, but it doesn't seem to address the problem. Also, I don't see what you can use the concepts you bring up for, such as "probability that I will get enough utility to justify cryonics upon reflection". If you expect to believe something, you should just believe it right away; see Conservation of expected evidence. But then, "probability that this decision is right" is not something you can use for making the decision, at least not directly.
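For what it's worth, the conservation-of-expected-evidence point Nesov invokes can be checked numerically. The sketch below uses made-up numbers (the prior `p_h` and the likelihoods are purely illustrative assumptions, not anything from the thread): averaging the posterior over the possible observations must return the prior.

```python
# Conservation of expected evidence: the prior must equal the
# expectation of the posterior over possible observations.
# All numbers here are hypothetical, chosen only for illustration.

p_h = 0.3                 # prior P(H)
p_e_given_h = 0.9         # likelihood P(E | H)
p_e_given_not_h = 0.2     # likelihood P(E | not-H)

# Total probability of observing E.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior after each possible observation (Bayes' theorem).
post_if_e = p_e_given_h * p_h / p_e
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Expected posterior, weighted by how likely each observation is.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e

assert abs(expected_posterior - p_h) < 1e-12  # equals the prior
```

If you already expected the evidence to move you toward H on net, you should have updated toward H before seeing it; any anticipated shift in one direction must be balanced by a possible shift the other way.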

Comment author: Nick_Tarleton 28 May 2010 04:36:28AM *  0 points [-]

Also, I don't see what you can use the concepts you bring up for, like "probability that I will get enough utility to justify cryonics upon reflection".

This might not be the most useful concept, true, but the issue at hand is the meta-level one of people's possible overconfidence about it.

Comment author: Vladimir_Nesov 28 May 2010 11:51:01AM *  2 points [-]

"Probability of signing up being good", especially obfuscated with "justified upon infinite reflection", being subtly similar to "probability of the decision to sign up being correct", is too much of a ruse to use without very careful elaboration. A decision can be absolutely, 99.999999% correct, while the probability of it being good remains at 1%, both known to the decider.

Comment author: Will_Newsome 28 May 2010 12:11:14AM *  0 points [-]

So you read footnote 2 of the post and do not think it is a relevant and necessary distinction? And you read Steven's comment in the other thread where it seems he dissolved our disagreement and determined we were talking about different things?

I know about the conservation of expected evidence. I understand and have demonstrated understanding of the content in the various links you've given me. I really doubt I've been making the obvious errors you accuse me of for the many months I've been conversing with people at SIAI (and at Less Wrong meetups and at the decision theory workshop) without anyone noticing.

Here's a basic summary of what you seem to think I'm confused about: There is a broad concept of identity in my head. Given this concept of identity, I do not want to sign up for cryonics. If this concept of identity changed such that the set of computations I identified with became smaller, then cryonics would become more appealing. I am talking about the probability of expected utility, not the probability of an event. The former is in the map (even if the map is itself in the territory, which of course I realize); the latter is in the territory.

EDIT: I am treating considerations about identity as a preference: whether or not I should identify with any set of computations is my choice, but subject to change. I think that might be where we disagree: you think everybody will eventually agree what identity is, and that it will be considered a fact about which we can assign different probabilities, but not something subjectively determined.

Comment author: Vladimir_Nesov 28 May 2010 12:25:26AM *  1 point [-]

I am treating considerations about identity as a preference: whether or not I should identify with any set of computations is my choice, but subject to change. I think that might be where we disagree: you think everybody will eventually agree what identity is, and that it will be considered a fact about which we can assign different probabilities, but not something subjectively determined.

That the preference is yours and yours alone, with no community to share it, doesn't make its content any less a fact than if you had a whole humanity of identical people to back it up. (This identity/probability discussion is tangential to the more focused question of the correctness of a choice.)

Comment author: Vladimir_Nesov 28 May 2010 12:20:19AM *  0 points [-]

The easiest step is for you to look over the last two paragraphs of this comment and see if you agree with that. (Agree/disagree in what sense, if you suspect essential interpretational ambiguity.)

I don't know why you brought up the concept of identity (or indeed cryonics) in the above, it wasn't part of this particular discussion.

Comment author: Will_Newsome 28 May 2010 12:26:36AM 0 points [-]

At first glance, and after 15 seconds of thinking, I agree, but: "but that change of your belief doesn't influence how good X actually is" reads to me more like "but that change of your belief doesn't influence how good X will be considered upon an infinite amount of infinitely good reflection".

Comment author: Vladimir_Nesov 28 May 2010 12:40:05AM *  0 points [-]

Now try to figure out what the question "What color is the sky, actually?" means, compared with "How good is X, actually?" and your interpretation "How good will X seem after an infinite amount of infinitely good reflection?". The "infinitely good reflection" thing is a surrogate for the fact itself, no less in the first case, and no more in the second.

If you essentially agree that there is a fact of the matter about whether a given decision is the right one, what did you mean by the following?

I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory.

You can't "assign utility as you please", this is not a matter of choice. The decision is either correct or it isn't, and you can't make it correct or incorrect by willing so. You may only work on figuring out which way it is, like with any other fact.

Comment author: Will_Newsome 28 May 2010 02:17:06AM *  2 points [-]

Edit: adding a sentence in bold that is really important but that I failed to notice the first time. (Nick Tarleton alerted me to an error in this comment that I needed to fix.)

Any intelligent agent will discover that the sky is blue. Not every intelligent agent will think that the blue sky is equally beautiful. Me, I like grey skies and rainy days. If I discover at a later point that I actually like blue skies, that changes the perceived utility of seeing a grey sky relative to a blue one. The simple change in preference also changes my expected utility. Yes, maybe the new utility was the 'correct' utility all along, but how is that an argument against anything I've said in my posts or comments? I get the impression you consistently take the territory view where I take the map view, and I further think that the map view is far more useful for agents like me that are neither infinitely intelligent nor infinitely reflective. (Nick Tarleton disagrees about taking the map view and I am now reconsidering. He raises the important point that taking the territory view doesn't mean throwing out the map, and gives the map something to be about. I think he's probably right.)

You may only work on figuring out which way it is, like with any other fact.

And the way one does this is by becoming good at luminosity and discovering what one's terminal values are. Yeah, maybe it turns out sufficiently intelligent agents all end up valuing the exact same thing, and FAI turns out to be really easy, but I do not buy it as an assertion.

Comment author: Vladimir_Nesov 28 May 2010 12:08:02PM *  0 points [-]

And the way one does this is by becoming good at luminosity and discovering what one's terminal values are. Yeah, maybe it turns out sufficiently intelligent agents all end up valuing the exact same thing, and FAI turns out to be really easy, but I do not buy it as an assertion.

This reads to me like

To figure out the weight of a person, we need to develop experimental procedures, make observations, and so on. Yes, maybe it turns out that "weight of a person" is a universal constant, that all experimenters will agree it's exactly 80 kg in every case, and weighing people will thus turn out to be really easy, but I don't buy this assertion.

See the error? That there are moral facts doesn't imply that everyone's preference is identical, that "all intelligent agents" will value the same thing. Every sane agent should agree on what is moral, but not every sane agent is moved by what is moral, some may be moved by what is prime or something, while agreeing with you that what is prime is often not moral. (See also this comment.)

Comment author: Blueberry 28 May 2010 02:57:45PM 1 point [-]

I'm a little confused by your "weight of a person" example, because 'a' is ambiguous in English. Did you mean one specific person, or the weighing of different people?

Every sane agent should agree on what is moral

What if CEV doesn't exist, and there really are different groups of humans with different values? Is one set of values "moral" and the other "that other human thing that's analogous to morality but isn't morality"? Primeness is so different from morality that it's clear we're talking about two different things. But say we take what you're calling morality and modify it very slightly, only to the point where many humans still hold the modified view. It's not clear to me that those agents will say "I'm moved by this modified view, not morality." Why wouldn't they say "No, this modification is the correct morality, and I am moved by morality"?

I have read the metaethics sequence but don't claim to fully understand it, so feel free to point me to a particular part of it.

Comment author: Nick_Tarleton 28 May 2010 04:29:55AM *  1 point [-]

If you essentially agree that there is fact of the matter about whether a given decision is the right one, what did you mean by the following?

I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory.

In this exchange

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

There are important subjective considerations, such as age and definition of identity,

Nope, "definition of identity" doesn't influence what actually happens as a result of your decision, and thus doesn't influence how good what happens will be.

Will, by "definition of identity", meant a part of preference, making the point that people might have varying preferences (this being the sense in which preference is "subjective") that make cryonics a good idea for some but not others. He read your response as a statement of something like moral realism/externalism; he intended his response to address this, though it was phrased confusingly.

Comment author: Vladimir_Nesov 28 May 2010 12:11:18PM *  0 points [-]

That would be a potentially defensible view (What are the causes of variation? How do we know it's there?), but I'm not sure it's Will's (and using the word "definition" in this sense goes very much against the definition of "definition").