Will_Newsome comments on Abnormal Cryonics - Less Wrong
Before I spend a lot of effort trying to figure out where I went wrong (which I'm completely willing to do, because I read all of those posts and the metaethics sequence and figured I understood them), can you confirm that you read my EDIT above, and that the misunderstanding addressed there does not encompass the problem?
Now I have read the edit, but it doesn't seem to address the problem. Also, I don't see what the concepts you bring up, like "probability that I will get enough utility to justify cryonics upon reflection", can be used for. If you expect to believe something, you should just believe it right away. See Conservation of expected evidence. And "probability this decision is right" is not something you can use for making the decision, at least not directly.
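For reference, conservation of expected evidence says that the prior must equal the expectation of the posterior over possible observations:

\[
P(H) \;=\; \sum_{e} P(e)\, P(H \mid e)
\]

So if you can already predict which way the evidence will move you on balance, your current credence is wrong and should be updated now.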
This might not be the most useful concept, true, but the issue at hand is the meta-level one of people's possible overconfidence about it.
"Probability of signing up being good", especially obfuscated with "justified upon infinite reflection", being subtly similar to "probability of the decision to sign up being correct", is too much of a ruse to use without very careful elaboration. A decision can be absolutely, 99.999999% correct, while the probability of it being good remains at 1%, both known to the decider.
So you read footnote 2 of the post and do not think it is a relevant and necessary distinction? And you read Steven's comment in the other thread where it seems he dissolved our disagreement and determined we were talking about different things?
I know about the conservation of expected evidence. I understand and have demonstrated understanding of the content in the various links you've given me. I really doubt I've been making the obvious errors you accuse me of for the many months I've been conversing with people at SIAI (and at Less Wrong meetups and at the decision theory workshop) without anyone noticing.
Here's a basic summary of what you seem to think I'm confused about: there is a broad concept of identity in my head. Given this concept of identity, I do not want to sign up for cryonics. If this concept changed such that the set of computations I identified with became smaller, then cryonics would become more appealing. I am talking about the probability of expected utility, not the probability of an event. The first is in the map (even though the map is itself in the territory, which of course I realize); the second is in the territory.
EDIT: I am treating considerations about identity as a preference: whether or not I should identify with any given set of computations is my choice, though one subject to change. I think that might be where we disagree: you think everybody will eventually agree on what identity is, and that it will be considered a fact about which we can assign different probabilities, not something subjectively determined.
That the preference is yours and yours alone, without any community to share it, doesn't make its content any less of a fact than if you had a whole humanity of identical people to back it up. (This identity/probability discussion is tangential to the more focused question of the correctness of a choice.)
The easiest step is for you to look over the last two paragraphs of this comment and see whether you agree with them. (Say in what sense you agree or disagree, if you suspect essential interpretational ambiguity.)
I don't know why you brought up the concept of identity (or indeed cryonics) in the above, it wasn't part of this particular discussion.
At first glance, after 15 seconds of thinking, I agree, but: "but that change of your belief doesn't influence how good X actually is" reads to me more like "but that change of your belief doesn't influence how good X will be considered upon an infinite amount of infinitely good reflection".
Now try to figure out what the question "What color is the sky, actually?" means, compared with "How good is X, actually?" and your interpretation "How good will X seem after an infinite amount of infinitely good reflection?". The "infinitely good reflection" thing is a surrogate for the fact itself, no less in the first case, and no more in the second.
If you essentially agree that there is a fact of the matter about whether a given decision is the right one, what did you mean by the following?
You can't "assign utility as you please"; this is not a matter of choice. The decision is either correct or it isn't, and you can't make it correct or incorrect by willing it so. You may only work on figuring out which way it is, as with any other fact.
Edit: added a really important sentence in bold that I failed to notice the first time. (Nick Tarleton alerted me to an error in this comment that I needed to fix.)
Any intelligent agent will discover that the sky is blue. Not every intelligent agent will find the blue sky equally beautiful. Me, I like grey skies and rainy days. If I discover at some later point that I actually like blue skies, that changes the perceived utility of seeing a grey sky relative to a blue one. The simple change in preference also changes my expected utility. Yes, maybe the new utility was the 'correct' utility all along, but how is that an argument against anything I've said in my posts or comments? I get the impression you consistently take the territory view where I take the map view, and I further think the map view is more useful for agents like me that are neither infinitely intelligent nor infinitely reflective. (Nick Tarleton disagrees about taking the map view and I am now reconsidering. He raises the important point that taking the territory view doesn't mean throwing out the map, and it gives the map something to be about. I think he's probably right.)
And the way one does this is by becoming good at luminosity and discovering what one's terminal values are. Yeah, maybe it turns out sufficiently intelligent agents all end up valuing the exact same thing, and FAI turns out to be really easy, but I do not buy it as an assertion.
This reads to me like a conflation. See the error? That there are moral facts doesn't imply that everyone's preference is identical, that "all intelligent agents" will value the same thing. Every sane agent should agree on what is moral, but not every sane agent is moved by what is moral; some may be moved by what is prime or something, while agreeing with you that what is prime is often not moral. (See also this comment.)
I'm a little confused about your "weight of a person" example, because 'a' is ambiguous in English. Did you mean the weight of one specific person, or the weights of people in general?
What if CEV doesn't exist, and there really are different groups of humans with different values? Is one set of values "moral" and the other "that other human thing that's analogous to morality but isn't morality"? Primeness is so different from morality that it's clear we're talking about two different things. But say we take what you're calling morality and modify it very slightly, slightly enough that many humans still hold the modified view. It's not clear to me that those agents will say "I'm moved by this modified view, not by morality." Why wouldn't they say "No, this modification is the correct morality, and I am moved by morality!"?
I have read the metaethics sequence but don't claim to fully understand it, so feel free to point me to a particular part of it.
Of course different people have different values. These values might be similar, but they won't be identical.
Yes, but what is "a prime number"? Is it 5, or is it 7? 5 is clearly different from 7, although very similar in that both are prime. Apply the analogy: prime = moral, 5 = Blueberry's values, 7 = Will's values.
Because that would be a pointless dispute over definitions: clearly, different things are meant by the word "morality" in your example.
In this exchange, Will meant by "definition of identity" a part of preference, making the point that people might have varying preferences (this being the sense in which preference is "subjective") that make cryonics a good idea for some but not others. He read your response as a statement of something like moral realism/externalism; he intended his response to address this, though it was phrased confusingly.
That would be a potentially defensible view (What are the causes of variation? How do we know it's there?), but I'm not sure it's Will's (and using the word "definition" in this sense goes very much against the definition of "definition").