knb comments on Why No Wireheading? - Less Wrong

16 [deleted] 18 June 2011 11:33PM




Comment author: knb 19 June 2011 09:00:47AM * 3 points

To clarify, are you claiming that wireheading is actually a good thing for everyone, and we're just confused? Or do you merely think wireheading feels like a fine idea for you but others may have different values? At times, your post feels like the former, but that seems too bizarre to be true.

My own view is that people probably have different values on this issue. Just as suicide seems like a good idea to some people, while most people are horrified by the idea of committing suicide, we can genuinely disagree and have different values and make different decisions based on our life circumstances.

Comment author: [deleted] 19 June 2011 01:50:10PM 1 point

To clarify, are you claiming that wireheading is actually a good thing for everyone, and we're just confused? Or do you merely think wireheading feels like a fine idea for you but others may have different values? At times, your post feels like the former, but that seems too bizarre to be true.

As I said here, it would really weird me out if it weren't a universally good or bad idea. As such, it should be good for everyone or for no one. Wireheading doesn't seem like something agents as similar as humans should be able to agree to disagree on.

Comment author: knb 20 June 2011 01:54:16AM 2 points

Wireheading doesn't seem like something agents as similar as humans should be able to agree to disagree on.

Suicide is even more basic than wireheading, yet humans disagree about whether or not to commit suicide. There are even some philosophers who have thought about it and concluded that suicide is the "rational" decision. If humans cannot, in fact, agree about whether to exist or not, how can you think wireheading has a "right" answer?

Comment author: [deleted] 20 June 2011 04:33:00PM 1 point

Humans also still disagree on p-zombies or, more basically, on evolution. That doesn't mean there isn't a correct answer.

But you're right that pretty much any value claim is disputed, and when past societies are taken into account, there aren't even obvious majority views on anything. Still, I'm not comfortable just giving up. "People just are that different" is a last resort, not the default position to take in value disputes.

Comment author: knb 20 June 2011 11:28:00PM 0 points

Humans also still disagree on p-zombies or, more basically, on evolution. That doesn't mean there isn't a correct answer.

The distinction is that evolution and zombies are factual disputes. Factual views can be objectively wrong; preferences are purely subjective. There is no particular reason any one mind in the space of possible minds should prefer wireheading.

Comment author: [deleted] 20 June 2011 11:50:20PM 0 points

To clarify, the claim is not "all agents should prefer wireheading" or "humans should have wireheading-compatible values", but "if an agent has this set of values and this decision algorithm, then it should wirehead", with humans being such an agent. The wireheading argument does not propose that humans change their values, but that wireheading actually is a good fulfillment of their existing values (despite seeming objections). That's as much a factual claim as evolution is.

The reason I don't easily expect rational disagreement is that I expect a) all humans to have the same decision algorithm and b) terminal values to be simple and essentially hard-coded.

b) might be false, but then I don't see a realistic mechanism for how such values got there in the first place. What's the evolutionary advantage of an agent that has highly volatile terminal values and can easily be hijacked, or that relies on fairly advanced circuitry even to do value calculations?

Comment author: Wei_Dai 22 June 2011 10:22:36PM 3 points

What's the evolutionary advantage of an agent that has highly volatile terminal values and can easily be hijacked,

Humans seem to act as general meme hosts. It seems fairly easy for a human to be hijacked by a meme in a way that decreases their genetic inclusive fitness. Presumably this kind of design at least had an evolutionary advantage in our EEA, or we wouldn't be this way.

or relies on fairly advanced circuitry to even do value calculations?

If you can host arbitrary memes, then "external referent consequentialism" doesn't really need any extra circuitry. You just have to be convinced that it's something you ought to do.

Comment author: prase 19 June 2011 08:15:48PM 0 points

By the way, by a universally good idea do you mean a) an idea of which any person can be persuaded by a rational argument, b) an objectively morally good idea, or c) something else? Because if a), it is very unlikely to be so: there are people who don't accept logic.

Comment author: [deleted] 20 June 2011 06:46:20PM 0 points

I mean an idea that, if properly understood, every human would agree with, so a). Well, there are some humans who, for various reasons, might not be able to actually follow the reasoning, or who are so broken as to reject correct arguments and whatnot. So "every" is certainly an exaggeration, but you get the idea. I would not expect rational disagreement.