Vladimir_Nesov comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog, 29 January 2011 02:52AM


Comment author: Will_Newsome, 29 January 2011 09:09:22PM, 0 points

You're responding to an interpretation of what I said that assumes I'm stupid, not the thing I was actually trying to say. Do you seriously think I've spent a year at SIAI without understanding such basic arguments? I'm not retarded. I just don't have the energy to think through all the ways people could interpret what I'm saying as something dumb because it pattern-matches to things dumb people say. I'm going to start disclaiming this at the top of every comment, as suggested by Steve Rayhawk.

Specifically, in this case, in the comment you replied to and elsewhere in this thread, I said: "this doesn't apply to AIs that are bad at that kind of philosophical reflection". I'm making the claim that all well-designed AIs will converge to a universal 'morality' that we'd endorse upon reflection, even if they weren't explicitly coded to approximate human values. I'm not saying your average AI programmer can make an AI that does this, though I am suggesting it is plausible.

This is stupid. I'm suggesting a low-probability hypothesis that runs contrary to standard opinion. If you want to dismiss it via the absurdity heuristic, go ahead, but that doesn't mean there aren't other people who might actually think about what I might mean while assuming that I've actually thought about the things I'm trying to say. The same annoying thing happened with Jef Allbright, who had interesting things to say, but no one had the ontology to understand him, so they just assumed he was speaking nonsense. Including Eliezer. LW inherited Eliezer's weakness in this regard, though admittedly that same lack of charity probably bolstered the strengths of narrowness and precision.

If what I am saying sounds mysterious, that is a fact about your unwillingness to be charitable as much as it is about my unwillingness to be precise. (And if you disagree with that, see it as an example.) That we are both apparently unwilling doesn't mean that either of us is stupid. It just means that we are not each other's intended audience.

Comment author: Vladimir_Nesov, 29 January 2011 09:38:49PM, 0 points

You're responding to an interpretation of what I said that assumes I'm stupid, not the thing I was actually trying to say. Do you seriously think I've spent a year at SIAI without understanding such basic arguments?

With each comment like this that you make, and with the continued lack of comments that show clear understanding, I believe that more and more confidently: yes. Disclaimers don't help in such cases. You don't have to be stupid, and you clearly aren't, but you seem to be using your intelligence to confuse yourself by lumping everything together instead of carefully examining distinct issues. Even if you actually understand something, adding a lot of noise on top of that understanding makes the overall model much less accurate.