Caledonian2 comments on Qualitative Strategies of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 30 August 2008 02:12AM



Comment author: Caledonian2 31 August 2008 01:09:06PM -1 points [-]

Who is it that's using these words "benefit", talking about "lesser" intelligences, invoking this mysterious property of "better"-ness? Is it a star, a mountain, an atom? Why no, it's a human!

...I'm seriously starting to wonder if some people just lack the reflective gear required to abstract over their background frameworks. All this talk of moral "danger" and things "better" than us, is the execution of a computation embodied in humans, nowhere else.

I don't find anything mysterious about the concept - and I certainly don't see that 'betterness' is embodied only in humans.

We need to presume that AIs are better than humans in at least one way - a way that is important - or your whole argument falls apart. If the AIs won't be better than humans in any way, what makes them more dangerous than humans?

Electric brain stimulation seems less likely to lead to tolerance effects, but it's very difficult to find a surgeon willing to implant electrodes into your brain or spinal cord just because you want to wirehead yourself (and it doesn't seem to be as effective in humans as in rats, for whatever reason).

Doug S., do you have any sources for that last claim? I've only ever come across one reference to a serious wireheading attempt in humans, and if I recall correctly it was a disaster - the human responded precisely as the rats did, and the wire's trigger had to be taken away.

I suspect that any further attempts were intentionally directed so as not to be fully effective, to avoid similar outcomes. But new data could change my mind on that, if you possess some.