MugaSofer comments on The Level Above Mine - Less Wrong

42 Post author: Eliezer_Yudkowsky 26 September 2008 09:18AM




Comment author: MugaSofer 23 January 2013 01:40:39PM -2 points

(Albeit recent experience with trolls makes me think that no insight enabling conscious simulations should ever be published; people would write suffering conscious simulations and run them just to show off... how confident they were that the consciousness theory was wrong, or something. I have a newfound understanding of the utter... do-anything-ness of trolls. This potentially makes it hard to publicly check some parts of the reasoning behind a nonperson predicate.)

At least for now, it'd take a pretty determined troll to build an em for the sole purpose of being a terrible person. Not saying some humanity-first movement mightn't pull it off, but by that point you could hopefully have legal recognition (assuming there's no risk of accidental fooming and they pass the Turing test.)

Comment author: ArisKatsaris 23 January 2013 02:04:43PM * 2 points

I don't think we're talking ems, we're talking conscious algorithms which aren't necessarily humanlike or even particularly intelligent.

And as for the Turing Test, one oughtn't confuse consciousness with intelligence. A 6-year-old human child couldn't pass itself off as an adult human, but we still believe the child to be conscious, and my own memories indicate that I indeed was at that age.

Comment author: MugaSofer 23 January 2013 02:21:48PM -2 points

Well, I think consciousness, intelligence and personhood are sliding scales anyway, so I may be imagining the output of a Nonperson Predicate somewhat differently from the LW norm. OTOH, I guess it's not a priori impossible that a simple human-level AI could fit on something available to the public, and such an insight would be ... risky, yeah. Upvoted.

Comment author: ArisKatsaris 23 January 2013 02:54:57PM 0 points

First of all, I also believe that consciousness is most probably a sliding scale.

Secondly, again you just used "human-level" without specifying human-level at what: intelligence or consciousness. As such, I'm not sure whether I actually communicated my point adequately that we're not discussing intelligence here, but just consciousness.

Comment author: MugaSofer 24 January 2013 01:58:51PM -2 points

Well, they do seem to be correlated in any case. However, I was referring to consciousness (whatever that is.)