Luke_A_Somers comments on Nonperson Predicates - Less Wrong

29 Post author: Eliezer_Yudkowsky 27 December 2008 01:47AM


Comment author: Luke_A_Somers 14 January 2013 02:20:22PM 0 points [-]

The OP quite explicitly covers creation of nonhuman intelligence and considers it equally bad.

Comment author: Irgy 15 January 2013 01:17:21PM 0 points [-]

Really? Where? I just reread it with that in mind and I still couldn't find it. The closest I came was that he once used the term "sentient simulation", which is at least technically broad enough to cover both. He does make a point there about sentience being something which may not exactly match our concept of a human; is that what you're referring to? He then goes on to talk about this concept (or, specifically, the method needed to avoid it) as a "nonperson predicate", again suggesting that what's important is whether it's human-like rather than anything more fundamental. I don't see how you could think "nonperson predicate" covers both human and nonhuman intelligence equally.

Comment author: Luke_A_Somers 15 January 2013 04:13:16PM 0 points [-]

"Is a human mind the simplest possible mind that can be sentient? What if, in the course of trying to model its own programmers, a relatively younger AI manages to create a sentient simulation trapped within itself? How soon do you have to start worrying? Ask yourself that fundamental question, 'What do I think I know, and how do I think I know it?'"

I read this as describing something simpler than a real human mind. Since it's simpler, the abstractions used are going to be imperfect, and the design would end up being something that is in some way artificial. It's not as explicit as I suggested, but I still think the implication is pretty strong.

Comment author: Irgy 20 January 2013 11:47:23PM 0 points [-]

I've actually lost track of how this impacts my original point. As stated, it was that we're worrying about the ethical treatment of simulations within an AI before worrying about the ethical treatment of the simulating AI itself. Whether the simulations considered include AIs as well as humans is an entirely orthogonal issue.

I went on in other comments to rant a bit about the human-centrism issue, which seems more relevant to your original comment, though. I think you've convinced me that the original article was a little more open to the idea of substantially nonhuman intelligence than I initially credited, but I still see human-centrism as a strong theme.

Comment author: Luke_A_Somers 21 January 2013 02:22:42PM 2 points [-]

My point is that he's clearly not drawing a box tightly around what's human or not. If he's concerned with clearly-sub-human AI, then he's casting a significantly wider net than you seem to be assuming. And considering that he's written extensively on the variety of mind-space, assuming he's taking a tightly parochial view is poorly founded.

Comment author: MugaSofer 21 January 2013 09:22:09AM *  -2 points [-]

"Is a human mind the simplest possible mind?"

"But if it was simpler, it wouldn't be human!"


Comment author: Luke_A_Somers 21 January 2013 02:20:17PM 1 point [-]

What? That's completely irrelevant to the question at hand.

By considering the question of whether simpler-than-human minds are possible in this context, it's clear that Eliezer was thinking about the question and giving them moral weight. He doesn't need to ANSWER the question I was posing to make that much clear.

Comment author: MugaSofer 21 January 2013 03:44:35PM 0 points [-]

Wait, what?

*Clicks "Show more comments above."

Oops. I thought you were replying to the quoted text. Upvoted and retracted my comment.

Comment author: nshepperd 21 January 2013 10:08:09AM *  4 points [-]

"Person" seems to be used here as the philosophical term meaning something like "sentient entity with moral value". Personhood is not limited to human beings.

ETA: Also, wrt the AI itself, the very next two articles in this sequence explicitly deal with the issue of making the AI itself nonsentient, as I'm surprised to find a comment from myself in 2011 pointing out. Did you really not read the surrounding articles?