Luke_A_Somers comments on Nonperson Predicates - Less Wrong

Post author: Eliezer_Yudkowsky | 27 December 2008 01:47AM | 29 points


Comment author: Luke_A_Somers 15 January 2013 04:13:16PM 0 points

Is a human mind the simplest possible mind that can be sentient? What if, in the course of trying to model its own programmers, a relatively younger AI manages to create a sentient simulation trapped within itself? How soon do you have to start worrying? Ask yourself that fundamental question, "What do I think I know, and how do I think I know it?"

I read this as describing something simpler than a real human mind. Since it's simpler, the abstractions used are going to be imperfect, and the design would end up being in some way artificial. The post isn't as explicit about this as my paraphrase was, but I still think the implication is pretty strong.

Comment author: Irgy 20 January 2013 11:47:23PM 0 points

I've actually lost track of how this bears on my original point. As stated, that point was that we're worrying about the ethical treatment of simulations within an AI before worrying about the ethical treatment of the simulating AI itself. Whether the simulations under consideration include AIs as well as humans is an entirely orthogonal issue.

In other comments I went on to rant a bit about the human-centrism issue, which your original comment seems more relevant to. I think you've convinced me that the original article was a little more open to the idea of substantially nonhuman intelligence than I initially credited, but I still see human-centrism as a strong theme.

Comment author: Luke_A_Somers 21 January 2013 02:22:42PM 2 points

My point is that he's clearly not drawing a tight box around what is or isn't human. If he's concerned with clearly sub-human AI, then he's casting a significantly wider net than you seem to be assuming. And given that he's written extensively on the variety of mind-space, assuming he's taking a narrowly parochial view is poorly founded.

Comment author: MugaSofer 21 January 2013 09:22:09AM -2 points

"Is a human mind the simplest possible mind?"

"But if it was simpler, it wouldn't be human!"

Downvoted.

Comment author: Luke_A_Somers 21 January 2013 02:20:17PM 1 point

What? That's completely irrelevant to the question at hand.

By considering the question of whether simpler-than-human minds are possible in this context, Eliezer was clearly thinking about such minds and giving them moral weight. He doesn't need to ANSWER the question I was posing to make that much clear.

Comment author: MugaSofer 21 January 2013 03:44:35PM 0 points

Wait, what?

*Clicks "Show more comments above."*

Oops. I thought you were replying to the quoted text. Upvoted and retracted my comment.