
pedanterrific comments on Ontologial Reductionism and Invisible Dragons - Less Wrong Discussion

-11 Post author: Balofsky 20 March 2012 02:29AM




Comment author: pedanterrific 21 March 2012 10:24:20PM 1 point

As an example (and I don't mean this to be below the belt)

Why would this be below the belt? If "greater consciousness" is what you value, it seems self-evidently true.

and I don't think such methods of reasoning are appropriate for the discussion of such issues.

Is there a reason for this other than disapproval of the conclusions?

Comment author: Balofsky 21 March 2012 11:41:32PM 0 points

-I say "below the belt" because I imagine there are members of the Less Wrong community who strongly support SIAI's work and goals concerning AI, but who would not consider such AI creations to be of greater moral value than humans, and I didn't want them to think I was making an assumption about their ethical opinions based on their support of AI research.

-Yes, it is largely because of disapproval of the conclusions, but I disapprove of them because they are not rational in the face of other intellectual considerations. The failure to see a qualitative difference between humans, baboons, and computers suggests an inability to distinguish between living and non-living entities, and I think that is irrational.

Comment author: pedanterrific 21 March 2012 11:51:28PM 3 points

there are individuals of the Less Wrong community who strongly support SIAI's work and goals concerning AI, but who simultaneously would not consider such AI creations to be of greater moral value than humans

I normally hate to do this, but Nonsentient Optimizers says it better than I could. If you're building an AI as a tool, don't make it a person.

The failure to see a qualitative difference between humans, baboons and computers suggests an inability to distinguish between living and non-living entities, and I think that is irrational.

That's a question of values, though. I don't value magnitude of consciousness; if baboons were uplifted to be, on average, more intelligent than humans, I would still value humans more.

Comment author: drethelin 22 March 2012 03:41:21AM 0 points

How do you define a living entity?