Shane_Legg comments on Magical Categories - Less Wrong

24 Post author: Eliezer_Yudkowsky 24 August 2008 07:51PM



Comment author: Shane_Legg 28 August 2008 04:17:00PM 0 points

Eli, to my mind you seem to be underestimating the potential of a super intelligent machine.

How do I know that hemlock is poisonous? Well, I've heard the story that Socrates died of hemlock poisoning. This is not a conclusion I've arrived at by observing the physical properties of hemlock and working out how they would affect the human body; indeed, as far as I know, I've never even seen hemlock. The idea that hemlock is a poison is a pattern in my environment: every time I hear about the trial of Socrates, I hear that hemlock is the poison that killed him. It's also not a very useful piece of information for achieving any goal I care about, since I don't imagine I'll ever encounter a case of hemlock poisoning first hand. Now, if I can learn that hemlock is a poison this way, surely a super intelligent machine could too? I think any machine that can't do this is certainly not super intelligent.

In the same way, a super intelligent machine will form good models of what we consider to be right and wrong, including the ways in which these ideas vary from person to person, place to place, and culture to culture. As for your comments about the machine getting people merely to appear happy, or to say "Yes" rather than "No": I don't understand this. You seem to assume that a super intelligent machine will have only a shallow understanding of its world.

Please note (I'm saying this for other people reading this comment): even if a super intelligent machine forms good models of human ethics through observing human culture, this doesn't mean that the machine will take those models as its goal.