Latanius2 comments on Ethical Injunctions - Less Wrong

Post author: Eliezer_Yudkowsky 20 October 2008 11:00PM


Comment author: Latanius2 21 October 2008 08:11:54AM 0 points

"looking for reflective equilibria of your current inconsistent and unknowledgeable self; something along the lines of 'What would you ask me to do if you knew what I know and thought as fast as I do?'"

We're sufficiently more intelligent than monkeys to do that reasoning for them... so humanity's goal (as the advanced intelligence created by monkeys a few million years ago to get to the Singularity) should be to use all the knowledge we've gained to tile the universe with bananas, forests, etc.

We don't have the right to say "if monkeys were more intelligent and consistent, they would think like us": from the monkeys' point of view, we're also just a random product of evolution. (Tile the world with ugly concrete buildings? Uhhh...)

So I think that to preserve our humanity in the process, we should be the ones who gradually become more and more intelligent (and decide what goals to follow next). Humans are complicated, so to simulate them in a Friendly AI, we'd need comparably complex systems... and those are probably chaotic, too. Isn't it... simply... impossible? (Not in the sense that "we can't make it", but "we can prove nobody can"...)