Logos01 comments on AGI Quotes - Less Wrong Discussion

Post author: lukeprog, 02 November 2011 08:25AM

Comment author: Logos01, 04 November 2011 03:18:51PM, -1 points

> There's no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.

I was assuming the latter. As to the former, again: hence my caveat. I don't much care what the full space of possible AGI minds looks like; I've already deliberately restricted the kind I'm talking about to a very narrow window.

So objecting to my evaluative statement about that narrow window with "But there's no reason to think it would be in that window!" just shows, to be quite frank, a lack of careful reading.

I don't much care what the range of possible values of f(x) is for x = 0..10000000 when I've already asked what f(10) is. If it's a sentient, recursively self-improving entity, then at some point it alone would be more "cognizant" than the entire human race put together.

If you were put in a situation where you had to choose between letting the world be populated by cows, or by people, which would you choose?