loqi comments on Sorting Pebbles Into Correct Heaps - Less Wrong

75 Post author: Eliezer_Yudkowsky 10 August 2008 01:00AM


Comment author: elspood 25 April 2011 11:59:42PM 0 points [-]

When I read this parable, I was already looking for a reason why Friendly AI necessarily meant "friendly to human interests or with respect to human moral systems". Hence, my conclusion from this parable was that Eliezer was trying to show how, from the perspective of AGI, human goals and ambitions are little more than attempts to find a good way to pile up our pebbles. It probably doesn't matter that the pattern we're currently on to is "bigger and bigger piles of primes", since pebble-sorting isn't at all certain to be the right mountain to be climbing. An FAI might be able to convince us that 108301 is a good pile from within our own paradigm, but how can it ever convince us that we have the wrong paradigm altogether, especially if that appears counter to our own interests?

What if Eliezer were to suddenly find himself alone among neanderthals? Knowing, with his advanced knowledge and intelligence, that neanderthals were doomed to extinction, would he be immoral or unfriendly to continue to devote his efforts to developing greater and greater intelligences, instead of trying to find a way to sustain the neanderthal paradigm for its own sake? Similarly, why should we try to restrain future AGI so that it maintains the human paradigm?

The obvious answer is that we want to stay alive, and we don't want our atoms used for other things. But why does it matter what we want, if we aren't ever able to know if what we want is correct for the universe at large? What if our only purpose is to simply enable the next stage of intelligence, then to disappear into the past? It seems more rational to me to abandon focus specifically on FAI, and just build AGI as quickly as possible before humanity destroys itself.

Isn't the true mark of rationality the ability to reach a correct conclusion even if you don't like the answer?

Comment author: loqi 26 April 2011 12:49:05AM 0 points [-]

Isn't the true mark of rationality the ability to reach a correct conclusion even if you don't like the answer?

Winning is a truer mark of rationality.

Comment author: NancyLebovitz 26 April 2011 01:51:41AM 3 points [-]

I wonder about the time scale for winning. After all, a poker player using an optimal strategy can still expect extended periods of losing, and poker is better defined than a lot of life situations.

Comment author: katydee 26 April 2011 02:07:22AM 5 points [-]

I think it's more apt to characterize winning as a goal of rationality, not as its mark.

In Bayesian terms, while those applying the methods of rationality should win more than the general population on average (p(winning|rationalist) > p(winning|non-rationalist)), the number of rationalists in the population is low enough at present that p(non-rationalist|winning) is almost certainly greater than p(rationalist|winning), so observing whether or not someone is winning is not very good evidence as to their rationality.
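katydee's base-rate point can be made concrete with Bayes' rule. The sketch below uses made-up numbers purely for illustration (the comment gives no actual probabilities): rationalists are assumed to win twice as often, yet make up only 0.1% of the population.

```python
# Illustrative Bayes'-rule sketch of the base-rate argument above.
# All probabilities are hypothetical assumptions, not claims from the thread.

def p_rationalist_given_winning(p_rationalist, p_win_r, p_win_nr):
    """P(rationalist | winning) via Bayes' rule.

    p_rationalist -- prior P(rationalist) in the population
    p_win_r       -- P(winning | rationalist)
    p_win_nr      -- P(winning | non-rationalist)
    """
    # Total probability of observing a winner.
    p_win = p_win_r * p_rationalist + p_win_nr * (1 - p_rationalist)
    return p_win_r * p_rationalist / p_win

# Assume rationalists win twice as often (0.6 vs 0.3),
# but are only 0.1% of the population.
posterior = p_rationalist_given_winning(0.001, 0.6, 0.3)
print(posterior)  # ~0.002: almost all winners are still non-rationalists
```

Even with a doubled win rate, the posterior P(rationalist|winning) stays near 0.2%, so "is winning" tells you almost nothing about whether someone is a rationalist, which is exactly the point about it being weak evidence.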

Comment author: loqi 27 April 2011 01:45:46AM 1 point [-]

Ack, you're entirely right. "Mark" is somewhat ambiguous to me without context, I think I had imbued it with some measure of goalness from the GP's use.

I have a bad habit of uncritically imitating people's word choices within the scope of a conversation. In this case, it bit me by echoing the GP's is-ought confusion... yikes!