In response to Magical Categories
Comment author: elspood 04 June 2011 12:26:09AM 1 point [-]

Can anyone please explain the reference to the horror seen firsthand at http://www.mail-archive.com/agi@v2.listbox.com/? I tried going back in the archives to see if something happened in August 2008 or earlier (the date of Eliezer's post), but the list archive site currently doesn't have anything older than October 2008. My curiosity is piqued and I need closure on the anecdote. If nothing else, others might benefit from knowing what horrors might be avoided during AGI research.

Comment author: jimrandomh 26 April 2011 12:21:21AM 14 points [-]

But why does it matter what we want, if we aren't ever able to know if what we want is correct for the universe at large?

There is no sense in which what we want may be correct or incorrect for the universe at large, because the universe does not care. Caring is a thing that minds do, and the universe is not a mind.

What if our only purpose is to simply enable the next stage of intelligence, then to disappear into the past?

Our purpose is whatever we choose it to be; purposes are goals seen from another angle. There is no source of purposefulness outside the universe. My goals require that humans stick around, so our purpose with respect to my goal system does not involve disappearing into the past. I think most people's goal systems are similar.

Comment author: elspood 26 April 2011 04:37:34AM 0 points [-]

There is no sense in which what we want may be correct or incorrect for the universe at large, because the universe does not care. Caring is a thing that minds do, and the universe is not a mind.

Yes, I agree, and I realize that isn't what I was actually trying to say. What I meant was: there is a set of possible, superlatively rational intelligences that may make better use of the universe than humanity (or humanity + a constrained FAI). If Omega reveals to you that such an intelligence would come about if you implement AGI with no Friendly constraint, at the cost of the extinction of humanity, would you build it? This to me cuts directly to the heart of whether you value rationality over existence. You don't personally 'win', humanity doesn't 'win', but rationality is maximized.

My goals require that humans stick around, so our purpose with respect to my goal system does not involve disappearing into the past. I think most people's goal systems are similar.

I think we need to unpack that a little, because I don't think you mean "humans stick around more or less unchanged from their current state". This is what I was trying to drive at about the Neanderthals. In some sense we ARE Neanderthals, slightly farther along an evolutionary timescale, but you wouldn't likely feel any moral qualms about their extinction.

So if you do expect that humanity will continue to evolve, probably into something unrecognizable to 21st century humans, in what sense does humanity actually "stick around"? Do you mean you, personally, want to maintain your own conscious self indefinitely, so that no matter what the future, "you" will in some sense be part of it? Or do you mean "whatever intelligent life exists in the future, its ancestry is strictly human"?

Comment author: Leo_G. 11 August 2008 01:39:48AM 7 points [-]

It gets interesting when the pebblesorters turn on a correctly functioning FAI, which starts telling them that they should build a pile of 108301 and legislative bodies spend the next decade debating whether or not it is in fact a correct pile. "How does this AI know better anyway? That looks new and strange." "That doesn't sound correct to me at all. You'd have to be crazy to build 108301. It's so different from 2029! It's a slippery slope to 256!" And so on.

This really is a fantastic parable--it shows off perhaps a dozen different aspects of the forest we were missing for the trees.

Comment author: elspood 25 April 2011 11:59:42PM 0 points [-]

When I read this parable, I was already looking for a reason to understand why Friendly AI necessarily meant "friendly to human interests or with respect to human moral systems". Hence, my conclusion from this parable was that Eliezer was trying to show how, from the perspective of AGI, human goals and ambitions are little more than trying to find a good way to pile up our pebbles. It probably doesn't matter that the pattern we're currently on to is "bigger and bigger piles of primes", since pebble-sorting isn't certain at all to be the right mountain to be climbing. An FAI might be able to convince us that 108301 is a good pile from within our own paradigm, but how can it ever convince us that we have the wrong paradigm altogether, especially if that appears counter to our own interests?

What if Eliezer were to suddenly find himself alone among neanderthals? Knowing, with his advanced knowledge and intelligence, that neanderthals were doomed to extinction, would he be immoral or unfriendly to continue to devote his efforts to developing greater and greater intelligences, instead of trying to find a way to sustain the neanderthal paradigm for its own sake? Similarly, why should we try to restrain future AGI so that it maintains the human paradigm?

The obvious answer is that we want to stay alive, and we don't want our atoms used for other things. But why does it matter what we want, if we aren't ever able to know if what we want is correct for the universe at large? What if our only purpose is to simply enable the next stage of intelligence, then to disappear into the past? It seems more rational to me to abandon focus specifically on FAI, and just build AGI as quickly as possible before humanity destroys itself.

Isn't the true mark of rationality the ability to reach a correct conclusion even if you don't like the answer?
