All of Leo_G.'s Comments + Replies

Leo_G.110

It gets interesting when the pebblesorters turn on a correctly functioning FAI, which starts telling them that they should build a pile of 108301, and legislative bodies spend the next decade debating whether or not it is in fact a correct pile. "How does this AI know better anyway? That looks new and strange." "That doesn't sound correct to me at all. You'd have to be crazy to build 108301. It's so different from 2029! It's a slippery slope to 256!" And so on.

This really is a fantastic parable--it shows off perhaps a dozen different aspects of the forest we were missing for the trees.

0elspood
When I read this parable, I was already looking for a reason to understand why Friendly AI necessarily meant "friendly to human interests or with respect to human moral systems". Hence, my conclusion from this parable was that Eliezer was trying to show how, from the perspective of an AGI, human goals and ambitions are little more than trying to find a good way to pile up our pebbles. It probably doesn't matter that the pattern we're currently on to is "bigger and bigger piles of primes", since pebble-sorting isn't at all certain to be the right mountain to be climbing. An FAI might be able to convince us that 108301 is a good pile from within our own paradigm, but how can it ever convince us that we have the wrong paradigm altogether, especially if that appears to run counter to our own interests?

What if Eliezer were to suddenly find himself alone among neanderthals? Knowing, with his advanced knowledge and intelligence, that the neanderthals were doomed to extinction, would he be immoral or unfriendly to keep devoting his efforts to developing greater and greater intelligences, instead of trying to find a way to sustain the neanderthal paradigm for its own sake? Similarly, why should we try to restrain future AGI so that it maintains the human paradigm?

The obvious answer is that we want to stay alive, and we don't want our atoms used for other things. But why does it matter what we want, if we aren't ever able to know whether what we want is correct for the universe at large? What if our only purpose is simply to enable the next stage of intelligence, and then to disappear into the past? It seems more rational to me to abandon the focus specifically on FAI and just build AGI as quickly as possible, before humanity destroys itself. Isn't the true mark of rationality the ability to reach a correct conclusion even if you don't like the answer?
Leo_G.00

Not to start a traditional puzzle flame war, but can someone clarify for me the situation where at least three people on the island have blue eyes? It seems that in this case everyone knows that everyone knows [...] that there is "at least one person with blue eyes".

Blue-eyed person A: "Look at that poor sap over there with the blue eyes [B]. I bet he thinks there's only one person with blue eyes [C] on this island. Little does he know!"

Help?
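(Trying to work it through for myself: one way to see what the outsider's announcement adds is to count how deep the chain of "A knows that B knows that C knows..." actually goes before the announcement. Here is a minimal Python sketch of that counting argument; it's my own toy model of the standard induction, not anything from the puzzle's write-up.)

```python
def min_blue_in_hypothetical(actual_blue, depth):
    # Smallest number of blue-eyed islanders in some world reachable by
    # `depth` nested steps of "... considers it possible that ...".
    # Each step can be narrated by a blue-eyed islander who cannot rule out
    # having non-blue eyes, so the count can drop by one per step (never below 0).
    return max(actual_blue - depth, 0)

def everyone_knows_to_depth(actual_blue, depth):
    # True iff "(everyone knows that) repeated `depth` times ...
    # at least one islander has blue eyes" holds.
    return min_blue_in_hypothetical(actual_blue, depth) >= 1

# With three blue-eyed islanders (A, B, C):
for d in range(4):
    status = "holds" if everyone_knows_to_depth(3, d) else "fails"
    print(f"nesting depth {d}: {status}")
# Depths 0, 1, and 2 hold, but depth 3 fails: A can imagine that B imagines
# that C sees no blue eyes at all, so "at least one islander has blue eyes"
# is not common knowledge until the outsider announces it publicly.
```

So with three blue-eyed people the statement survives two levels of nesting but not three, which seems to be exactly the gap the public announcement closes. Is that the right way to think about it?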

Leo_G.00

Ben, I quite agree. Paul, I think that's what I was saying, in fact. That to start talking about it as a matter of morals is to oversell how hard of a question it is. It's the proverbial red button that keeps the world from blowing up. Sometimes the answer is obvious.

Leo_G.10

Paul, I was just as confused as you, but in the context of the paragraph it makes sense. With the preceding sentence added, it reads:

"The fact that our ethical intuitions have their roots in biology reveals that our efforts to ground ethics in religious conceptions of "moral duty" are misguided. Saving a drowning child is no more a moral duty than understanding a syllogism is a logical one."

The point appears to be that using the word "duty" adds too much conscious thought where there is none. Our selfish genes make us lust to save the child, ... (read more)

Leo_G.00

I assure you, if there is one thing that Robert Anson Heinlein considered holy, it was logic. Wait, maybe it was free love. But if there were TWO THINGS he considered holy...

If you read any of his future histories, you see tales of libertarian utopias set free by humans achieving, if not surpassing, the rationality of which evolved human minds are capable.

My favorite RAH excerpt, from Coventry:

First, they junked the concept of Justice. Examined semantically "justice" has no referent - there is no observable phenomenon in the space-time-matter co
... (read more)
Leo_G.8-1

Whoops, looks like I may have shot myself in the foot. Just as argument screens off authority, the actual experiment that was run screens off the intentions of the researcher.

Efficacy of the drug -> Results of the experiment <- Bias of the researcher

Efficacy, Bias -> Results of the experiment -> Our analysis of the efficacy of the drug
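To convince myself, here's a minimal Monte Carlo sketch of the graph above (a toy model with made-up numbers, nothing from the original post), in which the researcher's bias influences only which procedure gets run:

```python
import random

# Toy model of the graph above (all numbers invented for illustration):
#   Bias -> Procedure -> Results <- Efficacy
# Bias only influences which procedure gets run, not the outcomes themselves,
# so conditioning on the procedure and its results should screen it off.

random.seed(0)

def run_trial():
    efficacious = random.random() < 0.5    # latent: is the drug any good?
    biased = random.random() < 0.5         # latent: researcher wants a positive result
    # A biased researcher is more likely to run a small, sloppy trial.
    n_patients = 10 if random.random() < (0.8 if biased else 0.2) else 40
    cure_rate = 0.7 if efficacious else 0.4
    cures = sum(random.random() < cure_rate for _ in range(n_patients))
    return efficacious, biased, n_patients, cures

samples = [run_trial() for _ in range(300_000)]

# Condition on one observed (procedure, results) pair and compare the
# posterior on efficacy for biased vs. unbiased researchers.
n_obs, cures_obs = 10, 6
for biased in (False, True):
    matched = [e for e, b, n, c in samples
               if b == biased and n == n_obs and c == cures_obs]
    if matched:
        print(f"biased={biased}: P(efficacious | n={n_obs}, cures={cures_obs}) "
              f"= {sum(matched) / len(matched):.3f} over {len(matched)} samples")
```

Both estimates come out essentially equal, which is just d-separation: once we condition on the procedure and its results, the only path from the researcher's intentions to the drug's efficacy is blocked.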

Leo_G.50

Doug S., I agree in principle, but disagree on your particular example because it is not statistical in nature. Should we not be hugging the query "Is the argument sound?" If a random monkey typed up a third identical argument and put it in the envelope, it's just as true. The difference between this and a medical trial is that we have an independent means to verify the truth. Argument screens off Methodology...

If evidence is collected in violation of the Fourth Amendment rights of the accused, it's inadmissible in court, yes, but that doesn'... (read more)

0Kenny
If there is evidence of the researcher's private thoughts, they aren't private. In the hypothetical situation, an outside observer wouldn't know that the methodologies are different. You're right to suspect that there probably would be evidence that the methodologies differed in a realistic scenario.
Leo_G.10

Eliezer, can we get some confidence intervals on those time estimates? If nothing else, I'd like to know what your thought process is about what would go into rediscovering calculus in a month.