Eliezer, exactly how many decibels of evidence would it require to persuade you that there is magic in the universe?
For example, see this claim of magic: http://www.clairval.com/lettres/en/2006/12/08/2061206.htm
How many times would a coin have to come up heads (if there were some way to test this) before there would be any chance you wouldn't defy the data in a case like this? If you saw 20 heads in a row, would you expect more of them? Or 40?
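To put numbers on "decibels": here is a rough sketch of the standard conversion (my own illustration, using the 10·log10 likelihood-ratio definition, and assuming the rival hypothesis predicts heads with certainty):

```python
import math

def decibels_of_evidence(n_heads, p_heads_if_rigged=1.0, p_heads_if_fair=0.5):
    """Evidence in decibels favoring 'rigged' over 'fair' after n heads in a row.

    Standard definition: dB = 10 * log10(likelihood ratio).
    """
    likelihood_ratio = (p_heads_if_rigged / p_heads_if_fair) ** n_heads
    return 10 * math.log10(likelihood_ratio)

for n in (20, 40):
    print(f"{n} heads in a row: about {decibels_of_evidence(n):.1f} dB")
# 20 heads: about 60.2 dB; 40 heads: about 120.4 dB
```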
Basically, everyone knows that the probability of the LHC destroying the earth is greater than one in a million, but no one would do anything to stop the thing from running, for the same reason that no one would pay Pascal's Mugger. (My interests evidently haven't changed much!)
In fact, a superintelligent AI would easily see that the Pebble people are talking about prime numbers even if they didn't see that themselves, so as long as they programmed the AI to make "correct" heaps, it certainly would not make heaps of 8, 9, or 1957 pebbles. So if anything, this supports my position: if you program an AI that can actually communicate with human beings, you will naturally program it with a similar morality, without even trying.
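As a concrete check (assuming, per the Pebblesorter story, that "correct" heaps are exactly the prime-numbered ones), none of those heap sizes is prime:

```python
def is_prime(n):
    """Trial-division primality test; plenty fast for heap-sized numbers."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

for heap in (8, 9, 1957):
    print(heap, "correct" if is_prime(heap) else "incorrect")
# All three are incorrect: 8 = 2^3, 9 = 3^2, 1957 = 19 * 103
```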
Apart from that, this post seems to support TGGP's position. Even if there is some computation (i....
Roko: it's good to see that there is at least one other human being here.
Carl, thanks for that answer, that makes sense. But actually I suspect that normal humans have bounded utility functions that do not increase indefinitely with, for example, cheese-cakes. Instead, their functions have an absolute maximum which is actually reachable, and nothing else that is done will actually increase it.
Michael Vassar: Actually in real life I do some EXTREMELY counterintuitive things. Also, I would be happy to know the actual consequences of my beliefs. I'm not afraid...
Nick, can you explain how that happens with bounded utility functions? I was thinking basically something like this: if your maximum utility is 1000, then something that has a probability of one in a million can't have a high expected value or disvalue, because the utility at stake can't exceed 1000, and so the expected value can't be more than 1000 × 0.000001 = 0.001.
This seems to me the way humans naturally think, and the reason that sufficiently low-probability events are simply ignored.
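A minimal sketch of that arithmetic, with the numbers above:

```python
U_MAX = 1000  # assumed absolute cap on the (bounded) utility function
p = 1e-6      # probability of the event in question

# With bounded utility, no outcome can contribute more than p * U_MAX
# to the expected value, no matter how extravagantly it is described.
max_contribution = p * U_MAX
print(max_contribution)  # 0.001
```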
From Nick Bostrom's paper on infinite ethics:
"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity. This stupendous sacrifice would be judged morally right even though it was practicall...
The "mistake" Michael is talking about it the belief that utility maximization can lead to counter intuitive actions, in particular actions that humanly speaking are bound to be useless, such as accepting a Wager or a Mugging.
This is in fact not a mistake at all, but a simple fact (as Carl Shulman and Nick Tarleton suspect). The belief that it does not is simply a result of Anthropomorphic Optimism as Eliezer describes it; i.e. "This particular optimization process, especially because it satisfies certain criteria of rationality, must come to the same conclusions I do." Have you ever considered the possibility that your conclusions do not satisfy those criteria of rationality?
After thinking more about it, I might be wrong: actually the calculation might end up giving the same result for every human being.
Caledonian: what kind of motivations do you have?
As I've stated before, we are all morally obliged to prevent Eliezer from programming an AI. For according to this system, he is morally obliged to make his AI instantiate his personal morality. But it is quite impossible that the complicated calculation in Eliezer's brain should be exactly the same as the one in any of us: and so by our standards, Eliezer's morality is immoral. And this opinion is subjectively objective, i.e. his morality is immoral and would be even if all of us disagreed. So we are all morally obliged to prevent him from inflicting his immoral AI on us.
I vote in favor of banning Caledonian. He isn't just dissenting, which many commenters do often enough. He isn't even trying to be right, he's just trying to say Eliezer is wrong.
Eliezer, the money pump results from circular preferences, which should exist according to your description of the inconsistency. Suppose we have a million statements, each of which you believe to be true with equal confidence, one of which is "The LHC will not destroy the earth."
Suppose I am about to pick a random statement from the list of a million, and I will destroy the earth if I happen to pick a false statement. By your own admission, you estimate that there is more than one false statement in the list. You will therefore prefer that I pla...
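The arithmetic behind this setup, sketched with an assumed count of false statements:

```python
n_statements = 1_000_000
expected_false = 2  # "more than one false statement" -- assumed here for illustration

# Chance that a uniformly random pick lands on a false statement,
# i.e. the chance the earth is destroyed in this scenario:
p_destruction = expected_false / n_statements
print(p_destruction)  # 2e-06 -- worse than the one-in-a-million lottery
```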
Eliezer, you are thinking of Utilitarian (also begins with U, which may explain the confusion). See http://utilitarian-essays.com/pascal.html
I'll get back to the other things later (including the money pump.) Unfortunately I will be busy for a while.
Can't give details, there would be a risk of revealing my identity.
I have come up with a hypothesis to explain the inconsistency. Eliezer's verbal estimate of how many similar claims he can make, while being wrong on average only once, is actually his best estimate of his subjective uncertainty. How he would act in relation to the lottery reflects that same estimate as distorted by the overconfidence bias. This is an interesting hypothesis because it would provide a measurement of his overconfidence. For example, which would he stop: the "Destroy the earth if God e...
Recently I did some probability calculations, starting with "made-up" numbers, and updating using Bayes' Rule, and the result was that something would likely happen which my gut said most firmly would absolutely not, never, ever, happen.
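For concreteness, a sketch with hypothetical numbers (not the ones I actually used):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One application of Bayes' Rule: returns P(H | E)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Start with a made-up prior, then update on four pieces of evidence,
# each of which favors the hypothesis 4:1.
p = 0.05
for _ in range(4):
    p = bayes_update(p, likelihood_if_true=0.8, likelihood_if_false=0.2)
print(round(p, 3))  # ~0.931: the hypothesis my gut rejected comes out likely
```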
I told myself that my probability assignments must have been way off, or I must have made an error somewhere. After all, my gut couldn't possibly be so mistaken.
The thing happened, by the way.
This is one reason why I agree with RI, and disagree with Eliezer.
I've touched things a few thousand years old. But I think I get more psychological effect from just looking at a bird, for example, and thinking of its ancestors flying around in the time of the dinosaurs.
I've mentioned in the past that human brains evaluate moral propositions as "true" and "false" in the same way as other propositions.
It's true that there are possible minds that do not do this. But the first AI will be programmed by human beings who are imitating their own minds. So it is very likely that this AI will evaluate moral propositions in the same way that human minds do, namely as true or false. Otherwise it would be very difficult for human beings to engage this AI in conversation, and one of the goals of the programmers ...
Poke, in the two sentences:
"You should open the door before attempting to walk through it."
"You should not murder."
The word "should" means EXACTLY the same thing. And since you can understand the first claim, you can understand the second as well.
Mike Blume: "Intelligence is a product of structure, and structure comes from an ordering of lower levels."
I agree with that (at least for the kind of intelligence we know about), but the structure rests on universal laws of physics: how did those laws get to be universal?
We might be living in a simulation. If we are, then as Eliezer pointed out himself, we have no idea what kind of physics exist in the "real world." In fact, there is no reason to assume any likeness at all between our world and the real world. For example, the fundamental entities in the real world could be intelligent beings, instead of quarks. If so, then there could be some "shadowy figure" after all. This might be passing the buck, but at least it would be passing it back to somewhere where we can't say anything about it anymore.