Unknown

Eliezer, exactly how many decibels of evidence would it require to persuade you that there is magic in the universe?
For example, see this claim of magic: http://www.clairval.com/lettres/en/2006/12/08/2061206.htm
How many times would a coin have to come up heads (if there were some way to test such a claim with a coin) before there would be a chance you wouldn't defy the data in a case like this? If you saw 20 heads in a row, would you expect more of them? Or 40?
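To put rough numbers on this (my own sketch, not anything Eliezer committed to): if the "magical" hypothesis predicts heads with certainty and the chance hypothesis gives 1/2, each head is worth about 3 decibels of evidence, so 20 heads come to roughly 60 dB and 40 heads to roughly 120 dB.

```python
import math

def decibels_of_evidence(n_heads, p_magic=1.0, p_chance=0.5):
    """Decibels of evidence that n_heads consecutive heads give a hypothesis
    predicting heads with probability p_magic over one predicting p_chance:
    10 * log10 of the likelihood ratio."""
    likelihood_ratio = (p_magic / p_chance) ** n_heads
    return 10 * math.log10(likelihood_ratio)

for n in (20, 40):
    print(n, "heads:", round(decibels_of_evidence(n), 1), "dB")  # ~60.2 and ~120.4
```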
Basically, everyone knows that the probability of the LHC destroying the earth is greater than one in a million, but no one would do anything to stop the thing from running, for the same reason that no one would pay Pascal's Mugger. (My interests evidently haven't changed much!)
In fact, a superintelligent AI would easily see that the Pebble people are talking about prime numbers even if they didn't see that themselves, so as long as they programmed the AI to make "correct" heaps, it certainly would not make heaps of 8, 9, or 1957 pebbles. So if anything, this supports my position: if you program an AI that can actually communicate with human beings, you will naturally program it with a similar morality, without even trying.
Apart from that, this post seems to support TGGP's position. Even if there is some computation (i.e., primeness) which actually determines the Pebble people's judgments, there is no particular reason to use that computation...
Roko: it's good to see that there is at least one other human being here.
Carl, thanks for that answer; it makes sense. But actually I suspect that normal humans have bounded utility functions that do not increase indefinitely with, for example, cheesecakes. Instead, their functions have an absolute maximum which is actually reachable, and once it is reached, nothing further will increase it.
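To make "an absolute maximum which is actually reachable" concrete, here is an illustrative function of my own (the cap of 1000 and the satiation point are arbitrary, not anything Carl proposed): utility climbs with each cheesecake until it hits the cap, and then stays flat no matter how many more are added.

```python
U_MAX = 1000           # arbitrary illustrative cap
SATIATION_POINT = 100  # arbitrary number of cheesecakes at which the cap is hit

def bounded_utility(cheesecakes):
    """Utility that increases with cheesecakes but actually attains U_MAX
    at the satiation point and never rises above it afterwards."""
    return U_MAX * min(cheesecakes / SATIATION_POINT, 1.0)

print(bounded_utility(50))     # 500.0
print(bounded_utility(100))    # 1000.0 -- the maximum is reached
print(bounded_utility(10**9))  # 1000.0 -- nothing further increases it
```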
Michael Vassar: Actually in real life I do some EXTREMELY counterintuitive things. Also, I would be happy to know the actual consequences of my beliefs. I'm not afraid that I would have to act in any particular way, because I am quite aware that I am a human being and...
Nick, can you explain how that happens with bounded utility functions? I was thinking basically something like this: if your maximum utility is 1000, then something with a probability of one in a million can't have a high expected value or disvalue, because that probability can't be multiplied by more than 1000, so the expected value can't be more than 0.001.
This seems to me to be the way humans naturally think, and the reason that sufficiently low-probability events are simply ignored.
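The arithmetic behind that bound, spelled out (my own sketch of the reasoning above): with utility capped at 1000, an event of probability one in a million can shift expected utility by at most 1000 x 10^-6 = 0.001, however large the stakes are described as being.

```python
U_MAX = 1000  # the maximum utility assumed above

def max_expected_shift(probability, utility_bound=U_MAX):
    """Upper bound on how much an outcome of the given probability can move
    expected utility when utility is bounded: probability * utility_bound."""
    return probability * utility_bound

print(max_expected_shift(1e-6))  # 0.001 -- small enough to ignore in practice
```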
From Nick Bostrom's paper on infinite ethics:
"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity. This stupendous sacrifice would be judged morally right even though it was practically certain to achieve no good. We are confronted here with what we may term the fanaticism problem."
Later:
"Aggregative consequentialism is often... (read more)
The "mistake" Michael is talking about it the belief that utility maximization can lead to counter intuitive actions, in particular actions that humanly speaking are bound to be useless, such as accepting a Wager or a Mugging.
This is in fact not a mistake at all, but a simple fact (as Carl Shulman and Nick Tarleton suspect). The belief that it is not is simply a result of Anthropomorphic Optimism as Eliezer describes it, i.e., "This particular optimization process, especially because it satisfies certain criteria of rationality, must come to the same conclusions I do." Have you ever considered the possibility that your conclusions do not satisfy those criteria of rationality?
After thinking more about it, I might be wrong: actually the calculation might end up giving the same result for every human being.
Caledonian: what kind of motivations do you have?
As I've stated before, we are all morally obliged to prevent Eliezer from programming an AI. For according to this system, he is morally obliged to make his AI instantiate his personal morality. But it is quite impossible that the complicated calculation in Eliezer's brain should be exactly the same as the one in any of us; and so by our standards, Eliezer's morality is immoral. And this opinion is subjectively objective, i.e., his morality is immoral and would be even if all of us disagreed. So we are all morally obliged to prevent him from inflicting his immoral AI on us.
Eliezer is making a disguised argument that the universe is caused by intelligent design: the fact that the laws of nature stay the same over time, instead of changing randomly, shows that the Intelligence has goals that remain stable over time, even if we don't know what those goals are.