All of anon19's Comments + Replies

anon1900

luzr: The strength of an optimizing process (i.e. an intelligence) does not necessarily dictate, or even deeply affect, its goals. This has been one of Eliezer's themes. And so a superintelligence might indeed consider incredibly valuable something that you wouldn't be interested in at all, such as cheesecake, or smiling faces, or paperclips, or busy beaver numbers. And this is another theme: rationalism does not demand that we reject values merely because they are consequences of our long history. Instead, we can reject values, or broaden them, or oth...

anon1900

I agree that it's not all-out impossible under the laws of thermodynamics, but I personally consider it rather unlikely to work on the scales we're talking about. This all seems somewhat tangential though; what effect would it have on the point of the post if "rewinding events" in a macroscopic volume of space was theoretically possible, and easily within the reach of a good recursively self-improving AGI?

anon1930

luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.

The problems of morality seem to be quite tough, particularly when tradeoffs are involved. But I think in your scenario, Lightwave, I agree with you.

nazgulnarsil: I disagree about the "unlimited power", at least as far as practical consequences are concerned. We're not real...

anon1910

Tim:

Eliezer was using "sentient" practically as a synonym for "morally significant". Everything he said about the hazards of creating sentient beings was about that. It's true that in our current state, our feelings of morality come from empathic instincts, which may not stretch (without introspection) so far as to feel concern for a program which implements the algorithms of consciousness and cognition, even perhaps if it's a human brain simulation. However, upon further consideration and reflection, we (or at least most of us, I think...

anon1940

Lord:

I don't think there are scientists who, in their capacity as scientists, debate what counts as natural and what counts as artificial.

anon1900

Tim:

That's beside the point, which was that if you could somehow find BB(n) for n equal to the size of a (modified to run on an empty string) Turing machine then the halting problem is solved for that machine.
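The argument can be made concrete: if you know BB(n) for n-state machines (here read as the busy-beaver step function S(n), whose values are known only for very small n), you can decide halting for any n-state machine started on an empty tape by running it for S(n) steps; if it hasn't halted by then, it never will. A minimal Python sketch; the transition-table encoding and the two toy machines are my own illustration, and for n beyond 4 the table would have to be an uncomputable oracle:

```python
# Transition table: (state, symbol) -> (write, move, next_state); next_state "H" halts.
def run_tm(table, bound):
    """Simulate a Turing machine on an empty (all-zero) tape for at most
    `bound` steps; return the step at which it halted, or None if still running."""
    tape, pos, state = {}, 0, "A"
    for step in range(1, bound + 1):
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if state == "H":
            return step
    return None

# Known busy-beaver step counts S(n) for 2-symbol machines; a hypothetical
# oracle would be needed for larger n, since S is uncomputable.
S = {1: 1, 2: 6, 3: 21, 4: 107}

def halts(table, n):
    """Decide halting for an n-state machine: S(n) bounds the running time of
    every halting n-state machine, so surviving past S(n) steps means looping."""
    return run_tm(table, S[n]) is not None

halter = {("A", 0): (1, "R", "H")}  # halts after one step
looper = {("A", 0): (0, "R", "A")}  # moves right forever
print(halts(halter, 1), halts(looper, 1))  # True False
```

The point of the sketch is only the structure of the reduction: the entire difficulty of the halting problem is packed into obtaining the values of S.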

anon1900

Silly typo: I'm sure you meant 4:1, not 8:1.

anon1910

luzr: You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same? Computer hardware also exists for manipulating objects and acquiring sensory data. Furthermore, by hypothesis, the AI can improve itself better than we can because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity. And you don't have to simulate reality perfectly to understand it, so there is no showstopper there. Total simulation is what we do when we don't have anything better.

anon1910

Tyrrell: My impression is that you're overstating Robin's case. The main advantage of his model seems to be that it gives numbers, which is perhaps nice, but it's not at all clear why those numbers should be correct. The model seems to assume a regularity between some rather incomparable things, which one can draw parallels between using the abstractions of economics; but it's not so very clear that those abstractions apply. Eliezer's point with the Fermi example isn't "I'm Fermi!" or "you're Fermi!", but just that since powerful ideas have a tendency ...

anon1950

This is unimportant, but in the original human experience of milk, somewhat-spoiled milk was not in fact bad to drink. Old milk being actually rotten came as a surprise to my family when we moved to North America from Eastern Europe.

anon1900

Nick: It seems like a bad idea to me to call a prediction underconfident or overconfident depending on the particular outcome. Shouldn't it depend rather on the "correct" distribution of outcomes, i.e. the Bayesian posterior taking all your information into account? I mean, with your definition, if we do the coin flip again, with 99% heads and 1% tails, and our prediction is 99% heads and 1% tails, then if it comes up heads we're slightly underconfident, and if it comes up tails we're strongly overconfident. Hence there's no such thing as an actu...

anon1900

Nick: Sorry, I got it backwards. What you seem to be saying is that well-calibratedness means that relative entropy of your distribution relative to the "correct" one is equal to your entropy. This does hold for the uniform guess. But once again, considering a situation where your information tells you the coin will land "heads" with 99% probability, it would seem that the only well-calibrated guesses are 99%-1% and 50%-50%. I don't yet have an intuition for why both of these guesses are strictly "better" in any way than an 80%-20% guess, but I'll think about it. It definitely avoids the sensitivity that seemed to come out of the "rough" definition, where 50% is great but 49.9% is horrible.
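The criterion in this comment can be checked numerically if one reads it as: the expected surprisal of your guess q under the correct distribution p (the cross-entropy) equals the entropy of your own guess. That reading is my interpretation, not necessarily Nick's; a small Python check under it, with p assumed to be 99% heads:

```python
import math

def cross_entropy(p, q):
    # Expected surprisal (in bits) of guess q when outcomes follow p.
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q))

def entropy(q):
    # Surprisal of q expected under q itself.
    return -sum(qi * math.log2(qi) for qi in q)

p = (0.99, 0.01)  # the "correct" distribution: 99% heads
for q in [(0.5, 0.5), (0.99, 0.01), (0.8, 0.2)]:
    gap = cross_entropy(p, q) - entropy(q)
    print(q, round(gap, 4))
```

Under this reading the gap is zero for the 50%-50% and 99%-1% guesses but not for 80%-20%, matching the claim that only the first two are well-calibrated here.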

anon1910

This notion of calibratedness seems to have bad properties to me. Consider a situation where I'm trying to guess a distribution for the outcomes of a coin flip with a coin which, my information tells me, lands "heads" 99% of the time. Then a guess of 50% and 50% is "calibrated" because of the 50% predictions I make, exactly half come out right. But a guess of 49.9% heads and 50.1% tails is horribly calibrated; the "49.9%" predictions come out 99% correct, and the "50.1%" predictions come out 1% correct. So the concept, ...
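The discontinuity described here is easy to verify by simulation: pool every stated probability into a bin by its value and measure how often the predicted outcome actually occurs. A sketch in Python (the sample size and binning scheme are illustrative choices of mine):

```python
import random

def calibration_bins(guess, true_heads=0.99, n=100_000, seed=0):
    """For each distinct stated probability, return the empirical frequency
    with which the corresponding predicted outcome occurred."""
    rng = random.Random(seed)
    p_heads, p_tails = guess
    hits, totals = {}, {}
    for _ in range(n):
        flip = "H" if rng.random() < true_heads else "T"
        for outcome, p in (("H", p_heads), ("T", p_tails)):
            totals[p] = totals.get(p, 0) + 1
            hits[p] = hits.get(p, 0) + (flip == outcome)
    return {p: hits[p] / totals[p] for p in totals}

print(calibration_bins((0.5, 0.5)))      # single 50% bin: exactly 0.5
print(calibration_bins((0.499, 0.501)))  # 49.9% bin near 0.99, 50.1% bin near 0.01
```

The 50%-50% guess is exactly "calibrated" because its two predictions share one bin and exactly one of them comes true per flip, while nudging to 49.9%-50.1% splits the bins apart and exposes the huge miscalibration, illustrating the sensitivity complained about above.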

anon1900

Could you give a more precise definition of "calibrated"? Your example of 1/37 for each of 37 different possibilities, justified by saying that indeed one of the 37 will happen, seems facile. Do you mean that the "correct" distribution, relative to your guess, has low relative entropy?