ScottAaronson
ScottAaronson has not written any posts yet.

(1) I agree that we can easily conceive of a world where most entities able to pass the Turing Test are copyable. I agree that it's extremely interesting to think about what such a world would be like --- and maybe even try to prepare for it if we can. And as for how the copyable entities will reason about their own existence -- well, that might depend on the goals of whoever or whatever set them loose! As a simple example, the Stuxnet worm eventually deleted itself, if it decided it was on a computer that had nothing to do with Iranian centrifuges. We can imagine that... (read 391 more words →)
shminux: I don't know any way, even in principle, to prove that uncertainty is Knightian. (How do you decisively refute someone who claims that if only we had a better theory, we could calculate the probabilities?) Though even here, there's an interesting caveat. Namely, I also would have thought as a teenager that there could be no way, even in principle, to "prove" something is "truly probabilistic," rather than deterministic but with complicated hidden parameters. But that was before I learned the Bell/CHSH theorem, which does pretty much exactly that (if you grant some mild locality assumptions)! So it's at least logically possible that some future physical... (read more)
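The Bell/CHSH separation mentioned above can be checked in a few lines. This is a toy numerical sketch: the classical bound comes from brute-forcing all deterministic local strategies, and the quantum side uses the textbook singlet-state correlation E(x, y) = -cos(x - y) at the standard angle choices (nothing here is specific to this thread).

```python
from itertools import product
from math import cos, pi, sqrt

# Classical: enumerate every deterministic local strategy. Alice fixes
# outputs (a0, a1) in {-1, +1} for her two settings; Bob fixes (b0, b1).
classical_max = max(
    abs(a0*b0 + a0*b1 + a1*b0 - a1*b1)
    for a0, a1, b0, b1 in product([-1, 1], repeat=4)
)
print(classical_max)  # 2 -- the CHSH bound for local hidden variables

# Quantum: for the singlet state, the correlation at measurement
# angles x and y is E(x, y) = -cos(x - y).
E = lambda x, y: -cos(x - y)
a1_, a2_, b1_, b2_ = 0.0, pi/2, pi/4, -pi/4
S = E(a1_, b1_) + E(a1_, b2_) + E(a2_, b1_) - E(a2_, b2_)
print(abs(S))  # 2.828... = 2*sqrt(2), exceeding the classical bound of 2
```

No local-hidden-variable assignment reaches |S| > 2, while the quantum correlations hit 2√2 -- which is the sense in which Bell/CHSH rules out "deterministic but with complicated hidden parameters" (granting the locality assumptions).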
Alright, consider the following questions:
What's it like to be simulated in homomorphically encrypted form (http://en.wikipedia.org/wiki/Homomorphic_encryption)---so that someone who saw the entire computation (including its inputs and outputs), and only lacked a faraway decryption key, would have no clue that the whole thing is isomorphic to what your brain is doing?
What's it like to be simulated by a reversible computer, and immediately "uncomputed"? Would you undergo the exact same set of experiences twice? Or once "forwards" and then once "backwards" (whatever that means)? Or, since the computation leaves no trace of its ever having happened, and is "just a convoluted implementation of the identity function," would you not experience anything?
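The homomorphic-encryption question can be made concrete with a toy example. Textbook RSA is multiplicatively homomorphic: anyone can multiply two ciphertexts and obtain a valid encryption of the product of the plaintexts, without ever seeing them. (These are insecure toy parameters for illustration only, and real fully homomorphic encryption supports arbitrary circuits, not just products -- but the sketch shows what "computing on data you cannot read" means.)

```python
# Toy multiplicatively homomorphic scheme: textbook RSA.
# Insecure demonstration parameters -- do not use for anything real.
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 9
c = (enc(m1) * enc(m2)) % n     # computed entirely on ciphertexts
assert dec(c) == (m1 * m2) % n  # decrypts to the product of the plaintexts
print(dec(c))  # 63
```

The party doing the multiplication sees only ciphertexts; without d, the computation is (to them) meaningless arithmetic -- which is exactly the situation of the hypothetical observer who watches the encrypted brain simulation but lacks the faraway decryption key.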
Well, I can try to make my best guess if forced to -- using symmetry arguments or any other heuristic at my disposal -- but my best guess might differ from some other, equally rational person's best guess. What I mean by a probabilistic system's being "mechanistic" is that the probabilities can be calculated in such a way that no two rational people will disagree about them (as with, say, a radioactive decay time, or the least significant digit of next week's Dow Jones average).
Also, the point of my "Earth C" example was that symmetry arguments can only be used once we know the reference class of things to symmetrize over --... (read more)
Well, all I can say is that "getting a deity off the hook" couldn't possibly be further from my motives! :-) For the record, I see no evidence for a deity anything like that of conventional religions, and I see enormous evidence that such a deity would have to be pretty morally monstrous if it did exist. (I like the Yiddish proverb: "If God lived on earth, people would break His windows.") I'm guessing this isn't a hard sell here on LW.
Furthermore, for me the theodicy problem isn't even really connected to free will. As Dostoyevsky pointed out, even if there is indeterminist free will, you would still... (read more)
Wei, I completely agree that people should "directly attack the philosophical problems associated with copyable minds," and am glad that you, Eliezer, and others have been trying to do that! I also agree that I can't prove I'm not living in a simulation --- nor that that fact won't be revealed to me tomorrow by a being in the meta-world, who will also introduce me to dozens of copies of myself running in other simulations. But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates? What if the very empirical facts that we... (read more)
The relevant passage of the essay (p. 65) goes into more detail than the paraphrase you quoted, but the short answer is: how does the superintelligence know it should assume the uniform distribution, and not some other distribution? For example, suppose someone tips it off about a third Earth, C, which is "close enough" to Earths A and B even if not microscopically identical, and in which you made the same decision as in B. Therefore, this person says, the probabilities should be adjusted to (1/3,2/3) rather than (1/2,1/2). It's not obvious whether the person is right---is Earth C really close enough to A and B?---but the superintelligence decides... (read more)
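The counting behind the (1/2, 1/2)-versus-(1/3, 2/3) shift can be made explicit in a few lines. This is a toy sketch (the world names and the decision label "X" are illustrative, not from the essay): under a uniform prior over the reference class, the probability assigned to a decision depends entirely on which worlds get counted as members.

```python
from fractions import Fraction

def prob_of_decision(worlds):
    """Uniform prior over the reference class; worlds maps name -> decision."""
    match = sum(1 for decision in worlds.values() if decision == "X")
    return Fraction(match, len(worlds))

# Reference class {A, B}: you chose X only in B.
print(prob_of_decision({"A": "not-X", "B": "X"}))            # 1/2

# Admit Earth C, "close enough" to A and B, where you also chose X.
print(prob_of_decision({"A": "not-X", "B": "X", "C": "X"}))  # 2/3
```

The arithmetic is trivial; the hard part -- whether Earth C really belongs in the class being symmetrized over -- is exactly the question the superintelligence has no principled way to settle.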
As a point of information, I too am only interested in predicting macroscopic actions (indeed, only probabilistically), not in what you call "absolute prediction." The worry, of course, is that chaotic amplification of small effects would preclude even "pretty good" prediction.
"Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability."
I think the sentence above nicely pinpoints where I part ways from you and Eliezer. To put it bluntly, if a fact is impossible for any physical agent to learn, according to the laws of physics, then that's "inherently unknowable" enough for me! :-) Or to say it even more strongly: I don't actually care much whether someone chooses to regard the unknowability of such a fact as "part of the map" or "part of the territory" -- any more than, if a bear were chasing me, I'd... (read 383 more words →)
(1) Well, that's the funny thing about "should": if copyable entities have a definite goal (e.g., making as many additional copies as possible, taking over the world...), then we simply need to ask what form of reasoning will best help them achieve the goal. If, on the other hand, the question is, "how should a copy reason, so as to accord with its own subjective experience? e.g., all else equal, will it be twice as likely to 'find itself' in a possible world with twice as many copies?" -- then we need some account of the subjective experience of copyable entities before we can even start to answer the question.
(2)... (read more)