The main disagreement between Aaronson's ideas and LW ideas seems to be this:
...If any of these technologies—brain-uploading, teleportation, the Newcomb predictor, etc.—were actually realized, then all sorts of “woolly metaphysical questions” about personal identity and free will would start to have practical consequences. Should you fax yourself to Mars or not? Sitting in the hospital room, should you bet that the coin landed heads or tails? Should you expect to “wake up” as one of your backup copies, or as a simulation being run by the Newcomb Predictor? These questions all seem “empirical,” yet one can’t answer them without taking an implicit stance on questions that many people would prefer to regard as outside the scope of science.
(...)
As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds can’t be copied perfectly from one physical substrate to another, as can programs on standard digital computers.
Even if Aaronson's speculation that human minds are not copyable turns out to be correct, that doesn't rule out copyable minds being built in the future, either de novo AIs or what he (on page 58) calls "mockups" of human minds that are functionally close enough to the originals to fool their close friends. The philosophical problems with copyable minds will still be an issue for those minds, and therefore minds not being copyable can't be the only hope of avoiding these difficulties.
To put this another way, suppose Aaronson definitively shows that according to quantum physics, minds of biological humans can't be copied exactly. But how does he know that he is actually one of the original biological humans, and not for example a "mockup" living inside a digital simulation, and hence copyable? I think that is reason enough for him to directly attack the philosophical problems associated with copyable minds instead of trying to dodge them.
Wei, I completely agree that people should "directly attack the philosophical problems associated with copyable minds," and am glad that you, Eliezer, and others have been trying to do that! I also agree that I can't prove I'm not living in a simulation --- nor that that fact won't be revealed to me tomorrow by a being in the meta-world, who will also introduce me to dozens of copies of myself running in other simulations. But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates? What if the very empirical facts that we could copy a program, trace its execution, predict its outputs using an abacus, run the program backwards, in heavily-encrypted form, in one branch of a quantum computation, at one step per millennium, etc. etc., were to count as reductios that there's probably nothing that it's like to be that program --- or at any rate, nothing comprehensible to beings such as us?
Again, I certainly don't know that this is a reasonable way to think. I myself would probably have ridiculed it, before I realized that various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles.
But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates?
If that turns out to be the case, I don't think it would much diminish either my intellectual curiosity about how the problems associated with mind copying ought to be solved or the practical importance of solving such problems (to help prepare for a future where most minds will probably be copyable, even if my own isn't).
various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles
It seems likely that in the future we'll be able to build minds that are very human-like, but copyable. For example, we could take someone's gene sequence, grow a virtual embryo from it inside a digital simulation, let it develop into an infant, and then raise it in a virtual environment similar to that of a biological human child. I'm assuming that you don't dispute this will be possible (at least in principle), but are saying that...
I really don't like the term "LW consensus" (isn't there a LW post about how you should separate out bundles of ideas and consider them separately because there's no reason to expect the truth of one idea in a bundle to correlate strongly with the truth of the others? If there isn't, there should be). I've been using "LW memeplex" instead to emphasize that these ideas have been bundled together for not necessarily systematically good reasons.
I think that last paragraph you quote needs the following extra bit of context:
To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re not copyable. The universe has never shown any particular tendency to cater to human philosophical prejudices! But I’d say the difficulties provide more than enough reason to care about the copyability question.
... because otherwise it looks as if Aaronson is saying something really silly, which he isn't.
If we could fax ourselves to Mars, or undergo uploading, and still wondered whether we're still "us" -- just as we wonder now, when such capabilities are merely theoretical/hypothetical -- that should count as a strong indication that such questions are not very practically relevant, contrary to Aaronson's assertion. Surely we'd need some legal rules, but the basis for those wouldn't be much different from any basis we have now -- we'd still be none the wiser about what identity means, even standing around with our clones.
For example, if we were to wonder about a question like "what effect will a foom-able AI have on our civilization?", asking after the fact would surely yield different answers than asking before. With copies / uploads etc., you and your perfect copy could hold a meeting to decide who stays married to the wife, and you'd still start from the same basis, with the same difficulty of finding the "true" answer, as if you'd discussed the topic today with a pal roleplaying your clone.
This paper has some useful comments on methodology that seem relevant to recent criticism of MIRI's research, e.g. the discussion in Section 2.2 about replacing questions with other questions, which is arguably what both the Löb paper and the prisoner's dilemma paper do.
In particular:
...whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.
Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and complexity theory...
I'm not a perfect copy of myself from one moment to the next, so I just don't see the force of his objection.
Fundamentally, those willing to teleport themselves will and those unwilling won't. Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive. Practically, it will be convenient for both the teleporters and the nonteleporters to treat the teleporters as if they have continuous identity.
Sometimes you don't need copying to get a tricky decision problem: amnesia or invisible coinflips are enough. For example, we have the Sleeping Beauty problem, the Absent-Minded Driver (which is a good test case for LW ideas), or Psy-Kosh's problem, which doesn't even need amnesia.
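As one concrete illustration, here is a minimal sketch of the Absent-Minded Driver's planning problem, assuming the standard Piccione-Rubinstein payoffs (0 for exiting at the first intersection, 4 for exiting at the second, 1 for driving past both). Because the driver can't tell the intersections apart, he must use the same exit probability p at both; brute-forcing p recovers the planning-optimal p = 1/3:

```python
# A minimal sketch of the Absent-Minded Driver planning problem, assuming the
# standard payoffs (exit at first intersection: 0, exit at second: 4, drive
# past both: 1).  The driver can't distinguish the intersections, so he uses
# the same exit probability p at each one.

def expected_payoff(p: float) -> float:
    """Expected payoff of exiting with probability p at every intersection."""
    exit_first = p * 0                    # exits immediately: payoff 0
    exit_second = (1 - p) * p * 4         # continues once, then exits: payoff 4
    drive_past = (1 - p) * (1 - p) * 1    # continues past both: payoff 1
    return exit_first + exit_second + drive_past

if __name__ == "__main__":
    # Brute-force search over p recovers the planning-optimal strategy
    # p = 1/3 with expected payoff 4/3.
    best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
    print(f"best p = {best_p:.3f}, expected payoff = {expected_payoff(best_p):.3f}")
```

The interesting part is not the arithmetic, of course, but that a driver who re-derives his strategy at an intersection, rather than in advance, can talk himself into a different answer; that discrepancy is what makes it a good test case.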
I tend to see Knightian unpredictability as a necessary condition for free will
But it's not. (In the link, I use fiction to defang the bugbear and break the intuition pumps associating prediction and unfreedom.) ETA: Aaronson writes
even if Alice can’t tell Bob what he’s going to do, it’s easy enough for her to demonstrate to him afterwards that she knew.
But that's not a problem for Bob's freedom or free will, even if Bob finds it annoying. That's the point of my story.
"Knightian freedom" is a misnomer, in something like the way "a ...
"But calling this Knightian unpredictability 'free will' just confuses both issues."
torekp, a quick clarification: I never DO identify Knightian unpredictability with "free will" in the essay. On the contrary, precisely because "free will" has too many overloaded meanings, I make a point of separating out what I'm talking about, and of referring to it as "freedom," "Knightian freedom," or "Knightian unpredictability," but never free will.
On the other hand, I also offer arguments for why I think unpredictability IS at least indirectly relevant to what most people want to know about when they discuss "free will" -- in much the same way that intelligent behavior (e.g., passing the Turing Test) is relevant to what people want to know about when they discuss consciousness. It's not that I'm unaware of the arguments that there's no connection whatsoever between the two; it's just that I disagree with them!
A better summary of Aaronson's paper:
I want to know:
Were Bohr and Compton right or weren’t they? Does quantum mechanics (specifically, say, the No-Cloning Theorem or the uncertainty principle) put interesting limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?
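For readers who want the one-line version of the No-Cloning Theorem invoked here, the standard textbook argument (not specific to Aaronson's essay) is short. Suppose a single unitary $U$ could copy arbitrary unknown states onto a blank register:

\[
U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle
\quad \text{for all } |\psi\rangle .
\]

Writing the same equation for a second state $|\varphi\rangle$ and taking the inner product of the two equations, unitarity (which preserves inner products) gives

\[
\langle \psi | \varphi \rangle = \langle \psi | \varphi \rangle^{2},
\]

so $\langle \psi | \varphi \rangle$ must be 0 or 1: a single machine can copy only states that are identical or orthogonal, never an arbitrary unknown state. Whether that abstract fact puts interesting limits on copying brains is exactly the empirical question Aaronson is asking.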
EY is mentioned once, for his work in popularizing cryonics, and not for anything fundamental to the paper. Several other LW luminaries like Silas Barta and Jaan Tallinn show up in the acknowledgements.
If you have...
That Aaronson mentions EY isn't exactly a surprise; the two shared a well-known discussion on AI and MWI several years ago. EY mentions it in the Sequences.
Rancor commonly arises when STEM discussions in general, and discussions of quantum mechanics in particular, focus upon personal beliefs and/or personal aesthetic sensibilities, as contrasted with verifiable mathematical arguments and/or experimental evidence and/or practical applications.
In this regard, a pertinent quotation is the self-proclaimed "personal belief" that Scott asserts on page 46:
..."One obvious way to enforce a macro/micro distinction would be via a dynamical collapse theory. ... I personally cannot believe that Nature would
I feel that his rebuttal of the Libet-like experiments (Section 2.12) is strikingly weak, exactly where it should have been one of his strongest points. Scott says:
My own view is that the quantitative aspects are crucial when discussing these experiments.
What? Just because predicting human behaviour one minute before it happens with 99% accuracy is more impressive, that doesn't mean it involves a different kind of process than predicting human behaviour 5 seconds before with 60% accuracy. Admittedly, it might imply a different kind, maybe ...
I like his causal answer to Newcomb's problem:
...In principle, you could base your decision of whether to one-box or two-box on anything you like: for example, on whether the name of some obscure childhood friend had an even or odd number of letters. However, this suggests that the problem of predicting whether you will one-box or two-box is “you-complete.” In other words, if the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another copy of you (as discussed previously).
But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus.
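To make the quoted "you-complete" point concrete, here is a toy sketch (my own construction, not something from the essay) in which the Predictor is literally implemented as a second call to the agent's own decision procedure, i.e. as a simulated copy of the agent:

```python
# A toy model of Newcomb's problem in which the Predictor works by running a
# copy of the agent's own decision procedure.  Purely an illustration of the
# "you-complete" idea, not a claim about how a real Predictor would work.

def predict(decision_procedure) -> bool:
    """The Predictor's 'prediction' is just the simulated copy's choice."""
    return decision_procedure()

def payoff(one_boxes: bool, predicted_one_box: bool) -> int:
    opaque_box = 1_000_000 if predicted_one_box else 0   # filled before you choose
    transparent_box = 1_000
    return opaque_box if one_boxes else opaque_box + transparent_box

def one_boxer() -> bool:
    return True    # takes only the opaque box

def two_boxer() -> bool:
    return False   # takes both boxes

if __name__ == "__main__":
    for agent in (one_boxer, two_boxer):
        prediction = predict(agent)       # the simulated "copy" decides first
        print(agent.__name__, payoff(agent(), prediction))
        # one_boxer -> 1000000, two_boxer -> 1000
```

The sleight of hand, of course, is that here "simulating you" is trivial; Aaronson's point is that for a real human the simulation would have to be detailed enough to raise all the copyability questions over again.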
If he says:
"In this essay I’ll argue strongly for a different perspective: that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question [scanning of minds possible?] is yes, and other such worlds where the answer is no."
and he's right, then LW consensus is religion (in other words, you made up your mind too early).
I'm not quite sure what you mean here. Do you mean that if he's right, then LW consensus is wrong, and that makes LW consensus a religion?
That seems both wrong and rather mean to both LW consensus and religion.
Absolutely, here's the relevant quote:
"The question also has an “empirical core” that could turn out one way or another, depending on details of the brain’s physical organization that are not yet known. In particular, does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that
(1) encode everything relevant to memory and cognition,
(2) can be accurately modeled as performing a classical digital computation, and
(3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random-number sources, generating noise according to prescribed probability distributions?"
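As a toy illustration (my construction, not Aaronson's) of what conditions (1)-(3) would amount to in code: the macro-state update is an ordinary classical computation, and the microscopic quantum degrees of freedom show up only as a random-number source with a fixed distribution:

```python
# Toy model of a "clean digital abstraction layer": everything relevant lives
# in a classical macro-state, updated by an ordinary digital computation, and
# the microscopic (quantum) degrees of freedom enter only as noise drawn from
# a prescribed distribution.  Purely illustrative.

import random

def micro_noise() -> int:
    """Condition (3): micro degrees of freedom act only as a random-number source."""
    return random.randint(0, 1)

def macro_step(macro_state: int) -> int:
    """Conditions (1)-(2): the macro-state encodes everything relevant and is
    updated by a classical digital computation."""
    return (2 * macro_state + micro_noise()) % 1024

if __name__ == "__main__":
    state = 7
    for _ in range(10):
        state = macro_step(state)
    print("final macro state:", state)
    # If the brain has such a layer, copying the macro-state copies the mind;
    # if it doesn't, no classical snapshot suffices.  That's the empirical core.
```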
You could do worse things with your time than read the whole thing, in my opinion.
Scott Aaronson has a new 85-page essay up, titled "The Ghost in the Quantum Turing Machine". (Abstract here.) In Section 2.11 (Singulatarianism) he explicitly mentions Eliezer as an influence. But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus. Among other things, he suggests that a crucial qualitative difference between a person and a digital upload is that the laws of physics prohibit making perfect copies of a person. Personally, I find the arguments completely unconvincing, but Aaronson is always thought-provoking and fun to read, and this is a good excuse to read about things like (I quote the abstract) "the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb's paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption". This is not just a shopping list of buzzwords; these are all important components of the author's main argument. It unfortunately still seems weak to me, but the time spent reading it is not wasted at all.