Scott Aaronson has a new 85-page essay up, titled "The Ghost in the Quantum Turing Machine". (Abstract here.) In Section 2.11 (Singulatarianism) he explicitly mentions Eliezer as an influence. But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus. Among other things, he suggests that a crucial qualitative difference between a person and a digital upload is that the laws of physics prohibit making perfect copies of a person. Personally, I find the arguments completely unconvincing, but Aaronson is always thought-provoking and fun to read, and this is a good excuse to read about things like (I quote the abstract) "the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb's paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption". This is not just a shopping list of buzzwords; these are all important components of the author's main argument. It unfortunately still seems weak to me, but the time spent reading it is not wasted at all.

Some comments are truncated due to high volume.

The main disagreement between Aaronson's idea and LW ideas seems to be this:

If any of these technologies—brain-uploading, teleportation, the Newcomb predictor, etc.—were actually realized, then all sorts of “woolly metaphysical questions” about personal identity and free will would start to have practical consequences. Should you fax yourself to Mars or not? Sitting in the hospital room, should you bet that the coin landed heads or tails? Should you expect to “wake up” as one of your backup copies, or as a simulation being run by the Newcomb Predictor? These questions all seem “empirical,” yet one can’t answer them without taking an implicit stance on questions that many people would prefer to regard as outside the scope of science.

(...)

As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds can’t be copied perfectly from one physical substrate to another, as can programs on standard digital computers. So that’s a possibility that this essay explores at some length. To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re not copyable. The universe has never shown any particular tendency to cater to human philosophical prejudices! But I’d say the difficulties provide more than enough reason to care about the copyability question.

... (read more)

As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds can’t be copied perfectly from one physical substrate to another, as can programs on standard digital computers.

Even if Aaronson's speculation that human minds are not copyable turns out to be correct, that doesn't rule out copyable minds being built in the future, either de novo AIs or what he (on page 58) calls "mockups" of human minds that are functionally close enough to the originals to fool their close friends. The philosophical problems with copyable minds will still be an issue for those minds, and therefore minds not being copyable can't be the only hope of avoiding these difficulties.

To put this another way, suppose Aaronson definitively shows that according to quantum physics, minds of biological humans can't be copied exactly. But how does he know that he is actually one of the original biological humans, and not for example a "mockup" living inside a digital simulation, and hence copyable? I think that is reason enough for him to directly attack the philosophical problems associated with copyable minds instead of trying to dodge them.

Wei, I completely agree that people should "directly attack the philosophical problems associated with copyable minds," and am glad that you, Eliezer, and others have been trying to do that! I also agree that I can't prove I'm not living in a simulation --- nor that that fact won't be revealed to me tomorrow by a being in the meta-world, who will also introduce me to dozens of copies of myself running in other simulations. But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates? What if the very empirical facts that we could copy a program, trace its execution, predict its outputs using an abacus, run the program backwards, in heavily-encrypted form, in one branch of a quantum computation, at one step per millennium, etc. etc., were to count as reductios that there's probably nothing that it's like to be that program --- or at any rate, nothing comprehensible to beings such as us?

Again, I certainly don't know that this is a reasonable way to think. I myself would probably have ridiculed it, before I realized that various things that confused me for years and that I dis... (read more)

But as long as we're trading hypotheticals: what if minds (or rather, the sorts of minds we have) can only be associated with uncopyable physical substrates?

If that turns out to be the case, I don't think it would much diminish either my intellectual curiosity about how problems associated with mind copying ought to be solved or the practical importance of solving such problems (to help prepare for a future where most minds will probably be copyable, even if my own isn't).

various things that confused me for years and that I discuss in the essay (Newcomb, Boltzmann brains, the "teleportation paradox," Wigner's friend, the measurement problem, Bostrom's observer-counting problems...) all seemed to beckon me in that direction from different angles

It seems likely that in the future we'll be able to build minds that are very human-like, but copyable. For example we could take someone's gene sequence, put them inside a virtual embryo inside a digital simulation, let it grow into an infant and then raise it in a virtual environment similar to a biological human child's. I'm assuming that you don't dispute this will be possible (at least in principle), but are saying that... (read more)

2ScottAaronson
(1) I agree that we can easily conceive of a world where most entities able to pass the Turing Test are copyable. I agree that it's extremely interesting to think about what such a world would be like --- and maybe even try to prepare for it if we can. And as for how the copyable entities will reason about their own existence -- well, that might depend on the goals of whoever or whatever set them loose! As a simple example, the Stuxnet worm eventually deleted itself, if it decided it was on a computer that had nothing to do with Iranian centrifuges. We can imagine that each copy "knew" about the others, and "knew" that it might need to kill itself for the benefit of its doppelgangers. And as for why it behaved that way --- well, we could answer that question in terms of the code, or in terms of the intentions of the people who wrote the code. Of course, if the code hadn't been written by anyone, but was instead (say) the outcome of some evolutionary process, then we'd have to look for an explanation in terms of that process. But of course it would help to have the code to examine!

(2) You argue that, if I were copyable, then the copies would wonder about the same puzzles that the "uncopyable" version wonders about -- and for that reason, it can't be legitimate even to try to resolve those puzzles by assuming that I'm not copyable. Compare to the following argument: if I were a character in a novel, then that character would say exactly the same things I say for the same reasons, and wonder about the same things that I wonder about. Therefore, when reasoning about (say) physics or cosmology, it's illegitimate even to make the tentative assumption that I'm not a character in a novel. This is a fun argument, but there are several possible responses, among them: haven't we just begged the question, by assuming there is something it's like to be a copyable em or a character in a novel? Again, I don't declare with John Searle that there's obviously nothing that it's like
3Wei Dai
I'm not interested so much in how they will reason, but in how they should reason. When you say "we" here, do you literally mean "we" or do you mean "biological humans"? Because I can see how understanding the effect of microscopic noise on the sodium-ion channels might give us insight into whether biological humans are copyable, but it doesn't seem to tell us whether we are biological humans or for example digital simulations (and therefore whether your proposed solution to the philosophical puzzles is of any relevance to us). I thought you were proposing that if your theory is correct then we would eventually be able to determine that by introspection, since you said copyable minds might have no subjective experience or a different kind of subjective experience.
4ScottAaronson
(1) Well, that's the funny thing about "should": if copyable entities have a definite goal (e.g., making as many additional copies as possible, taking over the world...), then we simply need to ask what form of reasoning will best help them achieve the goal. If, on the other hand, the question is, "how should a copy reason, so as to accord with its own subjective experience? e.g., all else equal, will it be twice as likely to 'find itself' in a possible world with twice as many copies?" -- then we need some account of the subjective experience of copyable entities before we can even start to answer the question.

(2) Yes, certainly it's possible that we're all living in a digital simulation -- in which case, maybe we're uncopyable from within the simulation, but copyable by someone outside the simulation with "sysadmin access." But in that case, what can I do, except try to reason based on the best theories we can formulate from within the simulation? It's no different than with any "ordinary" scientific question.

(3) Yes, I raised the possibility that copyable minds might have no subjective experience or a different kind of subjective experience, but I certainly don't think we can determine the truth of that possibility by introspection -- or for that matter, even by "extrospection"! :-) The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it's even logically coherent to imagine a distinction between them and copyable minds.
3Wei Dai
If that's the most you're expecting to show at the end of your research program, then I don't understand why you see it as a "hope" of avoiding the philosophical difficulties you mentioned. (I mean I have no problems with it as a scientific investigation in general, it's just that it doesn't seem to solve the problems that originally motivated you.) For example according to Nick Bostrom's Simulation Argument, most human-like minds in our universe are digital simulations run by posthumans. How do you hope to conclude that the simulations "shouldn't even be included in my reference class" if you don't hope to conclude that you, personally, are not copyable?
2gjm
What would make them "count as reductios that there's probably nothing that it's like to be that program", and how?
9ScottAaronson
Alright, consider the following questions:

* What's it like to be simulated in homomorphically encrypted form (http://en.wikipedia.org/wiki/Homomorphic_encryption)---so that someone who saw the entire computation (including its inputs and outputs), and only lacked a faraway decryption key, would have no clue that the whole thing is isomorphic to what your brain is doing?
* What's it like to be simulated by a reversible computer, and immediately "uncomputed"? Would you undergo the exact same set of experiences twice? Or once "forwards" and then once "backwards" (whatever that means)? Or, since the computation leaves no trace of its ever having happened, and is "just a convoluted implementation of the identity function," would you not experience anything?
* Once the code of your brain is stored in a computer, why would anyone even have to bother running the code to evoke your subjective experience? And what counts as running it? Is it enough to do a debugging trace with pen and paper?
* Suppose that, purely for internal error-correction purposes, a computer actually "executes" you three times in parallel, then outputs the MAJORITY of the results. Is there now one conscious entity or three? (Or maybe 7, for every nonempty subset of executions?)

Crucially, unlike some philosophers (e.g. John Searle), I don't pound the table and declare it "obvious" that there's nothing that it's like to be simulated in the strange ways above. All I say is that I don't think I have any idea what it's like, in even the same imperfect way that I can imagine what it's like to be another human being (or even, say, an unclonable extraterrestrial) by analogy with my own case. And that's why I'm not as troubled as some people are, if some otherwise-plausible cosmological theory predicts that the overwhelming majority of "copies" of me should be Boltzmann brains, computer simulations, etc. I view that as a sign, not that I'm almost certainly a copy (though I might be), but simply that I

I really don't like the term "LW consensus" (isn't there a LW post about how you should separate out bundles of ideas and consider them separately because there's no reason to expect the truth of one idea in a bundle to correlate strongly with the truth of the others? If there isn't, there should be). I've been using "LW memeplex" instead to emphasize that these ideas have been bundled together for not necessarily systematically good reasons.

4cousin_it
OK, replaced with "LW ideas".
[-]gjm140

I think that last paragraph you quote needs the following extra bit of context:

To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re not copyable. The universe has never shown any particular tendency to cater to human philosophical prejudices! But I’d say the difficulties provide more than enough reason to care about the copyability question.

... because otherwise it looks as if Aaronson is saying something really silly, which he isn't.

2cousin_it
Good point, thanks! Added that bit.

If we could fax ourselves to Mars, or undergo uploading, and then still wonder whether we're still "us" -- the same as we wonder now when such capabilities are just theoretical/hypothetical -- that should count as a strong indication that such questions are not very practically relevant, contrary to Aaronson's assertion. Surely we'd need some legal rules, but the basis for those wouldn't be much different from any basis we have now -- we'd still be none the wiser about what identity means, even standing around with our clones.

For example, if we were to wonder about a question like "what effect will a foom-able AI have on our civilization", surely asking after the fact would yield different answers than asking before. With copies / uploads etc., you and your perfect copy could hold a meeting contemplating who stays married to the wife, and you'd still start from the same basis, with the same difficulty of finding the "true" answer, as if you'd discussed the topic with a pal roleplaying your clone, in the present time.

This paper has some useful comments on methodology that seem relevant to recent criticism of MIRI's research, e.g. the discussion in Section 2.2 about replacing questions with other questions, which is arguably what both the Löb paper and the prisoner's dilemma paper do.

In particular:

whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.

Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.

Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and compl

... (read more)
4IlyaShpitser
Yes, this is what modern causal inference did (I suppose by taking Hume's counterfactual definition of causation, and various people's efforts to deal with confounding/incompatibility in data analysis, as starting points).

I'm not a perfect copy of myself from one moment to the next, so I just don't see the force of his objection.

Fundamentally, those willing to teleport themselves will and those unwilling won't. Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive. Practically, it will be convenient for both the teleporters and the nonteleporters to treat the teleporters as if they have continuous identity.

2torekp
That is admirably concise, correct, and totally on target.
0ESRogs
Is the meaning of this statement that it's a choice whether I consider me-at-different-times to be the same person as me-now?
2torekp
That would be one way to look at it. Another would be to put aside the "same person?" question and just answer the "do I intend to work for the benefit of this future person?" question more directly, using the facts about causal connections, personality similarities, etc.
1ESRogs
Ah, thanks!
1ScottAaronson
"Intertemporal solidarity is just as much a choice today as it will be should teleporters arrive." I should clarify that I see no special philosophical problem with teleportation that necessarily destroys the original copy, as quantum teleportation would (see the end of Section 3.2). As you suggest, that strikes me as hardly more perplexing than someone's boarding a plane at Newark and getting off at LAX. For me, all the difficulties arise when we imagine that the teleportation would leave the original copy intact, so that the "new" and "original" copies could then interact with each other, and you'd face conundrums like whether "you" will experience pain if you shoot your teleported doppelganger. This sort of issue simply doesn't arise with the traditional problem of intertemporal identity, unless of course we posit closed timelike curves.

Sometimes you don't need copying to get a tricky decision problem; amnesia or invisible coinflips are enough. For example, we have the Sleeping Beauty problem, the Absent-Minded Driver (which is a good test case for LW ideas), or Psy-Kosh's problem, which doesn't even need amnesia.
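For a concrete sense of how these problems get quantitative, here is a minimal sketch of the Absent-Minded Driver in its usual Piccione-Rubinstein form (the 0 / 4 / 1 payoffs below are the conventional ones, not something taken from the comment above): the planning-optimal policy comes out as "continue with probability 2/3".

```python
# Absent-Minded Driver, standard formulation: the driver cannot tell the two
# intersections apart, so the only available policy is "continue with
# probability p at any intersection". Payoffs (conventional, assumed here):
# exit at the first intersection = 0, exit at the second = 4, drive past both = 1.
def expected_payoff(p: float) -> float:
    exit_first = (1 - p) * 0
    exit_second = p * (1 - p) * 4
    drive_past_both = p * p * 1
    return exit_first + exit_second + drive_past_both

best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))   # ~0.667, ~1.33: continue with probability 2/3
```

The point is that no copying is involved anywhere -- just an agent that cannot tell which decision node it is at, which is already enough to make "how should I reason here?" non-trivial.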

-3Locaha
You are not a COPY (perfect or otherwise) of yourself from one moment to the next. Not by any meaningful definition of the word copy.
6buybuydandavis
The whole copying language kind of begs the question. Compare Dan(t=n) and Dan(t=n+1). Not identical. That's as true now as it will be in a teleporting and replicating future. Calling it "the same" Dan or a "different" Dan is a choice.
0Locaha
"Copy" implies having more than 1 object : The Copy and the Original at the same point of time, but not space. Dan(t=n) and Dan(t=n+1) are not copies. Dan(Time=n, Location=a) and Dan(Time=n, Location=b) are copies.
2naasking
Why prefer space over time? Time is just another dimension, after all. buybuydandavis's definition of "copy" seems to avoid preferring a particular dimension, and so seems more general.
2Eugine_Nier
You may want to read up on the no-cloning theorem in quantum mechanics. The simple answer to your question is that time interacts with causality differently than space does.
1buybuydandavis
I don't think so, and I don't think the original author assumes as much. If your digital copy is created through a process that destroys you, is it not a copy?
-1Locaha
Hmmm... I suppose it is. But are you saying you are being constantly destroyed and remade from one moment to the next? I know Pratchett used the idea in "The Thief of Time", but that's a fantasy author... But even if we assume that what happens to the atoms of your body from one moment to the other can be described as destruction and recreation (I'm not even sure those words have meaning when we talk about atoms), you will still have the burden of proving that the process is analogous to whatever way you are going to teleport yourself to Mars.
0buybuydandavis
There's not much analogy required, as he argued from "not a perfect copy", i.e., the existence of difference. No, but that intertemporal solidarity is my choice, just as someone's intertemporal solidarity with their teleported copy would be their choice.
-4Locaha
You can choose to think whatever you like, but I don't think it changes the laws of the universe. You either have a continuous existence in time or you don't. You may decide that your Copy on Mars is you, but it is not. Your mind won't continue to operate on Mars if you shoot yourself on Earth.
3TheOtherDave
(shrug) The laws of the universe, in the sense you mean the term here, are silent on many things. Is Sam my friend, or not? Basically, I choose. If I decide Sam is, then Sam is (although I might not be Sam's). If I decide Sam's not, then Sam's not. There's no fact of the matter beside that. The laws of the universe are silent.* Is my copy on Mars me, or isn't it? Perhaps the laws of the universe are equally silent. * - of course, at another level of description this is false, since the laws of the universe also constrain what choice I make, but at that level "you can choose to think whatever you like" is false, so I assume that's not the level you are referencing.
-1Locaha
So, if you decide that your brain after being shot is still you and then shoot yourself, you will not die? Can I decide I'm Bill Gates? Like, for a couple of days?
3TheOtherDave
Yes. In fact, this isn't hypothetical; lots of people on this site in fact do believe that their brains after they've been shot, if adequately cryopreserved, are still them and that they haven't necessarily died. I don't know, can you? Have you tried? (Of course, that won't alter what the legal system does.)
0Locaha
Yeah, it's not working. If I were Bill Gates, I'd be in a different body and location.
1TheOtherDave
Then: no, apparently you can't. Your notion of personal identity seems to be tied to a particular body and location, if I'm reading you right. Which also implies that your notion of personal identity can't survive death, and can't be simultaneously present on Earth and Mars. Which of course does not preclude the possibility of someone on Mars, or existing after your death, who would pass all conceivable tests of being you as well as you would.
0Matthew_Opitz
TheOtherDave, you seem to be implying that Locaha is unusual in not being able to experience Bill Gates's reality, and that in principle it should be possible to "identify with" Bill Gates and then suddenly "wake up" in Bill Gates's body with all of his memories and whatnot, thinking that you had always been Bill Gates and being none the wiser that you had just been experiencing a different body's reality a moment ago. If that is possible, then how do we know that we aren't doing this all the time? Also, if this were possible, then we would not really have to worry about death necessarily entailing non-existence. We would just "wake up" as someone else that next second with all of that person's memories, thinking that we had always been that person. (Of course, then that begs the question: who would we wake up as? Perhaps the person with the most similar brain as our former one, since that seems to be how we stick with our existing brain as it changes incrementally from moment to moment?)
0TheOtherDave
I don't think Locaha's inability to experience themselves as Bill Gates is unusual in the slightest. I suspect most of us are unable to do so. Also, I haven't said a word about Bill Gates' memory and whatnot. If having all Bill Gates' memories and whatnot is necessary for someone to be Bill Gates, then very few people indeed are capable of it. (Indeed, there are plausible circumstances under which Bill Gates himself would no longer be capable of being Bill Gates.)
-2Eugine_Nier
The point is that teleported Dan may be different from non-teleported Dan in ways that are very different (meta-different?) from the differences between Dan(t=n) and Dan(t=n+1). This is certainly how quantum systems work.
2buybuydandavis
Maybe. But the teleported differences aren't necessarily worse.
-2Eugine_Nier
They won't necessarily exist either. I'm describing a way the world might turn out to be, I never said this is the only way.

I tend to see Knightian unpredictability as a necessary condition for free will

But it's not. (In the link, I use fiction to defang the bugbear and break the intuition pumps associating prediction and unfreedom.) ETA: Aaronson writes

even if Alice can’t tell Bob what he’s going to do, it’s easy enough for her to demonstrate to him afterwards that she knew.

But that's not a problem for Bob's freedom or free will, even if Bob finds it annoying. That's the point of my story.

"Knightian freedom" is a misnomer, in something like the way "a ... (read more)

"But calling this Knightian unpredictability 'free will' just confuses both issues."

torekp, a quick clarification: I never DO identify Knightian unpredictability with "free will" in the essay. On the contrary, precisely because "free will" has too many overloaded meanings, I make a point of separating out what I'm talking about, and of referring to it as "freedom," "Knightian freedom," or "Knightian unpredictability," but never free will.

On the other hand, I also offer arguments for why I think unpredictability IS at least indirectly relevant to what most people want to know about when they discuss "free will" -- in much the same way that intelligent behavior (e.g., passing the Turing Test) is relevant to what people want to know about when they discuss consciousness. It's not that I'm unaware of the arguments that there's no connection whatsoever between the two; it's just that I disagree with them!

8torekp
Sorry about misrepresenting you. I should have said "associating it with free will" instead of "calling it free will". I do think the association is a mistake. Admittedly it fits with a long tradition, in theology especially, of seeing freedom of action as being mutually exclusive with causal determination. It's just that the tradition is a mistake. Probably a motivated one (it conveniently gets a deity off the hook for creating and raising such badly behaved "children").
6ScottAaronson
Well, all I can say is that "getting a deity off the hook" couldn't possibly be further from my motives! :-) For the record, I see no evidence for a deity anything like that of conventional religions, and I see enormous evidence that such a deity would have to be pretty morally monstrous if it did exist. (I like the Yiddish proverb: "If God lived on earth, people would break His windows.") I'm guessing this isn't a hard sell here on LW. Furthermore, for me the theodicy problem isn't even really connected to free will. As Dostoyevsky pointed out, even if there is indeterminist free will, you would still hope that a loving deity would install some "safety bumpers," so that people could choose to do somewhat bad things (like stealing hubcaps), but would be prevented from doing really, really bad ones (like mass-murdering children). One last clarification: the whole point of my perspective is that I don't have to care about so-called "causal determination"---either the theistic kind or the scientific kind---until and unless it gets cashed out into actual predictions! (See Sec. 2.6.)
6Shmi
Downvoted for extremely uncharitable reading of the paper.
1MrMind
Upvoted for being one of the very few downvoters who explain why. Without feedback there's no improvement.
4torekp
It's worse than I thought. Aaronson really does want to address the free will debate in philosophy - and utterly botches the job.

Aaronson speaks for some of his interlocutors (who are very smart - i.e. they say what I would say ;) ): and answers: Uh, so: the truth about the modal number of hairs on a human foot, for example, doesn't care what we want. But, if I were to claim that (a good portion of) a famous philosophical debate amounted to the question of how many hairs are on our feet, you could reject the claim immediately. And you could cite the fact that nobody gives a damn about that as a sufficient reason to reject the substitution of the new question for the old. Sure, in this toy example, you could cite many other reasons too. But with philosophical (using the next word super broadly) definitions, sometimes the most obvious flaw is that nobody gives a damn about the definiens, while lots care passionately about the concept supposedly defined. Dennett rightly nails much philosophizing about free will, on precisely this point.

Interlocutors: Aaronson: But this doesn't address the point, unless Knightian-uncertain "actions" somehow fall more under the control of the agent than do probabilistic processes. The truth is the opposite. I have the most control over actions that flow deterministically from my beliefs and desires. I have almost as much, if it is highly probable that my act will be the one that the beliefs and desires indicate is best. And I have none, if it is completely uncertain. For example, if I am pondering whether to eat a berry and then realize that this type of berry is fatally poisonous, this realization should ideally be decisive, with certainty. But I'll settle for 99.99...% probability that I don't eat it. If it is wide-open uncertain whether I will eat it - arbitrary in a deep way - that does not help my sense of control and agency. To put it mildly!

Finally, Aaronson has a close brush with the truth when he rejects the following
0ESRogs
I agree with you on unpredictability not being important for agency, but I don't understand the last third of this comment. What is the point that you are trying to make about bi-directional determinism? Specifically, could you restate the mistake you think Aaronson is making in the "our choices today might play a role" quote?
0torekp
Sorry, I was unclear. I don't think that's a mistake at all! The only "problem" is that it may be an understatement. On a bi-directional determinist picture, our choices today utterly decisively select one past, in a logical sense. That is, statements specifically describing a single past follow logically from statements describing our choices today plus other facts of today's universe. The present still doesn't cause the past, but that's a mere tautology: we call the later event the "effect" and the earlier one the "cause".
0ESRogs
That's not necessarily true if multiple pasts are consistent with the state of the present, right? In other words, if there is information loss as you move forward in time.
0torekp
Indeed. Those wouldn't be bi-directional determinist theories, though. Interestingly, QM gets portrayed a lot like a bi-directional determinist theory in the wiki article on the black hole information paradox. (I don't know enough QM to know how accurate that is.)
-1Manfred
Not bad at all :)
[-][anonymous]50

A better summary of Aaronson's paper:

I want to know:

Were Bohr and Compton right or weren’t they? Does quantum mechanics (specifically, say, the No-Cloning Theorem or the uncertainty principle) put interesting limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?

EY is mentioned once, for his work in popularizing cryonics, and not for anything fundamental to the paper. Several other LW luminaries like Silas Barta and Jaan Tallinn show up in the acknowledgements.

If you have... (read more)

4SilasBarta
Eh, I don't think I count as a luminary, but thanks :-) Aaronson's crediting me is mostly due to our exchanges on the blog for his paper/class about philosophy and theoretical computer science. One of them was about Newcomb's problem, where my main criticisms were a) he's overstating the level and kind of precision you would need when measuring a human for prediction; and b) that the interesting philosophical implications of Newcomb's problem follow from already-achievable predictor accuracies. The other was about average-human performance on 3SAT, where I was skeptical that the average person actually notices global symmetries like the pigeonhole principle. (And, to a lesser extent, whether the order in which you stack objects affects their height...)
[-][anonymous]40

That Aaronson mentions EY isn't exactly a surprise; the two shared a well-known discussion on AI and MWI several years ago. EY mentions it in the Sequences.

Rancor commonly arises when STEM discussions in general, and discussions of quantum mechanics in particular, focus upon personal beliefs and/or personal aesthetic sensibilities, as contrasted with verifiable mathematical arguments and/or experimental evidence and/or practical applications.

In this regard, a pertinent quotation is the self-proclaimed "personal belief" that Scott asserts on page 46:

"One obvious way to enforce a macro/micro distinction would be via a dynamical collapse theory. ... I personally cannot believe that Nature would

... (read more)

I feel that his rebuttal of the Libet-like experiments (Section 2.12) is strikingly weak, exactly where it should have been one of the strongest points. Scott says:

My own view is that the quantitative aspects are crucial when discussing these experiments.

What? Just because predicting human behaviour one minute before it happens with 99% accuracy is more impressive, it doesn't mean that it involves any different kind of process from predicting human behaviour 5 seconds before with 60% accuracy. Admittedly, it might imply a different kind, maybe ... (read more)

2ESRogs
Is there an existing principled explanation for why this is not relevant to the free will debate, but predicting less obvious behaviors is?
4MrMind
Because any working system evolved for self-preservation would do that. It doesn't add any bit of information, although it's a prediction that has striking accuracy.
3ESRogs
That seems to have already conceded the point by acknowledging that our behaviors are determined by systems. No? It seems that the argument must be that some of our behaviors are determined and some are the result of free will -- I'm wondering if there's a principled defense of this distinction.
0MrMind
The way I see it is this: if pressing a button of your choice is not an expression of free will, then nothing is, because otherwise you can just say that free will determines whatever in the brain is determined by quantum noise, so that it becomes an empty concept. That said, it's true that we don't know very much about the inner workings of the brain, but I believe that we know enough to say that it doesn't store and use quantum bits for its processing. But even before invoking that, Libet-like experiments directly link free will with available neuronal data: I'm not saying that it's a direct refutation, but it's a possible direct refutation. My pet peeve is the author not acknowledging the conclusion, instead saying that the experiments were not impressive enough to constitute a refutation of his claim.
1[anonymous]
It is completely not about being more or less impressive. If you can throw out the fMRI data and get better predictive power, something is wrong with the fMRI data. The fMRI results are not relevant, because quantum effects in the brain are noise on an fMRI. Aaronson explicitly locates any Knightian noise left in the system at the microscopic level; see the third paragraph under section 3. TL;DR: 2.12 is about forestalling a bad counterargument (that being the heading of 2.12) and does not give evidence against Knightian unpredictability.
1MrMind
Care to elaborate? Because otherwise I can say "it totally is!", and we leave it at that.

Absolutely not. You can always add the two and get even more predictive power. Notice in particular that the algorithm Scott uses looks at past entries in the button-pressing game, while the fMRI data concerns only the incoming entry. They are two very different kinds of prior information, and of course they have different predictive power. It doesn't mean that one or the other is wrong.

That's exactly the point: if (and I reckon it's a big if) the noise is irrelevant in predicting the behaviour of a person, then it means that in the limit, it's irrelevant in the uploading/emulation process. This is what the Libet-like experiments show, and the fact that with very poor prior information, like an fMRI, a person can be predicted with 60% accuracy and 4 seconds in advance, is to me a very strong indication in that sense, but it is not such for the author (who reduces the argument to an issue of impressiveness and of why those experiments are not a direct refutation).

As far as I can see, the correct conclusion should have been: the experiments show that it's possible to aggregate high-level neuronal data, very far from quantum noise, to predict people's behaviour in advance with higher-than-chance accuracy. This shows that, at least for this kind of task, quantum irreproducible noise is not relevant to the emulation or free will problem. Of course, nothing excludes (but at the same time, nothing warrants) that different kinds of phenomena will emerge in the investigation of higher-resolution experiments.
3Ronak
Basically, signals take time to travel. If it is ~.1 s, then predicting it that much earlier is just the statement that your computer has faster wiring. However, if it is a minute earlier, we are forced to consider the possibility - even if we don't want to - that something contradicting classical ideas of free will is at work (though we can't throw out travel and processing time either).
0[anonymous]
That's why it wasn't the entirety of my comment. Sigh. This is plainly wrong, as any Bayesian-minded person will know. P(X|A, B) = P(X|A) is not a priori forbidden by the laws of probability. Saying "absolutely not" when nobody's actually done the experiment yet (AFAIK) is disingenuous. If you actually believe this, then this conversation is completely pointless, and I'm annoyed that you've wasted my time.
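To make the "not a priori forbidden" point concrete, here is a toy joint distribution (the numbers are invented for illustration and have nothing to do with the actual experiments) in which a second measurement B is informative on its own, yet adds no predictive power once the first measurement A is conditioned on:

```python
# Toy distribution with structure A -> B and A -> X: once A (say, the history of
# past button presses) is known, B (say, an fMRI reading driven by the same
# underlying state) adds nothing, i.e. P(X | A, B) = P(X | A). Numbers are made up.
from itertools import product

p_a = {0: 0.5, 1: 0.5}                                    # P(A)
p_b_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}  # P(B | A)
p_x_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}  # P(X | A)

joint = {(a, b, x): p_a[a] * p_b_given_a[a][b] * p_x_given_a[a][x]
         for a, b, x in product((0, 1), repeat=3)}

def p_x1_given(a=None, b=None):
    """P(X = 1 | whatever evidence is supplied), by summing the joint table."""
    keep = lambda k: (a is None or k[0] == a) and (b is None or k[1] == b)
    num = sum(p for k, p in joint.items() if keep(k) and k[2] == 1)
    return num / sum(p for k, p in joint.items() if keep(k))

print(round(p_x1_given(a=1), 3))        # 0.6
print(round(p_x1_given(a=1, b=0), 3))   # 0.6 -- B adds nothing once A is known
print(round(p_x1_given(b=0), 3))        # 0.2 -- though B alone is informative
```

Whether the brain's statistics actually look like this is exactly the empirical question; the sketch only shows that "adding the two always gives more predictive power" is not guaranteed by the probability axioms.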
-2Eugine_Nier
So would you have been willing to draw the same conclusion from an experiment that predicted the button pushing 1 second before with 99.99999% probability by scanning the neurons in the arm?
0MrMind
As I said in another comment: no, because that doesn't add information, since pushing the button = neurons in the arm firing. The threshold is when the processing leaves the brain.
[-]Ronak-20

I like his causal answer to Newcomb's problem:

In principle, you could base your decision of whether to one-box or two-box on anything you like: for example, on whether the name of some obscure childhood friend had an even or odd number of letters. However, this suggests that the problem of predicting whether you will one-box or two-box is “you-complete.” In other words, if the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another copy of you (as discussed previously).

... (read more)
4Manfred
Simple but misleading. This is because Newcomb's problem is not reliant on the predictor being perfectly accurate. All they need to do is predict you so well that people who one-box walk away with more expected utility than people who two-box. This is easy - even humans can predict other humans this well (though we kinda evolved to be good at it). So if it's still worth it to one-box even if you're not being copied, what good is an argument that relies on you being copied to work?
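To put a rough number on "predict you so well": with the standard $1,000 / $1,000,000 payoffs, and a predictor that is right with probability q (q is a free parameter here, not a figure from the comment), one-boxers already come out ahead on average once q exceeds about 0.5005.

```python
# Average haul of one-boxers vs. two-boxers when the predictor is correct with
# probability q. Standard Newcomb payoffs; q is a free parameter (assumption).
def average_haul(q: float):
    one_boxers = q * 1_000_000 + (1 - q) * 0                 # box B is full iff predicted correctly
    two_boxers = q * 1_000 + (1 - q) * (1_000_000 + 1_000)   # box B is full iff predicted *in*correctly
    return one_boxers, two_boxers

for q in (0.5, 0.5005, 0.6, 0.9):
    ob, tb = average_haul(q)
    print(f"q={q}: one-boxers {ob:,.0f}, two-boxers {tb:,.0f}")
# one-boxing pulls ahead as soon as q > 0.5005, far short of "perfect prediction"
```

Nothing in this comparison requires the predictor to simulate you in enough detail to constitute a copy; the correlation alone does the work.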
0Ronak
In response to this, I want to roll back to saying that while you may not actually be simulated, having the programming to one-box is what causes there to be a million dollars in there. But, I guess that's the basic intuition behind one-boxing/the nature of prediction anyway so nothing non-trivial is left (except the increased ability to explain it to non-LW people). Also, the calculation here is wrong.
-2Eugine_Nier
Ok, in that case, am I allowed to roll a die to determine whether to one-box?
0Manfred
Depends on the rules. Who do I look like, Gary Drescher? What sort of rules would you implement to keep Newcomb's problem interesting in the face of coins that you can't predict?
-2Eugine_Nier
Why would I want to keep the problem interesting? I want to solve it.
5Ronak
Because the solution to the problem is worthless except to the extent that it establishes your position in an issue it's constructed to illuminate.
0TheOtherDave
It seems way simpler to leave out the "freely willed decision" part altogether. If we posit that the Predictor can reliably predict my future choice based on currently available evidence, it follows that my future choice is constrained by the current state of the world. Given that, what remains to be explained?
0Ronak
Yes, I agree with you - but when you tell some people that, the question arises of what is in the big-money box after Omega leaves... and the answer is "if you're considering this, nothing." A lot of other (non-LW) people I tell this to say it doesn't sound right. The bit just shows you that the seeming closed loop is not actually a closed loop, in a very simple and intuitive way** (oh and it actually agrees with "there is no free will"), and also it made me think of the whole thing in a new light (maybe other things that look like closed loops can be shown not to be in similar ways).

** Anna Salamon's cutting argument is very good too, but a) it doesn't make the closed-loop-seeming thing any less closed-loop-seeming and b) it's hard to understand for most people and I'm guessing it will look like garbage to people who don't default to compatibilism.
2TheOtherDave
I suppose. When dealing with believers in noncompatibilist free will, I typically just accept that on their view a reliable Predictor is not possible in the first place, and so they have two choices... either refuse to engage with the thought experiment at all, or accept that for purposes of this thought experiment they've been demonstrated empirically to be wrong about the possibility of a reliable Predictor (and consequently about their belief in free will). That said, I can respect someone refusing to engage with a thought experiment at all, if they consider the implications of the thought experiment absurd. As long as we're here, I can also respect someone whose answer to "Assume Predictor yadda yadda what do you do?" is "How should I know what I do? I am not a Predictor. I do whatever it is someone like me does in that situation; beats me what that actually is."
0Ronak
I usually deal with people who don't have strong opinions either way, so I try to convince them. Given total non-compatibilists, what you do makes sense. Also, it struck me today that this gives a way of one-boxing within CDT. If you naively blackbox prediction, you would get an expected utility table {{1000,0},{1e6+1e3,1e6}}, where two-boxing always gives you 1000 dollars more. But, once you realise that you might be a simulated version, the expected utility of one-boxing is 1e6 but of two-boxing is now 5e5+1e3. So, one-box. A similar analysis applies to the counterfactual mugging. Further, this argument actually creates immunity to the response 'I'll just find a qubit arbitrarily far back in time and use the measurement result to decide.' I think a self-respecting TDT would also have this immunity, but there's a lot to be said for finding out where theories fail - and Newcomb's problem (if you assume the argument about you-completeness) seems not to be such a place for CDT. Disclaimer: My formal knowledge of CDT is from Wikipedia and can be summarised as 'choose the A that maximises $\Sigma_i P(A \rightarrow O_i) D(O_i)$, where D is the desirability function and P the probability function.'

But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus.

If he says:

"In this essay I’ll argue strongly for a different perspective: that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question [scanning of minds possible?] is yes, and other such worlds where the answer is no."

and he's right, then LW consensus is religion (in other words, you made up your mind too early).

I'm not quite sure what you mean here. Do you mean that if he's right, then LW consensus is wrong, and that makes LW consensus a religion?

That seems both wrong and rather mean to both LW consensus and religion.

-2IlyaShpitser
LW consensus is not necessarily wrong, even if Scott is right. However, making up your mind on unsettled empirical questions (which is what LW has done, if Scott is right) is a dangerous practice.

----------------------------------------

I found the phrasing "he then moves in a direction that's very far from any kind of LW consensus" broadly similar to "he's not accepting the Nicene Creed, good points though he may make." Is there even a non-iffy reason to say this about an academic paper?
6DanielVarga
I was trying to position the paper in terms of LW opinions, because my target audience were LW readers. (That's also the reason I mentioned the tangential Eliezer reference.) It's beneath my dignity to list all the different philosophical questions where my opinion is different from LW consensus, so let's just say that I used the term as a convenient reference point rather than a creed.
-4IlyaShpitser
Why?
3somervta
Presumably, he wanted some relatively quick way to tell people why he was posting it to lesswrong, and what they should expect from it.
5Luke_A_Somers
Either this is self-contradictory, or it means 'never be wrong'. If you're always right, you're making too few claims and therefore being less effective than you could be. Being wrong doesn't mean you're doing it wrong. As for iffiness, I read that phrase more as "Interesting argument ahead!"
-2IlyaShpitser
I think if you are making up your mind on unsettled empirical questions, you are a bad Bayesian. You can certainly make decisions under uncertainty, but you shouldn't make up your mind. And anyways, I am not even sure how to assign priors for the upload fidelity questions.
2Luke_A_Somers
In that case, you're the one who made the jump from 'goes against consensus' to 'this was assigned 0 probability'. If we all agreed that some proposition was 0.0001% likely, then claiming that this proposition is true would seem to me to be going against consensus.
1IlyaShpitser
Ok, what exactly is your posterior belief that uploads are possible? What would you say is the average LW posterior belief in the same? Where did this number come from? How much 'cognitive effort' is spent at LW thinking about the future where uploads are possible vs. the future where uploads are not possible?
3Luke_A_Somers
To answer the last question first - not a heck of a lot, but some. It was buried in an 'impossible possible world', but lack of uploading was not what made it the impossible possible world, so that doesn't mean that it's considered impossible.

To answer your questions:

-- Somewhere around 99.5% that it's possible for me. The reasons for it to be possible are pretty convincing.

-- I would guess that the median estimate of likelihood among active posters who even have an estimate would be above 95%, but that's a pretty wild guess. Taking the average would probably amount to a bit less than the fraction of people who think it'll work, so that's not very meaningful. My estimate of that is rough - I checked the survey, but the most applicable question was cryonics, and of course cryonics can be a bad idea even if uploading is possible (if you think that you'll end up being thawed instead of uploaded). And of course if you somehow think you could be healed instead of uploaded, it could go the other way. 60% were on the fence or in favor of getting cryonically preserved, which means they think that the total product of the cryo Drake equation is noticeable. Most cryo discussions I've seen here treat organization as the main problem, which suggests that a majority consider recovery a much less severe problem. Being pessimistic for a lower bound on that gives me 95%.

-- The most likely to fail part of uploading is the scanning. Existing scanning technology can take care of anything as large as a dendrite (though in an unreasonably large amount of time). So, for uploading to be impossible, it would have to require either dynamical features or features which would necessarily be destroyed by any fixing process, and no other viable mechanism.

* The former seems tremendously unlikely because personality can recover from some pretty severe shocks to the system like electrocution, anaerobic metabolic stasis, and inebriation (or other neurotoxins). I'd say that there being som
0IlyaShpitser
Ok -- thanks for a detailed response. To be honest, I think you are quibbling. If your posterior is 99.5%, and 95% if being pessimistic, you made up your mind essentially as far as a mind can be made up in practice. If the answer to the upload question depends on an empirical test that has not yet been done (because of lack of tech), then you made up your mind too soon. I think a cynic would say you talk about the upload future more because it's much nicer (e.g. you can conquer death!)
0Luke_A_Somers
These two statements clash very strongly. VERY strongly.
0IlyaShpitser
They don't. 99.5% is far too much.
0Luke_A_Somers
If you can predict the outcome of the empirical test with that degree of confidence or a higher one, then they're perfectly compatible. We're talking about what's physically possible with any plan of action and physically possible capabilities, not merely what can be done with today's tech. The negative you're pushing is actually a very, very strong nonexistence statement.
9paulfchristiano
I would guess that he thinks that the probability of this hypothetical---worlds in which brain scanning isn't possible---is pretty low (based on having discussed it briefly with him). I'm sure everyone around here thinks it is possible as well, it's just a question of how likely it is. It may be worth fleshing out the perspective even if it is relatively improbable. In particular, the probability that you can't get a functional human out of a brain scan seems extremely low (indeed, basically 0 if you interpret "brain scan" liberally), and this is the part that's relevant to most futurism. Whether there can be important aspects of your identity or continuity of experience that are locked up in uncopyable quantum state is more up for grabs, and I would be much more hesitant to bet against that at 100:1 odds. Again, I would guess that Scott takes a similar view.
7ScottAaronson
Hi Paul. I completely agree that I see no reason why you couldn't "get a functional human out of a brain scan" --- though even there, I probably wouldn't convert my failure to see such a reason into a bet at more than 100:1 odds that there's no such reason. (Building a scalable quantum computer feels one or two orders of magnitude easier to me, and I "merely" staked $100,000 on that being possible --- not my life or everything I own! :-) ) Now, regarding "whether there can be important aspects of your identity or continuity of experience that are locked up in uncopyable quantum state": well, I regard myself as sufficiently confused about what we even mean by that idea, and how we could decide its truth or falsehood in a publicly-verifiable way, that I'd be hesitant to accept almost ANY bet about it, regardless of the odds! If you like, I'm in a state of Knightian uncertainty, to whatever extent I even understand the question. So, I wrote the essay mostly just as a way of trying to sort out my thoughts.
2Viliam_Bur
If it is so easy, could someone please explain the main idea to me in less than 85 pages? (Let's suppose that the scanned mind does not have to be an absolutely perfect copy; that differences as big as the difference between me now and me 1 second later are acceptable.)

Absolutely, here's the relevant quote:

"The question also has an “empirical core” that could turn out one way or another, depending on details of the brain’s physical organization that are not yet known. In particular, does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that

(1) encode everything relevant to memory and cognition,

(2) can be accurately modeled as performing a classical digital computation, and

(3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random-number sources, generating noise according to prescribed probability distributions?"

You could do worse things with your time than read the whole thing, in my opinion.
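To make criterion (3) concrete, here is a toy sketch of what such an abstraction layer would look like computationally: the macro-level update is an ordinary deterministic function, and the microscopic degrees of freedom are consulted only as a noise source with a prescribed distribution. The names and numbers are hypothetical illustrations, not a model from the paper.

```python
import random

def step(macro_state: dict, noise: random.Random) -> dict:
    """One update of the macroscopic degrees of freedom.

    Everything relevant to "memory and cognition" lives in macro_state (criterion 1),
    the update rule is a classical digital computation (criterion 2), and the
    microscopic level is consulted only through noise.random(), i.e. as a pure
    random-number source with a prescribed distribution (criterion 3).
    """
    fired = noise.random() < macro_state["firing_prob"]
    return {
        "firing_prob": macro_state["firing_prob"],
        "spike_count": macro_state["spike_count"] + int(fired),
    }

state = {"firing_prob": 0.3, "spike_count": 0}
micro_noise = random.Random(0)   # stand-in for the quantum-mechanical degrees of freedom
for _ in range(100):
    state = step(state, micro_noise)
print(state["spike_count"])      # roughly 30, give or take
```

Aaronson's empirical question is whether the brain actually admits a factorization like this; the sketch only illustrates what the three criteria would mean for a system that does.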

3Viliam_Bur
Thank you for the quote! (I tried to read the article, but after a few pages it seemed to me the author makes too many digressions, and I didn't want to know his opinions on everything, only on the technical problems with scanning brains.)

Do I understand it correctly that the question is, essentially, whether there exists a more efficient way of modelling the brain than modelling all the particles of the brain? Because if there is no such efficient way, we can probably forget about running the uploaded brains in real time. Then, even assuming we could successfully scan the brains, we could get some kind of immortality, but we could not get greater speed, or make life cheaper... which is necessary for the predicted economic consequences of "ems".

Some smaller economic impacts could still be possible, for example if a person were so miraculously productive that even running them at 100× slower speed and 1000× higher cost could be worthwhile. (Not easy to imagine, but technically not impossible.) Or perhaps if the quality of life increases globally, the costs of real humans could grow faster than the costs of emulated humans, so at some moment emulation could become economically meaningful.

Still, my guess is that there probably is a way to emulate the brain more efficiently, because it is a biological mechanism made by evolution, so it has a lot of backwards compatibility and chemistry (all those neurons have metabolism).
6IlyaShpitser
I don't presume to speak for Scott, but my interpretation is that it's not a question of efficiency but fidelity (that is, it may well happen that classical sims of brains are closely related to the brain/person scanned but aren't the same person, or may indeed not be a person of any sort at all. Quantum sims are impossible due to no-cloning). For more detailed questions I am afraid you will have to read the paper.
5Ronak
No, his thesis is that it is possible that even a maximal upload wouldn't be human in the same way. His main argument goes like this:

a) There is no way to find out the universe's initial state, thanks to no-cloning, the requirement of low entropy, and there being only one copy.

b) So we have to talk about uncertainty about wavefunctions - something he calls Knightian uncertainty (roughly, a probability distribution over probability distributions).

c) It is conceivable that particles in which the Knightian uncertainties linger (ie they have never interacted with anything macroscopic enough for decoherence to happen) mess around with us, and it is likely that our brain and only our brain is sensitive enough to one photon for that to mess around with how it would otherwise interact (he proposes Na-ion pathways).

d) We define "non-free" as something that can be predicted by a superintelligence without destroying the system (ie you can mess around with everything else if you want, though within reasonable bounds the interior of which we can see extensively).

e) Because of Knightian uncertainty it is impossible to predict people, if such an account is true.

My disagreements (well, not quite - more, why I'm still compatibilist after reading this):

a) predictability is different from determinism - his argument never contradicts determinism (modulo prob dists but we never gave a shit about that anyway) unless we consider Knightian uncertainties ontological rather than epistemic (and I should warn you that physics has a history of things making a jump from one to the other rather suddenly). And if it's not deterministic, according to my interpretation of the word, we wouldn't have free will any more.

b) this freedom is still basically random. It has more to do with your identification of personality than anything Penrose ever said, because these freebits only hit you rarely and only at one place in your brain - but when they do affect it they affect it randomly amo
0Ronak
For less than 85 pages, his main argument is in sections 3 and 4, ~20 pages.
2MrMind
Easily imagining worlds doesn't mean they are possible or even consistent, as per the p-zombie world. This is not an argument against Aaronson's paper in general (although I think it's far from correct), but against your deduction. Plus, I think there exist multiple, reasonable and independent arguments that favor the LW consensus, and this is evidential weight against Aaronson's paper, not the opposite.
3IlyaShpitser
I think he proposes an empirical question the answer to which influences whether e.g. uploading is possible. Do you think his question has already been answered? Do you have links explaining this, if so?
-2MrMind
I have yet to read the full paper, so a full reply will have to wait. But I've already commented that he hand-waves away a sensible argument against his thesis, so this is not promising.
-5Will_Newsome