shminux: Thanks so much for compiling these notes and quotes! But I should say that I thought the other LW thread was totally fine. Sure, lots of people strongly disagreed with me, but I'd be disappointed if LW readers didn't! And when one or two people who hadn't read the paper got things wrong, they were downvoted and answered by others who had. Kudos to LW for maintaining such a high-quality discussion about a paper that, as DanielVarga put it, "moves in a direction that's very far from any kind of LW consensus."
If someone is interested in freedom but does not think unpredictability is fundamental to freedom, they are unlikely to be very interested in engaging with a lengthy paper arguing for unpredictability. And the view that unpredictability is not fundamental to freedom is pretty widespread, especially among compatibilists. An unpredictable outcome seems a lot like a random outcome, and something being random seems quite different from it being up to me, from it being under my control. Now, of course, some people think anything predictable can't be free, but if so, the conclusion would seem to be that there is no such thing as freedom, since saying the predictable is unfree doesn't do anything to undermine the reasons for thinking the unpredictable is unfree.
Just as a quick point of information, these arguments are all addressed in Sections 2.2 and 3.1. In particular, while I share the common intuition that "random" is just as incompatible with "free" as "predictable" is, the crucial observation is that "unpredictable" does not in any way imply "random" (in the sense of governed by some knowable probability distribution). But there's a broader point. Suppose we accepted, for argument's sake, that unpredictability is not "fundamental to freedom" (whatever we take "freedom" to mean). Wouldn't the question of whether human choices are predictable or not remain interesting enough in its own right?
Take two heretofore identical Earths, A and B, in an infinite universe, which are about to diverge based on your decision, and suppose it is impossible for a superintelligence to predict this decision, even probabilistically, because it is based on a freebit:
Suppose you just have the one freebit - it's your standard issue qubit, and you keep it in a box at absolute zero in case you need to make a decision. If this superintelligence can predict you perfectly except for this one qubit, why wouldn't it just assign a uniform probability distribution to the qubit's values and then simulate you for different qubit values to obtain a probability distribution?
I'm probably just confused about something.
The relevant passage of the essay (p. 65) goes into more detail than the paraphrase you quoted, but the short answer is: how does the superintelligence know it should assume the uniform distribution, and not some other distribution? For example, suppose someone tips it off about a third Earth, C, which is "close enough" to Earths A and B even if not microscopically identical, and in which you made the same decision as in B. Therefore, this person says, the probabilities should be adjusted to (1/3,2/3) rather than (1/2,1/2). It's not obvious whether the person is right---is Earth C really close enough to A and B?---but the superintelligence decides to give the claim some nonzero credence. Then boom, its prior is no longer uniform. It might still be close, but if there are thousands of freebits, then the distance from uniformity will quickly get amplified to almost 1.
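To make that last claim concrete, here is a quick sketch (my own illustration, not from the essay), assuming for concreteness the same (1/3, 2/3) tilt independently on each freebit. The Bhattacharyya coefficient multiplies across independent bits and gives a standard lower bound on the total variation distance from the uniform distribution:

```python
import math

def tv_from_uniform_lower_bound(eps, n):
    """Lower bound on the total variation distance between n uniform bits and
    n independent bits with bias 1/2 + eps, via TV >= 1 - BC(P, Q)^n, where the
    Bhattacharyya coefficient BC multiplies across independent bits."""
    bc = math.sqrt(0.5 * (0.5 + eps)) + math.sqrt(0.5 * (0.5 - eps))
    return 1 - bc ** n

# A per-bit tilt of (1/3, 2/3) corresponds to eps = 1/6.
for n in (1, 10, 100, 1000):
    print(n, round(tv_from_uniform_lower_bound(1 / 6, n), 4))
# With a single bit the prior is still close to uniform (~0.014 away),
# but by a thousand bits the distance is essentially 1.
```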
Your prescription corresponds to E. T. Jaynes's "MaxEnt principle," which basically says to assume a uniform (or more generally, maximum-entropy) prior over any degrees of freedom that you don't understand. But the conceptual issues with MaxEnt are well-known: the uniform prior over what, exactl...
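For a toy version of the "uniform prior over what, exactly?" worry (again my illustration, not Scott's): a prior that is uniform in one parameterization is not uniform in another, so "maximum ignorance" depends on which variable you decided to be ignorant about.

```python
import numpy as np

rng = np.random.default_rng(0)

# A prior that is uniform in a parameter p is *not* uniform in the
# reparameterization q = p**2 (its density there is 1 / (2*sqrt(q))),
# so the two "ignorance" priors make different predictions.
p_uniform = rng.uniform(0.0, 1.0, 100_000)
q_induced = p_uniform ** 2                  # push the uniform-in-p prior onto q
q_uniform = rng.uniform(0.0, 1.0, 100_000)  # the uniform-in-q prior

print(round(q_induced.mean(), 2), round(q_uniform.mean(), 2))  # ~0.33 vs ~0.50
```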
Why should human choices being randomized by some hypothetical primordial 'freebits' be any different in practice from them being randomized by the seething lottery-ball bounces of trillions of molecules at dozens of meters per second inside cells? That's pretty damn random.
In both cases, the question that interests me is whether an external observer could build a model of the human, by non-invasive scanning, that let it forecast the probabilities of future choices in a well-calibrated way. If the freebits or the trillions of bouncing molecules inside cells served only as randomization devices, then they wouldn't create any obstruction to such forecasts. So the relevant possibility here is that the brain, or maybe other complex systems, can't be cleanly decomposed into a "digital computation part" and a "microscopic noise part," such that the former sees the latter purely as a random number source. Again, I certainly don't know that such a decomposition is impossible, but I also don't know any strong arguments from physics or biology that assure us it's possible -- as they say, I hope future research will tell us more.
As y'all know, I agree with Hume (by way of Jaynes) that the error of projecting internal states of the mind onto the external world is an incredibly common and fundamental hazard of philosophy.
Probability is in the mind to start with; if I think that 103,993 has a 20% chance of being prime (I haven't checked, but the Prime Number Theorem, plus its not being divisible by 2, 3, or 5, gives a wild ballpark estimate), then this uncertainty is a fact about my state of mind, not a fact about the number 103,993. Even if there are many-worlds whose frequencies correspond to some uncertainties, that itself is just a fact; probability is in the map, not in the territory.
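For the record, here is the ballpark made explicit (a sketch of the estimate only, not a primality check): the Prime Number Theorem puts the density of primes near n at about 1/ln(n), and conditioning on indivisibility by 2, 3, and 5 rescales that by 15/4.

```python
import math

n = 103_993
density = 1 / math.log(n)                         # PNT: ~0.087 near n
frac_coprime_to_30 = (1 / 2) * (2 / 3) * (4 / 5)  # fraction not divisible by 2, 3, or 5
estimate = density / frac_coprime_to_30
print(round(estimate, 2))  # ~0.32 -- same ballpark as the 20% figure above
```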
Then we have Knightian uncertainty, which is how I feel when I try to estimate AI timelines, i.e., when I query my brain on different occasions it returns different probability estimates, and I know there are going to be some effects which aren't on my causal map. This is a kind of doubly-subjective double-uncertainty. Of course you still have to turn it into betting odds, on pain of violating von Neumann-Morgenstern; see also the Ellsberg paradox, where giving ambiguity special treatment leads to inconsistent decision-making.
Taking this doubly-map-le...
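For readers who haven't seen the Ellsberg paradox mentioned above: the classic urn has 30 red balls and 60 balls that are black or yellow in unknown proportion; most people prefer betting on red over black, yet on black-or-yellow over red-or-yellow. A minimal sketch (mine, with the standard numbers) showing that no single probability for "black" rationalizes both preferences at once:

```python
P_RED = 1 / 3  # 30 of 90 balls are red; black + yellow = 2/3 in unknown proportion

def both_preferences_consistent(p_black):
    """Check whether a single probability for 'black' makes both common
    Ellsberg preferences maximize expected value simultaneously."""
    p_yellow = 2 / 3 - p_black
    prefers_red_over_black = P_RED > p_black
    prefers_black_or_yellow_over_red_or_yellow = p_black + p_yellow > P_RED + p_yellow
    return prefers_red_over_black and prefers_black_or_yellow_over_red_or_yellow

# Scan candidate values of p_black in [0, 2/3]: no assignment works.
print(any(both_preferences_consistent(k / 1500) for k in range(1001)))  # False
```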
Hi Eliezer,
(1) One of the conclusions I came to from my own study of QM was that we can't always draw as sharp a line as we'd like between "map" and "territory." Yes, there are some things, like Stegosauruses, that seem clearly part of the "territory"; and others, like the idea of Stegosauruses, that seem clearly part of the "map." But what about (say) a quantum mixed state? Well, the probability distribution aspect of a mixed state seems pretty "map-like," while the quantum superposition aspect seems pretty "territory-like" ... but oops! we can decompose the same mixed state into a probability distribution over superpositions in infinitely-many nonequivalent ways, and get exactly the same experimental predictions regardless of what choice we make.
(Since you approvingly mentioned Jaynes, I should quote the famous passage where he makes the same point: "But our present QM formalism is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature --- all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to uns...
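The nonequivalent-decompositions point is easy to check numerically. A minimal sketch (mine, assuming numpy): the 50/50 ensemble of |0⟩ and |1⟩ and the 50/50 ensemble of |+⟩ and |−⟩ yield exactly the same density matrix, hence the same predictions for every possible measurement.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def density_matrix(ensemble):
    """Mixed state of a probability distribution over pure states."""
    return sum(p * np.outer(psi, psi.conj()) for p, psi in ensemble)

rho_a = density_matrix([(0.5, ket0), (0.5, ket1)])   # "classical coin" ensemble
rho_b = density_matrix([(0.5, plus), (0.5, minus)])  # ensemble of superpositions

print(np.allclose(rho_a, rho_b))  # True: no measurement can tell them apart
```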
Well, the probability distribution aspect of a mixed state seems pretty "map-like," while the quantum superposition aspect seems pretty "territory-like" ... but oops! we can decompose the same mixed state into a probability distribution over superpositions in infinitely-many nonequivalent ways, and get exactly the same experimental predictions regardless of what choice we make.
I think the underlying problem here is that we're using the word "probability" to denote at least two different things, where those things are causally related in ways that keep them almost consistent with each other but not quite. Any system which obeys the axioms of Cox's theorem can potentially be called probability. The numbers representing subjective judgements of an idealized reasoner satisfy those axioms; call these reasoner subjective probabilities, P_r(event,reasoner). The numbers representing a quantum mixed state do too; call these quantum probabilities, P_q(event,observer).
For an idealized reasoner who knows everything about quantum physics, has unlimited computational power, and has some numbers from the quantum system to start with, these two sets of numbers can ...
"Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability."
I think the sentence above nicely pinpoints where I part ways from you and Eliezer. To put it bluntly, if a fact is impossible for any physical agent to learn, according to the laws of physics, then that's "inherently unknowable" enough for me! :-) Or to say it even more strongly: I don't actually care much whether someone chooses to regard the unknowability of such a fact as "part of the map" or "part of the territory" -- any more than, if a bear were chasing me, I'd worry about whether aggression was an intrinsic attribute of the bear, or an attribute of my human understanding of the bear. In the latter case, I mostly just want to know what the bear will do. Likewise in the former case, I mostly just want to know whether the fact is knowable -- and if it isn't, then why! I find it strange that, in the free-will discussion, so many commentators seem to pass over the empirical question (in what senses can human decisions actually be predicted?) without even evincing curiosity about it, in their rush to arg...
Eliezer, with due respect, your comment consisted of re-iterating a bunch of basic arguments that Scott has seen many times before, without even attempting to engage with any of Scott's actual arguments. This seems a bit uncharitable...
"Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. [...] where physical systems are allowed to evolve reversibly, free from contact with their external environments." E.g. the normal causal arrows break down for, say, CMB photons. -- Not sure how Scott jumps from reversible evolution to backward causality.
It's a few paragraphs up, where he says:
...Now, the creation of reliable memories and records is essentially always
Rancor commonly arises when STEM discussions in general, and discussions of quantum mechanics in particular, focus upon personal beliefs and/or personal aesthetic sensibilities, as contrasted with verifiable mathematical arguments and/or experimental evidence and/or practical applications.
In this regard, a pertinent quotation is the self-proclaimed "personal belief" that Scott asserts on page 46:
..."One obvious way to enforce a macro/micro distinction would be via a dynamical collapse theory. ... I personally cannot believe that Nature would
This last speculative idea could be tested if it is shown that small quantum fluctuations can be chaotically amplified to macroscopic levels.
For this not to be the case would require a heck of a lot of new physics.
Also and separately, it seems very hard to me to prepare an experiment to test this assertion.
The point of dissolving the free will question was that it doesn't matter what physics we run on. There is in fact no physics which could possibly cause me to believe I had "free will" in the sense of somehow determining my actions outside of physics, because any method for determining my actions is a physical process. In every possible consistent physics, where the laws of physics are more or less constant, I will believe I have "free will" in the sense that the output of my brain corresponds to the algorithm I feel like I'm implementi...
This highly speculative paper has been discussed here before, but I found the discussion's quality rather disappointing. People generally took bits and pieces out of context and then mostly offered arguments already addressed in the paper. Truly the internet is the most powerful misinterpretation engine ever built. It's nice to see that Scott, who is no stranger to online adversity, is taking it in stride.
So I went through the paper and took notes, which I posted on my blog, but I am also attaching them below, in the hope that someone else here finds them useful. I initially intended to write this up as a comment in the other thread, but it grew too large for a comment, so I am making this post. Feel free to downvote if you think it does not belong in Discussion (or for any other reason, of course).
TL;DR: The main idea of the paper is, as far as I can tell, that it is possible to construct a physical model, potentially related to the "free" part of the free-will debate, in which some events cannot be predicted at all, not even probabilistically in the way Quantum Mechanics allows. Scott also proposes one possible mechanism for this "Knightian unpredictability": the not-yet-decohered parts of the initial state of the universe, such as the Cosmic Microwave Background radiation. He does not take a position on whether the model is correct, only that it is potentially testable and thus shifts a small piece of the age-old philosophical debate on free will into the realm of physics.
For those here who say that the free-will question has been dissolved, let me note that the picture presented in the paper is one explicitly rejected by Eliezer, probably a bit hastily. Specifically, in this diagram:
Eliezer says that the sequential picture on the left is the only correct one, whereas Scott offers a perfectly reasonable model which is better described by the picture on the right. To reiterate: there is a part of the past (Scott calls these "microfacts") which evolves reversibly and unitarily until some time in the future. Because this part has not yet been measured, there is no way, not even probabilistically, to estimate its influence on some future event in which some of those microfacts interact with the rest of the world and decohere, thus affecting "macrofacts", potentially including human choices. This last speculative idea could be tested if it is shown that small quantum fluctuations can be chaotically amplified to macroscopic levels. If this model is correct, it may have significant consequences for whether a human mind can be successfully cloned and for whether an AI can be called sentient, or even how it can be made sentient.
My personal impression is that Scott's arguments are much better thought through than the speculations by Penrose in his books, but you may find otherwise. I also appreciate this paper for doing what mainstream philosophers are qualified and ought to do, but consistently fail to do: look at one of the Big Questions, chip away some small solvable piece of it, and offer this piece to qualified scientists.
Anyway, below are my notes and quotes. If you think you have found an obvious objection to some of the quotes, this is likely because I did not provide enough context, so please read the relevant section of the paper before pointing it out. It may also be useful to recite the Litany of a Bright Dilettante.
p.6. On QM's potentially limiting "an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems" : "In this essay I’ll argue strongly [...] that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question is yes, and other such worlds where the answer is no. And we don’t yet know which kind we live in."
p. 7. "The [...] idea—that of being “willed by you”—is the one I consider outside the scope of science, for the simple reason that no matter what the empirical facts were, a skeptic could always deny that a given decision was “really” yours, and hold the true decider to have been God, the universe, an impersonating demon, etc. I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong."
"the situation seems different if we set aside the “will” part of free will, and consider only the “free” part."
"I’ll use the term freedom, or Knightian freedom, to mean a certain strong kind of physical unpredictability: a lack of determination, even probabilistic determination, by knowable external factors. [..] we lack a reliable way even to quantify using probability distributions."
p.8. "I tend to see Knightian unpredictability as a necessary condition for free will. In other words, if a system were completely predictable (even probabilistically) by an outside entity—not merely in principle but in practice—then I find it hard to understand why we’d still want to ascribe “free will” to the system. Why not admit that we now fully understand what makes this system tick?"
p.12. "from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is." -- professional philosophers would do well to keep this in mind. Of course, once you break off such answerable part, it tends to leave the realm of philosophy and become a natural science of one kind or another. Maybe something useful professional philosophers could do is to look for "answerable parts", break them off and pass along to the experts in the subject matter. And maybe look for the answers in the natural sciences and see how they help sculpt the "unanswerable riddles".
p.14. Weak compatibilism: "My perspective embraces the mechanical nature of the universe’s time-evolution laws, and in that sense is proudly “compatibilist.” On the other hand, I care whether our choices can actually be mechanically predicted—not by hypothetical Laplace demons but by physical machines. I’m troubled if they are, and I take seriously the possibility that they aren’t (e.g., because of chaotic amplification of unknowable details of the initial conditions)."
p.19. Importance of copyability: "the problem with this response [that you are nothing but your code] is simply that it gives up on science as something agents can use to predict their future experiences. The agents wanted science to tell them, “given such-and-such physical conditions, here’s what you should expect to see, and why.” Instead they’re getting the worthless tautology, “if your internal code causes you to expect to see X, then you expect to see X, while if your internal code causes you to expect to see Y, then you expect to see Y.” But the same could be said about anything, with no scientific understanding needed! To paraphrase Democritus, it seems like the ultimate victory of the mechanistic worldview is also its defeat." -- If a mind cannot be copied perfectly, then there is no such thing as your "code", i.e. an algorithm which can be run repeatedly.
p.20. Constrained determinism: "A form of “determinism” that applies not merely to our universe, but to any logically possible universe, is not a determinism that has “fangs,” or that could credibly threaten any notion of free will worth talking about."
p.21. Bell's theorem, quoting Conway and Kochen: "if there’s no faster-than-light communication, and Alice and Bob have the “free will” to choose how to measure their respective particles, then the particles must have their own “free will” to choose how to respond to the measurements." -- the particles' "free will" is still constrained by the laws of Quantum Mechanics, however.
p.23. Multiple (micro-)past compatibilism: "multiple-pasts compatibilism agrees that the past microfacts about the world determine its future, and it also agrees that the past macrofacts are outside our ability to alter. [...] our choices today might play a role in selecting one past from a giant ensemble of macroscopically-identical but microscopically-different pasts."
p.26. Singulatarianism: "all the Singulatarians are doing is taking conventional thinking about physics and the brain to its logical conclusion. If the brain is a “meat computer,” then given the right technology, why shouldn’t we be able to copy its program from one physical substrate to another? [...] given the stakes, it seems worth exploring the possibility that there are scientific reasons why human minds can’t be casually treated as copyable computer programs: not just practical difficulties, or the sorts of question-begging appeals to human specialness that are child’s-play for Singulatarians to demolish. If one likes, the origin of this essay was my own refusal to accept the lazy cop-out position, which answers the question of whether the Singulatarians’ ideas are true by repeating that their ideas are crazy and weird. If uploading our minds to digital computers is indeed a fantasy, then I demand to know what it is about the physical universe that makes it a fantasy."
p.27. Predictability of human mind: "I believe neuroscience might someday advance to the point where it completely rewrites the terms of the free-will debate, by showing that the human brain is “physically predictable by outside observers” in the same sense as a digital computer."
p.28. Em-ethics: "I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner." -- E.g. it's not immoral to stop a simulation which can be resumed or restored from a backup. (The cryonics implications are obvious.) "Deleting the last copy of an em in existence should be prosecuted as murder, not because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would." -- Again, this is a pretty transhumanist view, see the anti-deathist position of Eliezer Yudkowsky as expressed in HPMoR.
p.29. Probabilistic uncertainty vs Knightian uncertainty: "if we see a conflict between free will and the deterministic predictability of human choices, then we should see the same conflict between free will and probabilistic predictability, assuming the probabilistic predictions are as accurate as those of quantum mechanics. [...] If we know a system’s quantum state ρ, then quantum mechanics lets us calculate the probability of any outcome of any measurement that might later be made on the system. But if we don’t know the state, then ρ itself can be thought of as subject to Knightian uncertainty."
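As a reminder of what "the state determines all the probabilities" means operationally, here is a minimal Born-rule sketch (mine, assuming numpy): given a known ρ, every measurement outcome has a definite probability Tr(Pρ); the Knightian part only enters when ρ itself is unknown.

```python
import numpy as np

def outcome_probability(rho, projector):
    """Born rule for a projective measurement: Pr[outcome] = Tr(P @ rho)."""
    return float(np.real(np.trace(projector @ rho)))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())     # a known quantum state
P0 = np.diag([1.0, 0.0])              # projector onto |0>
print(outcome_probability(rho, P0))   # 0.5 -- fully determined once rho is known
```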
On the source of this unquantifiable "Knightian uncertainty": "in current physics, there appears to be only one source of Knightian uncertainty that could possibly be both fundamental and relevant to human choices. That source is uncertainty about the microscopic, quantum-mechanical details of the universe’s initial conditions (or the initial conditions of our local region of the universe)"
p.30. "In economics, the “second type” of uncertainty—the type that can’t be objectively quantified using probabilities—is called Knightian uncertainty, after Frank Knight, who wrote about it extensively in the 1920s [49]. Knightian uncertainty has been invoked to explain phenomena from risk-aversion in behavioral economics to the 2008 financial crisis (and was popularized by Taleb [87] under the name “black swans”)."
p.31. "I think that the free-will-is-incoherent camp would be right, if all uncertainty were probabilistic." Bayesian fundamentalism: "Bayesian probability theory provides the only sensible way to represent uncertainty. On that view, “Knightian uncertainty” is just a fancy name for someone’s failure to carry a probability analysis far enough."" Against the Dutch-booking argument for Bayesian fundamentalism: "A central assumption on which the Dutch book arguments rely—basically, that a rational agent shouldn’t mind taking at least one side of any bet—has struck many commentators as dubious."
p.32. Objective prior: "one can’t use Bayesianism to justify a belief in the existence of objective probabilities underlying all events, unless one is also prepared to defend the existence of an “objective prior.”"
Universal prior: "a distribution that assigns a probability proportional to 2^(−n) to every possible universe describable by an n-bit computer program." Why it may not be a useful "true" prior: "a predictor using the universal prior can be thought of as a superintelligent entity that figures out the right probabilities almost as fast as is information-theoretically possible. But that’s conceptually very different from an entity that already knows the probabilities."
p.34. Quantum no-cloning: "it’s possible to create a physical object that (a) interacts with the outside world in an interesting and nontrivial way, yet (b) effectively hides from the outside world the information needed to predict how the object will behave in future interactions."
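The standard linearity obstruction behind no-cloning can be seen in a few lines of linear algebra (a textbook illustration, not something from the essay): a CNOT gate "copies" the basis states |0⟩ and |1⟩ onto a blank qubit, but applied to |+⟩ it produces an entangled Bell pair rather than two copies of |+⟩.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

actual = CNOT @ np.kron(plus, ket0)  # (|00> + |11>)/sqrt(2): an entangled Bell state
wanted = np.kron(plus, plus)         # what a genuine cloner of |+> would have produced
print(np.allclose(actual, wanted))   # False -- the same gate cannot clone both bases
```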
p.35. Quantum teleportation answers the problem of "what to do with the original after you fax a perfect copy of you to be reconstituted on Mars": "in quantum teleportation, the destruction of the original copy is not an extra decision that one needs to make; rather, it happens as an inevitable byproduct of the protocol itself"
p.36. Freebit picture: "due to Knightian uncertainty about the universe’s initial quantum state, at least some of the qubits found in nature are regarded as freebits" making "predicting certain future events—possibly including some human decisions—physically impossible, even probabilistically". Freebits are qubits because otherwise they could be measured without violating no-cloning. Observer-independence requirement: "it must not be possible (even in principle) to trace [the freebit's] causal history back to any physical process that generated [the freebit] according to a known probabilistic ensemble."
p.37. On the existence of freebits: "In the actual universe, are there any quantum states that can’t be grounded in PMDs?" A PMD, or "past macroscopic determinant," is a classical observable that would have let one non-invasively, probabilistically predict the prospective freebit to arbitrary accuracy. This is the main question of the paper: can freebits from the initial conditions of the universe survive until the present day and even affect human decisions?
p.38. The CMB (cosmic microwave background radiation) is one potential example of freebits: the detected CMB radiation has not interacted with matter since the last scattering, roughly 380,000 years after the Big Bang. Objections: a) the last scattering is not the initial conditions by any means; b) one can easily shield against the CMB.
p.39. Freebit effects on decision-making: "what sorts of changes to [the quantum state of the entire universe] would or wouldn’t suffice to ... change a particular decision made by a particular human being? ... For example, would it suffice to change the energy of a single photon impinging on the subject’s brain?" due to potential amplification of "microscopic fluctuations to macroscopic scale". Sort of a quantum butterfly effect.
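To see how fast "quantum butterfly" amplification could work in principle, here is a classical stand-in (mine, not from the paper): in the logistic map at r = 4, a perturbation of 10^(−15) in the initial condition grows to order 1 within a few dozen iterations.

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 60)
b = logistic_trajectory(0.3 + 1e-15, 60)  # microscopically perturbed start
for t in (0, 20, 40, 60):
    print(t, abs(a[t] - b[t]))  # the gap grows from 1e-15 to order 1 by t ~ 50
```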
p.40. Freebit amplification issues: amplification time and locality. Locality: the freebit affects only the person's actions, which then mediate all of its other influences on the rest of the world; i.e., the freebit has no direct effect on anything else. On why these questions are interesting: "I can easily imagine that in (say) fifty years, neuroscience, molecular biology, and physics will be able to say more about these questions than they can today. And crucially, the questions strike me as scientifically interesting regardless of one’s philosophical predilections."
p.41. Role of freebits: "freebits are simply part of the explanation for how a brain can reach decisions that are not probabilistically predictable by outside observers, and that are therefore “free” in the sense that interests us." A freebit could be just a noise source, something that "foils probabilistic forecasts made by outside observers, yet need not play any role in explaining the system’s organization or complexity."
p.42. "Freedom from the inside out": "isn’t it anti-scientific insanity to imagine that our choices today could correlate nontrivially with the universe’s microstate at the Big Bang?" "Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. [...] where physical systems are allowed to evolve reversibly, free from contact with their external environments." E.g. the normal causal arrows break down for, say, CMB photons. -- Not sure how Scott jumps from reversible evolution to backward causality.
p.44. Harmonization problem: backward causality leads to all kinds of problems and paradoxes. This is not an issue for the freebit model, as backward causality can point only to "microfacts", which do not affect any "macrofacts". "the causality graph will be a directed acyclic graph (a dag), with all arrows pointing forward in time, except for some “dangling” arrows pointing backward in time that never lead anywhere else." The latter is justified by "no-cloning". In other words, "for all the events we actually observe, we must seek their causes only to their past, never to their future." -- This "backward causality" moniker seems rather unfortunate and misleading, given that it seems to replace the usual idea of discovering some (micro)fact about the past with "a microfact is directly caused by a macrofact F to its future". "A simpler option is just to declare the entire concept of causality irrelevant to the microworld."
p.45. Micro/Macro distinction: A potential solution: "a “macrofact” is simply any fact of which the news is already propagating outward at the speed of light". I.e., an interaction turns a microfact into a macrofact. This matches Zurek's einselection ideas.
p.47 Objections to freebits: 5.1: Humans are very predictable. "Perhaps, as Kane speculates, we truly exercise freedom only for a relatively small number of “self-forming actions” (SFAs)—that is, actions that help to define who we are—and the rest of the time are essentially “running on autopilot.”" Also note "the conspicuous failure of investors, pundits, intelligence analysts, and so on actually to predict, with any reliability, what individuals or even entire populations will do"
p.48. 5.2: The weather objection: How are brains different from weather? "brains seem “balanced on a knife-edge” between order and chaos: were they as orderly as a pendulum, they couldn’t support interesting behavior; were they as chaotic as the weather, they couldn’t support rationality. [...] a single freebit could plausibly influence the probability of some macroscopic outcome, even if we model all of the system’s constituents quantum-mechanically."
p.49. 5.3: The gerbil objection: if a brain or an AI is isolated from freebits except through a gerbil in a box connected to it, then "the gerbil, though presumably oblivious to its role, is like a magic amulet that gives the AI a “capacity for freedom” it wouldn’t have had otherwise," in essence becoming the soul of the machine. "Of all the arguments directed specifically against the freebit picture, this one strikes me as the most serious." Potential reply: the brain is not like the AI, in that "In the AI/gerbil system, the “intelligence” and “Knightian noise” components were cleanly separable from one another. [...] With the brain, by contrast, it’s not nearly so obvious that the “Knightian indeterminism source” can be physically swapped out for a different one, without destroying or radically altering the brain’s cognitive functions as well." This now comes down to the issue of identity.
"Suppose the nanorobots do eventually complete their scan of all the “macroscopic, cognitively-relevant” information in your brain, and suppose they then transfer the information to a digital computer, which proceeds to run a macroscopic-scale simulation of your brain. Would that simulation be you? If your “original” brain were destroyed in this process, or simply anesthetized, would you expect to wake up as the digital version? (Arguably, this is not even a philosophical question, just a straightforward empirical question asking you to predict a future observation!) [...] My conclusion is that either you can be uploaded, copied, simulated, backed up, and so forth, leading to all the puzzles of personal identity discussed in Section 2.5, or else you can’t bear the same sort of “uninteresting” relationship to the “non-functional” degrees of freedom in your brain that the AI bore to the gerbil box."
p.51. The Initial-State Objection: "the notion of “freebits” from the early universe nontrivially influencing present-day events is not merely strange, but inconsistent with known physics" because "it follows from known physics that the initial state at the Big Bang was essentially random, and can’t have encoded any “interesting” information". The reply is rather involved and discusses several new speculative ideas in physics. It boils down to "when discussing extreme situations like the Big Bang, it’s not okay to ignore quantum-gravitational degrees of freedom simply because we don’t yet know how to model them. And including those degrees of freedom seems to lead straight back to the unsurprising conclusion that no one knows what sorts of correlations might have been present in the universe’s initial microstate."
p.52. The Wigner’s-Friend Objection: A macroscopic object "in a superposition of two mental states" requires freebits to make a separate "free decision" in each one, requiring 2^(number of states) freebits for independent decision making in each state.
Moreover "if the freebit picture is correct, and the Wigner’s-friend experiment can be carried out, then I think we’re forced to conclude that—at least for the duration of the experiment—the subject no longer has the “capacity for Knightian freedom,” and is now a “mechanistic,” externally-characterized physical system similar to a large quantum computer."
p.55. "what makes humans any different [from a computer]? According to the most literal reading of quantum mechanics’ unitary evolution rule—which some call the Many-Worlds Interpretation—don’t we all exist in superpositions of enormous numbers of branches, and isn’t our inability to measure the interference between those branches merely a “practical” problem, caused by rapid decoherence? Here I reiterate the speculation put forward in Section 4.2: that the decoherence of a state should be considered “fundamental” and “irreversible,” precisely when [it] becomes entangled with degrees of freedom that are receding toward our de Sitter horizon at the speed of light, and that can no longer be collected together even in principle. That sort of decoherence could be avoided, at least in principle, by a fault-tolerant quantum computer, as in the Wigner’s-friend thought experiment above. But it plausibly can’t be avoided by any entity that we would currently recognize as “human."
p.56. Difference from Penrose: " I make no attempt to “explain consciousness.” Indeed, that very goal seems misguided to me, at least if “consciousness” is meant in the phenomenal sense rather than the neuroscientists’ more restricted senses."
p.57. "instead of talking about the consistency of Peano arithmetic, I believe Penrose might as well have fallen back on the standard arguments about how a robot could never “really” enjoy fresh strawberries, but at most claim to enjoy them."
"the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents."
"I’m profoundly skeptical that any of the existing objective reduction [by minds] models are close to the truth. The reasons for my skepticism are, first, that the models seem too ugly and ad hoc (GRW’s more so than Penrose’s); and second, that the AdS/CFT correspondence now provides evidence that quantum mechanics can emerge unscathed even from the combination with gravity."
"I regard it as a serious drawback of Penrose’s proposals that they demand uncomputability in the dynamical laws"
p.61. Boltzmann brains: "By the time thermal equilibrium is reached, the universe will (by definition) have “forgotten” all details of its initial state, and any freebits will have long ago been “used up.” In other words, there’s no way to make a Boltzmann brain think one thought rather than another by toggling freebits. So, on this account, Boltzmann brains wouldn’t be “free,” even during their brief moments of existence."
p.62. What Happens When We Run Out of Freebits? "the number of freebits accessible to any one observer must be finite—simply because the number of bits of any kind is then upper-bounded by the observable universe’s finite holographic entropy. [...] this should not be too alarming. After all, even without the notion of freebits, the Second Law of Thermodynamics (combined with the holographic principle and the positive cosmological constant) already told us that the observable universe can witness at most ~10^122 “interesting events,” of any kind, before it settles into thermal equilibrium."
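The ~10^122 figure is just the de Sitter horizon area in Planck units; here is a back-of-the-envelope sketch (mine, using approximate assumed values for the horizon radius and the Planck length):

```python
import math

R_DS = 1.6e26         # approximate de Sitter horizon radius, in meters (assumed)
L_PLANCK = 1.616e-35  # Planck length, in meters

# Holographic bound: S_max ~ (horizon area) / (4 * Planck area) = pi * (R / l_P)^2
s_max = math.pi * (R_DS / L_PLANCK) ** 2
print(f"{s_max:.1e}")  # ~3e122, i.e. on the order of 10^122
```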
p.63. Indexicality: "indexical puzzle: a puzzle involving the “first-person facts” of who, what, where, and when you are, which seems to persist even after all the “third-person facts” about the physical world have been specified." This is similar to Knightian uncertainty: "For the indexical puzzles make it apparent that, even if we assume the laws of physics are completely mechanistic, there remain large aspects of our experience that those laws fail to determine, even probabilistically. Nothing in the laws picks out one particular chunk of suitably organized matter from the immensity of time and space, and says, “here, this chunk is you; its experiences are your experiences.”"
Free will connection: Take two heretofore identical Earths, A and B, in an infinite universe, which are about to diverge based on your decision, and suppose it is impossible for a superintelligence to predict this decision, even probabilistically, because it is based on a freebit:
"Maybe “youA” is the “real” you, and taking the new job is a defining property of who you are, much as Shakespeare “wouldn’t be Shakespeare” had he not written his plays. So maybe youB isn’t even part of your reference class: it’s just a faraway doppelg¨anger you’ll never meet, who looks and acts like you (at least up to a certain point in your life) but isn’t you. So maybe p = 1. Then again, maybe youB is the “real” you and p = 0. Ultimately, not even a superintelligence could calculate p without knowing something about what it means to be “you,” a topic about which the laws of physics are understandably silent." "For me, the appeal of this view is that it “cancels two philosophical mysteries against each other”: free will and indexical uncertainty".
p.65. Falsifiability: "If human beings could be predicted as accurately as comets, then the freebit picture would be falsified." But this prediction has "an unsatisfying, “god-of-the-gaps” character". Another: chaotic amplification of quantum uncertainty locally and on "reasonable" timescales. Another: "consider an omniscient demon, who wants to influence your decision-making process by changing the quantum state of a single photon impinging on your brain. [...] imagine that the photons’ quantum states cannot be altered, maintaining a spacetime history consistent with the laws of physics, without also altering classical degrees of freedom in the photons’ causal past. In that case, the freebit picture would once again fail."
p.68. Conclusions: "Could there exist a machine, consistent with the laws of physics, that “non-invasively cloned” all the information in a particular human brain that was relevant to behavior— so that the human could emerge from the machine unharmed, but would thereafter be fully probabilistically predictable given his or her future sense-inputs, in much the same sense that a radioactive atom is probabilistically predictable?"
"does the brain possess what one could call a clean digital abstraction layer : that is, a set of macroscopic degrees of freedom that (1) encode everything relevant to memory and cognition, (2) can be accurately modeled as performing a classical digital computation, and (3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random number sources, generating noise according to prescribed probability distributions? Or is such a clean separation between the macroscopic and microscopic levels unavailable—so that any attempt to clone a brain would either miss much of the cognitively-relevant information, or else violate the No-Cloning Theorem? In my opinion, neither answer to the question should make us wholly comfortable: if it does, then we haven’t sufficiently thought through the implications!"
In a world where a cloning device is possible the indexical questions "would no longer be metaphysical conundrums, but in some sense, just straightforward empirical questions about what you should expect to observe!"
p.69. Reason and mysticism. "but what do I really think?" "in laying out my understanding of the various alternatives—yes, brain states might be perfectly clonable, but if we want to avoid the philosophical weirdness that such cloning would entail, [...] I don’t have any sort of special intuition [...]. The arguments exhaust my intuition."