Are you a virtue ethicist at heart?

9 shminux 27 January 2014 10:20PM

Disclaimer: I am not a philosopher, so this post will likely seem amateurish to the subject matter experts. 

LW is big on consequentialism, utilitarianism and other quantifiable ethics one could potentially program into a computer to make it provably friendly. However, I posit that most of us intuitively use virtue ethics, not deontology or consequentialism. In other words, when judging someone's actions, we intuitively weigh their motivations over the rules they follow or the consequences of those actions. We may reevaluate our judgment later, based on laws and/or actual or expected usefulness, but the initial impulse remains, even if overridden. To quote Casimir de Montrond, "Mistrust first impulses; they are nearly always good" (the quote is usually misattributed to Talleyrand).

Some examples:

  • Eliezer, in a Facebook post, linked the article When Doing Good Means You’re Bad, which points out that people who take a commission to raise a lot of money for charity are commonly considered less moral than those who raise much less but are not paid at all ("tainted altruism").
  • This was brought up at a meetup: a pregnant woman in a dire financial situation who decides to have an abortion because she does not want the burden of raising a baby is judged more harshly than a woman in a similar situation whose motivation is to avoid inflicting a harsh life on the prospective child.
  • In real-life trolley problems, even committed utilitarians (like commanders in wartime) are likely to hesitate before sacrificing lives to save more.

I am not sure how to classify religious fanaticism (or other bigotry), but it seems to require a heavy dose of virtue ethics (feeling righteous), in addition to following the (deontological) tenets of whichever belief, with some consequentialism (for the greater good) mixed in.

When I try to introspect on my own moral decisions (like whether to tell the truth, or to cheat on a test, or to drive over the speed limit), I can usually find a grain of virtue ethics inside. It might be followed or overridden, sometimes habitually, but it is always there. Can you find one in yours?

 

LINK: AI Researcher Yann LeCun on AI function

0 shminux 11 December 2013 12:29AM

Yann LeCun, now of Facebook, was interviewed by The Register. It is interesting that his view of AI is apparently that of a prediction tool:

"In some ways you could say intelligence is all about prediction," he explained. "What you can identify in intelligence is it can predict what is going to happen in the world with more accuracy and more time horizon than others."

rather than that of a world optimizer. This is not very surprising, given his background in handwriting and image recognition. This "AI as intelligence augmentation" view appears to be prevalent among AI researchers in general.

 

As an upload, would you join the society of full telepaths/empaths?

5 shminux 15 October 2013 08:59PM

I asked this question on IRC before and got some surprising answers.

Suppose, for the sake of argument, you get cryo-preserved and eventually wake up as an upload. Maybe meat->sim transfer ends up being much easier than sim->meat or meat->meat, or something. Further suppose that you are not particularly averse to a digital-only existence, at least not enough to specifically prohibit reviving you if this is the only option. Yet further suppose that sim-you is identical to meat-you for all purposes that meat-you cared about (including all your hidden desires and character faults). Let's also preemptively assume that any other attempts to fight this hypothetical have been satisfactorily resolved, just to get this out of the way.

Now, in the "real world", or at least in the simulation level we are at, there is no evidence that telepathy of any kind exists or is even possible. However, in the sim-world there is no technological reason it cannot be implemented in some way, for just thoughts, or just feelings, or both. There is a lot to be said for having this kind of connection between people (or sims). It gets rid of or marginalizes deception, status games, mis-communication-based biases and fallacies. On the other hand, your privacy disappears completely and so do any advantages over others the meat-you might want to retain in the digital world. And what you perceive as your faults are out there for everyone to see and feel.

As a new upload, you are informed that many "people" decided to get integrated into the telepathic society and appear to be happy about it, with few, if any, defections. There is also the group of those who opted out, and it looks basically like your "normal" mundane human society. There is only a limited and strictly monitored interaction between the two worlds to prevent exploitation/manipulation. 

Would you choose to get fully integrated or stay as human-like as possible? Feel free to suggest any other alternative (suicide, start a partially integrated society, etc.).

P.S. This topic has been rather extensively covered in science fiction, but I could not find a quality online discussion anywhere.

[LINK] Larry = Harry sans magic? Google vs. Death

23 shminux 18 September 2013 04:49PM

Google's announcement, and Time magazine's rather sensationalist headline.

In any case, it's nice to know that Google has set its sights on "challeng[ing] ... aging and associated diseases". Apple's Tim Cook:

For too many of our friends and family, life has been cut short or the quality of their life is too often lacking. Art is one of the crazy ones who thinks it doesn’t have to be this way. 

One more step towards "world optimization".

[Link] AI advances: computers can be almost as funny as people

5 shminux 02 August 2013 06:41PM

"Our model significantly outperforms a competitive baseline and generates funny jokes 16% of the time, compared to 33% for human-generated jokes."

From this paper:

Unsupervised joke generation from big data

Sasa Petrovic and David Matthews

The 51st Annual Meeting of the Association for Computational Linguistics - Short Papers (ACL Short Papers 2013) 
Sofia, Bulgaria, August 4-9, 2013

 

Abstract

Humor generation is a very hard problem. It is difficult to say exactly what makes a joke funny, and solving this problem algorithmically is assumed to require deep semantic understanding, as well as cultural and other contextual cues. We depart from previous work that tries to model this knowledge using ad-hoc manually created databases and labeled training examples. Instead we present a model that uses large amounts of unannotated data to generate I like my X like I like my Y, Z jokes, where X, Y, and Z are variables to be filled in. This is, to the best of our knowledge, the first fully unsupervised humor generation system. Our model significantly outperforms a competitive baseline and generates funny jokes 16% of the time, compared to 33% for human-generated jokes.

 

From The Register:

It uses 2,000,000 noun-adjective pairs of words to draw up jokes "with an element of surprise", something the creators claim is key to good comedy.

...

Jokes calculated by the software include:

  • I like my relationships like I like my source code... open
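
The template the paper mines ("I like my X like I like my Y, Z") is simple enough to sketch in a few lines. The following toy is not the paper's actual model (it ignores the 2,000,000-pair corpus and the surprise scoring entirely), but it shows the basic move of pairing two nouns through a shared attribute; the tiny attribute table is invented for illustration:

```python
# Toy illustration of the "I like my X like I like my Y, Z" template:
# pick two nouns that share an attribute Z and fill in the template.
# The paper's real model scores candidates for an "element of surprise";
# this sketch just enumerates all valid fills of a made-up table.

ATTRIBUTES = {
    "coffee": ["hot", "dark", "bitter", "strong"],
    "relationships": ["open", "long", "serious"],
    "source code": ["open", "clean", "portable"],
    "war": ["cold", "long"],
    "beer": ["cold", "dark", "strong"],
}

def jokes():
    """Yield every joke whose two nouns share an attribute."""
    nouns = list(ATTRIBUTES)
    for i, x in enumerate(nouns):
        for y in nouns[i + 1:]:
            for z in sorted(set(ATTRIBUTES[x]) & set(ATTRIBUTES[y])):
                yield f"I like my {x} like I like my {y}... {z}"

for joke in jokes():
    print(joke)
```

With the table above, this reproduces the Register's example joke about relationships and source code, along with several duds, which is roughly the paper's point: the template is trivial, and the hard part is scoring which fills are funny.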

How would not having free will feel to you?

4 shminux 20 June 2013 08:51PM

Given the spike in free-will debates on LW recently (blame Scott Aaronson), and the usual potentially answerable meta-question "Why do we think we have free will?", I am intrigued by a sub-question: "What would it feel like to have/not have free will?". The positive version of this question is not very interesting: almost everyone feels they have free will almost all the time. The negative version is more interesting and I expect the answers to be more diverse. Here are a few off the top of my head, not necessarily mutually exclusive:

Epistemic:

  • Knowing that someone out there already predicts my behavior perfectly
  • Knowing that someone out there can predict my behavior perfectly, whether or not they actually bother doing it
  • Knowing that it is potentially possible to perfectly predict my behavior, even if I know that no one is doing it
  • Knowing that I am in a simulation 
  • Knowing that I am in a simulation where repeated runs with the same inputs give identical outcomes
  • ...?

Psychological:

  • Feeling constrained by the environment to act in certain ways 
  • Feeling constrained by the environment to act in certain unsatisfactory ways
  • Voices in my head compel me to do things 
  • Voices in my head compel me to do bad things 
  • Feeling unable to complete thoughts I would like to think through, as if someone censored them
  • ...?

Physical:

  • Observing myself act in ways I never intended to act, whether beneficial to me or not
  • Observing my arms/legs/mouth move as if externally controlled, and being unable to interfere
  • ...?

For me personally, some of these are closer to the feeling of "no free will" than others, but I am not sure whether any single one crosses the boundary.

I am sure there are different takes on the answers and on how to categorize them. I think it would be useful to collect some perspectives and maybe run a poll or several afterward.

Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine"

16 shminux 17 June 2013 05:11AM

This highly speculative paper has been discussed here before, but I found the discussion's quality rather disappointing. People generally took bits and pieces out of context and then mostly offered arguments already addressed in the paper. Truly the internet is the most powerful misinterpretation engine ever built. It's nice to see that Scott, who is no stranger to online adversity, is taking it in stride.

So I went through the paper and took notes, which I posted on my blog, but I am also attaching them below, in the hope that someone else here finds them useful. I initially intended to write up a comment in the other thread, but it grew too long for a comment, so I am making this post. Feel free to downvote if you think it does not belong in Discussion (or for any other reason, of course).

TL;DR: The main idea of the paper is, as far as I can tell, that it is possible to construct a physical model, potentially related to the "free" part of the free-will debate, in which some events cannot be predicted at all, not even probabilistically, the way it is done in quantum mechanics. Scott also proposes one possible mechanism for this "Knightian unpredictability": the not-yet-decohered parts of the initial state of the universe, such as the Cosmic Microwave Background radiation. He does not take a position on whether the model is correct, only that it is potentially testable and thus shifts a small piece of the age-old philosophical debate on free will into the realm of physics.

For those here who say that the free-will question has been dissolved, let me note that the picture presented in the paper is one explicitly rejected by Eliezer, probably a bit hastily. Specifically in this diagram:

Eliezer says that the sequential picture on the left is the only correct one, whereas Scott offers a perfectly reasonable model which is better described by the picture on the right. To reiterate, there is a part of the past (Scott calls it "microfacts") which evolves reversibly and unitarily until some time in the future. Given that this part has not been measured yet, there is no way, not even probabilistically, to estimate its influence on some future event, when some of those microfacts interact with the rest of the world and decohere, thus affecting "macrofacts", potentially including human choices. This last speculative idea could be tested if it is shown that small quantum fluctuations can be chaotically amplified to macroscopic levels. If this model is correct, it may have significant consequences for whether a human mind can be successfully cloned and whether an AI can be called sentient, or even how it can be made sentient.

My personal impression is that Scott's arguments are much better thought through than the speculations by Penrose in his books, but you may find otherwise. I also appreciate this paper for doing what mainstream philosophers are qualified to do and ought to do, but consistently fail to do: look at one of the Big Questions, chip away some small solvable piece of it, and offer this piece to qualified scientists.

Anyway, below are my notes and quotes. If you think you have found an obvious objection to some of the quotes, this is likely because I did not provide enough context, so please read the relevant section of the paper before pointing it out. It may also be useful to recite the Litany of a Bright Dilettante.


p.6. On QM's potentially limiting "an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems" : "In this essay I’ll argue strongly [...] that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question is yes, and other such worlds where the answer is no. And we don’t yet know which kind we live in."

p. 7. "The [...] idea—that of being “willed by you”—is the one I consider outside the scope of science, for the simple reason that no matter what the empirical facts were, a skeptic could always deny that a given decision was “really” yours, and hold the true decider to have been God, the universe, an impersonating demon, etc. I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong."

"the situation seems different if we set aside the “will” part of free will, and consider only the “free” part."

"I’ll use the term freedom, or Knightian freedom, to mean a certain strong kind of physical unpredictability: a lack of determination, even probabilistic determination, by knowable external factors. [..] we lack a reliable way even to quantify using probability distributions."

p.8. "I tend to see Knightian unpredictability as a necessary condition for free will. In other words, if a system were completely predictable (even probabilistically) by an outside entity—not merely in principle but in practice—then I find it hard to understand why we’d still want to ascribe “free will” to the system. Why not admit that we now fully understand what makes this system tick?"

p.12. "from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is." -- professional philosophers would do well to keep this in mind. Of course, once you break off such answerable part, it tends to leave the realm of philosophy and become a natural science of one kind or another. Maybe something useful professional philosophers could do is to look for "answerable parts", break them off and pass along to the experts in the subject matter. And maybe look for the answers in the natural sciences and see how they help sculpt the "unanswerable riddles".

p.14. Weak compatibilism: "My perspective embraces the mechanical nature of the universe’s time-evolution laws, and in that sense is proudly “compatibilist.” On the other hand, I care whether our choices can actually be mechanically predicted—not by hypothetical Laplace demons but by physical machines. I’m troubled if they are, and I take seriously the possibility that they aren’t (e.g., because of chaotic amplification of unknowable details of the initial conditions)."

p.19. Importance of copyability: "the problem with this response [that you are nothing but your code] is simply that it gives up on science as something agents can use to predict their future experiences. The agents wanted science to tell them, “given such-and-such physical conditions, here’s what you should expect to see, and why.” Instead they’re getting the worthless tautology, “if your internal code causes you to expect to see X, then you expect to see X, while if your internal code causes you to expect to see Y, then you expect to see Y.” But the same could be said about anything, with no scientific understanding needed! To paraphrase Democritus, it seems like the ultimate victory of the mechanistic worldview is also its defeat." -- If a mind cannot be copied perfectly, then there is no such thing as your "code", i.e. an algorithm which can be run repeatedly.

p.20. Constrained determinism: "A form of “determinism” that applies not merely to our universe, but to any logically possible universe, is not a determinism that has “fangs,” or that could credibly threaten any notion of free will worth talking about."

p.21. Bell's theorem, quoting Conway and Kochen: "if there’s no faster-than-light communication, and Alice and Bob have the “free will” to choose how to measure their respective particles, then the particles must have their own “free will” to choose how to respond to the measurements." -- the particles' "free will" is still constrained by the laws of Quantum Mechanics, however.

p.23. Multiple (micro-)past compatibilism: "multiple-pasts compatibilism agrees that the past microfacts about the world determine its future, and it also agrees that the past macrofacts are outside our ability to alter. [...] our choices today might play a role in selecting one past from a giant ensemble of macroscopically-identical but microscopically-different pasts."

p.26. Singulatarianism: "all the Singulatarians are doing is taking conventional thinking about physics and the brain to its logical conclusion. If the brain is a “meat computer,” then given the right technology, why shouldn’t we be able to copy its program from one physical substrate to another? [...] given the stakes, it seems worth exploring the possibility that there are scientific reasons why human minds can’t be casually treated as copyable computer programs: not just practical difficulties, or the sorts of question-begging appeals to human specialness that are child’s-play for Singulatarians to demolish. If one likes, the origin of this essay was my own refusal to accept the lazy cop-out position, which answers the question of whether the Singulatarians’ ideas are true by repeating that their ideas are crazy and weird. If uploading our minds to digital computers is indeed a fantasy, then I demand to know what it is about the physical universe that makes it a fantasy."

p.27. Predictability of human mind: "I believe neuroscience might someday advance to the point where it completely rewrites the terms of the free-will debate, by showing that the human brain is “physically predictable by outside observers” in the same sense as a digital computer."

p.28. Em-ethics: "I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner." -- E.g. it's not immoral to stop a simulation which can be resumed or restored from a backup. (The cryonics implications are obvious.) "Deleting the last copy of an em in existence should be prosecuted as murder, not because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would." -- Again, this is a pretty transhumanist view, see the anti-deathist position of Eliezer Yudkowsky as expressed in HPMoR.

p.29. Probabilistic uncertainty vs Knightian uncertainty: "if we see a conflict between free will and the deterministic predictability of human choices, then we should see the same conflict between free will and probabilistic predictability, assuming the probabilistic predictions are as accurate as those of quantum mechanics. [...] If we know a system’s quantum state ψ, then quantum mechanics lets us calculate the probability of any outcome of any measurement that might later be made on the system. But if we don’t know the state, then ψ itself can be thought of as subject to Knightian uncertainty."

On the source of this unquantifiable "Knightian uncertainty": "in current physics, there appears to be only one source of Knightian uncertainty that could possibly be both fundamental and relevant to human choices. That source is uncertainty about the microscopic, quantum-mechanical details of the universe’s initial conditions (or the initial conditions of our local region of the universe)"

p.30. "In economics, the “second type” of uncertainty—the type that can’t be objectively quantified using probabilities—is called Knightian uncertainty, after Frank Knight, who wrote about it extensively in the 1920s [49]. Knightian uncertainty has been invoked to explain phenomena from risk-aversion in behavioral economics to the 2008 financial crisis (and was popularized by Taleb [87] under the name “black swans”)."

p.31. "I think that the free-will-is-incoherent camp would be right, if all uncertainty were probabilistic." Bayesian fundamentalism: "Bayesian probability theory provides the only sensible way to represent uncertainty. On that view, “Knightian uncertainty” is just a fancy name for someone’s failure to carry a probability analysis far enough." Against the Dutch-booking argument for Bayesian fundamentalism: "A central assumption on which the Dutch book arguments rely—basically, that a rational agent shouldn’t mind taking at least one side of any bet—has struck many commentators as dubious."

p.32. Objective prior: "one can’t use Bayesianism to justify a belief in the existence of objective probabilities underlying all events, unless one is also prepared to defend the existence of an “objective prior.”"

Universal prior: "a distribution that assigns a probability proportional to 2^(−n) to every possible universe describable by an n-bit computer program." Why it may not be a useful "true" prior: "a predictor using the universal prior can be thought of as a superintelligent entity that figures out the right probabilities almost as fast as is information-theoretically possible. But that’s conceptually very different from an entity that already knows the probabilities."
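
The quoted 2^(−n) weighting is easy to make concrete. A minimal numeric sketch (the function name is mine; note that a real universal prior sums only over a prefix-free set of programs, so that the total weight converges):

```python
from fractions import Fraction

# The universal prior assigns each n-bit program weight proportional
# to 2^(-n): each extra bit of program length halves the weight, so
# simpler (shorter) programs dominate.  This toy just tabulates the
# per-program weight; the prefix-free restriction needed for the full
# sum to converge is omitted.

def program_weight(n_bits):
    """Unnormalized universal-prior weight of a single n-bit program."""
    return Fraction(1, 2 ** n_bits)

for n in range(1, 5):
    print(n, program_weight(n))
```

The point of Scott's caveat in the quote is that actually *computing* with this prior amounts to running a near-optimal superintelligent predictor, which is very different from an agent that simply knows the true probabilities in advance.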

p.34. Quantum no-cloning: "it’s possible to create a physical object that (a) interacts with the outside world in an interesting and nontrivial way, yet (b) effectively hides from the outside world the information needed to predict how the object will behave in future interactions."

p.35. Quantum teleportation answers the problem of "what to do with the original after you fax a perfect copy of you to be reconstituted on Mars": "in quantum teleportation, the destruction of the original copy is not an extra decision that one needs to make; rather, it happens as an inevitable byproduct of the protocol itself"

p.36. Freebit picture: "due to Knightian uncertainty about the universe’s initial quantum state, at least some of the qubits found in nature are regarded as freebits" making "predicting certain future events—possibly including some human decisions—physically impossible, even probabilistically". Freebits are qubits because otherwise they could be measured without violating no-cloning. Observer-independence requirement: "it must not be possible (even in principle) to trace [the freebit's] causal history back to any physical process that generated [the freebit] according to a known probabilistic ensemble."

p.37. On existence of freebits: "In the actual universe, are there any quantum states that can’t be grounded in PMDs?" PMD, a "past macroscopic determinant" is a classical observable that would have let one non-invasively probabilistically predict the prospective freebit to arbitrary accuracy. This is the main question of the paper: can freebits from the initial conditions of the universe survive till present day and even affect human decisions?

p.38. CMB (cosmic microwave background radiation) is one potential example of freebits: detected CMB radiation has not interacted with matter since the last scattering, roughly 380,000 years after the Big Bang. Objections: a) last scattering is not the initial conditions by any means, b) one can easily shield from the CMB.

p.39. Freebit effects on decision-making: "what sorts of changes to [the quantum state of the entire universe] would or wouldn’t suffice to ... change a particular decision made by a particular human being? ... For example, would it suffice to change the energy of a single photon impinging on the subject’s brain?" due to potential amplification of "microscopic fluctuations to macroscopic scale". Sort of a quantum butterfly effect.

p.40. Freebit amplification issues: amplification time and locality. Locality: the freebit only affects the person's actions, which in turn mediate all other influences on the rest of the world; i.e., there is no direct freebit effect on anything else. On why these questions are interesting: "I can easily imagine that in (say) fifty years, neuroscience, molecular biology, and physics will be able to say more about these questions than they can today. And crucially, the questions strike me as scientifically interesting regardless of one’s philosophical predilections."

p.41. Role of freebits: "freebits are simply part of the explanation for how a brain can reach decisions that are not probabilistically predictable by outside observers, and that are therefore “free” in the sense that interests us." A freebit could be just a noise source: it "foils probabilistic forecasts made by outside observers, yet need not play any role in explaining the system’s organization or complexity."

p.42. "Freedom from the inside out":  "isn’t it anti-scientific insanity to imagine that our choices today could correlate nontrivially with the universe’s microstate at the Big Bang?" "Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. [...] where physical systems are allowed to evolve reversibly, free from contact with their external environments." E.g. the normal causal arrows break down for, say, CMB photons. -- Not sure how Scott jumps from reversible evolution to backward causality.

p.44. Harmonization problem: backward causality leads to all kinds of problems and paradoxes. Not an issue for the freebit model, as backward causality can point only to "microfacts", which do not affect any "macrofacts". "the causality graph will be a directed acyclic graph (a dag), with all arrows pointing forward in time, except for some “dangling” arrows pointing backward in time that never lead anywhere else." The latter is justified by "no-cloning". In other words, "for all the events we actually observe, we must seek their causes only to their past, never to their future." -- This backward-causality moniker seems rather unfortunate and misleading, given that it seems to replace the usual idea of discovering some (micro)fact about the past with "a microfact is directly caused by a macrofact F to its future". "A simpler option is just to declare the entire concept of causality irrelevant to the microworld."

p.45. Micro/Macro distinction: A potential solution: "a “macrofact” is simply any fact of which the news is already propagating outward at the speed of light". I.e. an interaction turns microfact into a macrofact. This matches Zurek's einselection ideas.

p.47. Objections to freebits: 5.1: Humans are very predictable. "Perhaps, as Kane speculates, we truly exercise freedom only for a relatively small number of “self-forming actions” (SFAs)—that is, actions that help to define who we are—and the rest of the time are essentially “running on autopilot.”" Also note "the conspicuous failure of investors, pundits, intelligence analysts, and so on actually to predict, with any reliability, what individuals or even entire populations will do"

p.48. 5.2: The weather objection: How are brains different from weather? "brains seem “balanced on a knife-edge” between order and chaos: were they as orderly as a pendulum, they couldn’t support interesting behavior; were they as chaotic as the weather, they couldn’t support rationality. [...] a single freebit could plausibly influence the probability of some macroscopic outcome, even if we model all of the system’s constituents quantum-mechanically."

p.49. 5.3: The gerbil objection: if a brain or an AI is isolated from freebits except through a gerbil in a box connected to it, then "the gerbil, though presumably oblivious to its role, is like a magic amulet that gives the AI a “capacity for freedom” it wouldn’t have had otherwise," in essence becoming the soul of the machine. "Of all the arguments directed specifically against the freebit picture, this one strikes me as the most serious." Potential reply: the brain is not like the AI in that "In the AI/gerbil system, the “intelligence” and “Knightian noise” components were cleanly separable from one another. [...] With the brain, by contrast, it’s not nearly so obvious that the “Knightian indeterminism source” can be physically swapped out for a different one, without destroying or radically altering the brain’s cognitive functions as well." This comes down to the issue of identity.

"Suppose the nanorobots do eventually complete their scan of all the “macroscopic, cognitively-relevant” information in your brain, and suppose they then transfer the information to a digital computer, which proceeds to run a macroscopic-scale simulation of your brain. Would that simulation be you? If your “original” brain were destroyed in this process, or simply anesthetized, would you expect to wake up as the digital version? (Arguably, this is not even a philosophical question, just a straightforward empirical question asking you to predict a future observation!) [...] My conclusion is that either you can be uploaded, copied, simulated, backed up, and so forth, leading to all the puzzles of personal identity discussed in Section 2.5, or else you can’t bear the same sort of “uninteresting” relationship to the “non-functional” degrees of freedom in your brain that the AI bore to the gerbil box."

p.51. The Initial-State Objection: "the notion of “freebits” from the early universe nontrivially influencing present-day events is not merely strange, but inconsistent with known physics" because "it follows from known physics that the initial state at the Big Bang was essentially random, and can’t have encoded any “interesting” information". The reply is rather involved and discusses several new speculative ideas in physics. It boils down to "when discussing extreme situations like the Big Bang, it’s not okay to ignore quantum-gravitational degrees of freedom simply because we don’t yet know how to model them. And including those degrees of freedom seems to lead straight back to the unsurprising conclusion that no one knows what sorts of correlations might have been present in the universe’s initial microstate."

p.52. The Wigner’s-Friend Objection: A macroscopic object "in a superposition of two mental states" requires freebits to make a separate "free decision" in each one, requiring 2^(number of states) freebits for independent decision making in each state.

Moreover "if the freebit picture is correct, and the Wigner’s-friend experiment can be carried out, then I think we’re forced to conclude that—at least for the duration of the experiment—the subject no longer has the “capacity for Knightian freedom,” and is now a “mechanistic,” externally-characterized physical system similar to a large quantum computer."

p.55. "what makes humans any different [from a computer]? According to the most literal reading of quantum mechanics’ unitary evolution rule—which some call the Many-Worlds Interpretation—don’t we all exist in superpositions of enormous numbers of branches, and isn’t our inability to measure the interference between those branches merely a “practical” problem, caused by rapid decoherence? Here I reiterate the speculation put forward in Section 4.2: that the decoherence of a state should be considered “fundamental” and “irreversible,” precisely when [it] becomes entangled with degrees of freedom that are receding toward our de Sitter horizon at the speed of light, and that can no longer be collected together even in principle. That sort of decoherence could be avoided, at least in principle, by a fault-tolerant quantum computer, as in the Wigner’s-friend thought experiment above. But it plausibly can’t be avoided by any entity that we would currently recognize as “human.”"

p.56. Difference from Penrose: " I make no attempt to “explain consciousness.” Indeed, that very goal seems misguided to me, at least if “consciousness” is meant in the phenomenal sense rather than the neuroscientists’ more restricted senses."

p.57. "instead of talking about the consistency of Peano arithmetic, I believe Penrose might as well have fallen back on the standard arguments about how a robot could never “really” enjoy fresh strawberries, but at most claim to enjoy them."

"the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents."

"I’m profoundly skeptical that any of the existing objective reduction [by minds] models are close to the truth. The reasons for my skepticism are, first, that the models seem too ugly and ad hoc (GRW’s more so than Penrose’s); and second, that the AdS/CFT correspondence now provides evidence that quantum mechanics can emerge unscathed even from the combination with gravity."

"I regard it as a serious drawback of Penrose’s proposals that they demand uncomputability in the dynamical laws"

p.61. Boltzmann brains: "By the time thermal equilibrium is reached, the universe will (by definition) have “forgotten” all details of its initial state, and any freebits will have long ago been “used up.” In other words, there’s no way to make a Boltzmann brain think one thought rather than another by toggling freebits. So, on this account, Boltzmann brains wouldn’t be “free,” even during their brief moments of existence."

p.62. What Happens When We Run Out of Freebits? "the number of freebits accessible to any one observer must be finite—simply because the number of bits of any kind is then upper-bounded by the observable universe’s finite holographic entropy. [...] this should not be too alarming. After all, even without the notion of freebits, the Second Law of Thermodynamics (combined with the holographic principle and the positive cosmological constant) already told us that the observable universe can witness at most ~10^122 “interesting events,” of any kind, before it settles into thermal equilibrium."
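As a rough check of where the 10^122 figure comes from (my own back-of-the-envelope sketch, not spelled out in the paper): the holographic bound caps the entropy of the observable universe at a quarter of the de Sitter horizon area in Planck units,

```latex
S_{\max} \;\approx\; \frac{A}{4\,\ell_P^2}
       \;=\; \frac{\pi R_{\mathrm{dS}}^2}{\ell_P^2},
\qquad
R_{\mathrm{dS}} = \sqrt{3/\Lambda} \sim 10^{26}\ \mathrm{m},
\quad
\ell_P \approx 1.6\times 10^{-35}\ \mathrm{m},
```

which gives \(S_{\max} \approx \pi\,(10^{26}/1.6\times 10^{-35})^2 \approx 10^{122}\) bits, matching the bound Aaronson cites.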

p.63. Indexicality: "indexical puzzle: a puzzle involving the “first-person facts” of who, what, where, and when you are, which seems to persist even after all the “third-person facts” about the physical world have been specified." This is similar to Knightian uncertainty: "For the indexical puzzles make it apparent that, even if we assume the laws of physics are completely mechanistic, there remain large aspects of our experience that those laws fail to determine, even probabilistically. Nothing in the laws picks out one particular chunk of suitably organized matter from the immensity of time and space, and says, “here, this chunk is you; its experiences are your experiences.”"

Free will connection: Take two heretofore identical Earths, A and B, in an infinite universe, which are about to diverge based on your decision. It is impossible for even a superintelligence to predict this decision, even probabilistically, because it is based on a freebit:

"Maybe “youA” is the “real” you, and taking the new job is a defining property of who you are, much as Shakespeare “wouldn’t be Shakespeare” had he not written his plays. So maybe youB isn’t even part of your reference class: it’s just a faraway doppelgänger you’ll never meet, who looks and acts like you (at least up to a certain point in your life) but isn’t you. So maybe p = 1. Then again, maybe youB is the “real” you and p = 0. Ultimately, not even a superintelligence could calculate p without knowing something about what it means to be “you,” a topic about which the laws of physics are understandably silent." "For me, the appeal of this view is that it “cancels two philosophical mysteries against each other”: free will and indexical uncertainty".

p.65. Falsifiability: "If human beings could be predicted as accurately as comets, then the freebit picture would be falsified." But this prediction has "an unsatisfying, “god-of-the-gaps” character". Another: chaotic amplification of quantum uncertainty, locally and on "reasonable" timescales. Another: "consider an omniscient demon, who wants to influence your decision-making process by changing the quantum state of a single photon impinging on your brain. [...] imagine that the photons’ quantum states cannot be altered, maintaining a spacetime history consistent with the laws of physics, without also altering classical degrees of freedom in the photons’ causal past. In that case, the freebit picture would once again fail."

p.68. Conclusions: "Could there exist a machine, consistent with the laws of physics, that “non-invasively cloned” all the information in a particular human brain that was relevant to behavior— so that the human could emerge from the machine unharmed, but would thereafter be fully probabilistically predictable given his or her future sense-inputs, in much the same sense that a radioactive atom is probabilistically predictable?"

"does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that (1) encode everything relevant to memory and cognition, (2) can be accurately modeled as performing a classical digital computation, and (3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random number sources, generating noise according to prescribed probability distributions? Or is such a clean separation between the macroscopic and microscopic levels unavailable—so that any attempt to clone a brain would either miss much of the cognitively-relevant information, or else violate the No-Cloning Theorem? In my opinion, neither answer to the question should make us wholly comfortable: if it does, then we haven’t sufficiently thought through the implications!"

In a world where a cloning device is possible the indexical questions "would no longer be metaphysical conundrums, but in some sense, just straightforward empirical questions about what you should expect to observe!"

p.69. Reason and mysticism. "but what do I really think?" "in laying out my understanding of the various alternatives—yes, brain states might be perfectly clonable, but if we want to avoid the philosophical weirdness that such cloning would entail, [...] I don’t have any sort of special intuition [...]. The arguments exhaust my intuition."

Applied art of rationality: Richard Feynman steelmanning his mother's concerns

8 shminux 04 June 2013 05:31PM

First, imagine your parents disapproving of your first love. Imagine your mother inventing a whole whack of reasons why you shouldn't marry him/her. Now imagine being rational enough to acknowledge and address all her concerns while remaining a loving and caring son/daughter. If you can imagine, let alone do, all that, you are a better person than I am. But then I am not Feynman, who did just that in the following excerpt from the book Perfectly Reasonable Deviations from the Beaten Track. It is also a great example of Luminosity. Now, if you think that you can be that good, look through your replies on LW to people whose comments irk you in the worst way. How charitable were you? Granted, you probably don't care about anonymous online posters nearly as much as Feynman cared about his mother, but I suspect that caring about someone makes you more emotional, not less, in your reply.

Comments by the book's author:

The following letter is in response to one from Lucille, Richard’s mother, in which she lovingly but forcefully outlined her concerns about Richard’s intent to marry Arline. Arline’s illness, she feared, would compromise not only his own health but his career. She was also concerned about the high cost of treatment (for oxygen, specialists, hospitalization, and so on).

Lucille suggested that his desire to marry stemmed from his desire to please someone he loved (“just as you used to occasionally eat spinach to please me”) and recommended that they stay “engaged.”

The letter itself:

With regard to (1) and (2) I went to see Prof. Smyth at Pop’s suggestion and the doctor here at the university. The doctor said I have less chance of getting T.B. in the sanatorium when visiting her than when I am walking around in the street. I think he was exaggerating (all this is in detail in a letter to Pop, so I won’t repeat it all here). He said T.B. is infectious but not contagious—I didn’t understand the distinction he made, however. Ask Dr. Sarrow. He said in sanatoriums the patients take care of their sputum by cups or Kleenex for the purpose, but on the streets people are careless and just spit all around and when it dries the germs float into the air. He said the germs are not floating around in the air in a sanatorium. He said a lot has been found out about this in the last 25, and in particular the last 10, years. I would be no danger to my students. Prof. Smyth didn’t see any objection from his point of view to hiring me if my wife is sick.

(3) If no one can make a budget for illness, how can I ever make enough to pay for it? How much is enough? Some guesses must be made and I guess I have enough. How much would you guess would be necessary?

(4) I wouldn’t be satisfied being engaged any longer. I want the burden and responsibility of being married.

(5) It really wasn’t hard at all. While I was out to lunch while waiting for somebody to come back to the courthouse in Trenton, I found myself singing—and I realized then that I really was very happy arranging things. It was, I suppose, the pleasure of arranging things for our life together—before she was sick we used to talk of the fun it would be going around ringing doorbells looking for a place to live—I guess it was similar to that idea.

I am not afraid of her parents—and if they don’t trust me with their daughter let them say so now. If they get sore at my mistakes later, it’s too late and it won’t bother me. You are right about my lack (4) of experience—I have no answer to that.

(6) The cost here again is a guess. I want to take the chance, however, that it will be sufficient. If it isn’t I’ll be in difficulty as you suggest.

(7) I’ve already been employed at Princeton for the next year. If I must go elsewhere, I’ll go where I’m needed most.

(8) I do want to get married. I also want to give someone I love what she wants—especially because at the same time I will be doing something I want. It is not at all like eating spinach—(also you misunderstood my motives as a small boy—I didn’t want you angry at me)—I didn’t like spinach.

(9) This is the problem we are discussing—I mean whether marriage is worse than engagement.

(10) I’m honestly sorry it makes you feel so bad. I bet it won’t be too heavy.

Why I want to get married:

It is not that I want to be noble. It is not that I think it’s the only right, honest and decent thing to do, under the circumstances. It is not that I made a promise five years ago—(under entirely different circumstances)—and that I don’t want to “back out” of the promise. That stuff is baloney. If anytime during the five years I thought I’d rather not go thru with it—promise or no promise I’d “back out” so fast it would make your head spin. I’m not dopey enough to tie up my whole life in the future because of some promise I made in the past—under different circumstances.

This decision to marry is a decision now and not one made five years ago.

I want to marry Arline because I love her—which means I want to take care of her. That is all there is to it. I want to take care of her.

I am anxious for the responsibilities and uncertainties of taking care of the girl I love.

I have, however, other desires and aims in the world. One of them is to contribute as much to physics as I can. This is, in my mind, of even more importance than my love for Arline.

It is therefore especially fortunate that, as I can see (guess) my getting married will interfere very slightly, if at all with my main job in life. I am quite sure I can do both at once. (There is even the possibility that the consequent happiness of being married—and the constant encouragement and sympathy of my wife will aid in my endeavor—but actually in the past my love hasn’t affected my physics much, and I don’t really suppose it will be too great an assistance in the future.)

Since I feel I can carry on my main job, and still enjoy the luxury of taking care of someone I love—I intend to be married shortly.

Does that explain anything?

Your Son.

R.P.F. PH.D.

 

P.S. I should have pointed out that I know I am taking chances getting married and may get into all kinds of pickles. I think the chances of major disasters are sufficiently small, and the gain to me and Putzie great enough, that the risk is well worth taking. Of course, this is just the point we are discussing—the magnitude of the risk—so I am saying nothing but simply asserting I think it is small. You think it is large, and therefore I was particularly anxious to have you tell me where you thought the pitfalls were—and you have pointed out a few new ones to me. I still feel the risk is worth taking—and the fact that we differ is due to our difference in background, experience and viewpoint. Please don’t worry that, by explaining your viewpoint, you have in any way pushed us further apart—you haven’t. I only hope that my marrying directly in the face of your disapproval and your better judgment won’t alienate you from me—because honestly, our judgments differ and I think you’re wrong. I honestly believe we (Putzie and I) will be better off married and nobody will be hurt by it.

 

[LINK] SMBC on human and alien values

3 shminux 29 May 2013 03:14PM

Zach nailed it, as usual, especially with his red-button punchline.

[LINKS] Who says Watson is only a narrow AI?

4 shminux 21 May 2013 06:04PM

OK, so it covers only a few human occupations:

But the list is steadily growing.

Now, connect it with a self-driving AI, and your cab e-driver can make small talk, advise on a suspicious skin lesion, evaluate your investment portfolio and help you fix an issue with your smartphone, all while cheaply and efficiently getting you to your destination.

How long until it can evaluate verbal or written customer requirements and write better routine software than your average programmer?

 
