[spoilers] EY's “A Girl Corrupted...?!” new story is an allegorical study of quantum immortality?
If you haven't read "A Girl Corrupted by the Internet is the Summoned Hero?!" yet, you should.
Spoilers ahead:
Continuing...
The Spell summons the hero with the best chance of defeating the Evil Emperor. This sounds like Quantum Immortality...
Specifically: Imagine the set of all possible versions of myself that are alive 50 years in the future, in the year 2066. My conscious observation at that point tends to summon the self most likely to be alive in 2066.
To elaborate: Computing all possible paths forward from the present moment to 2066 results in a HUGE set of possible future-selves existing in 2066. But some are far more likely than others: many distinct paths converge on the same high-probability outcome, such as working a generic middle-class job for years without clearly remembering most individual days, while only a few paths lead to low-probability outcomes. Thus, a random choice from that HUGE set will tend to pick a generic (high-probability) future self.
But, my conscious awareness observes one life path, not one discrete moment in the future. Computing all possible paths forward from the present moment to the end of the Universe results in a HUGE x HUGE set of possible life-paths, again with considerable overlap. My consciousness tends to pick a high-probability path.
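The path-counting argument above can be illustrated with a toy Monte Carlo simulation (my construction, not from the original post): every individual path is equally likely, but many distinct paths converge on the same "generic" outcome, so a uniformly sampled path almost always lands on one.

```python
import random
from collections import Counter

def sample_path(steps=20):
    """One toy 'life path': 20 independent coin-flip events.
    Every specific sequence of flips has identical probability 2^-20."""
    return sum(random.random() < 0.5 for _ in range(steps))

# Draw 100,000 uniformly random paths and bucket them by outcome.
counts = Counter(sample_path() for _ in range(100_000))

# Fraction of paths landing on a "generic" middle outcome (7-13 successes).
# Extreme outcomes (0 or 20) are reached by only one path each, so they
# are almost never sampled; the middle is reached by billions of paths.
generic = sum(v for k, v in counts.items() if 7 <= k <= 13)
print(generic / 100_000)  # roughly 0.88
```

The exact numbers (20 steps, the 7-13 window) are arbitrary choices for the sketch; the point is only that high-multiplicity outcomes dominate a uniform draw.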
In the story, a hero with a 100% probability of victory exists, so that hero is summoned. The hero observing their own probability of victory ensures they converge on a 100% probability of victory.
In real life, life paths with infinite survival time exist, so these life paths tend to be chosen. Observing one's own probability of infinite survival ensures convergence on 100% survival probability.
In the story, other characters set up conditions such that a desired outcome was the most likely one, by resolving to let a summoned hero with certain traits win easily.
In real life, an equivalent is the quantum suicide trick: resolving to kill oneself if certain conditions are not met ensures that the life path observed is one where those conditions are met.
In the story, a demon is summoned and then brought under control when it refuses to fulfill its duty. Control of the demon was guaranteed by the 100% probability of victory.
In real life, AI is like a demon: it has the power to grant wishes, but with perverse and unpredictable consequences that grow worse as more powerful demons are summoned. A guarantee of indefinite survival, however, ensures that this demon will not end my consciousness. There are many ways this could go wrong. Still, I desire to create as many copies of my mind as possible, but only under conditions where those copies could have lives at least as good as my own. Assuming I have some power to increase how quickly copies of my mind are created, and assuming I might myself be a mind-copy created in this way, the most likely Universe for me to find myself in (out of the set of all possible Universes) is one in which the AI and I cooperate to create a huge number of long-lived copies of my mind.
tl;dr: AI Safety is guaranteed by Quantum Immortality. P.S. God's promise to Abraham that his descendants will be "beyond number" is fulfilled.
Estimate the Cost of Immortality
How much money would it take to engineer biological immortality for at least half of the world's population, within 20 years, with 99% confidence?
timeless quantum immortality
HPMOR and the Power of Consciousness
Throughout HPMOR, the author has included many fascinating details about how the real world works, and how to gain power. The Mirror of CEV seems like a lesson in what a true Friendly AI could look like and do.
I've got a weirder theory. (Roll for sanity...)
The entire story is plausible-deniability cover for explaining how to get the Law of Intention to work reliably.
(All quoted text is from HPMOR.)
This Mirror reflects itself perfectly and therefore its existence is absolutely stable.
"This Mirror" is the Mind, or consciousness. The only thing a Mind can be sure of is that it is a Mind.
The Mirror's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mirror
A Mind's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mind.
Showing any person who steps before it an illusion of a world in which one of their desires has been fulfilled.
The final property upon which most tales agree, is that whatever the unknown means of commanding the Mirror - of that Key there are no plausible accounts - the Mirror's instructions cannot be shaped to react to individual people...the legends are unclear on what rules can be given, but I think it must have something to do with the Mirror's original intended use - it must have something to do with the deep desires and wishes arising from within the person.
More specifically, the Mirror shows a universe that obeys a consistent set of physical laws. From the set of all wish-fulfillment fantasies, it shows a universe that could actually plausibly exist.
It is known that people and other objects can be stored therein
Actors store other minds within their own Mind. Engineers store physical items within their Mind. The Mirror is a Mind.
the Mirror alone of all magics possesses a true moral orientation
The Mind alone of all the stuff that exists possesses a true moral orientation.
If that device had been completed, the story claimed, it would have become an absolutely stable existence that could withstand the channeling of unlimited magic in order to grant wishes. And also - this was said to be the vastly harder task - the device would somehow avert the inevitable catastrophes any sane person would expect to follow from that premise.
An ideal Mind would grant wishes without creating catastrophes. Unfortunately, we're not quite ideal minds, even though we're pretty good.
Professor Quirrell made to walk away from the Mirror, and seemed to halt just before reaching the point where the Mirror would no longer have reflected him, if it had been reflecting him.
My self-image can only go where it is reflected in my Mind. In other words, I can't imagine what it would be like to be a philosophical zombie.
Most powers of the Mirror are double-sided, according to legend. So you could banish what is on the other side of the Mirror instead. Send yourself, instead of me, into that frozen instant. If you wanted to, that is.
Let's interpret this scene: We've got a Mind/consciousness (the Mirror), we've got a self-image (Riddle) as well as the same spirit in a different self-image (Harry), and we've got a specific Extrapolated Volition instance in the mind (Dumbledore shown in the Mirror). This Extrapolated Volition instance is a consistent universe that could actually exist.
It sounds like the Process of the Timeless trap causes some Timeless Observer to choose one side of the Mirror as the real Universe, trapping the universe on the other side of the mirror in a frozen instant from the Timeless Observer's perspective.
The implication: the Mind has the power to choose which Universes it experiences from the set of all possible Universes extending from the current point.
All right, screw this nineteenth-century garbage. Reality wasn't atoms, it wasn't a set of tiny billiard balls bopping around. That was just another lie. The notion of atoms as little dots was just another convenient hallucination that people clung to because they didn't want to confront the inhumanly alien shape of the underlying reality. No wonder, then, that his attempts to Transfigure based on that hadn't worked. If he wanted power, he had to abandon his humanity, and force his thoughts to conform to the true math of quantum mechanics.
There were no particles, there were just clouds of amplitude in a multiparticle configuration space and what his brain fondly imagined to be an eraser was nothing except a gigantic factor in a wavefunction that happened to factorize, it didn't have a separate existence any more than there was a particular solid factor of 3 hidden inside the number 6, if his wand was capable of altering factors in an approximately factorizable wavefunction then it should damn well be able to alter the slightly smaller factor that Harry's brain visualized as a patch of material on the eraser -
Had to see the wand as enforcing a relation between separate past and future realities, instead of changing anything over time - but I did it, Hermione, I saw past the illusion of objects, and I bet there's not a single other wizard in the world who could have.
This seems like another giant hint about magical powers.
"I had wondered if perhaps the Words of False Comprehension might be understandable to a student of Muggle science. Apparently not."
The author is disappointed that we don't get his hints.
If the conscious mind was in reality a wish-granting machine, then how could I test this without going insane?
The Mirror of Perfect Reflection has power over what is reflected within it, and that power is said to be unchallengeable. But since the True Cloak of Invisibility produces a perfect absence of image, it should evade this principle rather than challenging it.
A method to test this seems to be to become aware of one's own ego-image (stand in front of the Mirror), vividly imagine a different ego-image without identifying with it (bring in a different personality containing the same Self under an Invisibility Cloak), suddenly switch ego-identification to the other personality (swap the Invisibility Cloak in less than a second), and then become distracted so the ego-switch becomes permanent (Dumbledore traps himself in the Mirror).
I can't think of a way to test this without sanity damage. Comments?
Wealth from Self-Replicating Robots
I have high confidence that economically-valuable self-replicating robots are possible with existing technology: initially, something similar in size and complexity to a RepRap, but able to assemble a copy of itself from parts ordered online with zero human interaction. This is important because more robots could provide the economic growth needed to solve many urgent problems. I've held this idea for long enough that I'm worried about being a crank, so any feedback is appreciated.
I care because to fulfill my naive and unrealistic dreams (not dying, owning a spaceship) I need the world to be a LOT richer. Specifically, naively assuming linear returns to medical research funding, a funding increase of ~10x (to ~$5 trillion/year, or ~30% of current USA GDP) is needed to achieve actuarial escape velocity (average lifespans currently increase by about 1 year each decade, so a 10x increase is needed for science to keep up with aging). The simplest way to get there is to have 10x as many machines per person.
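The funding arithmetic above can be checked in a few lines. All figures here are the post's own assumptions (the ~$0.5 trillion/year baseline is implied by the ~$5 trillion target being a 10x increase), not established facts.

```python
# Post's assumption: lifespans currently gain ~1 year per decade.
gain_per_decade_now = 1.0
# Actuarial escape velocity: gain 1 year of lifespan per year lived.
gain_per_decade_needed = 10.0

# Naive linear-returns assumption: funding must scale with the gain.
multiplier = gain_per_decade_needed / gain_per_decade_now  # 10x

baseline_funding = 0.5e12  # implied current funding, $/year
target = baseline_funding * multiplier
print(f"~${target / 1e12:.0f} trillion/year")  # ~$5 trillion/year
```

The linearity assumption is doing all the work here, as the post itself flags with "naively assuming linear returns."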
My vision is that someone does for hardware what open-source has done for software: make useful tools free. A key advantage of software is that making a build or copying a program takes only one step. In software, you click "compile" and (hopefully) it's done and ready to test in seconds. In hardware, it takes a bunch of steps to build a prototype (order parts, screw fiddly bits together, solder, etc.). A week is an insanely short lead time for building a new prototype of something mechanical. 1-2 months is typical in many industries. This means that mechanical things have high marginal cost, because people have to build and debug them, and typically transport them for thousands of miles from factory to consumer.
Relevant previous research projects include trivial self-replication from pre-fabricated components and an overly-ambitious NASA-funded plan from the 1980s to develop the Moon using self-replicating robots. Current research funding tends to go toward bio-inspired systems, re-configurable systems using prefabricated cubes (conventionally-manufactured), or chemistry deceptively called "nanotech", all of which seem to miss the opportunity to use existing autonomous assembly technology with online ordering of parts to make things cheaper by getting rid of setup cost and building cost.
I envision a library/repository of useful robots for specific tasks (cleaning, manufacturing, etc.), in a standard format for download (parts list, 3D models, assembly instructions, etc.). Parts could be ordered online. A standard fabricator robot with the capability to identify and manipulate parts, and fasten them using screws, would verify that the correct parts were received, put everything together, and run performance checks. For comparison, the RepRap takes >9 hours of careful human labor to build.

An initial self-replicating implementation would be a single fastener robot. It would spread by undercutting the price of competing robot arm systems. Existing systems sell for ~2x the cost of components, due to overhead for engineering, assembly, and shipping. This appears true for robots at a range of price points, including $200 robot arms using hobby servos and $40,000+ robot arms using optical encoders and direct-drive brushless motors.

A successful system that undercut the price of conventionally-assembled hobby robots would provide a platform for hobbyists to create additional robots that could be autonomously built (e.g. a Roomba for 1/5 the price, due to not needing to pay the 5x markup for overhead and distribution). Once a beachhead is established in the form of a successful self-replicating assembly robot, market pressures would drive full automation of more products/industries, increasing output for everyone.
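The markup claims translate into simple margin math. The 2x and 5x markups come from the post; the 1.2x factor for an autonomously-assembled robot is my hypothetical assumption, as is the $100 bill-of-materials figure.

```python
bom = 100.0  # hypothetical bill-of-materials cost for a hobby robot arm, $

# Post's figure: conventional robots sell for ~2x component cost
# (engineering, assembly, and shipping overhead).
conventional_price = 2.0 * bom
# Assumed: an autonomously-assembled unit needs only a small margin
# for parts handling and electricity.
autonomous_price = 1.2 * bom

savings = 1 - autonomous_price / conventional_price
print(f"{savings:.0%} cheaper")  # 40% cheaper

# Roomba example from the post: 1/5 the retail price by removing
# the 5x markup for overhead and distribution.
roomba_retail = 5.0 * bom
autonomous_roomba = roomba_retail / 5  # back to component cost
```

Under these assumptions the fastener robot wins on price even with a healthy margin, which is the whole "beachhead" argument.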
This is a very hard programming challenge, but the tools exist to identify, manipulate and assemble parts. Specifically, ROS is an open-source software library whose packages can be put together to solve tasks such as mapping a building or folding laundry. It's hard because it would require a lot of steps and a new combination of existing tools.
This is also a hard systems/mechanical challenge: delivering enough data and control bandwidth for observability and controllability, and providing lightweight and rigid hardware, so that the task for the software is possible rather than impossible. Low-cost components have lower performance: a webcam has limited resolution, and hobby servos have limited accuracy. The key problem - autonomously picking up a screw and screwing it into a hole - was solved years ago for assembly-line robots. Doing the same task with low-cost components appears possible in principle. A comparable problem that has been solved is autonomous construction using quadcopters.
Personally, I would like to build a robot arm that could assemble more robot arms. It would require, at minimum, a robot arm using hobby servos, a few webcams, custom grippers (for grasping screws, servos, and laser-cut sheet parts), custom fixtures (blocks with a cutout to hold two parts in place while the robot arm inserts a screw; ideally multiple robot arms would be used to minimize unique tooling but fixtures would be easier initially), and a lot of challenging code using ROS and Gazebo. Just the mechanical stuff, which I have the education for, would be a challenging months-long side project, and the software stuff could take years of study (the equivalent of a CS degree) before I'd have the required background to reasonably attempt it.
I'm not sure what to do with this idea. Getting a CS degree on top of a mechanical engineering degree (so I could know enough to build this) seems like a good career choice for interesting work and high pay (even if/when this doesn't work). Previous ideas of mine that were mostly outside my field have turned out to be infeasible for reasons only someone familiar with the field would know. It's challenging to stay motivated to work on this, because the payoff is so distant, but it's also challenging not to work on this, because there's enough of a chance that this would work that I'm excited about it. I'm posting this here in the hopes someone with experience with industrial automation will be inspired to build this, and to get well-reasoned feedback.
Cognitive Biases due to a Narcissistic Parent, Illustrated by HPMOR Quotations
A pattern of cognitive biases not yet discussed here is the set of biases caused by having a narcissistic parent who seeks validation through the child's academic achievements.
HPMOR clearly shows these biases: Harry's mother is narcissistic, impressed by education, and not particularly smart, and Harry does not realize how this affects his thinking.
Here is my evidence:
The Sorting Hat says Harry is driven by "the fear of losing your fantasy of greatness, of disappointing the people who believe in you" (Ch. 77). Psychology texts say that this fear is what children of a narcissistic parent usually feel. The child feels perpetually ignored because the narcissistic parent seeks validation from the child's accomplishments but refuses to actually listen to the child, spurring the child to ever greater heights of intellectual achievement.
The text supports this view: “Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy...given anything reasonable that he wanted, except, maybe, the slightest shred of respect” and “Petunia wrung her hands. She seemed to be on the verge of tears. "My love, I know I can't win arguments with you, but please, you have to trust me on this … I want my husband to, to listen to his wife who loves him, and trust her just this once - " (Ch. 1) describes a narcissistic, anxiously needy mother, an avoidant father, and a son whose parents provide for his physical needs but neglect his need for respect (ego). “If you conceived of yourself as a Good Parent, you would do it. But take a ten-year-old seriously? Hardly.” (Ch. 1)
Harry goes Dark when the connection to his family is threatened. For example: "The black rage began to drain away, as it dawned on him that...his family wasn't in danger [of legal separation]" (ch. 5) indicates that Harry went Dark even though no one’s life was threatened. The cost of Harry’s Dark Side is becoming an adult at a young age: Harry says, “Every time I call on it... it uses up my childhood.” (Ch. 91). This is consistent with spending nearly all free time studying (instead of wasting time with friends) to impress Harry’s mother.
Typically, children of narcissistic parents inherit either narcissistic or people-pleasing traits. I predicted that if my theory is correct then Harry would have a narcissistic personality. To test this, I found a list of personality traits that describe a narcissist (by Googling “children of narcissistic parents” and clicking the first link), and compared with Harry’s personality as described in HPMOR. I got a 100% match. Questions and answers are as follows:
1. Grandiose sense of self-importance? Check. Harry plans to “optimize” the entire Universe, expects to “do something really revolutionary and important” (Ch. 7), and is trying to “hurry up and become God” (Ch. 27).
2. Obsessed with himself? Check. He appears to only care about people who are smarter or more powerful than him -- people who can help him. He also has contempt for most students and their interests (Quidditch, etc.)
3. Goals are selfish? Check. Harry claims to want to save everyone, but he believes the best way to help others is to increase his own power most quickly. I address two possible objections below:
Harry’s involvement in the Azkaban breakout was selfish, because Harry could not risk losing Quirrell’s friendship: “ It was a bond that went beyond anything of debts owed, or even anything of personal liking, that the two of them were alone in the wizarding world” (Ch. 51). This, again, mirrors a child’s relationship with a narcissistic mother: the child cannot risk losing the mother’s protection. Harry also had selfish reasons for hearing Quirrell’s plan: “There was no advantage to be gained from not hearing it. And if it did reveal something wrong with Professor Quirrell, then it was very much to Harry's advantage to know it, even if he had promised not to tell anyone.” (Ch. 49)
Harry’s efforts to save Hermione are also selfish because Harry sees Hermione in the same way he sees his mother -- weak in many ways and bound by emotions and convention, but someone Harry must impress and protect. Harry’s statement that “it’s disrespectful to her, to think someone could only like her in that way” (ch. 91) makes sense because Harry is disgusted by the Oedipal implications. If Harry’s mother was not narcissistic, then Harry would not have worked so hard to impress Hermione and would have been less disgusted by the thought of being sexually attracted to her.
4. Troubles with normal relationships? Check. Harry is playing high-stakes mind games with the people he is closest to (Quirrell, Draco, Hermione, Dumbledore), which is not normal friend behavior. Harry has contempt for nearly everyone else.
5. Becomes furious if criticized? Check. When Snape mocked Harry in Potions class, Harry tried to destroy Snape’s career. Quirrell explained, “When it looked like you might lose, you unsheathed your claws, heedless of the danger. You escalated, and then you escalated again” (Ch. 19).
6. Has fantasies of unbound success, power, intelligence, etc.? Check. Harry wants to conquer the entire Universe with the power of his intelligence, and has plans for how to fill an eternity, including to “...meet up with everyone else who was born on Old Earth to watch the Sun finally go out…” (Ch. 39).
7. Believes that he is special and should only be around other high-status people? Check. Harry avoids average students when possible, and certainly does not hang out with them for fun. “Note to self: The 75th percentile of Hogwarts students a.k.a. Ravenclaw House is not the world's most exclusive program for gifted children” (Ch. 12).
Harry’s association with the (presumably non-special) students in his army is not an exception because minimal text is devoted to Harry instructing them, while much text explains how powerful and high-status the students in the army have become. For Harry, it appears that the army is a tool to use and an opportunity to show off, not an opportunity to give back and help friends improve their skills for their own sake.
8. Requires extreme admiration for everything? Check. Harry takes anything less than admiration for his brilliance as an insult, and responds by striving for new levels of intellectual achievement and arrogance, until the others recognize his dominance. “And I bit a math teacher when she wouldn't accept my dominance” (Ch. 20). Quirrell’s lesson on how to lose described how to avoid making powerful enemies, not how to empathize and care for others -- the insatiable need for admiration is merely delayed and repressed, not corrected.
9. Feels entitled - has unreasonable expectations of special treatment? Check. Harry requires subservience from the school administration, and special magic items such as the time-turner. “McGonagall said, "but I do have a very special something else to give you. I see that I have greatly wronged you in my thoughts, Mr. Potter...this is an item which is ordinarily lent only to children who have already shown themselves to be highly responsible” (Ch. 14).
10. Takes advantage of others to further his own need? Check. Harry justifies his actions toward Draco by saying "I only used you in ways that made you stronger. That's what it means to be used by a friend." (Ch. 97)
11. Does not recognize the feelings of others? Check. One example is Harry not realizing how Neville felt about the prank on the train to Hogwarts. Another is Harry’s remarkably clueless question to Hermione, “Er, can I take it from this that you have been through puberty?" (Ch. 87) Harry has not learned empathy yet: “Harry flinched a little himself. Somewhere along the line he needed to pick up the knack of not phrasing things to hit as hard as he possibly could” (Ch. 86).
12. Envious or believes they are envied? Check. Quirrell said to Harry, “You have everything now that I wanted then. All that I know of human nature says that I should hate you. And yet I do not. It is a very strange thing.” (Ch. 74)
13. Behaves arrogantly? Check. “Minerva's body swayed with the force of that blow, with the sheer raw lese majeste. Even Severus looked shocked.” (Ch. 19) I can’t think offhand of a single instance when Harry is not arrogant.
Therefore, I conclude that Harry and Harry’s mother are both narcissistic. If you want further reading on this topic, look up "The Drama of the Gifted Child" by Dr. Alice Miller (Google for the .pdf) for a more detailed description of a child’s typical relationship with a narcissistic parent.
I am sharing this because it reveals a pattern of cognitive biases that many people (like me) who enjoyed HPMOR, and their parents, probably have. Specifically, there is a strong bias toward either narcissistic or people-pleasing habits, and a difficulty with recognizing and following one’s own desires (because the Universe, unlike a parent, never tells people what to do). One possible reason for studying science is to defend against a parent’s emotional neediness and refusal to provide ego-validation by building an impenetrable edifice of logical truth. Unfortunately, identifying the parent’s cognitive biases does not stop their criticism. A more pleasant strategy is to recognize the dynamic, mourn the warping of childhood by the controlling parenting, set appropriate boundaries in the future, and draw validation from following one’s own goals instead of an internalized parent’s goals.
A common question here is how the LW community can grow more rapidly. Another is why seemingly rational people choose not to participate.
I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go. In this post, I try to clearly explain why I don't participate more and why some of my friends don't participate at all and have warned me not to participate further.
Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.
In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.). It's not at all clear if general AI is a significant threat. It's also highly doubtful that the best way to address this threat is writing speculative research papers, because I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it's necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.
LW has a cult-like social structure. The LW meetups (or, the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts for the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality "training camps" do this to an even greater extent. LW recruiting (HPMOR, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.
Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on (X), and sell a long plan that never quite achieves (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should"). Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.
LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I'm struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I "should" do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.
"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don't mind Harry's narcissism) and LW is fun to read, but that's as far as I want to get involved. Unless, that is, there's someone here who has experience programming vision-guided assembly-line robots who is looking for a side project with world-optimization potential.