Survey: What's the most negative*plausible cryonics-works story that you know?
Warning: people will be trying to be pessimistic here. Don't read this if you don't want to be reminded of scary outcomes.
Request: if you get an idea that you think might be too scary to post publicly even under the above warning, but you are willing to send it to me in a private message to aid in my personal decision-making, then please do :)
Motivation:
I like cryonics. According to my parents and grandmother, I started talking about building an AI to help with medical research to revive frozen dead people when I was about 10 years old, and my memory agrees. I began experimenting with freezing and unfreezing insects, and figured based on some positive results that it was physically possible to preserve life in a frozen state. Cool!
But now that I'm in the middle of convincing some folks I know to sign up for cryonics, I want to do due diligence on some of the vague, hard-to-verbalize aversions they have to doing it. This way, I can help them plan contingencies for / hedges against those aversions if possible, thereby making cryonics more viable for them, and maybe avoid accidentally persuading people to do cryonics when it really isn't right for them (yes, I think that can actually happen).
There's already been a post on far negative outcomes, and another one on why cryonics maybe isn't worth it. But what I really want to do here is conduct an interactive survey to compute which disutilities should be taken most seriously when talking to a new person about cryonics, to avoid accidentally persuading them into making a wrong-for-them decision.
And for that, what I really want to ask is:
What's the most negative*plausible cryonics-works story that you know of?
Examples:
(1) A well-meaning but slightly-too-obsessed cryonics scientist wakes up some semblance of me in a semi-conscious virtual delirium for something like 1000 very unpleasant subjective years of tinkering to try recovering me. She eventually quits, and I never wake up again.
(2) A rich sadist finds it somehow legally or logistically easier to lay hands on the brains/minds of cryonics patients than of living people, and runs some virtual torture scenarios on me where I'm not allowed to die for thousands of subjective years or more.
I think on reflection I'd consider (1) to be around 10x and maybe 100x more likely than (2)*, but depending on your preferences, you might find (2) to be more than 100x worse than (1), enough to make it account for the biggest chunk of disutility that can be attributed to any particular simple story or story-feature where cryonics works.
[* I would have said (1) was definitely more than 100x more likely before so many of my female friends have, over the years, mentioned that they were subject to some pretty scary sexual violence at some point in their dating lives.]
(Note: There's a separate question of whether the outcome is positive enough to be worth the money, which I'd rather discuss in a different thread.)
How to participate:
- Top-level comments = stories. Post your most negative*plausible story or story-feature as a top-level comment.
- A top-level upvote shall mean "essentially in my top three". Upvote stories that you'd consider essentially the same as one of your top three stories, ranked by negativity*probability. This means you can vote more than three times if your top stories get represented in a variety of ways, so don't be shy.
- Lower-level comments = discussion! Let's disagree about the relative probabilities and negativities of things and maybe change some of our minds!
Thanks for playing :)
PS I hope folks use these ideas to come up with ways to decrease the likelihood that cryonics leads to negative outcomes, and not to cause or experience premature fears that derail productive conversations. So, please don't share/post this in ways where you think it might have the latter effect, but rather, use it as a part of a sane and thorough evaluation of all the pros and cons that one should reasonably consider in deciding whether cryonics working is on-net a positive outcome.
ETA -- What not to post:
Some non-examples of what this survey should contain...
- Examples where you don't get revived in any way. These scenarios factor into the "will cryonics work for me" question, a question of probability that does not depend on your values, which I'd prefer to discuss in a separate thread, because probabilities are easier to converge on without distracting ourselves with values questions.
Deliberate Grad School
Among my friends interested in rationality, effective altruism, and existential risk reduction, I often hear: "If you want to have a real positive impact on the world, grad school is a waste of time. It's better to use deliberate practice to learn whatever you need instead of working within the confines of an institution."
While I'd agree that grad school won't, by itself, make you do good for the world, if you're a self-driven person who can spend time in a PhD program deliberately acquiring skills and connections for making a positive difference, I think you can make grad school a highly productive path, perhaps more so than many alternatives. In this post, I want to share some advice that I've been repeating a lot lately for how to do this:
- Find a flexible program. PhD programs in mathematics, statistics, philosophy, and theoretical computer science tend to give you a great deal of free time and flexibility, provided you can pass the various qualifying exams without too much studying. By contrast, sciences like biology and chemistry can require time-consuming laboratory work that you can't always speed through by being clever.
- Choose high-impact topics to learn about. AI safety and existential risk reduction are my favorite examples, but there are others, and I won't spend more time here arguing their case. If you can't make your thesis directly about such a topic, choosing a related more popular topic can give you valuable personal connections, and you can still learn whatever you want during the spare time a flexible program will afford you.
- Teach classes. Grad programs that let you teach undergraduate tutorial classes provide a rare opportunity to practice engaging a non-captive audience. If you just want to work on general presentation skills, maybe you practice on your friends... but your friends already like you. If you want to learn to win over a crowd that isn't particularly interested in you, try teaching calculus! I've found this skill particularly useful when presenting AI safety research that isn't yet mainstream, which requires carefully stepping through arguments that are unfamiliar to the audience.
- Use your freedom to accomplish things. I used my spare time during my PhD program to cofound CFAR, the Center for Applied Rationality. Alumni of our workshops have gone on to do such awesome things as creating the Future of Life Institute and sourcing a $10MM donation from Elon Musk to fund AI safety research. I never would have had the flexibility to volunteer for weeks at a time if I'd been working at a typical 9-to-5 or a startup.
- Organize a graduate seminar. Organizing conferences is critical to getting the word out on important new research, and in fact, running a conference on AI safety in Puerto Rico is how FLI was able to bring so many researchers together on its Open Letter on AI Safety. It's also where Elon Musk made his donation. During grad school, you can get lots of practice organizing research events by running seminars for your fellow grad students. In fact, several of the organizers of the FLI conference were grad students.
- Get exposure to experts. A top 10 US school will have professors around that are world-experts on myriad topics, and you can attend departmental colloquia to expose yourself to the cutting edge of research in fields you're curious about. I regularly attended cognitive science and neuroscience colloquia during my PhD in mathematics, which gave me many perspectives that I found useful working at CFAR.
- Learn how productive researchers get their work done. Grad school surrounds you with researchers, and by getting exposed to how a variety of researchers do their thing, you can pick and choose from their methods and find what works best for you. For example, I learned from my advisor Bernd Sturmfels that, for me, quickly passing a draft back and forth with a coauthor can get a paper written much more quickly than agonizing about each revision before I share it.
- Remember you don't have to stay in academia. If you limit yourself to only doing research that will get you good post-doc offers, you might find you aren't able to focus on what seems highest impact (because often what makes a topic high impact is that it's important and neglected, and if a topic is neglected, it might not be trendy enough to land you a good post-doc). But since grad school is run by professors, becoming a professor is usually the most salient path forward for most grad students, and you might end up pressuring yourself to follow the standards of that path. When I graduated, I got my top choice of post-doc, but then I decided not to take it and to instead try earning to give as an algorithmic stock trader, and now I'm a research fellow at MIRI. In retrospect, I might have done more valuable work during my PhD itself if I'd decided in advance not to do a typical post-doc.
That's all I have for now. The main sentiment behind most of this, I think, is that you have to be deliberate to get the most out of a PhD program, rather than passively expecting it to make you into anything in particular. Grad school still isn't for everyone, and far from it. But if you were seriously considering it at some point, and "do something more useful" felt like a compelling reason not to go, be sure to first consider the most useful version of grad school that you could reliably make for yourself... and then decide whether or not to do it.
Please email me (lastname@thisdomain.com) if you have more ideas for getting the most out of grad school!
Willpower Depletion vs Willpower Distraction
I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (A propos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:
Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum -- a region of the brain associated with responses to reward (Kringelbach, 2004) -- whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort. -- Molden, D. C., et al., "The Motivational versus Metabolic Effects of Carbohydrates on Self-Control," Psychological Science.
Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:
When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems. -- Dweck and Walton, "Willpower: It’s in Your Head?" New York Times.
While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated and which could explain many of these observations, but I think gets too little popular attention in these discussions:
Willpower is distractible.
Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking: Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.
So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:
- Thirst
- Hunger
- Sleepiness
- Physical fatigue (like from running)
- Physical discomfort (like from sitting)
- That specific-other-thing you want to do
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) than energy (a resource).
If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.
The last two bullets,
- Anxiety about willpower depletion
- Indignation at being asked for too much by bosses, partners, or experimenters...
are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.
Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...
All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".
CFAR is looking for a videographer for next Wednesday
Hi all, CFAR is looking for a videographer in the Bay Area to shoot and edit a 1-minute video introducing us. Do you know anyone?
Are coin flips quantum random to my conscious brain-parts?
Hello rationality friends! I have a question that I bet some of you have thought about...
I hear lots of people saying that classical coin flips are not "quantum random events", because the outcome is very nearly determined by thumb movement when I flip the coin. More precisely, one can say that the state of my thumb and the state of the landed coin are strongly entangled, such that, say, 99% of the quantum measure of coin-flip outcomes observed by a given post-flip thumb state lands heads.
First of all, I've never actually seen an order of magnitude estimate to support this claim, and would love it if someone here can provide or link to one!
Second, I'm not sure how strongly entangled my thumb movement is with my subjective experience, i.e., with the parts of my brain that consciously process the decision to flip and the outcome. So even if the coin outcome is almost perfectly determined by my thumb, it might not be almost perfectly determined by my decision to flip the coin.
For example, while the thumb movement happens, a lot of calibration goes on between my thumb, my motor cortex, and my cerebellum (which certainly affects but does not seem to directly process conscious experience), precisely because my motor cortex is unable to send, on its own, a precise and accurate enough signal to my thumb that achieves the flicking motion that we eventually learn to do in order to flip coins. Some of this inability is due to small differences in environmental factors during each flip that the motor cortex does not itself process directly, but is processed by the cerebellum instead. Perhaps some of this inability also comes directly from quantum variation in neuron action potentials being reached, or perhaps some of the aforementioned environmental factors arise from quantum variation.
Anyway, I'm altogether not *that* convinced that the outcome of a coin flip is sufficiently dependent on my decision to flip as to be considered "not a quantum random event" by my conscious brain. Can anyone provide me with some order of magnitude estimates to convince me either way about this? I'd really appreciate it!
ETA: I am not asking if coin flips are "random enough" in some strange, undefined sense. I am actually asking about quantum entanglement here. In particular, when your PFC decides for planning reasons to flip a coin, does the evolution of the wave function produce a world that is in a superposition of states (coin landed heads)⊗(you observed heads) + (coin landed tails)⊗(you observed tails)? Or does a monomial state result, either (coin landed heads)⊗(you observed heads) or (coin landed tails)⊗(you observed tails) depending on the instance?
At present, despite having been told many times that coin flips are not "in superpositions" relative to "us", I'm not convinced that there is enough mutual information connecting my frontal lobe and the coin for the state of the coin to be entangled with me (i.e. not "in a superposed state") before I observe it. I realize this is somewhat testable, e.g., if the state amplitudes of the coin can be forced to have complex arguments differing in a predictable way so as to produce expected and measurable interference patterns. This is what we have failed to produce at a macroscopic level in attempts to produce visible superpositions. But I don't know if we fail to produce messier, less-visibly-self-interfering superpositions, which is why I am still wondering about this...
Any help / links / fermi estimates on this will be greatly appreciated!
[LINK] General-audience documentary on cosmology, anthropics, and superintelligence
If you have friends or family you'd like to get thinking about cosmology and the like, this might be a nice documentary to stir up curiosity. Despite clearly being aimed at a general audience, I thought this documentary -- including interviews of Tegmark and Bostrom -- did a surprisingly good job of talking about the beginning of the universe and our place in it:
http://www.youtube.com/watch?v=oyH2D4-tzfM
Also, even though I've had all these thoughts before, it still makes me more emotionally motivated to live long enough to see scientific advances on these questions.
The Relation Projection Fallacy and the purpose of life
I bet most people here have realized this explicitly or implicitly, but this comment has inspired me to write a short, linkable summary of this error pattern, with a name:
The Relation Projection Fallacy: a denotational error whereby one confuses an n-ary relation for an m-ary relation, where usually m<n.
Example instance: "Life has no purpose."
This is a troublesome phrase. Why? If you look at unobjectionable uses of the concept <purpose> --- also referenced by synonyms like "having a point" --- it is in fact a ternary relation.
Example non-instance: "The purpose of a doorstop is to stop doors."
Here, one can query "to whom?" and be returned the context "to the person who made it" or "to the person who's using it", etc. That is, the full denotation of "purpose" is always of the form "The purpose of X to Y is Z," where Y is often implicit or can take a wide range of values.
This has nothing to do with connotation... it's just how the concept <purpose> typically works as people use it. But to flog a dead horse, the purpose of a doorstop to a cat may be to make an amusing sound as it glides across the floor after the cat hits it. The value of Y always matters. There is no "true purpose" stored anywhere inside the doorstop, or even in the combination of the doorstop and the door it is stopping. To think otherwise is literally projecting, in the mathematical sense, a ternary relation, i.e., a subset of a product of three sets (objects)x(agents)x(verbs), into a product of two sets, (objects)x(verbs). But people often do this projection incorrectly, by either searching for a purpose that is intrinsic to the Doorstop or to Life, or by searching for a canonical value of "Y" like "The Great Arbiter of Purpose", both of which are not to be found, at least to their satisfaction when they utter the phrase "Life has no purpose."
Likewise, the relation "has a purpose" is typically a binary relation, because again, we can always ask "to whom?". "<That doorstop> has a purpose to <me>."
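The projection error above can be made concrete in a few lines of Python. This is just a toy model with made-up tuples, not anything from the original post; it models <purpose> as a set of (object, agent, verb) triples and shows what is lost when the agent coordinate is projected away:

```python
# Toy model of the Relation Projection Fallacy: "purpose" as a ternary
# relation over (object, agent, verb). All tuples are illustrative.

purpose = {
    ("doorstop", "carpenter", "stop doors"),
    ("doorstop", "cat",       "make an amusing sound"),
}

# The (mistaken) projection onto (object, verb), i.e. dropping the agent:
projected = {(obj, verb) for (obj, agent, verb) in purpose}

# The projection can't answer "what is THE purpose of a doorstop?":
# two answers survive, and the "to whom?" needed to pick one is gone.
answers = {verb for (obj, verb) in projected if obj == "doorstop"}
print(answers)  # {'stop doors', 'make an amusing sound'}
```

Asking for "the" purpose of an object is asking the projected binary relation to be a function of the object alone, which it isn't: the missing agent coordinate is exactly what the "to whom?" query restores.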
In some form, this realization is of course the cause of many schools of thought taking the name "relativist" on many different issues. But I find that people over-use the phrase "It's all relative" to connote "It's all meaningless" or "there is no answer". Which is ironic, because meaning itself is a ternary relation! Its typical denotation is of the form "The meaning of X to Y is Z", like in
- "The meaning of <the sound 'owe'> to <French people> is <liquid water>" or
- "The meaning of <that pendant> to <your mother> is <a certain undescribed experience of sentimentality>".
Realizing this should NOT result in a cascade of bottomless relativism where nothing means anything! In fact, the first time I had this thought as a kid, I arrived at the connotationally pleasing conclusion "My life can have as many purposes as there are agents for it to have a purpose to."
Indeed, the meaning of <"purpose"> to <humans> is <a certain ternary functional relationship between objects, agents, and verbs>, and the meaning of <"meaning"> to <humans> is <a certain ternary relationship between syntactic elements, people generating or perceiving them, and referents>.
When I found LessWrong, I was happy to find that Eliezer wrote on almost exactly this realization in 2-Place and 1-Place Words, but sad that the post had few upvotes -- only 14 right now. So in case it was too long, or didn't have a snappy enough name, I thought I'd try giving the idea another shot.
ETA: In the special case of talking to someone wondering about the purpose of life, here is how I would use this observation in the form of an argument:
First of all, you may be lacking satisfaction in your life for some reason, and framing this to yourself in philosophical terms like "Life has no purpose, because <argument>." If that's true, it's quite likely that you'd feel differently if your emotional needs as a social primate were being met, and in that sense the solution is not an "answer" but rather some actions that will result in these needs being met.
Still, that does not address the <argument>. So because "What is the purpose of life?" may be a hard question, let's look at easier examples of purpose and see how they work. Notice how they all have someone the purpose is to? And how that's missing in your "purpose of life" question? Because of that, you could end up feeling one of two ways:
(1) Satisfied, because now you can just ask "What could be the purpose of my life to <my friends, my family, myself, the world at large, etc>", and come up with answers, or
(2) Unsatisfied, because there is no agent to ask about such that the answer would seem important enough to you.
And I claim that whether you end up at (1) or (2) is probably more a function of whether your social primate emotional needs are being met than any particular philosophical argument.
That being said, if you believe this argument, the best thing to do for someone lacking a sense of purpose is probably not to just say the argument, but to help them start satisfying their emotional needs, and have this argument mainly to satisfy their sense of curiosity or nagging intellectual doubts about the issue.
Narrative, self-image, and self-communication
Related to: Cached selves, Why you're stuck in a narrative, The curse of identity
Outline: Some back-story, Pondering the mechanics of self-image, The role of narrative, Narrative as a medium for self-communication.
tl;dr: One can have a self-image that causes one to neglect the effects of self-image. And, since we tend to process our self-images somewhat in the context of a narrative identity, if you currently make zero use of narrative in understanding and affecting how you think about yourself, it may be worth adjusting upward. All this seems to have been the case for me, and is probably part of what makes HPMOR valuable.
Some back-story
Starting when I was around 16 and becoming acutely annoyed with essentialism, I prided myself on not being dependent on a story-like image of myself. In fact, to make sure I wasn't, I put a break command in my narrative loop: I drafted a story in my mind about a hero who was able to outwit his foes by being less constrained by narrative than they were, and I identified with him whenever I felt a need-for-narrative coming on. Batman goes for something like this in The Dark Knight when (spoiler) he abandons his heroic image to take the blame for Harvey Dent's death.
I think this break command was mostly a good thing. It helped me to resolve cognitive dissonance and overcome the limitations of various cached selves, and I ended up mostly focused on whether my beliefs were accurate and my desires were being fulfilled. So I still figure it's a decent first-order correction to being over-constrained by narrative.
But, I no longer think it's the only decent solution. In fact, understanding the more subtle mechanics of self-image — what affects our self schemas, what they affect, and how — was something I neglected for a long time because I saw self-image as a solved problem. Yes, I developed a cached view of myself as unaffected by self-image constraints. I would have been embarrassed to notice such dependencies, so I didn't. The irony, eh?
I'm writing this because I wouldn't be surprised to find others here developing, or having developed, this blind spot...
Credence calibration game FAQ
Hey rationality friends, I just made this FAQ for the credence calibration game. So if you have people you'd like to introduce to it --- for example, to get them used to thinking of belief strengths as probabilities --- now is a good time :)
Voting is like donating thousands of dollars to charity
Summary: People often say that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.5 million. For me, the value came out to around $56,000. So I figure something on the order of $1000 is a reasonable evaluation (after all, I'm writing this post because the number turned out to be large according to this method, so regression to the mean suggests I err on the conservative side), and that'd be enough to make me do it.
Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too.
I find this much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty. And voting for selfish reasons is still almost completely worthless, in terms of direct effect. If you're on the way to the polls only to vote for the party that will benefit you the most, you're better off using that time to earn $5 mowing someone's lawn. But if you're even a little altruistic... vote away!
Time for a Fermi estimate
Below is an example Fermi calculation for the value of voting in the USA. Of course, the estimates are all rough and fuzzy, so I'll be conservative, and we can adjust upward based on your opinion.
I'll be estimating the value of voting in marginal expected altruistic dollars, the expected number of dollars being spent in a way that is in line with your altruistic preferences.1 If you don't like measuring the altruistic value of the outcome in dollars, please consider making up your own measure, and keep reading. Perhaps use the number of smiles per year, or number of lives saved. Your measure doesn't have to be total or average utilitarian, either; as long as it's roughly commensurate with the size of the country, it will lead you to a similar conclusion in terms of orders of magnitude.
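The shape of the calculation can be sketched in a few lines. The specific numbers below are illustrative assumptions of mine, not the post's actual figures: a guessed chance that one vote is decisive, a guessed federal budget over a presidential term, and a guessed fraction of that budget spent better (by your altruistic lights) if your preferred side wins.

```python
# Toy Fermi estimate of the altruistic value of one vote, in
# "marginal expected altruistic dollars". All three inputs are
# illustrative assumptions; plug in your own.

p_decisive = 1e-7        # assumed chance your vote decides the election
                         # (much higher in swing states)
budget_per_term = 14e12  # assumed federal spending over one 4-year term, USD
better_fraction = 0.001  # assumed fraction of that budget spent better,
                         # by your altruistic lights, if your side wins

expected_value = p_decisive * budget_per_term * better_fraction
print(f"expected altruistic value of one vote: ${expected_value:,.0f}")
# -> expected altruistic value of one vote: $1,400
```

With these made-up inputs the answer lands in the ~$1000 range the summary mentions; the point is that the product of a tiny probability and an enormous stake can still be large, and that each factor is worth estimating separately in whatever altruistic units you prefer.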