All of Turgurth's Comments + Replies

I saw this same query in the last open thread. I suspect you aren't getting any responses because the answer is long and involved. I don't have time to give you the answer in full either, so I'll give you the quick version:

I am in the process of signing up with Alcor, because after ten years of both observing cryonics organizations myself and reading what other people say about them, Alcor has given a series of cues that they are the more professional cryonics organization.

So, the standard advice is: if you are young and healthy with a long life expectancy, ... (read more)

2ImmortalRationalist
If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?

Check out this FDA speculation.

Scott Alexander comments here.

Taken. Wasn't bothered by the length -- could be even longer next time.

I took the survey, and wanted it to be longer.

I wanted to love this post, but stylistic issues got in the way.

It read too much like a gwern essay: certainly interesting, but in need of a summary and a guide for how it is practically applicable. A string of highlights and commentary with no clear underlying organization and conclusion is not optimally useful.

That being said, I appreciate you taking the time to create this post, as well as your call for constructive criticism.

It's not specifically rationalist, but Dune is what first comes to mind for "smart characters that win", at least in the first book.

Well, does time permit?

"Not being able to get the future exactly right doesn’t mean you don’t have to think about it."

--Peter Thiel

Hmmm. You have some interesting ideas regarding cryonics funding that sound promising, but to be safe I would talk to Alcor directly, specifically Diane Cremeens, to ensure ahead of time that they'll work for them.

0brazil84
That's probably a good idea. But on the other hand, what are the chances that they would turn down a certified check for $200k from someone who has a few months to live? I suppose one could argue that setting things up years in advance so that Alcor controls the money makes it difficult for family members to obstruct your attempt to get frozen.

That does sound about right, but with two potential caveats: one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars. However, my risk of dying of heart disease is raised by a strong family history.

There may also be financial considerations. Cancer almost certainly and often heart disease and stroke take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insuranc... (read more)

0brazil84
Yes, I totally agree. Similarly, your chances of being murdered are probably a lot lower than average if you live in an affluent neighborhood and have a spouse who has never assaulted you. Suicide is an interesting issue: I would like to think that my chances of committing suicide are far lower than average, but painful experience has taught me that it's very easy to be overconfident in predicting one's own actions.

Yes, but there is an easy way around this: just buy life insurance while you are still reasonably healthy. Actually, this is what got me thinking about the issue: I was recently buying life insurance to protect my family. When I got the policy, I noticed that it had an "accelerated death benefit rider," i.e. if you are certifiably terminally ill, you can get a $100k advance on the policy proceeds.

When you think about it, that's not the only way to raise substantial money in such a situation. For example, if you were terminally ill, your spouse probably wouldn't mind if you borrowed $200k against the house for cryopreservation if she knew that when you finally kicked the bucket she would get a check for a million from the insurance company. So the upshot is that from a selfish perspective, there is a lot to be said for taking a "wait and see" approach.

(There's another issue I thought of: like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)

I agree with this to an extent.
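The "wait and see" funding arithmetic in that comment can be sketched out roughly. The dollar figures are the commenter's own illustrative numbers; the cryopreservation cost is an assumed ballpark for the sketch, not a quoted Alcor price:

```python
# Rough sketch of the "wait and see" funding math.
# Dollar figures are the commenter's illustrative numbers; the
# cryopreservation cost is an assumed ballpark, not a quoted price.

accelerated_benefit = 100_000    # advance if certifiably terminally ill
home_equity_loan = 200_000       # borrowed against the house
policy_payout = 1_000_000        # what the spouse receives at death

funds_available = accelerated_benefit + home_equity_loan
cryopreservation_cost = 200_000  # assumed ballpark cost

# A terminal illness that takes months to kill leaves time to raise
# the money, so pre-funding isn't strictly required in this scenario.
print(funds_available >= cryopreservation_cost)  # True
```

The sketch only covers the slow-death scenario; the whole point of the surrounding thread is that sudden incapacitation breaks this plan.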

I don't think it's been asked before on Less Wrong, and it's an interesting question.

It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later.

The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since this would increase the probability that there will exist... (read more)

1brazil84
Thank you for your response; I suppose one would need to estimate the probability of dying in such a way that having previously joined Alcor would make a difference.

Perusing Ben Best's web site and using some common sense, it seems that the most likely causes of death for a reasonably healthy middle-aged man are cancer, stroke, heart attack, accident, suicide, and homicide. We need to estimate the probability of sudden serious loss of faculties followed by death. It seems that for cancer, that probability is extremely small. For stroke, heart attack, and accidents, one could look it up, but just guesstimating a number based on general observations, I would guess roughly 10 to 15 percent. Suicide and homicide are special cases: I imagine that in those cases I would be autopsied, so there would be much less chance of cryopreservation even if I had already joined Alcor.

Of course, even if you pre-joined Alcor, there is still a decent chance that for whatever reason they would not be able to preserve you after, for example, a fatal accident which killed you a few days later. So all told, my rough estimate is that the improvement in my chances of being cryopreserved upon death if I joined Alcor now, as opposed to taking a wait-and-see approach, is 5% at best. Does that sound about right?
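The Fermi arithmetic behind that "5% at best" can be written out explicitly. Both inputs below are guesses from the comment, not data, and the preservation-success figure is an assumed placeholder chosen to reproduce the stated conclusion:

```python
# Fermi sketch of the marginal value of joining Alcor now vs. waiting.
# Both inputs are guesses from the surrounding comment, not data.

# Chance of sudden serious loss of faculties followed by death
# (stroke, heart attack, accident), where only a pre-existing
# membership would help -- guessed at 10 to 15 percent:
p_sudden_incapacity = 0.125

# Even with membership, chance Alcor can actually preserve you after
# such an event (assumed placeholder):
p_preservation_succeeds = 0.4

improvement = p_sudden_incapacity * p_preservation_succeeds
print(round(improvement, 3))  # 0.05, i.e. the "5% at best" above
```

The product structure makes the estimate's sensitivity clear: halving either input halves the claimed benefit of joining early.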

I just want to mention how much I appreciate these threads: this is my most trusted source of media recommendations! Thank you to all involved.

Good idea. (Note, if you haven't seen the film, here's a spoiler-heavy synopsis).

My line of thought:

Gur pngnfgebcuvp (yngre erirnyrq gb or rkvfgragvny) evfx bs gur svyz vf gur svpgvbany bar bs tvnag zbafgref, be xnvwh, nggnpxvat uhzna cbchyngvba pragref, ohg greebevfz hfvat shgher jrncbaf, nagvovbgvp erfvfgnapr, pyvzngr punatr, rgp. pna pyrneyl or fhofgvghgrq va.

Tvnag zbafgref pna jbex nf na rfcrpvnyyl ivfpreny qenzngvmngvba bs pngnfgebcuvp evfxf, nf abg bayl Cnpvsvp Evz ohg gur bevtvany, harqvgrq, Wncnarfr-ynathntr irefvba bs gur svyz Tbwven (gur frzvany... (read more)

Pacific Rim pleasantly surprised me. I could list the manifold ways this movie dramatizes how to correctly deal with catastrophic risks, but I don't want to spoil it for you.

Plus it is awesome, in both senses of the word.

0James_Miller
Yes, and it's kid safe. My 8-year-old loved it.
4NancyLebovitz
I'm curious about your line of thought. Would you be willing to post it in rot13?
2[anonymous]
Sorry, who did what correctly now? Do you mean the guys who decided that the best way to deal with big slow monsters coming out of the Mariana Trench is to let them reach coastal cities and then send giant robots to wrestle with them? With pilots inside? And who later figured out that nuking the portal would be a good idea, but decided to use the same giant robots (still with pilots inside) as a delivery system? All the while having perfect undersea radio communication, which we don't have? I'd say there was only one sane person in the movie, the guy who was selling kaiju parts as cures for delicate diseases. He'd probably be able to save the world without the military and make a profit in the process.
0[anonymous]
Indeed, the movie had at least one character who showed how to correctly deal with that particular risk. He'd probably be able to deal with it alone even if the military wasn't involved. Buy boats, kill kaijus at sea, sell their parts as quack cures for delicate conditions, repeat. Though more realistically, the military would get involved. They would cordon off a chunk of the ocean and use it for some really fun target practice. On some days they'd invite the CNN over. Big slow monsters appearing at a monitored spot in the middle of the ocean aren't a threat to mankind, they're more like a surprise gift. It would take a special kind of intelligence to let the monsters reach coastal cities, and then send giant robots to wrestle with them. With pilots inside. Yeah.

Google Translate gets me "flight of death" or "wants death". "Flight of death" might refer to AK. More interestingly, "wants death" would make no sense in reference to himself wanting death, but it would make sense in reference to Voldemort wanting the deaths of others. There's some possible support for your interpretation there.

Your heuristic for getting the news checks out in my experience, so that seems worth trying.

I wouldn't be surprised if we've both seen plenty of Snowden/NSA on Hacker News.

Thanks for the links.

And while I agree with you that quitting the news would likely be intellectually hygienic and emotionally healthy, it would probably also work as an anti-akrasia tactic in the specific case of cutting out something I often turn to to avoid actual work. Similar to the "out of sight, out of mind" principle, but more "out of habit, out of mind".

2Vladimir_Golovin
Mainstream news is a dopamine loop magnified by an intermittent reinforcement schedule. You keep clicking for more and checking the sources every 10 minutes. Plus, you can't break out of the loop intellectually, because the news content switches you from the 'intellectual mode' into the 'tribal mode' or even the 'imminent danger' mode. In the absence of mainstream news, technical news alone was never that addictive to me.

Thanks for doing this, though I suggest moving it to the Discussion section in the hope of getting more responses there.

I'm curious (nonjudgementally): do you get your news now from non-mainstream sources, or do you stay away from news altogether? I ask because I am considering trying this anti-akrasia tactic myself, but am unsure regarding the details.

8Vladimir_Golovin
I don't read mainstream news sources, and I don't participate in social networks, but I do read technical, professional, and scientific news. Here's how I get the news: if a mainstream story is important, I'll hear about it from co-workers or family. Also, high-magnitude stories (e.g. Snowden / NSA, or yesterday's 5-year sentence for Alexei Navalny) usually appear on non-mainstream news sources.

The point of quitting news is not to stop being aware of what happens around you. The point is to avoid its negative effects (scrambling the mind, incorrectly perceiving the environment as more dangerous than it is / overestimating the probability of dangerous events happening to me, cortisol release, etc.).

Here are some good articles on the topic (you may recognize some of the authors):

* http://www.guardian.co.uk/media/2013/apr/12/news-is-bad-rolf-dobelli
* http://joel.is/post/31582795753/the-power-of-ignoring-mainstream-news
* http://www.aaronsw.com/weblog/001226

Also, I don't think quitting news is an anti-akrasia tactic. It's more similar to hygiene, or to not eating fast food.

This sounds promising. As a LWian living many states away, I'd love to see a synopsis posted if it's not too much trouble. There is a hunger for more instrumental rationality on this website.

To add to Principle #5, in a conversational style: "if something exists, that something can be quantified. Beauty, love, and joy are concrete and measurable; you just fail at it. To be fair, you lack the scientific and technological means of doing so, but - failure is failure. You failing at quantification does not devalue something of value."

I suppose the first step would be being more instrumentally rational about what I should be instrumentally rational about. What are the goals that are most worth achieving, or, what are my values?

Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.

1NancyLebovitz
What do you want to be more rational about?
1MrMind
Reading "Diaminds" promises to make me a better rationalist, but so far I can't say that with certainty; I'm only on the second chapter (source: a recommendation here on LW; also, the first chapter is dedicated to explaining the methodology, and the authors seem to be good rationalists, very aware of all the biases involved). Also, "dual n-back training" via dedicated software improves short-term memory, which seems to have a direct impact on our fluid intelligence (source: a vaguely remembered discussion here on LW, plus the bulletproofexec blog).

Try some exposure therapy to whatever it is you're often afraid of. Can't think of what you're often afraid of? I'd be surprised if you're completely immune to every common phobia.

Very interested.

Also, here's a bit of old discussion on the topic I found interesting enough to save.

If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.

I don't think there are any such community pressures, as long as a summary accompanies the link.

I recently noticed "The Fable of the Dragon-Tyrant" under the front page's Featured Articles section, which caused me to realize that there's more to Featured Articles than the Sequences alone. This particular article (an excellent one, by the way) is also not from Less Wrong itself, yet is obviously relevant to it; it's hosted on Nick Bostrom's personal site.

I'm interested in reading high-quality non-Sequences articles (I'm making my way through the Sequences separately using the [SEQ RERUN] feature) relevant to Less Wrong that I might have miss... (read more)

5Douglas_Knight
The featured articles are controlled by the wiki, and thus the history is accessible, if awkward.

Michaelcurzi's How to avoid dying in a car crash is relevant. Bentarm's comment on that thread makes an excellent point regarding coronary heart disease.

There is also Eliezer Yudkowsky's You Only Live Twice and Robin Hanson's We Agree: Get Froze on cryonics.

I have a few questions, and I apologize if these are too basic:

1) How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?

2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?

3) Is there much ten... (read more)

8CarlShulman
Different people have different views. For myself, I care more about existential risks than catastrophic risks, but not overwhelmingly so. A global catastrophe would kill me and my loved ones just as dead. So from the standpoint of coordinating around mutually beneficial policies, or "morality as cooperation," I care a lot about catastrophic risk affecting current and immediately succeeding generations. However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

Yes. They spend more time on it, relatively speaking.

Given that powerful AI technologies are achievable in the medium to long term, UFAI would seem to me to be a rather large share of the x-risk, and still a big share of the catastrophic risk, so that speedups are easily outweighed by safety gains.

One possible alternative would be choosing to appear in the Americas.