That's great and all, but with all due respect:
Fuck. That. Noise.
Regardless of the odds of success and what the optimal course of action actually is, I would be very hard pressed to say that I'm trying to "help humanity die with dignity". Regardless of what the optimal action should be given that goal, on an emotional level, it's tantamount to giving up.
Before even getting into the cost/benefit of that attitude, in the worlds where we do make it out alive, I don't want to look back and see a version of me where that became my goal. I also don't think that if that were my goal, I would fight nearly as hard to achieve it. I want a catgirl volcano lair, not "dignity". So when I try to negotiate with my monkey brain to expend precious calories, the plan had better involve the former, not the latter. I suspect that something similar applies to others.
I don't want to hear about genre-savviness from the de facto founder of the community that gave us HPMOR!Harry and the Comet King after he wrote this post, because it's so antithetical to the attitude present in those characters and in posts like this one.
I also don't want to hear about second-order effects when, as best as I can tell, the att...
I think there's an important point about locus of control and scope. You can imagine someone who, early in life, decides that their life's work will be to build a time machine, because the value of doing so is immense (turning an otherwise finite universe into an infinite one, for example). As time goes on, they notice being more and more pessimistic about their prospects of doing that, but have some block against giving up on an emotional level. The stakes are too high for doomerism to be entertained!
But I think they overestimated their locus of control when making their plans, and they should have updated as evidence came in. If they reduced the scope of their ambitions, they might switch from plans that are crazy because they have to condition on time travel being possible to plans that are sane (because they can condition on actual reality). Maybe they just invent flying cars instead of time travel, or whatever.
I see this post as saying: "look, people interested in futurism: if you want to live in reality, this is where the battle line actually is. Fight your battles there, don't send bombing runs behind miles of anti-air defenses and wonder why you don't seem to be getting any...
"To win any battle, you must fight as if you are already dead.” — Miyamoto Musashi.
I don't in fact personally know we won't make it. This may be because I'm more ignorant than Eliezer, or may be because he (or his April first identity, I guess) is overconfident on a model, relative to me; it's hard to tell.
Regardless, the bit about "don't get psychologically stuck having-to-(believe/pretend)-it's-gonna-work" seems really sane and healthy to me. Like falling out of an illusion and noticing your feet on the ground. The ground is a more fun and joyful place to live, even when things are higher probability of death than one is used to acting-as-though, in my limited experience. More access to creativity near the ground, I think.
But, yes, I can picture things under the heading "ineffective doomerism" that seem to me like they suck. Like, still trying to live in an ego-constructed illusion of deferral, and this time with "and we die" pasted on it, instead of "and we definitely live via such-and-such a plan."
I think I have more access to all of my emotional range nearer the ground, but this sentence doesn't ring true to me.
The ground is a more fun and joyful place to live, even when things are higher probability of death than one is used to acting-as-though, in my limited experience.
As cheesy as it is, this is the correct response. I'm a little disappointed that Eliezer would resort to doomposting like this, but at the same time it's to be expected from him after some point. The people with remaining energy need to understand his words are also serving a personal therapeutic purpose and press on.
Some people can think there's next to no chance and yet go out swinging. I plan to, if I reach the point of feeling hopeless.
Yeah -- I love AI_WAIFU's comment, but I love the OP too.
To some extent I think these are just different strategies that will work better for different people; both have failure modes, and Eliezer is trying to guard against the failure modes of 'Fuck That Noise' (e.g., losing sight of reality), while AI_WAIFU is trying to guard against the failure modes of 'Try To Die With More Dignity' (e.g., losing motivation).
My general recommendation to people would be to try different framings / attitudes out and use the ones that empirically work for them personally, rather than trying to have the same lens as everyone else. I'm generally a skeptic of advice, because I think people vary a lot; so I endorse the meta-advice that you should be very picky about which advice you accept, and keep in mind that you're the world's leading expert on yourself. (Or at least, you're in the best position to be that thing.)
Cf. 'Detach the Grim-o-Meter' versus 'Try to Feel the Emotions that Match Reality'. Both are good advice in some contexts, for some people; but I think there's some risk from taking either strategy too far, especially if you aren't aware of the other strategy as a viable option.
I interpreted AI_WAIFU as pushing back against a psychological claim ('X is the best attitude for mental clarity, motivation, etc.'), not as pushing back against an AI-related claim like P(doom). Are you interpreting them as disagreeing about P(doom)? (If not, then I don't understand your comment.)
If (counterfactually) they had been arguing about P(doom), I'd say: I don't know AI_WAIFU's level of background. I have a very high opinion of Eliezer's thinking about AI (though keep in mind that I'm a co-worker of his), but EY is still some guy who can be wrong about things, and I'm interested to hear counter-arguments against things like P(doom). AGI forecasting and alignment are messy, pre-paradigmatic fields, so I think it's easier for field founders and authorities to get stuff wrong than it would be in a normal scientific field.
The specific claim that Eliezer's P(doom) is "informed by conversations with dozens of people about the problem" (if that's what you were claiming) seems off to me. Like, it may be technically true under some interpretation, but (a) I think of Eliezer's views as primarily based on his own models, (b) I'd tentatively guess those models are much more based on things like 'reading textbooks' and 'thinking things through himself' than on 'insights gleaned during back-and-forth discussions with other people', and (c) most people working full-time on AI alignment have far lower P(doom) than Eliezer.
Ah, I see. I think Eliezer has lots of relevant experience and good insights, but I still wouldn't currently recommend the 'Death with Dignity' framing to everyone doing good longtermist work, because I just expect different people's minds to work very differently.
Agreed. Also here’s the poem that goes with that comment:
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.

Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.

Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.

Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.

And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.
I totally empathize with Eliezer, and I’m afraid that I might be similarly burned out if I had been trying this for as long.
But that’s not who I want to be. I want to be Harry who builds a rocket to escape Azkaban, the little girl that faces the meteor with a baseball bat, and the...
Makes me think of the following quote. I'm not sure how much I agree with or endorse it, but it's something to think about.
The good fight has its own values. That it must end in irrevocable defeat is irrelevant.
— Isaac Asimov, It's Been A Good Life
Most fictional characters are optimized to make for entertaining stories, which is why "generalizing from fictional evidence" is usually a failure mode. The HPMOR Harry and the Comet King were optimized by two rationalists as examples of rationalist heroes — and are active in allegorical situations engineered to say something that rationalists would find to be “of worth” about real-world problems.
They are appealing precisely because they encode assumptions about what a real-world, rationalist “hero” ought to be like. Or at least, that's the hope. So, they can be pointed to as “theses” about the real world by Yudkowsky and Alexander, no different from blog posts that happen to be written as allegorical stories, and if people found the ideas encoded in those characters more convincing than the ideas encoded in the present April Fools' Day post, that's fair enough.
Not necessarily correct on the object-level, but, if it's wrong, it's a different kind of error from garden-variety “generalizing from fictional evidence”.
As fictional characters popular among humans, what attitude is present in them is evidence for what sort of attitude humans like to see or inhabit. As author of those characters, Yudkowsky should be aware of this mechanism. And empirically, people with accurate beliefs and positive attitudes outperform people with accurate beliefs and negative attitudes. It seems plausible Yudkowsky is aware of this as well.
"Death with dignity" reads as an unnecessarily negative attitude to accompany the near-certainty of doom. Heroism, maximum probability of catgirls, or even just raw log-odds-of-survival seem like they would be more motivating than dignity without sacrificing accuracy.
Like, just substitute all instances of 'dignity' in the OP with 'heroism' and naively I would expect this post to have a better impact (/be more dignified/be more heroic), except insofar as it might give a less accurate impression of Yudkowsky's mood. But few people have actually engaged with him on that front.
Seeing this post get so strongly upvoted makes me feel like I'm going crazy.
This is not the kind of content I want on LessWrong. I did not enjoy it, I do not think it will lead me to be happier or more productive toward reducing x-risk, I don't see how it would help others, and it honestly doesn't even seem like a particularly well done version of itself.
Can people help me understand why they upvoted?
For whatever it is worth, this post, along with reading the unworkable alignment strategy in the ELK report, has made me realize that we actually have no idea what to do, and has finally convinced me to try to solve alignment; I encourage everyone else to do the same. For some people, knowing that the world is doomed by default and that we can't just expect the experts to save it is motivating. If that was his goal, he achieved it.
Certainly for some people (including you!), yes. For others, I expect this post to be strongly demotivating. That doesn’t mean it shouldn’t have been written (I value honestly conveying personal beliefs and expressing diversity of opinion enough to outweigh the downsides), but we should realistically expect this post to cause psychological harm for some people, and it could also potentially make interaction and PR with those who don’t share Yudkowsky’s views harder. Despite some claims to the contrary, I believe (through personal experience in PR) that expressing radical honesty is not strongly valued outside the rationalist community, and that interaction with non-rationalists can be extremely important, even to potentially world-saving levels. Yudkowsky, for all of his incredible talent, is frankly terrible at PR (at least historically), and may not be giving proper weight to its value as a world-saving tool. I’m still thinking through the details of Yudkowsky’s claims, but expect me to write a post here in the near future giving my perspective in more detail.
I don't think "Eliezer is terrible at PR" is a very accurate representation of historical fact. It might be a good representation of something else. But it seems to me that deleting Eliezer from the timeline would probably result in a world where far far fewer people were convinced of the problem. Admittedly, such questions are difficult to judge.
I think "Eliezer is bad at PR" rings true in the sense that he belongs in the cluster of "bad at PR"; you'll make more correct inferences about Eliezer if you cluster him that way. But on historical grounds, he seems good at PR.
Eliezer is "bad at PR" in the sense that there are lots of people who don't like him. But that's mostly irrelevant. The people who do like him like him enough to donate to his foundation and all of the foundations he inspired.
My personal experience is that the people who actively dislike Eliezer are specifically the people who were already set on their path; they dislike Eliezer mostly because he's telling them to get off that path.
I could be wrong, however; my personal experience is undoubtedly very biased.
I’ll tell you that one of my brothers (who I greatly respect) has decided not to be concerned about AGI risks specifically because he views EY as being a very respected “alarmist” in the field (which is basically correct), and also views EY as giving off extremely “culty” and “obviously wrong” vibes (with Roko’s Basilisk and EY’s privacy around the AI boxing results being the main examples given), leading him to conclude that it’s simply not worth engaging with the community (and their arguments) in the first place. I wouldn’t personally engage with what I believe to be a doomsday cult (even if they claim that the risk of ignoring them is astronomically high), so I really can’t blame him.
I’m also aware of an individual who has enormous cultural influence, and was interested in rationalism, but heard from an unnamed researcher at Google that the rationalist movement is associated with the alt-right, so they didn’t bother looking further. (Yes, that’s an incorrect statement, but it came from the widespread [possibly correct?] belief that Peter Thiel is both alt-right and has/had close ties with many prominent rationalists.) This indicates a general lack of control of the narrative surrounding the movement, and has likely directly led to needlessly antagonistic relationships.
That's putting it mildly.
The problems are well known. The mystery is why the community doesn't implement obvious solutions. Hiring PR people is an obvious solution. There's a posting somewhere in which Anna Salamon argues that there is some sort of moral hazard involved in professional PR, but never explains why, and everyone agrees with her anyway.
If the community really and literally is about saving the world, then having a constant stream of people who are put off, or even becoming enemies is incrementally making the world more likely to be destroyed. So surely it's an important problem to solve? Yet the community doesn't even like discussing it. It's as if maintaining some sort of purity, or some sort of impression that you don't make mistakes is more important than saving the world.
Presumably you mean this post.
If the community really and literally is about saving the world, then having a constant stream of people who are put off, or even becoming enemies is incrementally making the world more likely to be destroyed. So surely it's an important problem to solve? Yet the community doesn't even like discussing it. It's as if maintaining some sort of purity, or some sort of impression that you don't make mistakes is more important than saving the world.
I think there are two issues.
First, some of the 'necessary to save the world' things might make enemies. If it's the case that Bob really wants there to be a giant explosion, and you think giant explosions might kill everyone, you and Bob are going to disagree about what to do, and Bob existing in the same information environment as you will constrain your ability to share your preferences and collect allies without making Bob an enemy.
Second, this isn't an issue where we can stop thinking, and thus we need to continue doing things that help us think, even if those things have costs. In contrast, in a situation where you know what plan you need to implement, you can now drop lots of your ability to think in order ...
Eliezer is extremely skilled at capturing attention. One of the best I've seen, outside of presidents and some VCs.
However, as far as I've seen, he's terrible at getting people to do what he wants.
Which means that he has a tendency to attract people to a topic he thinks is important, but they never do what he thinks should be done, which seems to lead to a feeling of despondence.
This is where he really differs from those VCs and presidents- they're usually far more balanced.
For an example of an absolute genius in getting people to do what he wants, see Sam Altman.
I very much had the same experience, making me decide to somewhat radically re-orient my life.
I primarily upvoted it because I like the push to 'just candidly talk about your models of stuff':
I think we die with slightly more dignity - come closer to surviving, as we die - if we are allowed to talk about these matters plainly. Even given that people may then do unhelpful things, after being driven mad by overhearing sane conversations. I think we die with more dignity that way, than if we go down silent and frozen and never talking about our impending death for fear of being overheard by people less sane than ourselves.
I think that in the last surviving possible worlds with any significant shred of subjective probability, people survived in part because they talked about it; even if that meant other people, the story's antagonists, might possibly hypothetically panic.
Also because I think Eliezer's framing will be helpful for a bunch of people working on x-risk. Possibly a minority of people, but not a tiny minority. Per my reply to AI_WAIFU, I think there are lots of people who make the two specific mistakes Eliezer is warning about in this post ('making a habit of strategically saying falsehoods' and/or 'making a habit of adopting optimistic assumptions on the ...
Given how long it took me to conclude whether these were Eliezer's true thoughts or a representation of his predicted thoughts in a somewhat probable future, I'm not sure whether I'd use the label "candid" to describe the post, at least without qualification.
While the post does contain a genuinely useful way of framing near-hopeless situations and a nuanced and relatively terse lesson in practical ethics, I would describe the post as an extremely next-level play in terms of its broader purpose (and leave it at that).
I... upvoted it because it says true and useful things about how to make the world not end and proposes an actionable strategy for how to increase our odds of survival while relatively thoroughly addressing a good number of possible objections. The goal of LessWrong is not to make people happier, and the post outlines a pretty clear hypothesis about how it might help others (1. by making people stop working on plans that condition on lots of success in a way that gets ungrounded from reality, 2. by making people not do really dumb unethical things out of desperation).
Ditto.
Additionally, the OP seems to me good for communication: Eliezer had a lot of bottled up thoughts, and here put them out in the world, where his thoughts can bump into other people who can in turn bump back into him.
AFAICT, conversation (free, open, "non-consequentialist" conversation, following interests and what seems worth sharing rather than solely backchaining from goals) is one of the places where consciousness and sanity sometimes enter. It's right there next to "free individual thought" in my list of beautiful things that are worth engaging in and safeguarding.
I upvoted it because I think it's true and I think that this is a scenario where 'epistemic rationality' concerns trump 'instrumental rationality' concerns.
I upvoted it because I wish I could give Eliezer a hug that actually helps make things better, and no such hug exists but the upvote button is right there.
I strong-upvoted this post because I read a private draft by Eliezer which is a list of arguments why we're doomed. The private draft is so informative that, if people around me hadn't also read and discussed it, I would have paid several months of my life to read it. It may or may not be published eventually. This post, being a rant, is less useful, but it's what we have for now. It's so opaque and confusing that I'm not even sure if it's net good, but if it's 5% as good as the private document it still far surpasses my threshold for a strong upvote.
EDIT: it may or may not be published eventually
Upvoted because it's important to me to know what EY thinks the mainline-probability scenario looks like and what are the implications.
If that's what he and MIRI think is the mainline scenario, then that's what I think is the mainline scenario, because their quality of reasoning and depth of insight seems very high whenever I have an opportunity to examine it.
Personally, I am not here (or most other places) to "enjoy myself" or "be happier". Behind the fool's licence of April 1, the article seems to me to be saying true and important things. If I had any ideas about how to solve the AGI problem that would pass my shoulder Eliezer test, I would be doing them all the more. However, lacking such ideas, I only cultivate my garden.
I have a weird bias towards truth regardless of consequences, and upvoted out of emotional reflex. Also I love Eliezer's writing and it is a great comfort to me to have something fun to read on the way to the abyss.
I disagree with Eliezer about half the time, including about very fundamental things, but I strongly upvoted the post, because that attitude gives both the best chance of success conditional on the correct evaluation of the problem, and it does not kill you if the evaluation is incorrect and the x-risk in question is an error in the model. It is basically a Max EV calculation for most reasonable probability distributions.
I upvoted the post despite disagreeing with it (I believe the success probability is ~ 30%). Because, it seems important for people to openly share their beliefs in order to maximize our collective ability to converge on the truth. And, I do get some potentially valuable information from the fact that this is what Yudkowsky beliefs (even while disagreeing).
Hi, I'm always fascinated by people with success probabilities that aren't either very low or 'it'll probably be fine'.
I have this collection of intuitions (no more than that):
(1) 'Some fool is going to build a mind',
(2) 'That mind is either going to become a god or leave the fools in position to try again, repeat',
(3) 'That god will then do whatever it wants'.
It doesn't seem terribly relevant these days, but there's another strand that says:
(4) 'we have no idea how to build minds that want specific things' and
(5) 'Even if we knew how to build a mind that wanted a specific thing, we have no idea what would be a good thing' .
These intuitions don't leave me much room for optimism, except in the sense that I might be hopelessly wrong and, in that case, I know nothing and I'll default back to 'it'll probably be fine'.
Presumably you're disagreeing with one of (1), (2), or (3) and one of (4) or (5).
Which ones, and where does the 30% come from?
I believe that we might solve alignment in time and aligned AI will protect us from unaligned AI. I'm not sure how to translate it to your 1-3 (the "god" will do whatever it wants, but it will want what we want so there's no problem). In terms of 4-5, I guess I disagree with both or rather disagree that this state of ignorance will necessarily persist.
Shouldn't someone (some organization) be putting a lot of effort and resources into this strategy (quoted below) in the hope that AI timelines are still long enough for the strategy to work? With enough resources, it should buy at least a few percentage of non-doom probability (even now)?
Given that there are known ways to significantly increase the number of geniuses (i.e., von Neumann level, or IQ 180 and greater), by cloning or embryo selection, an obvious alternative Singularity strategy is to invest directly or indirectly in these technologies, and to try to mitigate existential risks (for example by attempting to delay all significant AI efforts) until they mature and bear fruit (in the form of adult genius-level FAI researchers).
For starters, why aren't we already offering the most basic version of this strategy as a workplace health benefit within the rationality / EA community? For example, on their workplace benefits page, OpenPhil says:
We offer a family forming benefit that supports employees and their partners with expenses related to family forming, such as fertility treatment, surrogacy, or adoption. This benefit is available to all eligible employees, regardless of age, sex, sexual orientation, or gender identity.
Seems a small step from there to making "we cover IVF for anyone who wants (even if your fertility is fine) + LifeView polygenic scores" into a standard part of the alignment-research-agency benefits package. Of course, LifeView only offers health scores, but they will also give you the raw genetic data. Processing this genetic data yourself, DIY style, could be made easier -- maybe there could be a blog post describing how to use an open-source piece of software and where to find the latest version of EA3, and so forth.
All this might be a lot of trouble for (if you are pessimistic about PGT's potential) a rather small benefit. We are not talking Von Neumanns here. ...
Embryo selection for intelligence does not require government permission to do. You can do it right now. You only need the models and the DNA. For months I've been planning to release a website that allows people to upload the genetic data they get from LifeView, but I haven't gotten around to finishing it, for the same reason I think others aren't doing it.
Part of me wants to not post this just because I want to be the first to make the website, but that seems immoral, so, here.
Both cloning and embryo selection are not illegal in many places, including the US. (This article suggests that for cloning you may have to satisfy the FDA's safety concerns, which perhaps ought to be possible for a well-resourced organization.) And you don't have to raise them specifically for AI safety work. I would probably announce that they will be given well-rounded educations that will help them solve whatever problems that humanity may face in the future.
The first thing I can remember is that I learned at age 3 that I would die someday, and I cried about it. I got my hopes up about radical technological progress (including AGI and biotech) extending lifespan as a teenager, and I lost most of that hope (and cried the most I ever had in my life) upon realizing that AGI probably wouldn't save us during our lifetime, alignment was too hard.
In some sense this outcome isn't worse than what I thought was fated at age 3, though. I mean, if AGI comes too soon, then I and my children (if I have them) won't have the 70-80 year lifetimes I expected, which would be disappointing; I don't think AGI is particularly likely to be developed before my children die, however (minority opinion around here, I know). There's still some significant chance of radical life extension and cognitive augmentation from biotech assisted by narrow AI (if AGI is sufficiently hard, which I think it is, though I'm not confident). And as I expressed in another comment, there would be positive things about being replaced by a computationally massive superintelligence solving intellectual problems beyond my comprehension; I think that would comfort me if I were in th...
Mostly scalable blockchain systems at this point, I have some writing on the problem hosted at gigascaling.net.
The sort of thing I write about on my blog. Examples:
Just a reminder to everyone, and mostly to myself:
Not flinching away from reality is entirely compatible with not making yourself feel like shit. You should only try to feel like shit when that helps.
The anime protagonist just told everyone that there's no hope. I don't have a "don't feel like shit" button. Not flinching away from reality and not feeling like shit are completely incompatible in this scenario given my mental constitution. There are people who can do better, but not me.
I'm going to go drinking.
Given that, then yes, feeling like shit plus living-in-reality is your best feasible alternative.
Curling up into a ball and binge drinking till the eschaton probably is not though: see Q1.
It sounds like Eliezer is confident that alignment will fail. If so, the way out is to make sure AGI isn’t built. I think that’s more realistic than it sounds
1. LessWrong is influential enough to achieve policy goals
Right now, the Yann LeCun view of AI is probably more mainstream, but that can change fast.
LessWrong is upstream of influential thinkers. For example:
- Zvi and Scott Alexander read LessWrong. Let’s call folks like them Filter #1
- Tyler Cowen reads Zvi and Scott Alexander. (Filter #2)
- Malcolm Gladwell, a mainstream influencer, reads Tyler Cowen every morning (Filter #3)
I could’ve made a similar chain with Ezra Klein or Holden Karnofsky. All these chains put together are a lot of influence.
Right now, I think Eliezer’s argument (AI capabilities research will destroy the world) is blocked at Filter #1. None of the Filter #1 authors have endorsed it. Why should they? The argument relies on intuition. There’s no way for Filter #1 to evaluate it. I think that’s why Scott Alexander and Holden Karnofsky hedged, neither explicitly endorsing nor rejecting the doom theory.
Even if they believed Eliezer, Filter #1 authors need to communicate more than an intuition to Filter #2. Imagin...
I tend to agree that Eliezer (among others) underestimates the potential value of US federal policy. But on the other hand, note No Fire Alarm, which I mostly disagree with but which has some great points and is good for understanding Eliezer's perspective. Also note (among other reasons) that policy preventing AGI is hard because it needs to stop every potentially feasible AGI project but: (1) defining 'AGI research' in a sufficient manner is hard, especially when (2) at least some companies naturally want to get around such regulations, and (3) at least some governments are likely to believe there is a large strategic advantage to their state 'getting AGI first,' and arms control for software is hard because states wouldn't think they could trust each other and verifying compliance would probably be very invasive so states would be averse to such verification. Eliezer has also written about why he's pessimistic about policy elsewhere, though I don't have a link off the top of my head.
Eliezer gives alignment a 0% chance of succeeding. I think policy, if tried seriously, has >50%. So it's a giant opportunity that's gotten way too little attention
I'm optimistic about policy for big companies in particular. They have a lot to lose from breaking the law, they're easy to inspect (because there's so few), and there's lots of precedent (ITAR already covers some software). Right now, serious AI capabilities research just isn't profitable outside of the big tech companies
Voluntary compliance is also a very real thing. Lots of AI researchers are wealthy and high-status, and they'd have a lot to lose from breaking the law. At the very least, a law would stop them from publishing their research. A field like this also lends itself to undercover enforcement
I think an agreement with China is impossible now, because prominent folks don't even believe the threat exists. Two factors could change the art of the possible. First, if there were a widely known argument about the dangers of AI on which most public intellectuals agreed. Second, since the US has a technological lead, it could actually be to its advantage.
Look at gain-of-function research to see what a government moratorium on research actually produces. At first Baric feared that the moratorium would end his research. Then the NIH declared that his research isn't officially gain-of-function and continued funding him.
Regulating gain of function research away is essentially easy mode compared to AI.
A real Butlerian jihad would be much harder.
How about if you solve a ban on gain-of-function research first, and then move on to much harder problems like AGI? A victory on this relatively easy case would result in a lot of valuable gained experience, or, alternatively, allow foolish optimists to have their dangerous optimism broken over shorter time horizons.
foolish optimists to have their dangerous optimism broken
I’m pretty confused about your confidence in your assertion here. Have you spoken to people who’ve led successful government policy efforts, to ground this pessimism? Why does the IAEA exist? How did ARPA-E happen? Why is a massive subsidy for geothermal well within the Overton Window and thus in a bill Joe Manchin said he would sign?
Gain of function research is the remit of a decades-old incumbent bureaucracy (the NIH) that oversees bio policy, and doesn’t like listening to outsiders. There’s no such equivalent for AI; everyone in the government keeps asking “what should we do” and all the experts shrug or disagree with each other. What if they mostly didn’t?
Where is your imagined inertia/political opposition coming from? Is it literally skepticism that senators show up for work every day? What if I told you that most of them do and that things with low political salience and broad expert agreement happen all the time?
We probably have a ban on gain-of-function research in the bag, since it seems relatively easy to persuade intellectuals of the merits of the idea.
Is this the case? Like, we had a moratorium on federal funding (not even on doing it, just whether or not taxpayers would pay for it), and it was controversial, and then we dropped it after 3 years.
You might have thought that it would be a slam dunk after there was a pandemic for which lab leak was even a plausible origin, but the people who would have been considered most responsible quickly jumped into the public sphere and tried really hard to discredit the idea. I think this is part of a general problem, which is that special interests are very committed to an issue and the public is very uncommitted, and that balance generally favors the special interests. [It's Peter Daszak's life on the line for the lab leak hypothesis, and a minor issue to me.] I suspect that if it ever looks like "getting rid of algorithms" is seriously on the table, lots of people will try really hard to prevent that from becoming policy.
And to this I reply: Obviously, the measuring units of dignity are over humanity’s log odds of survival—the graph on which the logistic success curve is a straight line. A project that doubles humanity’s chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity.
Joking aside, this sort of objective function is interesting, and incoherent due to being non-VNM. E.g. if there's a 50/50 lottery between a 0.1% chance of survival and a 1% chance of survival, then how this lottery compares to a flat 0.5% chance of survival depends on the order in which the lottery is resolved. A priori, (50% of 0.1%, 50% of 1%) is equivalent to 0.55%, which is greater than 0.5%. On the other hand, the average log-odds (after selecting an element of this lottery) is 0.5 * log(0.1%) + 0.5 * log(1%) < log(0.5%).
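A minimal Python sketch (mine, not the commenter's) that just checks the arithmetic above:

```python
# Hedged illustration: a 50/50 lottery between 0.1% and 1% survival vs. a flat 0.5%.
import math

lottery = [0.001, 0.01]   # the two possible survival probabilities
flat = 0.005

expected_p = sum(lottery) / 2
expected_log = sum(math.log(p) for p in lottery) / 2

print(expected_p)                      # 0.0055 > 0.005: the lottery wins on raw probability
print(expected_log, math.log(flat))    # about -5.76 < -5.30: the lottery loses on expected log-odds
# The gap is Jensen's inequality: log is concave, so spreading out the probability hurts.
```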
This could lead to "negative VOI" situations where we avoid learning facts relevant to survival probability, because they would increase the variance of our odds, and that reduces expected log-odds since log is convex.
It's also unclear whether to treat different forms of uncertainty differently, e.g. is logical uncertainty treated differently from indexical/quantum uncertainty?
This could make sense as a way of evaluating policies chosen at exactly the present time, which would be equivalent to simply maximizing P(success). However, one has to be very careful with exactly how to evaluate odds to avoid VNM incoherence.
First-order play for log-probability over short-term time horizons, as a good idea in real life when probabilities are low, arises the same way as betting fractions of your bankroll arises as a good idea in real life, by:
That is, the pseudo-myopic version of your strategy is to bet fractions of your bankroll to win fractions of your bankroll. You don't take a bet with 51% probability of doubling your bankroll and 49% probability of bankruptcy, if you expect more opportunities to bet later, there aren't later opportunities that just give you lump-sum gains, and there's a point beyond which money starts to saturate for you.
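To make the bankroll analogy concrete, here is a small, purely illustrative Python sketch (my own numbers and framing, not Eliezer's), assuming a repeated even-money bet:

```python
# Compare "bet everything" vs. a small Kelly-style fraction on a 51% even-money bet.
import math

p_win = 0.51

def expected_bankroll(f):
    # Expected bankroll after one bet of fraction f, starting from 1.0.
    return p_win * (1 + f) + (1 - p_win) * (1 - f)

def expected_log_bankroll(f):
    lose = 1 - f
    if lose <= 0:
        return float("-inf")   # bankruptcy is unrecoverable if more bets are coming
    return p_win * math.log(1 + f) + (1 - p_win) * math.log(lose)

for f in (1.0, 0.02):          # all-in vs. the Kelly fraction 2*p_win - 1
    print(f, expected_bankroll(f), expected_log_bankroll(f))
# f = 1.0 maximizes expected bankroll (1.02) but expected log is -inf (49% chance of ruin);
# f = 0.02 gives a small positive expected log growth, which compounds over repeated bets.
```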
Hmm. It seems like if you really expected to be able to gain log-odds in expectation in repeated bets, you'd immediately update towards a high probability, due to conservation of expected evidence. But maybe a more causal/materialist model wouldn't do this because it's a fairly abstract consideration that doesn't have obvious material support.
I see why "improve log-odds" is a nice heuristic for iteratively optimizing a policy towards greater chance of success, similar to the WalkSAT algorithm, which solves a constraint problem by changing variables around to reduce the number of violated constraints (even though the actual desideratum is to have no violated constraints); this is a way of "relaxing" the problem in a way that makes iterative hill-climbing-like approaches work more effectively.
Relatedly, some RL approaches give rewards for hitting non-victory targets in a game (e.g. number of levels cleared or key items gained), even if the eventual goal is to achieve a policy that beats the entire game.
I think possibly the key conceptual distinction you want to make is between short-term play and long-term play. If I deliberately assume an emotional stance, often a lot of the benefit to be gained therefrom is how it translates long-term correct play into myopic play for the emotional reward, assuming of course that the translation is correct. Long-term, you play for absolute probabilities. Short-term, you chase after "dignity", aka stackable log-odds improvements, at least until you're out of the curve's basement.
I feel like this comment in particular is very clarifying with regards to the motivation of this stance. The benefit is that this imports recommendations of the ideal long-run policy into the short-run frame from which you're actually acting.
I think that should maybe be in the post somewhere.
Do you think the decision heuristic Eliezer is (ambiguously jokingly) suggesting gives different policy recommendations from the more naive "maxipok" or not? If so, where might they differ? If not, what's your guess as to why Eliezer worded the objective differently from Bostrom? Why involve log-probabilities at all?
I read this as being "maxipok", with a few key extensions:
Minor meta note: others are free to disagree, but I think it would be useful if this comment section were a bit less trigger-happy about downvoting comments into the negatives.
I'm normally pretty gung-ho about downvotes, but in this case I think there's more-than-usual value in people sharing their candid thinking, and too much downvoting can make people feel pressured to shape their words and thoughts in ways that others would approve of.
I'm more optimistic than Yudkowsky[1], and I want to state what I think are the reasons for the different conclusions (I'm going to compare my own reasoning to my understanding of Yudkowsky's reasoning, and the latter might be flawed), in a nutshell.
MIRI have been very gung-ho about using logic and causal networks. At the same time they mostly ignored learning theory.
I'll remark in passing that I disagree with this characterization of events. We looked under some street lights where the light was better, because we didn't think that others blundering around in the dark were really being that helpful - including because of the social phenomenon where they blundered around until a bad solution Goodharted past their blurry filters; we wanted to train people up in domains where wrong answers could be recognized as that by the sort of sharp formal criteria that inexperienced thinkers can still accept as criticism.
That was explicitly the idea at the time.
Thanks for responding, Eliezer.
I'm not sure to what extent you mean that (i) your research programme was literally a training exercise for harder challenges ahead vs (ii) your research programme was born of despair: looking under a street light had a better chance of success even though the keys were not especially likely to be there.
If you mean (i), then what made you give up on this plan? From my perspective, the training exercise played its role and perhaps outlived its usefulness, why not move on beyond it?
If you mean (ii), then why such pessimism from the get-go? I imagine you reasoning along the lines of: developing the theory of rational agency is a difficult problem with little empirical feedback in early stages, hence it requires nigh impossible precision of reasoning. But, humanity actually has a not-bad track record in this type of questions in the last century. VNM, game theory, the Church-Turing thesis, information theory, complexity theory, Solomonoff induction: all these are examples of similar problems (creating a mathematical theory starting from an imprecise concept without much empirical data to help) in which we made enormous progress. They also look like they a...
While we happen to be on the topic: can I ask whether (a) you've been keeping up with Vanessa's work on infra-Bayesianism, and if so, whether (b) you understand it well enough to have any thoughts on it? It sounds (and has sounded for quite a while) like Vanessa is proposing this as an alternative theoretical foundation for agency / updating, and also appears to view this as significantly more promising than the stuff MIRI has been doing (as is apparent from e.g. remarks like this):
Optimism about deep learning: There has been considerable progress in theoretical understanding of deep learning. This understanding is far from complete, but also the problem doesn't seem intractable. I think that we will have pretty good theory in a decade, more likely than not[...]
Yudkowsky seems to believe we are pretty far from a good theory of rational agents. On the other hand, I have a model of how this theory will look like, and a concrete pathway towards constructing it.
Ideally I (along with anyone else interested in this field) would be well-placed to evaluate Vanessa's claims directly; in practice it seems that very few people are able to do so, and consequently infra-Bayesianism has received...
Sent this to my dad, who is an old man as far outside the rationalist bubble as you could possibly be. Doesn't even know why we're worried about AGI, but he replied:
No one gets out alive. No one. You should pray.
Somehow it helped me cope.
I mean this completely seriously: now that MIRI has changed to the Death With Dignity strategy, is there anything that I or anyone on LW can do to help with said strategy, other than pursue independent alignment research? Not that pursuing alignment research is the wrong thing to do, just that you might have better ideas.
I've always thought that something in the context of mental health would be nice.
The idea that humanity is doomed is pretty psychologically hard to deal with. Well, it seems that there is a pretty wide range in how people respond psychologically to it, from what I can tell. Some seem to do just fine. But others seem to be pretty harmed (including myself, not that this is about me; ie. this post literally brought me to tears). So, yeah, some sort of guidance for how to deal with it would be nice.
Plus it'd serve the purpose of increasing the productivity of, and mitigating the risk of burnout for, AI researchers, thus increasing humanity's supposedly slim chances of "making it". This seems pretty nontrivial to me. AI researchers deal with this stuff on a daily basis. I don't know much about what sort of mental states are common for them, but I'd guess that something like 10-40% of them are hurt pretty badly. In which case better mental health guidance would yield pretty substantial improvements in productivity. Unfortunately, I think it's quite the difficult thing to "figure out", though, and for that reason I suspect it isn't worth sinking too many resources into.
I mean, I'd like to see a market in dignity certificates, to take care of generating additional dignity in a distributed and market-oriented fashion?
Do they perchance have significant downsides if they fail? Just wildly guessing, here. I'm a lot more cheerful about Hail Mary strategies that don't explode when the passes fail, and take out the timelines that still had hope in them after all.
Is it not obvious to you that this constitutes dying with less dignity, or is it obvious but you disagree that death with dignity is the correct way to go?
So - just to be very clear here - the plan is that you do the bad thing, and then almost certainly everybody dies anyways even if that works?
I think at that level you want to exhale, step back, and not injure the reputations of the people who are gathering resources, finding out what they can, and watching closely for the first signs of a positive miracle. The surviving worlds aren't the ones with unethical plans that seem like they couldn't possibly work even on the open face of things; the real surviving worlds are only injured by people who imagine that throwing away their ethics surely means they must be buying something positive.
Fine. What do you think about the human-augmentation cluster of strategies? I recall you thought along very similar lines circa ~2001.
I don't think we'll have time, but I'd favor getting started anyways. Seems a bit more dignified.
People's revealed choice in tenaciously staying alive and keeping others alive suggests otherwise. This everyday observation trumps all philosophical argument that fire does not burn, water is not wet, and bears do not shit in the woods.
I think many of the things that you might want to do in order to slow down tech development are things that will dramatically worsen human experiences, or reduce the number of them. Making a trade like that in order to purchase the whole future seems like it's worth considering; making a trade like that in order to purchase three more years seems much more obviously not worth it.
Never even THINK ABOUT trying a hail mary if it also comes with an increased chance of s-risk. I'd much rather just die.
Speaking of which, one thing we should be doing is keeping a lookout for opportunities to reduce s-risk (with dignity) ... I haven't yet been convinced that s-risk reduction is intractable.
This is an example of what EY is talking about, I think -- as far as I can tell, all the obvious things one would do to reduce s-risk via increasing x-risk are the sort of supervillain schemes that are more likely to increase s-risk than decrease it once secondary effects and unintended consequences etc. are taken into account. This is partly why I put the "with dignity" qualifier in. (The other reason is that I'm not a utilitarian and don't think our decision about whether to do supervillain schemes should come down to whether we think the astronomical long-term consequences are slightly more likely to be positive than negative.)
I think we are on the same page here. I would recommend not creating AGI at all in that situation, but I agree that creating a completely unaligned one is better than creating an s-risky one. https://arbital.com/p/hyperexistential_separation/
I imagine that WW3 would be an incredibly strong pressure, akin to WW2, which causes governments to finally sit up and take notice of AI.
And then spend several trillion dollars running Manhattan Project Two: Manhattan Harder, racing each other to be the first to get AI.
And then we die even faster, and instead of being converted into paperclips, we're converted into tiny American/Chinese flags
Q6: Hey, this was posted on April 1st. All of this is just an April Fool’s joke, right?
A: Why, of course! Or rather, it’s a preview of what might be needful to say later, if matters really do get that desperate. You don’t want to drop that on people suddenly and with no warning.
In my own accounting I'm going to consider this a lie (of the sort argued against in Q4) in possible worlds where Eliezer in fact believes things are this desperate, UNLESS there is some clarification by Eliezer that he didn't mean to imply that things aren't nearly this desperate.
Reasons to suspect Eliezer may think it really is this desperate:
Lies are intended to deceive. If I say I'm a teapot, and everyone knows I'm not a teapot, I think one shouldn't use the same word for that as for misrepresenting one's STD status.
Doubly true on April 1st, which is, among its other uses, an unusually good day to say things that can only be said with literally false statements, if you'd not be a liar.
What is the proposition you believe "everyone knows" in this case? (The proposition that seemed ambiguous to me was "Eliezer believes alignment is so unlikely that going for dying with dignity on the mainline is the right approach").
If someone says X on April Fools, then says "April fools, X is false, Y is true instead!", and they disbelieve Y (and it's at least plausible to some parties that they believe Y), that's still a lie even though it's April Fools, since they're claiming to have popped out of the April Fools context by saying "April Fools".
I think this isn't the first time I've seen the April Fools-April Fools joke, where someone says "True thing, April Fools, Lie!", but I agree that this is 'bad form' in some way.
I had been, midway thru the post, intending to write a comment that was something like "hmm, does posting this on April 1st have more or less dignity than April 2nd?", and then got to that point. My interpretation is something like: if someone wants to dismiss Eliezer for reasons of their psychological health, Eliezer wants them to have an out, and the best out he could give them was "he posted his main strategic update on April 1st, so it has to be a joke, and he confirmed that in his post." But of course this out involves some sort of detachment between their beliefs and his beliefs.
In my terminology, it's a 'collusion' instead of a 'lie'; the sort of thing where I help someone believe that I liked their cooking because they would rather have that conclusion than be correlated with reality. The main difference between them is something like "whose preferences are being satisfied"; lies are typically win-lose to a much larger degree than collusions are. [Or, like, enough people meta-prefer collusions for them to be common, but meta-prefer not lying such that non-collusion lying is relatively rare.]
Not lying. Saying a thing in such a way that it's impossible to tell whether he believes it or not, and doing that explicitly.
Seems a very honest thing to do to me, if you have a thing you want to say, but do not want people to know whether you believe it or not. As to the why of that, I have no idea. But I do not feel deceived.
Do you think it was clear to over 90% of readers that the part where he says "April fools, this is just a test!" is not a statement of truth?
Also this comment:
Eliezer, do you have any advice for someone wanting to enter this research space at (from your perspective) the eleventh hour?
I don't have any such advice at the moment. It's not clear to me what makes a difference at this point.
I suspect that some people would be reassured by hearing bluntly, "Even though we've given up hope, we're not giving up the fight."
Nah, I think of myself as working to win the future, not as having given up the fight in any sense.
I wasn't optimistic going in to this field; a little additional doominess isn't going to suddenly make me switch from 'very pessimistic but trying to win the future' to 'very pessimistic and given up'.
As AI_WAIFU eloquently put it: Fuck That Noise. 😊
So AI will destroy the planet and there’s no hope for survival?
Why is everyone here in agreement that AI will inevitably kill off humanity and destroy the planet?
Sorry I’m new to LessWrong and clicked on this post because I recognized the author’s name from the series on rationality.
Why is everyone here in agreement that…
We’re not. There’s a spread of perspectives and opinions and lack-of-opinions. If you’re judging from the upvotes, might be worth keeping in mind that some of us think “upvote” should mean “this seems like it helps the conversation access relevant considerations/arguments” rather than “I agree with the conclusions.”
Still, my shortest reply to “Why expect there’s at least some risk if an AI is created that’s way more powerful than humanity?” is something like: “It seems pretty common-sensical to think that alien entities that’re much more powerful than us may pose risk to us. Plus, we thought about it longer and that didn’t make the common sense of that ‘sounds risky’ go away, for many/most of us.”
The spread of opinions seems narrow compared to what I would expect. OP makes some bold predictions in his post. I see more debate over less controversial claims all of the time.
Sorry, but what do aliens have to do with AI?
Part of the reason the spread seems small is that people are correctly inferring that this comment section is not a venue for debating the object-level question of Probability(doom via AI), but rather for discussing EY's viewpoint as written in the post. See e.g. https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-ama-discussion for more of a debate.
+1 for asking the 101-level questions! Superintelligence, “AI Alignment: Why It’s Hard, and Where to Start”, “There’s No Fire Alarm for Artificial General Intelligence”, and the “Security Mindset” dialogues (part one, part two) do a good job of explaining why people are super worried about AGI.
"There's no hope for survival" is an overstatement; the OP is arguing "successfully navigating AGI looks very hard, enough that we should reconcile ourselves with the reality that we're probably not going to make it", not "successfully navigating AGI looks impossible / negligibly likely, such that we should give up".
If you want specific probabilities, here's a survey I ran last year: https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results. Eliezer works at MIRI (as do I), and MIRI views tended to be the most pessimistic.
+1 for asking the 101-level questions!
Yes! Very much +1! I've been hanging around here for almost 10 years and am getting value from the response to the 101-level questions.
Superintelligence, “AI Alignment: Why It’s Hard, and Where to Start”, “There’s No Fire Alarm for Artificial General Intelligence”, and the “Security Mindset” dialogues (part one, part two) do a good job of explaining why people are super worried about AGI.
Honestly, the Wait But Why posts are my favorite and what I would recommend to a newcomer. (I was careful to say things like "favorite" and "what I'd recommend to a newcomer" instead of "best".)
Furthermore, I feel like Karolina and people like them who are asking this sort of question deserve an answer that doesn't require an investment of multiple hours of effortful reading and thinking. I'm thinking something like three paragraphs. Here is my attempt at one. Take it with the appropriate grain of salt. I'm just someone who's hung around the community for a while but doesn't deal with this stuff professionally.
Think about how much smarter humans are than dogs. Our intelligence basically gives us full control over them. We're technologically superior. Plu...
I think the Wait But Why posts are quite good, though I usually link them alongside Luke Muehlhauser's reply.
For example, 0% was mentioned in a few places
It's obviously not literally 0%, and the post is explicitly about 'how do we succeed?', with a lengthy discussion of the possible worlds where we do in fact succeed:
[...] The surviving worlds look like people who lived inside their awful reality and tried to shape up their impossible chances; until somehow, somewhere, a miracle appeared - the model broke in a positive direction, for once, as does not usually occur when you are trying to do something very difficult and hard to understand, but might still be so - and they were positioned with the resources and the sanity to take advantage of that positive miracle, because they went on living inside uncomfortable reality. Positive model violations do ever happen, but it's much less likely that somebody's specific desired miracle that "we're all dead anyways if not..." will happen; these people have just walked out of the reality where any actual positive miracles might occur. [...]
The whole idea of 'let's maximize dignity' is that it's just a reframe of 'let's maximize the prob...
Eliezer seemed quite clear to me when he said (paraphrased) "we are on the left side of the logistic success curve, where progress is measured in how many leading zeros you remove from your probability of success". The whole post seems to imply that Eliezer thinks marginal dignity is possible, which he defines as a unit of log-odds movement in the probability of success. This implies the probability is not literally 0, but it does argue that the probability (on a linear scale) can be rounded to 0.
Personally, I took it to be 0% within an implied # of significant digits, perhaps in the ballpark of three.
This sequence covers some chunk of it, though it does already assume a lot of context. I think this second sequence is the basic case for AI risk, and doesn't assume much context.
I disagree with most of the empirical claims in this post, and dislike most of the words.
But I do like the framework of valuing actions based on how much they reduce the log odds of doom. Some reasons I like it:
Likelihood ratios can be easier to evaluate than absolute probabilities insofar as you can focus on the meaning of a single piece of evidence, separately from the context of everything else you believe.
Suppose we're trying to pick a multiple-dial combination lock (in the dark, where we can't see how many dials there are). If we somehow learn that the first digit is 3, we can agree that that's log2(10) ≈ 3.32 bits of progress towards cracking the lock, even if you think there are three dials total (such that we only need 6.64 more bits) and I think there are 10 dials total (such that we need ~29.9 more bits); see the worked arithmetic below this list.
Similarly, alignment pessimists and optimists might be able to agree that reinforcement learning from human feedback techniques are a good puzzle piece to have (we don't have to write a reward function by hand! that's progress!), even if pessimists aren't particularly encouraged overall (because Goodhart's curse, inner-misaligned mesa-optimizers, &c. are still going to kill you).
More intuitive illustration with no logarithms: your plane crashed in the ocean. To survive, you must swim to shore. You know that the shore is west, but you don't know how far.
The optimist thinks the shore is just over the horizon; we only need to swim a few miles and we'll almost certainly make it. The pessimist thinks the shore is a thousand miles away and we will surely die. But the optimist and pessimist can both agree on how far we've swum up to this point, and that the most dignified course of action is "Swim west as far as you can."
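For concreteness, here is the arithmetic behind the lock example, as a minimal sketch in Python (the ten digits per dial and the three-vs-ten dial counts are just the numbers hypothesized in that example):

```python
import math

def bits(n_outcomes: int) -> float:
    """Information, in bits, needed to pin down one outcome among n equally likely ones."""
    return math.log2(n_outcomes)

digits_per_dial = 10
progress = bits(digits_per_dial)  # learning one digit: log2(10) ~= 3.32 bits

for total_dials in (3, 10):
    total = bits(digits_per_dial ** total_dials)
    remaining = total - progress
    print(f"{total_dials} dials: {total:.2f} bits to crack the lock, "
          f"{remaining:.2f} bits still needed after learning the first digit")
# 3 dials:  9.97 bits total,  6.64 bits remaining
# 10 dials: 33.22 bits total, ~29.9 bits remaining
```

Whatever the total number of dials turns out to be, both parties agree on the size of the step just taken; they only disagree about how many steps remain.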
Suppose that Eliezer thinks there is a 99% risk of doom, and I think there is a 20% risk of doom.
Suppose that we solve some problem we both think of as incredibly important, like we find a way to solve ontology identification and make sense of the alien knowledge a model has about the world and about how to think, and it actually looks pretty practical and promising and suggests an angle of attack on other big theoretical problems and generally suggests all these difficulties may be more tractable than we thought.
If that's an incredible smashing success maybe my risk estimate has gone down from 20% to 10%, cutting risk in half.
And if it's an incredible smashing success maybe Eliezer thinks that risk has gone down from 99% to 98%, cutting risk by only about one percentage point.
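For what it's worth, here is how those two hypothetical updates look on the log-odds scale discussed earlier (this is my gloss; the 20%→10% and 99%→98% figures are just the ones assumed above):

```python
import math

def survival_log_odds_gain(p_doom_before: float, p_doom_after: float) -> float:
    """Bits of improvement in the odds of survival implied by a drop in P(doom)."""
    odds = lambda p: p / (1.0 - p)
    return math.log2(odds(1.0 - p_doom_after) / odds(1.0 - p_doom_before))

print(survival_log_odds_gain(0.20, 0.10))  # optimist:  ~1.17 bits
print(survival_log_odds_gain(0.99, 0.98))  # pessimist: ~1.01 bits
```

On a linear scale the optimist sees risk cut in half while the pessimist sees it barely move; on the log-odds scale the two updates are roughly the same size, which is the sense in which the log-odds framing lets both parties agree on the value of a given piece of progress.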
I think there are basically just two separate issues at stake:
Something that would be of substantial epistemic help to me is if you (Eliezer) would be willing to estimate a few conditional probabilities (coarsely, I’m not asking you to superforecast) about the contributors to P(doom). Specifically:
For example, it seems plausible that a large fraction of your P(doom) is derived from your belief that P(<10-year timelines) is large and that both P(sufficient time for any alignment scheme | <10-year timelines) and P(sufficient time for consensus-requiring governance schemes to be viable | <10-year timelines) are small. OR it could be that even given 15-20 year timelines, your probability of a decent alignment scheme emerging is ~equally small, and that fact dominates all your prognoses. It's probably some mix of both, but the ratios are important.
Why would others care? Well, from an epistemic “should I defer to someone who’s thought about it more than me” perspective, I consider you a muc...
I have to assume you’ve thought of these schemes, and so I can’t tell whether you think they won’t work because you’re confident in short timelines or because of your inside view that “alignment is hard and 5,000 people working for ~15 years are still <10% likely to make meaningful progress and buy themselves more time to do more work”.
I don't know Eliezer's views here, but the latter sounds more Eliezer-ish to my ears. My Eliezer-model is more confident that alignment is hard (and that people aren't currently taking the problem very seriously) than he is confident about his ability to time AGI.
I don't know the answer to your questions, but I can cite a thing Eliezer wrote in his dialogue on biology-inspired AGI timelines:
...I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain's native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them. What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one's mental health, and I worry that other people see
Hey Rob, thanks for your reply. If it makes you guys feel better, you can rationalize the following as my expression of the Bargaining stage of grief.
I don't know Eliezer's views here, but the latter sounds more Eliezer-ish to my ears. My Eliezer-model is more confident that alignment is hard (and that people aren't currently taking the problem very seriously) than he is confident about timing AGI.
Consider me completely convinced that alignment is hard, and that a lot of people aren't taking it seriously enough, or are working on the wrong parts of the problem. That is fundamentally different from saying that it's unlikely to be solved even if we get 100x as many people working on it (albeit for a shorter time), especially if you believe that geniuses are outliers and thus that the returns on sampling for more geniuses remain large even after drawing many samples (especially if we've currently sampled <500 over the lifetime of the field). To get down to <1% probability of success, you need a fundamentally different argument structure. Here are some examples.
This is the best counter-response I’ve read on the thread so far, and I’m really interested in what the responses will be. Commenting here so I can easily get back to this comment in the future.
Commenting here so I can easily get back to this comment in the future.
FWIW if you click the three vertical dots at the top right of a comment, it opens a dropdown where you can "Subscribe to comment replies".
Don't worry, Eliezer, there's almost certainly a configuration of particles where something that remembers being you also remembers surviving the singularity. And in that universe your heroic attempts were very likely a principal reason why the catastrophe didn't happen. All that's left is arguing about the relative measure.
Even if things don't work that way, and there really is a single timeline for some incomprehensible reason, you had a proper pop, and you and MIRI have done some fascinating bits of maths, and produced some really inspired philosophical essays, and through them attracted a large number of followers, many of whom have done excellent stuff.
I've enjoyed everything you've ever written, and I've really missed your voice over the last few years.
It's not your fault that the universe set you an insurmountable challenge, or that you're surrounded by the sorts of people who are clever enough to build a God and stupid enough to do it in spite of fairly clear warnings.
Honestly, even if you were in some sort of Groundhog Day setup, what on earth were you supposed to do? The ancients tell us that it takes many thousands of years just to seduce Andie MacDowell, and that doesn't even look hard.
Yeah, the thought that I'm not really seeing how to win even with a single restart has been something of a comforting one. I was, in fact, handed something of an overly hard problem even by "shut up and do the impossible" standards.
In a Groundhog Day loop I'd obviously be able to do it, as would Nate Soares or Paul Christiano, if AGI failures couldn't destroy my soul inside the loop. Possibly even Yann LeCun could do it; the first five loops or so would probably be enough to break his optimism, and with his optimism broken and the ability to test and falsify his own mistakes nonfatally he'd be able to make progress.
It's not that hard.
I suppose every eight billion deaths (+ whatever else is out there) you get a bug report, and my friend Ramana did apparently manage to create a formally verified version of grep, so more is possible than my intuition tells me.
But I do wonder if that just (rather expensively) gets you to the point where the AI keeps you alive so you don't reset the loop. That's not necessarily a win.
--
Even if you can reset at will, and you can prevent the AI from stopping you pressing the reset, it only gets you as far as a universe where you personally think you've won. The rest of everything is probably paperclips.
--
If you give everyone in the entire world death-or-reset-at-will powers, then the bug reports stop being informative, but maybe after many many loops then just by trial and error we get to a point where everyone has a life they prefer over reset, and maybe there's a non-resetting path from there to heaven despite the amount of wireheading that will have gone on?
--
But actually I have two worries even in this case:
One is that human desires are just fundamentally incompatible in the sense that someone will always press the button and the whole thing just keeps looping. There's probably so...
I'm curious about what continued role you do expect yourself to have. I think you could still have a lot of value in helping train up new researchers at MIRI. I've read you saying you've developed a lot of sophisticated ideas about cognition that are hard to communicate, but I imagine could be transmitted easier within MIRI. If we need a continuing group of sane people to be on the lookout for positive miracles, would you still take a relatively active role in passing on your wisdom to new MIRI researchers? I would genuinely imagine that being in more direct mind-to-mind contact with you would be useful, so I hope you don't become a hermit.
The surviving worlds look like people who lived inside their awful reality and tried to shape up their impossible chances; until somehow, somewhere, a miracle appeared - the model broke in a positive direction, for once, as does not usually occur when you are trying to do something very difficult and hard to understand, but might still be so - and they were positioned with the resources and the sanity to take advantage of that positive miracle, because they went on living inside uncomfortable reality.
Can you talk more about this? I'm not sure what actions you want people to take based on this text.
What is the difference between a strategy that is dignified and one that is a clever scheme?
I may be misunderstanding, but I interpreted Eliezer as drawing this contrast:
I actually feel calmer after reading this, thanks. It's nice to be frank.
For all the handwringing in comments about whether somebody might find this post demotivating, I wonder if there are any such people. It seems to me like reframing a task from something that is not in your control (saving the world) to something that is (dying with personal dignity) is the exact kind of reframing that people find much more motivating.
After nearly 300 years without a proof of Fermat's Last Theorem, many were skeptical that one was even humanly possible. Some said so publicly, especially those who had themselves spent years trying and failing to find a proof.
Now something far more important is at stake: how to prevent AI from killing us all. A more important problem, but maybe also a (much?) easier one, after all.
It's not over yet.
... but it discards all concerns outside of that. "If I regret my planet's death then I regret it, and it's beneath my dignity to pretend otherwise" does not imply that there might not be other values you could achieve during the time available.
Another way to put that, perhaps, is that "knowing we did everything we could" doesn't seem particularly dignified. Not if you had no meaningful expectation it could work. Extracting whatever other, potentially completely unrelated, value you could from the remaining available time would seem a lot more dignified to me than continuing on something you truly think is futile.
Extracting whatever other, potentially completely unrelated, value you could from the remaining available time would seem a lot more dignified to me than continuing on something you truly think is futile.
The amount of EV at stake in my (and others') experiences over the next few years/decades is just too small compared to the EV at stake in the long-term future. The "let's just give up" intuition makes sense in a regime where we're comparing stakes that differ by 10x, or 100x, or 1000x; I think its appeal in this case comes from the fact that it's hard to emotionally appreciate how much larger the quantities we're talking about are.
(But the stakes don't change just because it's subjectively harder to appreciate them; and there's nothing dignified about giving up prematurely because of an arithmetic error.)
I think that utilitarianism and actual human values are in different galaxies (example of more realistic model). There's no way I would sacrifice a truly big chunk of present value (e.g. submit myself to a year of torture) to increase the probability of a good future by something astronomically small. Given Yudkowsky's apparent beliefs about success probabilities, I might have given up on alignment research altogether[1].
On the other hand, I don't inside-view!think the success probability is quite that small, and also the reasoning error that leads to endorsing utilitarianism seems positively correlated with the reasoning error that leads to extremely low success probability. Because, if you endorse utilitarianism then it generates a lot of confusion about the theory of rational agents, which makes you think there are more unsolved questions than there really are[2]. In addition, it makes value learning seem more hopeless than it actually is.
I have some reservations about posting this kind of comment, because it might be contributing to shattering the shared illusion of utilitarianism and its associated ethos, an ethos whose aesthetics I like and which seems to motivate people to do go...
I think that utilitarianism and actual human values are in different galaxies (example of more realistic model). There's no way I would sacrifice a truly big chunk of present value (e.g. submit myself to a year of torture) to increase the probability of a good future by something astronomically small.
I think that I'd easily accept a year of torture in order to produce ten planets worth of thriving civilizations. (Or, if I lack the resolve to follow through on a sacrifice like that, I still think I'd have the resolve to take a pill that causes me to have this resolve.)
'Produce ten planets worth of thriving civilizations, with certainty' feels a lot more tempting to me than 'produce an entire universe of thriving civilizations, with tiny probability', but I think that's because I have a hard time imagining large quantities and because of irrational, non-endorsed attachment to certainties, not because of a deep divergence between my values and utilitarianism.
I do think my values are very non-utilitarian in tons of ways. A utilitarian would have zero preference for their own well-being over anyone else's, would care just as much for strangers as for friends and partners, etc. This obvi...
[edit: looks like Rob posted elsethread a comment addressing my question here]
I'm a bit confused by this argument, because I thought MIRI-folk had been arguing against this specific type of logic. (I might be conflating a few different types of arguments, or might be conflating 'well, Eliezer said this, so Rob automatically endorses it', or some such).
But, I recall recommendations to generally not try to get your expected value from multiplying tiny probabilities against big values, because a) in practice that tends to lead to cognitive errors, b) in many cases people were saying things like "x-risk is a small probability of a Very Bad Outcome", when the actual argument was "x-risk is a big probability of a Very Bad Outcome."
(Right now maybe you're making a different argument, not about what humans should do, but about some underlying principles that would be true if we were better at thinking about things?)
I think that I'd easily accept a year of torture in order to produce ten planets worth of thriving civilizations. (Or, if I lack the resolve to follow through on a sacrifice like that, I still think I'd have the resolve to take a pill that causes me to have this resolve.)
I think that "resolve" is often a lie we tell ourselves to explain the discrepancies between stated and revealed preferences. I concede that if you took that pill, it would be evidence against my position (but, I believe you probably would not).
A nuance to keep in mind is that reciprocity can be a rational motivation to behave more altruistically than you otherwise would. This can come about from tit-for-tat / reputation systems, or even from some kind of acausal cooperation. Reciprocity effectively moves us closer to utilitarianism, but certainly not all the way there.
So, if I'm weighing the life of my son or daughter against an intergalactic network of civilizations, which I'd never heard of before and will never hear about after, and which wouldn't even reciprocate in a symmetric scenario, I'm choosing my child for sure.
I think that I'd easily accept a year of torture in order to produce ten planets worth of thriving civilizations. (Or, if I lack the resolve to follow through on a sacrifice like that, I still think I'd have the resolve to take a pill that causes me to have this resolve.)
I'd do this to save ten planets of worth of thriving civilizations, but doing it to produce ten planets worth of thriving civilizations seems unreasonable to me. Nobody is harmed by preventing their birth, and I have very little confidence either way as to whether their existence will wind up increasing the average utility of all lives ever eventually lived.
I assign much lower value than a lot of people here to some vast expansionist future... and I suspect that even if I'm in the minority, I'm not the only one.
It's not an arithmetic error.
It's not Pascal's mugging in the senses described in the first posts about the problem:
...[...] I had originally intended the scenario of Pascal's Mugging to point up what seemed like a basic problem with combining conventional epistemology with conventional decision theory: Conventional epistemology says to penalize hypotheses by an exponential factor of computational complexity. This seems pretty strict in everyday life: "What? for a mere 20 bits I am to be called a million times less probable?" But for stranger hypotheses about things like Matrix Lords, the size of the hypothetical universe can blow up enormously faster than the exponential of its complexity. This would mean that all our decisions were dominated by tiny-seeming probabilities (on the order of 2^-100 and less) of scenarios where our lightest action affected 3↑↑4 people... which would in turn be dominated by even more remote probabilities of affecting 3↑↑5 people...
[...] Unfortunately I failed to make it clear in my original writeup that this was where the problem came from, and that it was general to situations beyond the Mugger. Nick Bostrom's writeup of Pascal's Mugging
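A small aside on the arithmetic in that quote, in case the jump from "20 bits" to "a million times less probable" is unclear:

$$2^{-20} = \frac{1}{1{,}048{,}576} \approx 10^{-6},$$

while the payoff attached to a Matrix-Lord-style hypothesis can grow much faster than that exponential penalty shrinks it.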
Separate from the specific claims, it seems really unhelpful to say something like this in such a deliberately confusing, tongue-in-cheek way. It's surely unhelpful strategically to be so unclear, and it also just seems mean-spirited to blur the lines between sarcasm and sincerity in such a bleak and also extremely confident write-up, given that lots of readers regard you as an authority and take your thoughts on this subject seriously.
I’ve heard from three people who have lost the better part of a day or more trying to mentally disengage from this ~shitpost. Whatever you were aiming for, it's hard for me to imagine how this hasn't missed the mark.
This reminds me of Joe Rogan's interview with Elon Musk. This section has really stuck with me:
Joe Rogan
So, what happened with you where you decided, or you took on a more fatalistic attitude? Like, was there any specific thing, or was it just the inevitability of our future?
Elon Musk
I tried to convince people to slow down AI, to regulate AI. That was futile. I tried for years, and nobody listened.
Joe Rogan
This seems like a scene in a movie-
Elon Musk
Nobody listened.
Joe Rogan
... where the robots are going to fucking take over. You're freaking me out. Nobody listened?
Elon Musk
Nobody listened.
Joe Rogan
No one. Are people more inclined to listen today? It seems like an issue that's brought up more often over the last few years than it was maybe 5-10 years ago. It seemed like science fiction.
Elon Musk
Maybe they will. So far, they haven't. I think people don't -- Like, normally, the way that regulations work is very slow. It's very slow indeed. So, usually, it will be something, some new technology. It will cause damage or death. There will be an outcry. There will be an investigation. Years will pass. There will be some sort of insights committee. There will be rul...
It has been asked before, but I will ask again because I prefer clear answers: is Death With Dignity MIRI’s policy?
What I’m seeing from my outside perspective is a MIRI authority figure stating that this is the policy.
(I ran this comment by Eliezer and Nate and they endorsed it.)
My model is that the post accurately and honestly represents Eliezer's epistemic state ('I feel super doomy about AI x-risk'), and a mindset that he's found relatively motivating given that epistemic state ('incrementally improve our success probability, without getting emotionally attached to the idea that these incremental gains will result in a high absolute success probability'), and is an honest suggestion that the larger community (insofar as it shares his pessimism) adopt the same framing for the sake of guarding against self-deception and motivated reasoning.
The parts of the post that are an April Fool's Joke, AFAIK, are the title of the post, and the answer to Q6. The answer to Q6 is a joke because it's sort-of-pretending the rest of the post is an April Fool's joke. The title is a joke because "X's new organizational strategy is 'death with dignity'" sounds sort of inherently comical, and doesn't really make sense (how is that a "strategy"? believing p(doom) is high isn't a strategy, and adopting a specific mental framing device isn't really a "strategy" either). (I'm even more confused by how th...
The issue is that Eliezer appears to think, though without following up on the implication, that most other approaches to AI alignment distinct from MIRI's, including ones that otherwise draw inspiration from the rationality community, will also fail to bear fruit. Like, the takeaway presumably isn't that other alignment researchers should just give up, or just come work for MIRI...? But then what is it?
From the AGI interventions discussion we posted in November (note that "miracle" here means "surprising positive model violation", not "positive event of negligible probability"):
...Anonymous
At a high level one thing I want to ask about is research directions and prioritization. For example, if you were dictator for what researchers here (or within our influence) were working on, how would you reallocate them?
Eliezer Yudkowsky
The first reply that came to mind is "I don't know." I consider the present gameboard to look incredibly grim, and I don't actually see a way out through hard work alone. We can hope there's a miracle that violates some aspect of my background model, and we can try to prepare for that unknown miracle; preparing for an unknown miracle probably looks like "Trying to die with more dig
It seems like you're claiming that it's obvious on consequentialist grounds that it is immoral to rob banks. While I have not robbed any banks, I do not see how to arrive at a general conclusion to this effect under the current regime, and one of my most trusted friends may have done so at one point. But I'm not sure how to identify our crux. Can you try to explain your reasoning?
Based on occasional conversations with new people, I would not be surprised if a majority of people who got into alignment between April 2022 and April 2023 did so mainly because of this post. Most of them say something like "man, I did not realize how dire the situation looked" or "I thought the MIRI folks were on it or something".
Suggestion: you should record a cover version of Guantanamera.
Yo soy un hombre sincero
De donde crece la palma,
Yo soy un hombre sincero
De donde crece la palma,
Y antes de morirme quiero
Echar mis versos del alma.
[I am a sincere man from where the palm tree grows, and before I die I want to pour out the verses of my soul.]
[...]
No me pongan en lo oscuro
A morir como un traidor
No me pongan en lo oscuro
A morir como un traidor
Yo soy bueno y como bueno
Moriré de cara al sol.
[Don't lay me down in the dark to die like a traitor; I am good, and as a good man I will die facing the sun.]
And then Nate can come in playing the trumpet solo.
If good people were liars, that would render the words of good people meaningless as information-theoretic signals, and destroy the ability for good people to coordinate with others or among themselves.
My mental Harry is making a noise. It goes something like Pfwah! Interrogating him a bit more, he seems to think that this argument is a gross mischaracterization of the claims of information theory. If you mostly tell the truth, and people can tell this is the case, then your words convey information in the information-theoretic sense.
EDIT: Now I'm thinking about how to characterize "information" in problems where one agent is trying to deceive another. If A successfully deceives B, what is the "information gain" for B? He thinks he knows more about the world; does this mean that information gain cannot be measured from the inside?
The sentence you quote actually sounds like a Harry sentence to me. Specifically the part where doing an unethical thing causes the good people to not be able to trust each other and work together any more, which is a key part of the law of good.
“The humans, I think, knew they were doomed. But where another race would surrender to despair, the humans fought back with even greater strength. They made the Minbari fight for every inch of space. In my life, I have never seen anything like it. They would weep, they would pray, they would say goodbye to their loved ones and then throw themselves without fear or hesitation at the very face of death itself. Never surrendering. No one who saw them fighting against the inevitable could help but be moved to tears by their courage…their stubborn nobility. When they ran out of ships, they used guns. When they ran out of guns, they used knives and sticks and bare hands. They were magnificent. I only hope, that when it is my time, I may die with half as much dignity as I saw in their eyes at the end. They did this for two years. They never ran out of courage. But in the end…they ran out of time.”
—Londo Mollari
Potential typos:
Also, I didn't understand parts of the sentence which ends like this:
taking things at face value and all sorts of extreme forces that break things and that you couldn't full test under before facing them.
In terms of "miracles" - do you think they look more like some empirical result or some new genius comes up with a productive angle? Though I am inclined, even as a normie, to believe that human geniuses are immeasurably disappointing, you have sown a lot of seeds - and alignmentpilled a lot of clever people - and presumably some spectacularly clever people. Maybe some new prodigy will show up. My timelines are short - like less than 10 years wouldn't surprise me - but the kids you alignmentpilled in 2008-2015 will be reaching peak productivity in the next few years. If it's going to happen, it might happen soon.
Both seem unlikely, probably about equally unlikely... maybe leaning slightly more towards the empirical side, but you wouldn't even get those without an at least slightly genius looking into it, I think.
The following is an edited partial chat transcript of a conversation involving me, Benquo, and an anonymous person (X). I am posting it in the hope that it has enough positive value-of-information compared to the attentional cost to be of benefit to others. I hope people can take this as "yay, I got to witness a back-room conversation" rather than "oh no, someone has made a bunch of public assertions that they can't back up"; I think it would be difficult and time-consuming to argue for all these points convincingly, although I can explain to some degree...
I began reading this charitably (unaware of whatever inside baseball is potentially going on, and seems to be alluded to), but to be honest struggled after "X" seemed to really want someone (Eliezer) to admit they're "not smart"? I'm not sure why that would be relevant.
I think I found these lines especially confusing, if you want to explain:
Some ways of giving third parties Bayesian evidence that you have some secret without revealing it:
I'm not against "tenure" in this case. I don't think it makes sense for people to make their plans around the idea that person X has secret Y unless they have particular reason to think secret Y is really important and likely to be possessed by person X (which is related to what you're saying about trusting opinions and taking instructions). In particular, outsiders should think there's ~0 chance that a particular AI researcher's secrets are important enough here to be likely to produce AGI without some sort of evidence. Lots of people in the AI field say they have these sorts of secrets and many have somewhat impressive AI related accomplishments, they're just way less impressive than what would be needed for outsiders to assign a non-negligible chance to possession of enough secrets to make AGI, given base rates.
I find I strongly agree that -- in case this future happens -- it is extremely important that as few people as possible give up on their attempts to perceive the world as it really is, even if that world is literally one where 'we failed, humanity is going to die out, the problem is way too hard and there's no reasonable chance we'll make it in time'. It seems to me like especially in scenarios like this, we'd need people to keep trying, and to stay (or become) dignified, in order to have any chance at still solving the problem at all.
I'm a total newcomer ...
Thanks for the advice, I appreciate it. I'm one of these people who have very firm rules about not lying though. Then again, I did manage to get a vaccine despite parents objecting using the second option, so I suppose it'll be worth a try :)
I highly recommend people watch Connor talk about his interpretation of this post.
He talks about how Eliezer is a person who managed to access many anti-memes that slid right off the rest of our heads.
What is an anti-meme, you might ask?
By their very nature, anti-memes resist being known or integrated into your world model. You struggle to remember them. Just as memes are sticky and go viral, anti-memes are slippery and struggle to gain traction.
They could be extraordinarily boring. They could be facts about yourself that your ego protects you from really grasp...
It seems nearly everyone is unduly pessimistic about the potential positive consequences of not being able to 'solve' the alignment problem.
For example,
A not entirely aligned AI could still be valuable and helpful. It's not inevitable such entities will turn into omnicidal maniacs. And they may even take care of some thorny problems that are intractable for 'fully aligned' entities.
Also, there's the possibility that in the future such AIs will be like doting dog owners, with humans in the role of the dogs, which is not an entirely bad tradeoff for dogs in present owner-dog relationships. And I don't see why that necessarily must be something to wail and cry about.
And so on...
I mean this entirely legitimately and with no hostility: have you "Read The Sequences"? People are taking a similar view here because this community is founded in part on some ideas that tend to lead towards that view. Think of it like how you'd expect a lot of people at a Christian summer camp to agree that god exists: there is a sampling bias here (I'm not claiming that it's wrong, only that it should be expected).
Considering all humans dead, do you still think it's going to be the boring paperclip kind of AGI to eat all reachable resources? Any chance that inscrutable large float vectors and lightspeed coordination difficulties will spawn godshatter AGI shards that we might find amusing or cool in some way? (Value is fragile notwithstanding)
Wouldn't convergent instrumental goals include solving math problems, analyzing the AI's history (which includes ours), engineering highly advanced technology, playing games of the sort that could be analogous to alien encounters or subsystem value drift, etc., all using way more compute and better cognitive algorithms than we have access to? Those are things that are to a significant degree interesting to us, in part precisely because they're convergent instrumental goals, i.e. goals that we have as well because they help us achieve our other goals (and which might even be neurologically encoded similarly to terminal goals, much as AlphaGo encodes the instrumental value of a board position in the same way it encodes terminal value).
I would predict that many, many very interesting books could be written about the course of a paperclip maximizer's lifetime, way more interesting books than the content of all books written so far on Earth, in large part due to it having much more compute and solving more difficult problems than we do.
(My main criticism of the "fragility of value" argument is that boredom isn't there for "random" reasons, attaining nove...
Not sure there's anybody there to see it. Definitely nobody there to be happy about it or appreciate it. I don't consider that particularly worthwhile.
There would still exist approximate Solomonoff inductors compressing sense-data, creating meta-self-aware world-representations using the visual system and other modalities ("sight"), optimizing towards certain outcomes in a way that tracks progress using signals integrated with other signals ("happiness")...
Maybe this isn't what is meant by "happiness" etc. I'm not really sure how to define "happiness". One way to define it would be the thing having a specific role in a functionalist theory of mind; there are particular mind designs that would have indicators for e.g. progress up a utility gradient, that are factored into a RL-like optimization system; the fact that we have a system like this is evidence that it's to some degree a convergent target of evolution, although there likely exist alternative cognitive architectures that don't have a direct analogue due to using a different set of cognitive organs to fulfill that role in the system.
There's a spectrum one could draw along which the parameter varied is the degree to which one believes that mind architectures different than one's own are valuable; the most egoist point on the spectrum would be believing that only the cogni...
I think that people who imagine "tracking progress using signals integrated with other signals" feels anything like happiness feels inside to them - while taking that imagination and also loudly insisting that it will be very alien happiness or much simpler happiness or whatever - are simply making a mistake-of-fact, and I am just plain skeptical that there is a real values difference that would survive their learning what I know about how minds and qualia work. I of course fully expect that these people will loudly proclaim that I could not possibly know anything they don't, despite their own confusion about these matters that they lack the skill to reflect on as confusion, and for them to exchange some wise smiles about those silly people who think that people disagree because of mistakes rather than values differences.
Trade opportunities are unfortunately ruled out by our inability to model those minds well enough that, if some part of them decided to seize an opportunity to Defect, we would've seen it coming in the past and counter-Defected. If we Cooperate, we'll be nothing but CooperateBot, and they, I'm afraid, will be PrudentBot, not FairBot.
and they, I'm afraid, will be PrudentBot, not FairBot.
This shouldn't matter for anyone besides me, but there's something personally heartbreaking about seeing the one bit of research for which I feel comfortable claiming a fraction of a point of dignity, being mentioned validly to argue why decision theory won't save us.
(Modal bargaining agents didn't turn out to be helpful, but given the state of knowledge at that time, it was worth doing.)
Sorry.
It would be dying with a lot less dignity if everyone on Earth - not just the managers of the AGI company making the decision to kill us - thought that all you needed to do was be CooperateBot, and had no words for any sharper concepts than that. Thank you for that, Patrick.
But sorry anyways.
"Qualia" is pretty ill-defined, if you try to define it you get things like "compressing sense-data" or "doing meta-cognition" or "having lots of integrated knowledge" or something similar, and these are convergent instrumental goals.
If you try to define qualia without having any darned idea of what they are, you'll take wild stabs into the dark, and hit simple targets that are convergently instrumental; and if you are at all sensible of your confusion, you will contemplate these simple-sounding definitions and find that none of them particularly make you feel less confused about the mysterious redness of red, unless you bully your brain into thinking that it's less confused or just don't know what it would feel like to be less confused. You should in this case trust your sense, if you can find it, that you're still confused, and not believe that any of these instrumentally convergent things are qualia.
I don't know how everyone else on LessWrong feels but I at least am getting really tired of you smugly dismissing others' attempts at moral reductionism wrt qualia by claiming deep philosophical insight you've given outside observers very little reason to believe you have. In particular, I suspect if you'd spent half the energy on writing up these insights that you've spent using the claim to them as a cudgel you would have at least published enough of a teaser for your claims to be credible.
But here Yudkowsky gave a specific model for how qualia, and other things in the reference class "stuff that's pointing at something but we're confused about what", are mistaken for convergently instrumental stuff. (Namely: pointers point both to what they're really trying to point to, but also somewhat point at simple things, and simple things tend to be convergently instrumental.) It's not a reduction of qualia, and a successful reduction of qualia would be much better evidence that an unsuccessful reduction of qualia is unsuccessful, but it's still a logically relevant argument and a useful model.
Summary: The ambiguity as to how much of the above is a joke appears to exist so that Eliezer or others can have plausible deniability about the seriousness of apparently extreme but little-backed claims being made. This comes after a lack of adequate handling, on the part of the relevant parties, of the impact of Eliezer's output in recent months on various communities, such as rationality and effective altruism. Virtually none of this has indicated what real, meaningful changes can be expected in MIRI's work. As MIRI's work depends in large part on the commu...
I found this post to be extremely depressing and full of despair. It has made me seriously question what I personally believe about AI safety, whether I should expect the world to end within a century or two, and if I should go full hedonist mode right now.
I've come to the conclusion that it is impossible to make an accurate prediction about an event that's going to happen more than three years from the present, including predictions about humanity's end. I believe that the most important conversation will start when we actually get close to developing ear...
If in, let's say, two years it turns out that we have not faced serious danger from hostile AI, will Eliezer then agree that his reasoning went down the wrong path and AI danger is substantially less than he had claimed? (Without explanations such as "well, we're still in danger but you have to wait a little longer to see it")
Because the higher your confidence in X, the more you are wrong if not-X turns out to be the case.
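One way to make "more wrong" precise is a logarithmic scoring rule (this is my gloss, not necessarily what the commenter has in mind): you are penalized by the negative log of the probability you assigned to what actually happened.

```python
import math

def log_score_penalty(p_assigned_to_outcome: float) -> float:
    """Penalty, in bits, under a logarithmic scoring rule, for the outcome that occurred."""
    return -math.log2(p_assigned_to_outcome)

# If you assigned 99% to doom and we survive, you're scored on the 1% you left for survival.
print(log_score_penalty(0.01))  # ~6.64 bits of penalty
print(log_score_penalty(0.50))  # 1 bit for a maximally agnostic 50%
```

The penalty grows without bound as the probability you left for the actual outcome approaches zero, which is the formal version of "the higher your confidence, the more you are wrong if not-X turns out to be the case."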
My timelines are mostly (substantially over 50%) not that short. Can you tell me where you got the impression they were?
If twenty years from now you and I are both still alive and free, I will happily say "You were right Jiro, I was wrong."
Nitpick: maybe aligned and unaligned superintelligences acausally trade across future branches? If so, maybe on the mainline we're left with a very small yet nonzero fraction of the cosmic endowment, a cosmic booby prize if you will?
"Booby prize with dignity" sounds like a bit of an oxymoron...
There is another very important component of dying with dignity not captured by the probability of success: the badness of our failure state. While any alignment failure would destroy much of what we care about, some alignment failures would be much more horrible than others. Probably the more pessimistic we are about winning, the more we should focus on losing less absolutely (e.g. by researching priorities in worst-case AI safety).
One possible way to increase dignity at the point of death could be shifting the focus from survival (seeing how unlikely it is) to looking for ways to influence what replaces us.
Getting killed by a literal paperclip maximizer seems less preferable than being replaced by something pursuing more interesting goals.
You have done incredible, beautiful, courageous work I have immense appreciation and admiration for. And I recognise the pain, fear and frustration that must have brought you to this point.
But I hate, no, loathe, everything about this text. The unjustified assumptions behind it, the impact it will have on the perception of alignment research, the impact it will have on alignment researchers.
I do not want to do stuff that would have me remembered by nonexistent surviving observers as someone who valiantly tried and dramatically warned, and looked good...
The main issue with AGI Alignment is that the AGI is more intelligent than us, meaning that making it stay within our values requires both perfect knowledge of our values and some understanding of how to constrain it to share them.
If this is truly an intractable problem, it still seems that we could escape the dilemma by focusing on efforts in Intelligence Augmentation, e.g. through Mind Uploading and meaningful encoding/recoding of digitized mind-states. Granted, it currently seems that we will develop AGI before IA, but if we could shift focus enough to reverse this trend, then AGI would not be an issue, as we ourselves would have superior intelligence to our creations.
Leave it to EY to take a perfectly good 4/1 joke post and turn it into an actually instructive dialogue.
die with slightly more dignity than would otherwise be counterfactually obtained.
But counterfactuals aren't true!
https://www.lesswrong.com/posts/dhGGnB2oxBP3m5cBc/can-counterfactuals-be-true
Personally I don't blame it that much on people (that is, those who care), because maybe the problem is simply intractable. This paper by Roman Yampolskiy is what has convinced me the most about it:
https://philpapers.org/rec/YAMOCO
It basically asks the question: is it really possible to be able to control something much more intelligent than ourselves and which can re-write its own code?
Actually I wanna believe that it is, but we'd need something on the miracle level, as well as way more people working on it. As well as way more time. It's virtually impos...
So is this MIRI's policy or not? I want to know whether or not Eliezer Yudkowsky has given up, independent of the current state of the world. I also want to know what Eliezer Yudkowsky's survival prognosis is because he's thought a lot more about this than I have.
The article argues that even with a 0.x% chance of success you shouldn't give up, because taking action to increase that chance is still valuable: it buys more dignity.
I agree that the problem of aligning artificial intelligence values with human values is in practice unsolvable, except in one very particular case: when the artificial intelligence and the human are the same thing. That is, stop developing AI on dry hardware and just develop it in wet brains, for which Elon Musk's Neuralink approach could be a step in the right direction.
we should shift the focus of our efforts to helping humanity die with with slightly more dignity.
Typo fix ->
"we should shift the focus of our efforts to helping humanity die with slightly more dignity."
(Has no one really noticed this extra "with"? It's in the first paragraph tl'dr...)
If there were an expected utility argument for risking everything on an improbable assumption, you'd get to make exactly one of them, ever.
Related: Safe Haven by Mark Spitznagel, on compounding and why CAGR != EV.
Also and in practice? People don't just pick one comfortable improbability to condition on. They go on encountering unpleasant facts true on the mainline, and each time saying, "Well, if that's true, I'm doomed, so I may as well assume it's not true," and they say more and more things like this. If you do this it very rapidly drives down the probability mass of the 'possible' world you're mentally inhabiting. Pretty soon you're living in a place that's nowhere near reality.
Holy shit, you nailed it hard.
I had a conversation about this exact subject in the Paris SSC meetup, and I was frustrated for exactly the reasons you mention.
In other contexts (for example, when talking about euthanasia), "dying with dignity" is simply equivalent to dying without great suffering. This is, it seems to me, because dying has a high correlation with suffering intensely, and with enough suffering, identity (or rather, the illusion of identity) is destroyed, since with enough suffering, anyone would betray their ideas, their family, their country, their ethics, etc. trying anything to relieve it, even if it doesn't help. With enough suffering nothing remains recognizable, either physically or mentall...
"It's sad that our Earth couldn't be one of the more dignified planets that makes a real effort, correctly pinpointing the actual real difficult problems and then allocating thousands of the sort of brilliant kids that our Earth steers into wasting their lives on theoretical physics. But better MIRI's effort than nothing."
To be fair, a lot of philosophers and ethicists have been trying to discover what "good" means and how humans should go about aligning with it.
Furthermore, a lot of effort has gone into trying to align goals and incentives ...
I wonder whether, if “solving the alignment problem” seems impossible given the currently invested resources, we should rather focus on different angles of approach.
The basic premise here seems to be
Not solving the alignment problem → Death by AGI
However, I do not think this is quite right or at least not the full picture. While it is certainly true that we need to solve the alignment problem in order to thrive with AGI
Thriving with AGI → Solving the alignment problem
the implication is not bidirectional, as we could solve the alignment problem and still create an...
I and two of my friends are on the precipice of our careers right now. We are senior CS majors at MIT, and next year we're all doing our Master's here and have been going back and forth on what to pick.
All of us have heavily considered AI of course. I'm torn between that and Distributed Systems/Cryptography things to do Earn to Give. I've been mostly on the AI side of things until today.
This post has singlehandedly convinced two of us (myself included) to not work on AI or help with AI alignment, as if Eliezer, an expert in that field, is correct, th...
I don't think "Eliezer is wrong about something" implies that the field is ridiculous or easily solved. Many people in the field (including the plurality with meaningful AI experience) disagree with Eliezer's basic perspective and think he is wildly overconfident.
I basically agree with Eliezer that if arguments for doom look convincing then you should focus on something more like improving the log odds of survival, and preparing to do the hard work to take advantage of the actual kind of miracles that might actually occur, rather than starting to live in fantasy-land where things are not as they seem. But "let's just assume this whole alignment thing isn't real" is a particularly extreme instance of the behavior that Eliezer is criticizing.
So to get there, I think you'd have to both reject Eliezer's basic perspective, and take Eliezer as representing the entire field so strongly that rejecting his perspective means rejecting the whole field. I don't think that's reasonable, unless your interest in AI safety was primarily driven by deference to Eliezer. If so, I do think it makes sense to stop deferring and just figure out how you want to help the world, and it's reasonable if that'...
Shouldn't we implement a loud strategy?
One of the biggest problems is that we haven't been able to reach and convince a lot of people. This would be most easily done with a more efficient route. I think of someone who already knows the importance of this issue to a certain level and has high power to act. I am talking about Elon Musk. If we show him the dangerous state of the problem, he would be convinced to give it a more important focus. It is aligned with his mindset.
If one of the most wealthy and influential persons of this planet already cares about...
I think of someone who already knows the importance of this issue to a certain level and has high power to act. I am talking about Elon Musk.
This community was instrumental in getting Elon to be concerned about AI risk in 2014. We succeeded in getting him to take the problem seriously, and he, for instance, donated $10 million to FLI in 2015. But he also explicitly created OpenAI, which is widely (though not universally) regarded to have been one of the most damaging moves in this space ever, swamping all other positive contributions.
It seems not uncommon for outreach attempts like this to backfire similarly. Overall the net impact of this community on AI timelines has been to shorten them (aside from Elon, a substantial number of people have done capabilities research specifically because they were compelled by the arguments for AI risk).
This isn't a knock down argument against any outreach at all, but you can imagine that living through those attempts and their consequences might have soured some of the leadership on naive attempts to "raise the alarm."
A loud strategy is definitely mandatory. Just not too loud with the masses. Only loud with the politicians, tech leaders and researchers.
One way to characterize the thing that happened over the last ten years is: "oh no, we have a two-part argument: 1) AI very powerful and 2) that could be very dangerous. If we tell politicians about that, they might only hear the first part and make it worse. Let's tell smart nerds instead." Then, as Eli points out, as far as we can tell on net the smart nerds heard the first part and not the second.
In my opinion we just haven't done a very good job.
I mean, I agree that we've failed at our goal. But "haven't done a very good job" implies to me something like "it was possible to not fail", which, unclear?
We've seen plenty of people jump on the AI safety bandwagon.
Jumping on the bandwagon isn't the important thing. If anything, it's made things somewhat worse; consider the reaction to this post if MIRI were 'the only game in town' for alignment research as opposed to 'one lab out of half a dozen.' "Well, MIRI's given up," someone might say, "good thing ARC, DeepMind, OpenAI, Anthropic, and others are still on the problem." [That is, it's easy to have a winner's curse of sorts, where the people most optimistic about a direction will put in work on that direction for the longest, and so you should expect most work to be done by the overconfident instead of the underconfident.]
Like, if someone says "I work on AI safety, I make sure that people can't use language or image generation models to make child porn" or "I work on AI safety, I make sure that algorithms don't discriminate against underprivileged minorities", I both believe 1) they are trying to make these systems not do a th...
How about a clear, brief and correct strategy?
I really second this. If we've got clear arguments as to why the end of the world is nigh, then I don't know them and I'd appreciate a link. And yes I have read the fucking Sequences. Every last word. And I love them to bits they are the 21st century's most interesting literature by a very long way.
I'd be stunned if someone suddenly pulled a solution out of their ass, but it wouldn't cause me an existential crisis like seeing an order 2 element in an order 9 group would.
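(For readers who don't have the group theory cached: the reason that example would be crisis-inducing is that Lagrange's theorem makes it flatly impossible rather than merely surprising:

$$g \in G,\ |G| = 9 \;\Rightarrow\; \operatorname{ord}(g) \mid 9 \;\Rightarrow\; \operatorname{ord}(g) \in \{1, 3, 9\},$$

so no element of order 2 can exist in a group of order 9; whereas "nobody has found an alignment solution yet" rules nothing out in that way.)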
Whenever I try to convince someone about AI risk, it takes about a year to get them to realise that AI might be dangerous, and then once they do they go off on one about quantum consciousness or something.
Some people just snap straight to the 'oh shit' moment like I did, but there are not many. (got me a couple already this year, but it's not much of a harvest)
And I'm talking about clever people whose intelligence I respect more than my own in other contexts. I don't even bother trying with normies.
I do wonder if it would have been the same if you were trying to convince people that nuclear bombs would be dangerous and all you had was a vague idea of ho...
I think I'm maybe too sleepy to deal with grammar, but I can't tell what the non-bolded Q6s mean.
Am I meant to take this as a joke from you or not? Obviously, whether or not you're joking doesn't have the power to make this true or not, but I don't understand whether or not MIRI actually is standing by this or not (I am reading your "yes, this is a joke" as sarcasm for some reason).
Edit: Fixed a single minor typo (from your -> from you). Also, the score bumped down by 5, which is odd and I want to note as a neat fact about the thread over time.
I haven't bothered to read the entire post, but I wanted to chime in to express my support for this proposal.
It would be good if you could summarise your strongest argument in favour of your conclusion that "no alignment = bad for humanity".
Things are rarely black or white; I don't see a partially aligned AI as necessarily a bad thing.
As an example, consider the partial alignment between a child and his parent. A parent is not simply fulfilling every desire of the child, but only a subset.
I'm surprised that in a post about dying with dignity, and in its comments, the word "suffer"/"suffering" appears zero times. Can someone explain that?
I see many prominent leaders in tech making an active effort to discredit anyone who believes we should invest in preventing an AI takeover as "Doomers". As someone new to this site, this was the first post I came across when googling "AI Risk lesswrong". Regardless of whether it is true, I don't think its impact will be a net positive, and it may well contribute towards making a doomsday scenario more likely.
I suspect that the majority of people who come across this article when linked from elsewhere will likely only read the TLDR or title.
tl;dr: It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.
Well, let's be frank here. MIRI didn't solve AGI alignment and at least knows that it didn't. Paul Christiano's incredibly complicated schemes have no chance of working in real life before DeepMind destroys the world. Chris Olah's transparency work, at current rates of progress, will at best let somebody at DeepMind give a highly speculative warning about how the current set of enormous inscrutable tensors, inside a system that was recompiled three weeks ago and has now been training by gradient descent for 20 days, might possibly be planning to start trying to deceive its operators.
Management will then ask what they're supposed to do about that.
Whoever detected the warning sign will say that there isn't anything known they can do about that. Just because you can see the system might be planning to kill you, doesn't mean that there's any known way to build a system that won't do that. Management will then decide not to shut down the project - because it's not certain that the intention was really there or that the AGI will really follow through, because other AGI projects are hard on their heels, because if all those gloomy prophecies are true then there's nothing anybody can do about it anyways. Pretty soon that troublesome error signal will vanish.
When Earth's prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%.
That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained.
Consider the world if Chris Olah had never existed. It's then much more likely that nobody will even try and fail to adapt Olah's methodologies to try and read complicated facts about internal intentions and future plans, out of whatever enormous inscrutable tensors are being integrated a million times per second, inside of whatever recently designed system finished training 48 hours ago, in a vast GPU farm that's already helpfully connected to the Internet.
It is more dignified for humanity - a better look on our tombstone - if we die after the management of the AGI project was heroically warned of the dangers but came up with totally reasonable reasons to go ahead anyways.
Or, failing that, if people made a heroic effort to do something that could maybe possibly have worked to generate a warning like that but couldn't actually in real life because the latest tensors were in a slightly different format and there was no time to readapt the methodology. Compared to the much less dignified-looking situation if there's no warning and nobody even tried to figure out how to generate one.
Or take MIRI. Are we sad that it looks like this Earth is going to fail? Yes. Are we sad that we tried to do anything about that? No, because it would be so much sadder, when it all ended, to face our ends wondering if maybe solving alignment would have just been as easy as buckling down and making a serious effort on it - not knowing if that would've just worked, if we'd only tried, because nobody had ever even tried at all. It wasn't subjectively overdetermined that the (real) problems would be too hard for us, before we made the only attempt at solving them that would ever be made. Somebody needed to try at all, in case that was all it took.
It's sad that our Earth couldn't be one of the more dignified planets that makes a real effort, correctly pinpointing the actual real difficult problems and then allocating thousands of the sort of brilliant kids that our Earth steers into wasting their lives on theoretical physics. But better MIRI's effort than nothing. What were we supposed to do instead, pick easy irrelevant fake problems that we could make an illusion of progress on, and have nobody out of the human species even try to solve the hard scary real problems, until everybody just fell over dead?
This way, at least, some people are walking around knowing why it is that if you train with an outer loss function that enforces the appearance of friendliness, you will not get an AI internally motivated to be friendly in a way that persists after its capabilities start to generalize far out of the training distribution...
To be clear, nobody's going to listen to those people, in the end. There will be more comforting voices that sound less politically incongruent with whatever agenda needs to be pushed forward that week. Or even if that ends up not so, this isn't primarily a social-political problem, of just getting people to listen. Even if DeepMind listened, and Anthropic knew, and they both backed off from destroying the world, that would just mean Facebook AI Research destroyed the world a year(?) later.
But compared to being part of a species that walks forward completely oblivious into the whirling propeller blades, with nobody having seen it at all or made any effort to stop it, it is dying with a little more dignity, if anyone knew at all. You can feel a little incrementally prouder to have died as part of a species like that, if maybe not proud in absolute terms.
If there is a stronger warning, because we did more transparency research? If there's deeper understanding of the real dangers and those come closer to beating out comfortable nonrealities, such that DeepMind and Anthropic really actually back off from destroying the world and let Facebook AI Research do it instead? If they try some hopeless alignment scheme whose subjective success probability looks, to the last sane people, more like 0.1% than 0? Then we have died with even more dignity! It may not get our survival probabilities much above 0%, but it would be so much more dignified than the present course looks to be!
Now of course the real subtext here, is that if you can otherwise set up the world so that it looks like you'll die with enough dignity - die of the social and technical problems that are really unavoidable, after making a huge effort at coordination and technical solutions and failing, rather than storming directly into the whirling helicopter blades as is the present unwritten plan -
- heck, if there was even a plan at all -
- then maybe possibly, if we're wrong about something fundamental, somehow, somewhere -
- in a way that makes things easier rather than harder, because obviously we're going to be wrong about all sorts of things, it's a whole new world inside of AGI -
- although, when you're fundamentally wrong about rocketry, this does not usually mean your rocket prototype goes exactly where you wanted on the first try while consuming half as much fuel as expected; it means the rocket explodes earlier yet, and not in a way you saw coming, being as wrong as you were -
- but if we get some miracle of unexpected hope, in those unpredicted inevitable places where our model is wrong -
- then our ability to take advantage of that one last hope, will greatly depend on how much dignity we were set to die with, before then.
If we can get on course to die with enough dignity, maybe we won't die at all...?
In principle, yes. Let's be very clear, though: Realistically speaking, that is not how real life works.
It's possible for a model error to make your life easier. But you do not get more surprises that make your life easy, than surprises that make your life even more difficult. And people do not suddenly become more reasonable, and make vastly more careful and precise decisions, as soon as they're scared. No, not even if it seems to you like their current awful decisions are weird and not-in-the-should-universe, and surely some sharp shock will cause them to snap out of that weird state into a normal state and start outputting the decisions you think they should make.
So don't get your heart set on that "not die at all" business. Don't invest all your emotion in a reward you probably won't get. Focus on dying with dignity - that is something you can actually obtain, even in this situation. After all, if you help humanity die with even one more dignity point, you yourself die with one hundred dignity points! Even if your species dies an incredibly undignified death, for you to have helped humanity go down with even slightly more of a real fight, is to die an extremely dignified death.
"Wait, dignity points?" you ask. "What are those? In what units are they measured, exactly?"
And to this I reply: Obviously, the measuring units of dignity are over humanity's log odds of survival - the graph on which the logistic success curve is a straight line. A project that doubles humanity's chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity.
But if enough people can contribute enough bits of dignity like that, wouldn't that mean we didn't die at all? Yes, but again, don't get your hopes up. Don't focus your emotions on a goal you're probably not going to obtain. Realistically, we find a handful of projects that contribute a few more bits of counterfactual dignity; get a bunch more not-specifically-expected bad news that makes the first-order object-level situation look even worse (where to second order, of course, the good Bayesians already knew that was how it would go); and then we all die.
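To spell out the arithmetic (a minimal sketch; the function name and example numbers here are mine, purely illustrative): dignity in bits is the base-2 logarithm of the odds p/(1-p), so any project that doubles the odds of survival adds exactly one bit, even when both the before and after probabilities still round to 0%.

```python
import math

def dignity_bits(p: float) -> float:
    """Dignity measured as log odds of survival, in bits (log base 2)."""
    return math.log2(p / (1 - p))

# Doubling the odds of survival adds exactly one bit, however tiny the
# starting probability: p = 0.0001 and the doubled-odds probability both
# still round to "0%", yet the gap between them is one full bit.
p = 0.0001
doubled_odds = 2 * p / (1 - p)          # double the odds
p_doubled = doubled_odds / (1 + doubled_odds)   # convert odds back to probability
print(dignity_bits(p_doubled) - dignity_bits(p))  # 1.0 (up to float rounding)
```

This is also the sense in which the logistic success curve is a straight line on this scale: equal increments of dignity correspond to equal multiplicative improvements in the odds, however far underwater the absolute probability sits.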
With a technical definition in hand of what exactly constitutes dignity, we may now consider some specific questions about what does and doesn't constitute dying with dignity.
Q1: Does 'dying with dignity' in this context mean accepting the certainty of your death, and not childishly regretting that or trying to fight a hopeless battle?
A: Don't be ridiculous. How would that increase the log odds of Earth's survival?
My utility function isn't up for grabs, either. If I regret my planet's death then I regret it, and it's beneath my dignity to pretend otherwise.
That said, I fought hardest while it looked like we were in the more sloped region of the logistic success curve, when our survival probability seemed more around the 50% range; I borrowed against my future to do that, and burned myself out to some degree. That was a deliberate choice, which I don't regret now; it was worth trying, I would not have wanted to die having not tried, I would not have wanted Earth to die without anyone having tried. But yeah, I am taking some time partways off, and trying a little less hard, now. I've earned a lot of dignity already; and if the world is ending anyways and I can't stop it, I can afford to be a little kind to myself about that.
When I tried hard and burned myself out some, it was with the understanding, within myself, that I would not keep trying to do that forever. We cannot fight at maximum all the time, and some times are more important than others. (Namely, when the logistic success curve seems relatively more sloped; those times are relatively more important.)
All that said: If you fight marginally longer, you die with marginally more dignity. Just don't undignifiedly delude yourself about the probable outcome.
Q2: I have a clever scheme for saving the world! I should act as if I believe it will work and save everyone, right, even if there's arguments that it's almost certainly misguided and doomed? Because if those arguments are correct and my scheme can't work, we're all dead anyways, right?
A: No! That's not dying with dignity! That's stepping sideways out of a mentally uncomfortable world and finding an escape route from unpleasant thoughts! If you condition your probability models on a false fact, something that isn't true on the mainline, it means you've mentally stepped out of reality and are now living somewhere else instead.
There are more elaborate arguments against the rationality of this strategy, but consider this quick heuristic for arriving at the correct answer: That's not a dignified way to die. Death with dignity means going on mentally living in the world you think is reality, even if it's a sad reality, until the end; not abandoning your arts of seeking truth; dying with your commitment to reason intact.
You should try to make things better in the real world, where your efforts aren't enough and you're going to die anyways; not inside a fake world you can save more easily.
Q2: But what's wrong with the argument from expected utility, saying that all of humanity's expected utility lies within possible worlds where my scheme turns out to be feasible after all?
A: Most fundamentally? That's not what the surviving worlds look like. The surviving worlds look like people who lived inside their awful reality and tried to shape up their impossible chances; until somehow, somewhere, a miracle appeared - the model broke in a positive direction, for once, as does not usually occur when you are trying to do something very difficult and hard to understand, but might still be so - and they were positioned with the resources and the sanity to take advantage of that positive miracle, because they went on living inside uncomfortable reality. Positive model violations do ever happen, but it's much less likely that somebody's specific desired miracle that "we're all dead anyways if not..." will happen; these people have just walked out of the reality where any actual positive miracles might occur.
Also and in practice? People don't just pick one comfortable improbability to condition on. They go on encountering unpleasant facts true on the mainline, and each time saying, "Well, if that's true, I'm doomed, so I may as well assume it's not true," and they say more and more things like this. If you do this it very rapidly drives down the probability mass of the 'possible' world you're mentally inhabiting. Pretty soon you're living in a place that's nowhere near reality. If there were an expected utility argument for risking everything on an improbable assumption, you'd get to make exactly one of them, ever. People using this kind of thinking usually aren't even keeping track of when they say it, let alone counting the occasions.
Also also, in practice? In domains like this one, things that seem to first-order like they "might" work... have essentially no chance of working in real life, to second-order after taking into account downward adjustments against optimism. AGI is a scientifically unprecedented experiment and a domain with lots of optimization pressures, some of which work against you, and unforeseeable intelligently selected execution pathways, and a small target to hit, and all sorts of extreme forces that break things and that you couldn't fully test before facing them. AGI alignment seems like it's blatantly going to be an enormously Murphy-cursed domain, like rocket prototyping or computer security but worse.
In a domain like this one, if you have a clever scheme for winning anyways that, to first-order theoretical theory, totally definitely seems like it should work, even to Eliezer Yudkowsky rather than somebody who just goes around saying that casually, then maybe there's like a 50% chance of it working in practical real life after all the unexpected disasters and things turning out to be harder than expected.
If to first-order it seems to you like something in a complicated unknown untested domain has a 40% chance of working, it has a 0% chance of working in real life.
Also also also in practice? Harebrained schemes of this kind are usually actively harmful. Because they're invented by the sort of people who'll come up with an unworkable scheme, and then try to get rid of counterarguments with some sort of dismissal like "Well if not then we're all doomed anyways."
If nothing else, this kind of harebrained desperation drains off resources from those reality-abiding efforts that might try to do something on the subjectively apparent doomed mainline, and so position themselves better to take advantage of unexpected hope, which is what the surviving possible worlds mostly look like.
The surviving worlds don't look like somebody came up with a harebrained scheme, dismissed all the obvious reasons it wouldn't work with "But we have to bet on it working," and then it worked.
That's the elaborate argument about what's rational in terms of expected utility, once reasonable second-order commonsense adjustments are taken into account. Note, however, that if you have grasped the intended emotional connotations of "die with dignity", it's a heuristic that yields the same answer much faster. It's not dignified to pretend we're less doomed than we are, or step out of reality to live somewhere else.
Q3: Should I scream and run around and go through the streets wailing of doom?
A: No, that's not very dignified. Have a private breakdown in your bedroom, or a breakdown with a trusted friend, if you must.
Q3: Why is that bad from a coldly calculating expected utility perspective, though?
A: Because it associates belief in reality with people who act like idiots and can't control their emotions, which worsens our strategic position in possible worlds where we get an unexpected hope.
Q4: Should I lie and pretend everything is fine, then? Keep everyone's spirits up, so they go out with a smile, unknowing?
A: That also does not seem to me to be dignified. If we're all going to die anyways, I may as well speak plainly before then. If into the dark we must go, let's go there speaking the truth, to others and to ourselves, until the end.
Q4: Okay, but from a coldly calculating expected utility perspective, why isn't it good to lie to keep everyone calm? That way, if there's an unexpected hope, everybody else will be calm and oblivious and not interfering with us out of panic, and my faction will have lots of resources that they got from lying to their supporters about how much hope there was! Didn't you just say that people screaming and running around while the world was ending would be unhelpful?
A: You should never try to reason using expected utilities again. It is an art not meant for you. Stick to intuitive feelings henceforth.
There are, I think, people whose minds readily look for and find even the slightly-less-than-totally-obvious considerations of expected utility, what some might call "second-order" considerations. Ask them to rob a bank and give the money to the poor, and they'll think spontaneously and unprompted about insurance costs of banking and the chance of getting caught and reputational repercussions and low-trust societies and what if everybody else did that when they thought it was a good cause; and all of these considerations will be obviously-to-them consequences under consequentialism.
These people are well-suited to being 'consequentialists' or 'utilitarians', because their mind naturally sees all the consequences and utilities, including those considerations that others might be tempted to call by names like "second-order" or "categorical" and so on.
If you ask them why consequentialism doesn't say to rob banks, they reply, "Because that actually realistically in real life would not have good consequences. Whatever it is you're about to tell me as a supposedly non-consequentialist reason why we all mustn't do that, seems to you like a strong argument, exactly because you recognize implicitly that people robbing banks would not actually lead to happy formerly-poor people and everybody living cheerfully ever after."
Others, if you suggest to them that they should rob a bank and give the money to the poor, will be able to see the helped poor as a "consequence" and a "utility", but they will not spontaneously and unprompted see all those other considerations in the formal form of "consequences" and "utilities".
If you just asked them informally whether it was a good or bad idea, they might ask "What if everyone did that?" or "Isn't it good that we can live in a society where people can store and transmit money?" or "How would it make effective altruism look, if people went around doing that in the name of effective altruism?" But if you ask them about consequences, they don't spontaneously, readily, intuitively classify all these other things as "consequences"; they think that their mind is being steered onto a kind of formal track, a defensible track, a track of stating only things that are very direct or blatant or obvious. They think that the rule of consequentialism is, "If you show me a good consequence, I have to do that thing."
If you present them with bad things that happen if people rob banks, they don't see those as also being 'consequences'. They see them as arguments against consequentialism; since, after all, consequentialism says to rob banks, which obviously leads to bad stuff, and so bad things would end up happening if people were consequentialists. They do not do a double-take and say "What?" That consequentialism leads people to do bad things with bad outcomes is just a reasonable conclusion, so far as they can tell.
People like this should not be 'consequentialists' or 'utilitarians' as they understand those terms. They should back off from this form of reasoning that their mind is not naturally well-suited for processing in a native format, and stick to intuitively informally asking themselves what's good or bad behavior, without any special focus on what they think are 'outcomes'.
If they try to be consequentialists, they'll end up as Hollywood villains describing some grand scheme that violates a lot of ethics and deontology but sure will end up having grandiose benefits, yup, even while everybody in the audience knows perfectly well that it won't work. You can only safely be a consequentialist if you're genre-savvy about that class of arguments - if you're not the blind villain on screen, but the person in the audience watching who sees why that won't work.
Q4: I know EAs shouldn't rob banks, so this obviously isn't directed at me, right?
A: The people of whom I speak will look for and find the reasons not to do it, even if they're in a social environment that doesn't have strong established injunctions against bank-robbing specifically exactly. They'll figure it out even if you present them with a new problem isomorphic to bank-robbing but with the details changed.
Which is basically what you just did, in my opinion.
Q4: But from the standpoint of cold-blooded calculation -
A: Calculations are not cold-blooded. What blood we have in us, warm or cold, is something we can learn to see more clearly with the light of calculation.
If you think calculations are cold-blooded, that they only shed light on cold things or make them cold, then you shouldn't do them. Stay by the warmth in a mental format where warmth goes on making sense to you.
Q4: Yes yes fine fine but what's the actual downside from an expected-utility standpoint?
A: If good people were liars, that would render the words of good people meaningless as information-theoretic signals, and destroy the ability for good people to coordinate with others or among themselves.
If the world can be saved, it will be saved by people who didn't lie to themselves, and went on living inside reality until some unexpected hope appeared there.
If those people went around lying to others and paternalistically deceiving them - well, mostly, I don't think they'll have really been the types to live inside reality themselves. But even imagining the contrary, good luck suddenly unwinding all those deceptions and getting other people to live inside reality with you, to coordinate on whatever suddenly needs to be done when hope appears, after you drove them outside reality before that point. Why should they believe anything you say?
Q4: But wouldn't it be more clever to -
A: Stop. Just stop. This is why I advised you to reframe your emotional stance as dying with dignity.
Maybe there'd be an argument about whether or not to violate your ethics if the world was actually going to be saved at the end. But why break your deontology if it's not even going to save the world? Even if you have a price, should you be that cheap?
Q4: But we could maybe save the world by lying to everyone about how much hope there was, to gain resources, until -
A: You're not getting it. Why violate your deontology if it's not going to really actually save the world in real life, as opposed to a pretend theoretical thought experiment where your actions have only beneficial consequences and none of the obvious second-order detriments?
It's relatively safe to be around an Eliezer Yudkowsky while the world is ending, because he's not going to do anything extreme and unethical unless it would really actually save the world in real life, and there are no extreme unethical actions that would really actually save the world the way these things play out in real life, and he knows that. He knows that the next stupid sacrifice-of-ethics proposed won't work to save the world either, actually in real life. He is a 'pessimist' - that is, a realist, a Bayesian who doesn't update in a predictable direction, a genre-savvy person who knows what the viewer would say if there were a villain on screen making that argument for violating ethics. He will not, like a Hollywood villain onscreen, be deluded into thinking that some clever-sounding deontology-violation is bound to work out great, when everybody in the audience watching knows perfectly well that it won't.
My ethics aren't for sale at the price point of failure. So if it looks like everything is going to fail, I'm a relatively safe person to be around.
I'm a genre-savvy person about this genre of arguments and a Bayesian who doesn't update in a predictable direction. So if you ask, "But Eliezer, what happens when the end of the world is approaching, and in desperation you cling to whatever harebrained scheme has Goodharted past your filters and presented you with a false shred of hope; what then will you do?" - I answer, "Die with dignity." Where "dignity" in this case means knowing perfectly well that's what would happen to some less genre-savvy person; and my choosing to do something else which is not that. But "dignity" yields the same correct answer and faster.
Q5: "Relatively" safe?
A: It'd be disingenuous to pretend that it wouldn't be even safer to hang around somebody who had no clue what was coming, didn't know any mental motions for taking a worldview seriously, thought it was somebody else's problem to ever do anything, and would just cheerfully party with you until the end.
Within the class of people who know the world is ending and consider it to be their job to do something about that, Eliezer Yudkowsky is a relatively safe person to be standing next to. At least, before you both die anyways, as is the whole problem there.
Q5: Some of your self-proclaimed fans don't strike me as relatively safe people to be around, in that scenario?
A: I failed to teach them whatever it is I know. Had I known then what I know now, I would have warned them not to try.
If you insist on putting it into terms of fandom, though, feel free to notice that Eliezer Yudkowsky is much closer to being a typical liberaltarian science-fiction fan, as was his own culture that actually birthed him, than he is a typical member of any subculture that might have grown up later. Liberaltarian science-fiction fans do not usually throw away all their ethics at the first sign of trouble. They grew up reading books where those people were the villains.
Please don't take this as a promise from me to play nice, as you define niceness; the world is ending, and also people have varying definitions of what is nice. But I presently mostly expect to end up playing nice, because there won't be any options worth playing otherwise.
It is a matter of some concern to me that all this seems to be an alien logic to some strange people who - this fact is still hard for me to grasp on an emotional level - don't spontaneously generate all of this reasoning internally, as soon as confronted with the prompt. Alas.
Q5: Then isn't it unwise to speak plainly of these matters, when fools may be driven to desperation by them? What if people believe you about the hopeless situation, but refuse to accept that conducting themselves with dignity is the appropriate response?
A: I feel like I've now tried to live my life that way for a while, by the dictum of not panicking people; and, like everything else I've tried, that hasn't particularly worked? There are no plans left to avoid disrupting, now, with other people's hypothetical panic.
I think we die with slightly more dignity - come closer to surviving, as we die - if we are allowed to talk about these matters plainly. Even given that people may then do unhelpful things, after being driven mad by overhearing sane conversations. I think we die with more dignity that way, than if we go down silent and frozen and never talking about our impending death for fear of being overheard by people less sane than ourselves.
I think that in the last surviving possible worlds with any significant shred of subjective probability, people survived in part because they talked about it; even if that meant other people, the story's antagonists, might possibly hypothetically panic.
But still, one should present the story-antagonists with an easy line of retreat. So -
Q6: Hey, this was posted on April 1st. All of this is just an April Fool's joke, right?
A: Why, of course! Or rather, it's a preview of what might be needful to say later, if matters really do get that desperate. You don't want to drop that on people suddenly and with no warning.
Q6: Oh. Really? That would be such a relief!
A: Only you can decide whether to live in one mental world or the other.
Q6: Wait, now I'm confused. How do I decide which mental world to live in?
A: By figuring out what is true, and by allowing no other considerations than that to enter; that's dignity.
Q6: But that doesn't directly answer the question of which world I'm supposed to mentally live in! Can't somebody just tell me that?
A: Well, conditional on you wanting somebody to tell you that, I'd remind you that many EAs hold that it is very epistemically unvirtuous to just believe what one person tells you, and not weight their opinion and mix it with the weighted opinions of others?
Lots of very serious people will tell you that AGI is thirty years away, and that's plenty of time to turn things around, and nobody really knows anything about this subject matter anyways, and there's all kinds of plans for alignment that haven't been solidly refuted so far as they can tell.
I expect the sort of people who are very moved by that argument, to be happier, more productive, and less disruptive, living mentally in that world.
Q6: Thanks for answering my question! But aren't I supposed to assign some small probability to your worldview being correct?
A: Conditional on you being the sort of person who thinks you're obligated to do that and that's the reason you should do it, I'd frankly rather you didn't. Or rather, seal up that small probability in a safe corner of your mind which only tells you to stay out of the way of those gloomy people, and not get in the way of any hopeless plans they seem to have.
Q6: Got it. Thanks again!
A: You're welcome! Goodbye and have fun!