I'm an admin of LessWrong. Here are a few things about me.
Thanks for the comment. (Upvoted.)
a. I expect the relationship between my value-function and the likely configuration states of the universe is slightly more complicated than literally zero correlation, but most configuration states do not support life and have us all dead, so in one sense a claim that something very big and bad will happen in the future is far more likely on priors. One might counter that we live in a highly optimized society where things being functional and maintained is an equilibrium state, and that it's unlikely for systems to get far enough out of whack for bad things to happen. But taking this at face value is extremely naive; tons of bad things happen to people all the time. I'm not sure whether to focus on 'big' or 'bad', but either way, the human sense of these is not what the physical universe is made of or cares about, so this looks like an unproductive heuristic to me.
b. On the other hand, I suspect the bigger claims are the ones more worth investing time in to find out whether they're true! All of this seems too coarse-grained to produce a strong baseline belief about big claims versus small claims.
c. I don't get this one. I'm pretty sure I said that if you believe you're in a highly adversarial epistemic environment, then you should become more distrustful of evidence about memetically fit claims.
I don't know what true points you think Leo is making about "the reference class", nor which points you think I'm inaccurately pushing back on that are true about "the reference class" but not true of me. Going with the standard rationalist advice, I encourage everyone to taboo "reference class" and replace it with a specific heuristic. It seems to me that "reference class" pretends these groupings are better-defined than they are.
Your points about Occam's razor have got nothing to do with this subject[1]. The heuristic "be more skeptical of claims that would have big implications if true" makes sense only when you suspect a claim may have been adversarially optimized for memetic fitness; it is not otherwise true that "a claim that something really bad is going to happen is fundamentally less likely to be true than other claims".
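To make the structure of that point explicit, here is a rough Bayesian sketch of it (my own framing, with placeholder terms, not anything from Leo's comment):

$$\frac{P(\text{true}\mid\text{the claim reaches me})}{P(\text{false}\mid\text{the claim reaches me})} \;=\; \frac{P(\text{it reaches me}\mid\text{true})}{P(\text{it reaches me}\mid\text{false})}\times\frac{P(\text{true})}{P(\text{false})}$$

If scary or action-demanding claims spread well regardless of their truth, the likelihood ratio gets pushed toward 1, so hearing such a claim should move you less; that is the adversarially-optimized case where the heuristic earns its keep. Nothing in this machinery discounts a claim merely for being "big", which is why Occam's razor isn't doing the work here.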
I'm having a little trouble connecting your various points back to your opening paragraph, which is the primary thing that I am trying to push back on.[2]
> I claim it is a lot more reasonable to use the reference class of "people claiming the end of the world" than "more powerful intelligences emerging and competing with less intelligent beings" when thinking about AI x-risk. further, we should not try to convince people to adopt the latter reference class - this sets off alarm bells, and rightly so (as I will argue in short order) - but rather to bite the bullet, start from the former reference class, and provide arguments and evidence for why this case is different from all the other cases.
To restate the message I'm reading here: "Give up on having a conversation where you evaluate the evidence alongside your interlocutors. Instead frame yourself as trying to convince them of something, and assume that they are correct to treat your communications as though you are adversarially optimizing for them believing whatever you want them to believe." This assumption seems to give up a lot of my ability to communicate with people (almost all of it), and I refuse to do that simply because some amount of communication in the world is adversarially optimized, and I'm definitely not going to do it because of a spurious argument that Occam's razor implies that "claims about things being really bad or claims that imply you need to take action are fundamentally less likely to be true".
You are often in an environment where people are trying to use language to describe reality, and in that situation the primary thing to evaluate is not the "bigness" of a claim but the evidence for and against it. I recommend acting in ways that increase the size and prevalence of that environment, more so than "acting as though it's correct to expect maximum adversarial optimization in communications".
(Meta: The only literal quote of Leo's in this comment is the big one in the quote block; my use of "" elsewhere is to hold a sentence as an object, not to quote things Leo wrote.)
I agree that the more strongly a claim implies that you should take action, the more you should consider that it is being adversarially optimized to get you to take action. For what it's worth, I think that heuristic applies more to claims that you personally should take action. Most people can take little direct action to prevent the end of the world from AI; the heuristic applies more naturally to claims that you need to pay a fine (which are often scams/spam). But mostly, when people make claims to me that imply action, the claims are honestly meant and I do the action. That is the vast majority of my experience.
Aside to Leo: Rather than reply point-by-point to each of the paragraphs in the second comment, I will try restating and responding to the core message I got from the opening paragraph of the first comment. I'm doing this because the paragraphs in the second comment seemed somewhat distantly related / I couldn't tell whether the points were actually cruxy. They were responding to many different things, and I hope restating the core thing will better respond to your core point. However, I don't mean to avoid key arguments; if you think I have done so, feel free to tell me one or two paragraphs you would especially like me to engage with and I will do so in any future reply.
This all seems wrongheaded to me.
I endeavor to look at how things work and describe them accurately. Similarly to how I try to describe how a piece of code works, or how to build a shed, I will try to accurately describe the consequences of large machine learning runs, which can include human extinction.
> I personally think AGI will probably kill everyone. but this is a big claim and should be treated as such.
This isn't how I think about things. Reality is what exists, and if a claim accurately describes reality, then I should not want to hold it to higher standards than claims that do not describe reality. I don't think it's a good epistemology to rank claims by "bigness" and then say that the big ones are less likely and need more evidence. On the contrary, I think it's worth investing more in finding out if they're right, and generally worth bringing them up to consideration with less evidence than for "small" claims.
> on the other hand, everyone has personally experienced a dozen different doomsday predictions. whether that's your local church or faraway cult warning about Armageddon, or Y2K, or global financial collapse in 2008, or the maximally alarmist climate people, or nuclear winter, or peak oil. for basically all of them, the right action empirically in retrospect was to not think too much about it.
I don't have the experiences you're describing. I don't go to churches, I don't visit cults, I was 3 years old in the year 2000, and I was 11 during the '08 financial crash; having read about it as an adult, I don't recall extinction being a topic of discussion. I think I have heard of climate people saying that via alarmist news headlines, but I have not had anyone personally try to convince me of it or even say that they believe it. I have heard it discussed for nuclear winter, yes; I think nukes are quite scary and it was reasonable to consider, and I did not dismiss it out of hand and wouldn't use that heuristic. I don't know what the oil thing is.
In other words, I don't recall anyone seriously trying to convince me that the world was ending except in cases where they had good reason to believe it. In my life, when people try to warn me about big things, especially if they've given them serious thought, I've usually found it worthwhile to consider what they say. (I like to think I am good at steering clear of scammers and cranks, so that I can trust the people in my life when they tell me things.)
The sense I get from this post is that you're assuming everyone else in the world is constantly being assaulted with claims meant to scare and control them, rather than with attempts to describe the world accurately. I agree there are forces doing that, but I think this post gives up all too quickly on there being other forces in the world, ones that aren't doing that and that people can recognize and trust.
Oops, I didn't send my reply comment. I've just posted it, yes, that information did change my mind about this case.
Thank you for the details! I change my mind about the locus of responsibility, and don’t think Wascher seems as directly culpable as before. I don’t update my heuristic, though; I still think there should be legal consequences for decisions that cause human deaths.
My new guess is that something more like “the airport” should be held accountable and fined some substantial amount of money for the deaths, with the money going to the victims’ families.
Having looked into it a little more, I see they were sued substantially over these deaths, so it sounds like that broadly happened.
I liked reading these examples; I wanted to say that it initially seemed to me a mistake not to punish Wascher, whose mistake led to the deaths of 35 people.
I have a weak heuristic that, when you want to enforce rules, costs and benefits aren’t fungible. You do want to reward Wascher’s honesty, but I still think that if you accidentally cause 35 people to die, this is evidence that you are bad at your job, and separately it is very important to disincentivize that behavior for others who might be more likely to make that mistake recklessly. There must be a reliable punishment for that kind of terrible mistake.
So you must fire her and bar her from the profession, or fine her half a year’s wages, or something. If you also wish to help her, you should invest in supporting her getting into a new line of work with which she can support her family, or something like that. You can even make her net better off for having helped uncover a critical mistake and save future lives. But people should know that there was a cost, and that there will be one if they make such a mistake in future.
Or at least this is what my weak heuristic says.
I don't think that propaganda must necessarily involve lying. By "propaganda," I mean aggressively spreading information or communication because it is politically convenient / useful for you, regardless of its truth (though propaganda is sometimes untrue, of course).
When a government puts up posters saying "Your country needs YOU" this is intended to evoke a sense of duty and a sense of glory to be had; sometimes this sense of duty is appropriate, but sometimes your country wants you to participate in terrible wars for bad reasons. The government is saying it loudly because for them it's convenient for you to think that way, and that’s not particularly correlated with the war being righteous or with the people who decided to make such posters even having thought much about that question. They’re saying it to win a war, not to inform their populace, and that’s why it’s propaganda.
Returning to the Amodei blogpost: I’ll happily concede that you don’t always need to give reasons for your beliefs when expressing them—context matters. But in every context—tweets, podcasts, ads, or official blogposts—there’s a difference between sharing something to inform and sharing it to push a party line.
I claim that many people have asked why Anthropic believes it’s ethical for them to speed up AI progress (by contributing to the competitive race), and Anthropic have rarely if ever given a justification for it. Senior staff keep indicating that not building AGI is not on the table, yet they rarely if ever show up to engage with criticism or to give justifications for this in public discourse. This is a key reason why it reads to me as propaganda: it's an incredibly convenient belief for them, and they state it as though any other position is untenable, without argument and without acknowledging or engaging with the position that it is ethically wrong to speed up the development of a technology they believe has a 10-20% chance of causing human extinction (or a similarly bad outcome).
I wish that they would just come out, lay out the considerations for and against building a frontier lab that is competing to reach the finish line first, acknowledge other perspectives and counterarguments, and explain why they made the decision they have made. This would do wonders for the ability to trust them.
(Relatedly, I don't believe the Machines of Loving Grace essay is defending the position that speeding up AI is good; the piece in fact explicitly says it will not assess the risks of AI. Here are my comments at the time on that essay also being propaganda.)
I'm saying that he is presenting it, without argument or evidence, as something he believes from his place of expertise and private knowledge, because it is something that is exceedingly morally and financially beneficial to him (he gets to make massive money and not be a moral monster), rather than because he has any evidence for it.
It is similar to a President who has just initiated a war saying “If there’s one thing I’ve learned in my life, it’s that war is inevitable, and there’s just a question of who wins and how to make sure it’s over quickly”, in a way that implies they should be absolved of responsibility for initiating it.
Edit: Just as Casey B was writing his reply below, I edited out an example of Mark Zuckerberg saying something like "If there's one thing I've learned in my career, it's that social media is good, and the only choice is which sort of good social media we have". Leaving this note so that people aren't confused by his reply.
> Can you expand on this? How can you tell the difference, and does it make much of a difference in the end (e.g., if most people get corrupted by power regardless of initial intentions)?
But I don't believe most people get corrupted by power regardless of initial intentions? I don't think Francis Bacon was corrupted by power, I don't think James Watt was corrupted by power, I don't think Stanislav Petrov was corrupted by power, and all of these people had far greater influence over the world than most people who are "corrupted by power".
I'm hearing that you'd be interested in me saying more about the difference between what it looks like to be motivated by responsibility versus by power-seeking. I'll say some words, and you can see if they help.
> As a background model, I think if someone wants to take responsibility for some part of the world going well, by-default this does not look like "situating themselves in the center of legible power".
And yet, Eliezer, the writer of "heroic responsibility" is also the original proponent of "build a Friendly AI to take over the world and make it safe".
Building an AGI doesn't seem to me like a very legible mechanism of power, or at least it didn't in the era Eliezer pursued it (where it wasn't also credibly "a path to making billions of dollars and getting incredible prestige"). The word 'legible' was doing a lot of work in the sentence I wrote.
Another framing I sometimes look through (H/T Habryka) is constrained vs unconstrained power. Having a billion dollars is unconstrained power, because you can use it to do a lot of different things – buy loads of different companies or resources. Being an engineer overseeing missile-defense systems in the USSR is very constrained: you have an extremely well-specified set of things you can control. This changes the adversarial forces on you, because in the former case a lot of people stand to gain a lot of different things they want if they can get leverage over you, and they have to be concerned about a lot of different ways you could be playing them. So the pressures toward insanity are higher. Paths where your influence over very specific things routes through very constrained powers are less insanity-inducing, I think, and most routes that look like "build a novel invention in a way that isn't getting you lots of money/status along the way" are less insanity-inducing too; I rarely find such a person to have become as insane as some of the tech-company CEOs have. I also think people motivated by taking responsibility for fixing a particular problem in the world are more likely to take constrained power, because... they aren't particularly motivated by all the other power they might be able to get.
I suspect I haven't yet addressed your cruxes about whether this idea of heroic responsibility is or isn't predictably misused. I'm willing to try again if you wish, or you can try pointing again at what you'd guess I'm missing.
You're writing lots of things here but as far as I can tell you aren't defending your opening statement, which I believe is mistaken.
Firstly, it's just not more reasonable. When you ask yourself "Is a machine learning run going to lead to human extinction?" you should not first ask "How trustworthy are people who have historically claimed the world is ending?"; you should of course primarily bring your attention to questions about what sort of machine is being built, what sort of thinking capacities it has, what sorts of actions it can take in the world, what sorts of optimization it runs, how it would behave around humans if it were more powerful than them, and so on. We can go back to discussing epistemology 101 if need be (e.g. "Hug the Query!").
Secondly, insofar as someone believes you are a huckster or a crackpot, you should leave the conversation; communication here has broken down and you should look for other opportunities. However, insofar as someone is only tentatively evaluating this as one of many possible hypotheses about you, then you should open yourself up to auditing/questioning by them about why you believe what you believe, your past history, and your memetic influences. Being frank is the only way through this! But you shouldn't say to them "Actually, I think you should treat me like a huckster/scammer/serf-of-a-corrupt-empire." This feels analogous to a man on a date with a woman saying "Actually, I think you should strongly privilege the hypothesis that I am willing to rape you, and now I'll try to provide evidence for you that this is not true." It would genuinely be a bad sign about a man that he thinks that about himself, and he has also moved the situation into a much more adversarial frame.
I suspect you could write some narrower quick-take, such as "Here is some communication advice I find helpful when talking with friends and colleagues about how AI can lead to human extinction", but in generalizing it all the way to dictates about basic epistemology you are making basic mistakes and getting it wrong.
Please either (1) defend and/or clarify the original statement, or (2) concede that it was mistaken, rather than writing more paragraphs about memetic immune systems.