Specifically, against the following view, described in a comment:
There seems to be a lack of emphasis in this market on outcomes where alignment is not solved, yet humanity turns out fine anyway. Based on an Outside View perspective (where we ignore any specific arguments about AI and just treat it like any other technology with a lot of hype), wouldn't one expect this to be the default outcome?
Take the following general heuristics:
If a problem is hard, it probably won't be solved on the first try.
If a technology gets a lot of hype, people will think that it's the most important thing in the world even if it isn't. At most, it will be important on the same level that previous major technological advancements were.
People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.
If people think that something will cause the apocalypse or bring about a utopian society, historically speaking they are likely to be wrong.
This, if applied to AGI, leads to the following conclusions:
Nobody manages to completely solve alignment.
This isn't a big deal, as AGI turns out to be disappointingly not that powerful anyway (or at most "creation of the internet"-level influential, but not "disassemble the planet's atoms"-level influential).
I would expect the average person outside of AI circles to default to this kind of assumption.
Ideally, details are provided for why the outside view presented here is less favored on the evidence than the idea that AGI or PASTA will be a big deal, as popularized by Holden Karnofsky. Also, ideally you can estimate how much impact AI will have, say, this century.
Motivation: I'm asking this question because one thing I notice is that there's an unstated assumption that AGI/AI will be a huge deal, and how big a deal it turns out to be would change virtually everything about how LW works, depending on the answer. I'd really like to know why LWers hold that AGI/ASI will be a big deal.
This is confusing to me.
I've read lots of posts on here about why AGI/AI would be a huge deal, and the ones I'm remembering seemed to do a good job at unpacking their assumptions (or at least a better job than I would do by default). It seems to me like those assumptions have been stated and explored at great length, and I'm wondering how we've ended up looking at the same site and getting such different impressions.
(Holden's posts seem pretty good at laying out a bunch of things and explicitly tagging the assumptions as assumptions, as an example.)
Although that... doesn't feel fair on my part?
I've spent some time at the AI Risk for Computer Scientists workshops, and I might have things I learned from those and things I've learned from LessWrong mixed up in my brain. Or maybe they prepared me to understand and engage with the LW content in ways that I otherwise wouldn't have stumbled onto?
There are a lot of words on this site - and some really long posts. I've been browsing them pretty regularly for 4+ years now, and that doesn't seem like a burden I'd want to place on someone before listening to them. I'm sure I'm missing stuff that the longer-term folks have soaked into their bones.
Maybe there's something like a "y'all should put more effort into collation and summary of your points if you want people to engage" point that falls out of this? Or something about "have y'all created an in-group, and to what extent is that intentional/helpful-in-cases vs accidental?"
Yes, this might be useful.