People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.
Intuitively, I would imagine that this should be weighted by something like population. So if we plot the distribution of humans over time, doomers are saying that we are close to the median of the distribution, while optimists are saying that we are in the first 0.00...01% of the distribution and that there is a huge world out there in the future.
Meta: I might be misreading the question, but my impression is that it lumps "outside views about technology progress and hype cycles" together with "outside views about things people get doom-y about".
If it is about "people being doom-y" about things, then I think we are playing more in the realm of things where getting it right on the first try or first few tries matters.
Expected values seem relevant here. If people think there is a 1% chance of a really bad outcome and try to steer against it, then even when they are correctly calibrated, for every 100 times this comes up you are going to see 99 people pointing at things that didn't turn out to be a problem. And if that 1 other person actually stopped something bad from happening, we're much less likely to remember it, because "a bad thing failed to happen because it was stopped a few causal steps early" is hard to notice.
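The bookkeeping here can be made explicit with a toy calculation. The numbers are the illustrative ones from the comment (a 1% chance of a real catastrophe per warning), not data:

```python
# Toy illustration: even perfectly calibrated warners about a 1%-probability
# catastrophe will look wrong 99 times out of 100 in hindsight.
warnings = 100          # hypothetical number of doom-y warnings raised
p_real_percent = 1      # assumed: 1% of warnings concern a real catastrophe

expected_real = warnings * p_real_percent // 100   # warnings about real risks
expected_false_alarms = warnings - expected_real   # warnings that look like panic later

print(expected_real)          # 1
print(expected_false_alarms)  # 99
```

The asymmetry in the comment falls out directly: the 99 "false alarms" are visible, while the 1 averted catastrophe leaves no obvious trace.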
There also seems to be a thing there where the doom-y folks are part of the dynamic equilibrium. My mind goes to nuclear proliferation and climate change.
Folks got really worried about us all dying in a global nuclear war, and that hasn't happened yet, and so we might be tempted to conclude that the people who were worried were just panicking and were wrong. It seems likely to me that some part of the reason we didn't all die in a global nuclear war was that people were worried enough about it to collectively push over some unknowable-in-advance line, which led to enough coordination to at least stop things going terminally bad on short notice. Even then, we've still had wobbles.
If the general response to the doom-y folks back then had been "Nah, it'll be fine", delivered with enough skill / volume / force to cause people to stop waving their warning flags and generally stop trying to do things, my guess is that we might have had much worse outcomes.
I am lumping them together because, if you believe AGI isn't that impactful, then much of the argumentation around AI and alignment doesn't matter at all. Obviously, there is the bias argument you responded to around doom, but there is another prong to that argument.
Motivation: I'm asking this question because one thing I notice is an unstated assumption that AGI/AI will be a huge deal, and depending on the answer, virtually everything about how LW works would change. I'd really like to know why LWers hold that AGI/ASI will be a big deal.
This is confusing to me.
I've read lots of posts on here about why AGI/AI would be a huge deal, and the ones I'm remembering seemed to do a good job at unpacking their assumptions (or at least a better job than I would do by default). It seems to me like those assumptions have been stated and explored at great length, and I'm wondering how we've ended up looking at the same site and getting such different impressions.
(Holden's posts seem pretty good at laying out a bunch of things and explicitly tagging the assumptions as assumptions, as an example.)
Although that... doesn't feel fair on my part?
I've spent some time at the AI Risk for Computer Scientists workshops, and I might have things I learned from those and things I've learned from LessWrong mixed up in my brain. Or maybe they prepared me to understand and engage with the LW content in ways that I otherwise wouldn't have stumbled onto?
There are a lot of words on this site - and some really long posts. I've been browsing them pretty regularly for 4+ years now, and that doesn't seem like a burden I'd want to place on someone in order to listen to them. I'm sure I'm missing stuff that the longer-term folks have soaked into their bones.
Maybe there's something like a "y'all should put more effort into collating and summarizing your points if you want people to engage" point that falls out of this? Or something about "have y'all created an in-group, and to what extent is that intentional/helpful-in-cases vs accidental?"
Maybe there's something like a "y'all should put more effort into collating and summarizing your points if you want people to engage" point that falls out of this?
Yes, this might be useful.
Specifically, against the following view described by a comment:
Ideally, details are provided for why the outside view presented here is less favored by the evidence than the idea that AGI or PASTA will be a big deal, as popularized by Holden Karnofsky. Also, ideally, you can estimate how much impact AI will have, say, this century.
Motivation: I'm asking this question because one thing I notice is an unstated assumption that AGI/AI will be a huge deal, and depending on the answer, virtually everything about how LW works would change. I'd really like to know why LWers hold that AGI/ASI will be a big deal.