I see strong hints, from how AI has been developing over the past couple of years, that there's plenty of room for increasing the predictive abilities of AI, without needing much increase in the AI's steering abilities.
What are these hints? I ask because I don't understand how this would happen. All we need to do to add steering to a predictive general model is wrap it in an agent framework, e.g. a loop of "predict what will make X happen best, then do that thing" (sketched below) - and the failures we see today in agent frameworks are predictive failures, not steering failures.
Unless the contention is that the AI systems will be great at predicting everything except how humans will react and how to get them to do what the AI wants, which very much doesn't seem like the path we're on. Or if the idea is to build narrow AI to predict specific domains, not general AI? (Which would be conceding the entire point IABIED is arguing.)
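To be concrete about the "agent framework" point above, here's a minimal sketch, assuming hypothetical placeholder names throughout (predictive_model, observe, execute, candidate_actions are not any real API). The point is only that the steering layer is a thin loop around the predictor, so the predictor's quality is what's doing the work.

```python
# Minimal sketch: wrapping a purely predictive model in a
# "predict what will make X happen best, then do that thing" loop.
# All names here are hypothetical placeholders, not a real API.

def agent_loop(predictive_model, goal, observe, execute, candidate_actions, steps=10):
    """Turn a predictor into a steerer: repeatedly pick the action the model
    predicts will best advance `goal`, then execute it."""
    for _ in range(steps):
        state = observe()
        # Score each candidate action by the model's prediction of how well
        # it advances the goal from the current state.
        scored = [
            (predictive_model(state=state, action=action, goal=goal), action)
            for action in candidate_actions(state)
        ]
        _, best_action = max(scored, key=lambda pair: pair[0])
        # The entire "steering" capability lives in this one call;
        # everything above it is prediction.
        execute(best_action)
```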
Yes, I'm also very unsatisfied with most answers - though that includes my own.
My view of consciousness is that it's not obvious what causes it, it's not obvious we can know whether LLM-based systems have it, and it's unclear that it arises naturally in the majority of possible superintelligences. But even if I'm wrong, my view of population ethics leans towards saying it is bad to create things that displace current beings over our objections, even if those things are very happy. (And I think most of these futures end up with involuntary displacement.) In addition, I'm not fully anthropocentric, but I probably do care less about the happiness of beings that are extremely remote from me in mind-space - and the longtermists seem to have bitten a few too many bullets on this front for my taste.
This is what I'm talking about when I say people don't take counterfactuals seriously - they seem to assume nothing could really be different, that technology is predetermined. I didn't even suggest that without early scaling, NLP would have hit an AI winter. For example, if today's MS and FB had led the AI revolution, with the goals and incentives they had, do you really think LLMs would have been their focus?
We can also see what happens to other accessible technologies when there isn't excitement and market pressure. For example, solar power was abandoned for a couple decades in the 1970s and 1980s. Nuclear was as well.
And even without presuming focus stays away from LLMs for much longer, in our actual world we see a tremendous difference between firms that started out safety-pilled and those that did not. So I think you're ignoring how much founder effects matter, and assuming technologists would pay attention to risk by default, or would embrace conceptual models that relied on a decade of theory and debate which, by assumption, wouldn't have existed.
Of course, any counterfactual has tons of different assumptions.
Yes, but engineering challenges get solved without philosophical justification all of the time. And this is a key point being made by the entire counterfactual - it's only because people took AGI seriously in designing LLMs that they frame the issues as alignment. To respond in more depth to the specific points:
In your posited case, CoT would certainly have been deployed as a clever trick that scales - but that doesn't mean the models they think of as stochastic parrots start being treated as proto-AGIs with goals. They aren't looking for true generalization, so any mistakes that need to be patched look like error rates to be driven down empirically, or places where they need a few more unit tests and ways to catch misbehavior - not reasons to design increasingly powerful models for safety!
And before you dismiss this as implausible blindness: there are smart people who argue this way even today, despite being exposed to the arguments about increasing generality for years. So it's certainly not obvious that they'd take seriously the people claiming that this ELIZA v12.0, released in 2025, is truly reasoning.
It seems like you're arguing against something different than the point you brought up. You're saying that slow growth on multiple systems means we can get one of them right by course-correcting. But that's a really different argument - and unless there's effectively no alignment tax, it seems wrong. That is, the systems that are aligned would need to outcompete the others after they are smarter than each individual human and beyond our ability to meaningfully correct. (Or we'd need enough oversight to notice much earlier - which is not going to happen.)
But the claim isn't, or shouldn't be, that this would be a short-term reduction; it's that it cuts off the primary mechanism for growth that supports a large part of the economy's valuation - leading not just to a loss in value for the things directly dependent on AI, but also to slower growth generally. And reduced growth is what makes the world continue to suck, so that most of humanity can't live first-world lives. Which means that slowing growth globally by a couple of percentage points is a very high price to pay.
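To give a rough sense of why a couple of percentage points matters so much, here's a back-of-the-envelope compounding calculation. The figures (3% baseline growth, 1% after the slowdown, a 50-year horizon) are purely illustrative assumptions of mine, not numbers from the discussion above.

```python
# Illustrative compounding only - assumed figures, not a forecast:
# 3% baseline annual global growth vs. 1% after a two-point slowdown.
baseline_growth, slowed_growth, years = 1.03, 1.01, 50

ratio = (baseline_growth / slowed_growth) ** years
print(f"After {years} years, the slowed trajectory ends up roughly "
      f"{ratio:.1f}x smaller than the baseline one.")  # about 2.7x
```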
I think that it's plausibly worth it - we can agree that there's a huge amount of value enabled by autonomous but untrustworthy AI systems that are likely to exist if we let AI continue to grow, and that Sam was right originally that there would be some great [i.e. incredibly profitable] companies before we all die. And despite that, we shouldn't build it - as the title says.
But the way you are reading it seems to mean her "strawmann[ed]" point is irrelevant to the claim she made! That is, if we can get 50% of the way to aligned for current models, and we keep doing research and finding partial solutions that get us 50% of the way to aligned for future models at each stage, and at each stage those solutions are both insufficient for full alignment and don't solve the next set of problems, we still fail. Specifically, not only do we fail, we fail in a way that means "we shouldn’t expect the techniques that worked on a relatively tiny model from 2023 to scale to more capable, autonomous future systems." Which is the thing she then disagrees with in the remainder of the paragraph you're trying to defend.
I think the primary reason the foom hypothesis seems load-bearing for AI doom is that without a rapid, local AI takeoff, we simply won't get "only one chance to correctly align the first AGI".
As the review makes very clear, the argument isn't about AGI, it's about ASI. And yes, they argue that you would in fact only get one chance to align the system that takes over. As the review discusses at length:
I do think we benefit from having a long, slow period of adaptation and exposure to not-yet-extremely-dangerous AI. As long as we aren’t lulled into a false sense of security, it seems very plausible that insights from studying these systems will help improve our skill at alignment. I think ideally this would mean going extremely slowly and carefully, but various readers may be less cautious/paranoid/afraid than me, and think that it’s worth some risk of killing every child on Earth (and everyone else) to get progress faster or to avoid the costs of getting everyone to go slow. But regardless of how fast things proceed, I think it’s clearly good to study what we have access to (as long as that studying doesn’t also make things faster or make people falsely confident).
But none of this involves having “more than one shot at the goal” and it definitely doesn’t imply the goal will be easy to hit. It means we’ll have some opportunity to learn from failures on related goals that are likely easier.
The “It” in “If Anyone Builds It” is a misaligned superintelligence capable of taking over the world. If you miss the goal and accidentally build “it” instead of an aligned superintelligence, it will take over the world. If you build a weaker AGI that tries to take over the world and fails, that might give you some useful information, but it does not mean that you now have real experience working with AIs that are strong enough to take over the world.
We worked on parts of this several years ago, and I will agree it's deeply uncertain and difficult to quantify. I'm also unsure that this direction will be fruitful for an individual getting started.
Here are two very different relevant projects I was involved with:
This argument seems like it is begging the question.
Yes, as long as we can solve the general problem of controlling intelligences - in the form of getting soldiers to follow what we mean by not staging a coup, which would necessarily include knowing when to disobey illegal orders and when to listen to the courts instead of the commander in chief - then we can solve AI safety by getting AI to be aligned in the same way. But that just means we have solved alignment in the general case, doesn't it?