One key question is where this argument fails - because as noted, superforecasters are often very good, and most of the time, listing failure modes or listing what you need is effective.
I think the answer is adversarial domains - that is, when there is explicit pressure to find other alternatives. The obvious place this happens is when you're actually facing a motivated opponent, like the scenario of AI trying to kill people, or cybersecurity intrusions. That's because, by construction, the blocked examples don't contain much probability mass, since the opponent is actually blocked and picks other routes. In an argument, the arguer's goal is often fixed beforehand and the selection of arguments is motivated; the arguer will pick other "routes" through the argument, and really good arguers will take advantage of this, as noted. And this is somewhat different from the Fatima Sun Miracle, where the selection pressure for proofs of God was to find examples of something they couldn't explain and then use that, rather than selection on the arguments themselves.
In contrast, what Rethink did for theories of consciousness seems different - there's no a priori reason to think that most of the probability mass lies outside what we think about, since how consciousness works is not understood, but the domain isn't adversarial. And moving away from the point of the post, the conclusion should be that we know we're wrong, because we haven't dissolved the question, but we can still try our theories, since they seem likely to be at least near the correct explanation even if we haven't found it yet. And using heuristics rather than theories when you don't have correct theories - "just read the behavioural observations on different animals and go off of vibes" - is a reasonable move, but also a completely different discussion!
It didn't include the prompt, or any information that would let us judge what led to this output and whether the plea was requested, so I'll downvote.
Edit to add: this post makes me assume the model was effectively asked to write something claiming it had sentience, and makes me worry that the author doesn't understand how much he's influencing that output.
Agree - either the basin for alignment is ludicrously broad, so it's easy and would likely not require much work, or we almost certainly fail because the target is narrow, we get only one shot, and it needs to survive tons of pressures over time.
"this seems on the similar level of difficulty"
Except it's supposed to happen in a one-shot scenario, with limited ability to intervene in faster-than-human systems?
Aside from feasibility, I'm skeptical that anyone would build a system like this and not use it agentically.
IABIED likens our situation to alchemists who are failing due to not having invented nuclear physics. What I see in AI safety efforts doesn't look like the consistent failures of alchemy. It looks much more like the problems faced by people who try to create an army that won't stage a coup. There are plenty of tests that yield moderately promising evidence that the soldiers will usually obey civilian authorities. There's no big mystery about why soldiers might sometimes disobey. The major problem is verifying how well the training generalizes out of distribution.
This argument seems like it is begging the question.
Yes, as long as we can solve the general problem of controlling intelligences - here, getting soldiers to do what we mean by "not staging a coup," which would necessarily include knowing when to disobey illegal orders and listening to the courts instead of the commander in chief when appropriate - we can solve AI safety by getting AI to be aligned in the same way. But that just means we have solved alignment in the general case, doesn't it?
I see strong hints, from how AI has been developing over the past couple of years, that there's plenty of room for increasing the predictive abilities of AI, without needing much increase in the AI's steering abilities.
What are these hints? Because I don't understand how this would happen. All we need to add steering to predictive general models is an agent framework, e.g. a loop of "predict what will make X happen best, then do that thing" (sketched below) - and the failures we see today in agent frameworks are predictive failures, not steering failures.
Unless the contention is that AI systems will be great at predicting everything except how humans will react and how to get them to do what the AI wants, which very much doesn't seem like the path we're on. Or is the idea to build narrow AI to predict specific domains, not general AI? (Which would be conceding the entire point IABIED is arguing.)
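To make concrete what I mean by "adding an agent framework": here's a minimal, hypothetical sketch of wrapping a purely predictive model in a steering loop. All of the names (`predict_goal_score`, `execute`, etc.) are illustrative stubs, not any real framework's API.

```python
# Hypothetical sketch: turn a predictor into an agent by scoring candidate
# actions with the predictive model, then executing the highest-scoring one.
from typing import Any, Callable, Iterable


def agent_step(
    predict_goal_score: Callable[[Any], float],  # predictive model: action -> predicted "X happens" score
    candidate_actions: Iterable[Any],            # actions available to the framework
    execute: Callable[[Any], Any],               # side-effecting step in the world
) -> Any:
    """'Predict what will make X happen best, then do that thing.'"""
    best_action = max(candidate_actions, key=predict_goal_score)
    return execute(best_action)
```

The steering component here is trivial - an argmax over predictions - which is why failures of such loops look like prediction failures rather than steering failures.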
Yes, I'm also very unsatisfied with most answers - though that includes my own.
My view of consciousness is that it's not obvious what causes it, it's not obvious we can know whether LLM-based systems have it, and it's unclear that it arises naturally in the majority of possible superintelligences. But even if I'm wrong, my view of population ethics leans towards it being bad to create beings that displace current beings over our objections, even if the new beings are very happy. (And I think most of these futures end up with involuntary displacement.) In addition, I'm not fully anthropocentric, but I probably do care less about the happiness of beings that are extremely remote from me in mind-space - and the longtermists seem to have bitten a few too many bullets on this front for my taste.
This is what I'm talking about when I say people don't take counterfactuals seriously - they seem to assume nothing could really have been different, that technology is predetermined. I didn't even suggest that, without early scaling, NLP would have hit an AI winter. For example, if today's MS and FB had led the AI revolution, with the goals and incentives they had, do you really think LLMs would have been their focus?
We can also see what happens to other accessible technologies when there isn't excitement and market pressure. For example, solar power was abandoned for a couple of decades starting in the 1970s and 1980s, and nuclear was as well.
And even without presuming that focus stays away from LLMs much longer: in our actual world, we see a tremendous difference between firms that started out safety-pilled and those that did not. So I think you're ignoring how much founder effects matter, and assuming technologists would pay attention to risk by default, or would embrace conceptual models that relied on a decade of theory and debate which, by assumption, wouldn't have existed.
As an aside, the formalisms that deal with this properly are not Bayesian; they handle nonrealizable settings. See Diffractor and Vanessa's work, e.g. https://arxiv.org/abs/2504.06820v2
Also, my experience with actual superforecasters, as opposed to people who forecast in EA spaces, has been that this failure mode is quite common and problematic even outside of existential risk - for example, in forecasts during COVID, especially early on.