I think that the views of superforecasters on AI / AI risk should provide basically no update.
It seems to me like the main reasons to defer to someone are:
1. They have a visibly good track record in the relevant domain. It has to be the literal domain in question, because people often have good views in their area of expertise but crazy views elsewhere.
2. They are highly selected for having good beliefs in the domain. For example, if a mathematician tells me something surprising about their area of expertise, I will tend to strongly believe them, despite not being able to evaluate their reasoning. The general reason is that because mathematics is a verifiable domain, mathematicians are strongly selected for being correct about math. Other domains where I'd basically defer to people include historians on literal historical facts, physicists on well-established physics results, engineers on how cars work, etc. This consideration weakens as disciplines become less verifiable: I'm not very inclined to defer to philosophers, sociologists, psychologists, etc.
3. They make correct arguments about the domain (and very few incorrect ones). If you can talk to someone and they consistently make clear, rock-solid arguments that change your mind, it is justified to defer to them on bottom-line conclusions, even if you can't follow the arguments all the way through.
4. They are much smarter than you and are probably being honest. If someone (or, eventually, an AI) is clearly much smarter than you, and they are being honest (e.g. because they seem like an honest person), then you should probably defer to them substantially. (Of course, even this isn't fully general: a few hundred years ago, many of the smartest people around were superstitious, and deferring to them would have led you astray.)
Now I'll go through these and argue why none of them apply to superforecasters on AI.
1. I think the track record of superforecasters on AI looks quite bad. Superforecasters consistent