This is definitely baked in for many people (e.g. me, but also see the discussion here).
The most concerning mundane risks that come to mind are unemployment, concentration of power, and adversarial forms of RL (I'm missing a better phrase here; basically what TikTok, Meta, and the recent 4o model were already doing). The problems in education are partially downstream of that (what's the point if it's not going to help prepare you for work) and otherwise honestly don't seem too serious in absolute terms? Granted, the system may completely fail to adapt, but that seems to be more an issue of the system already being broken than anything about AI in particular.
"Approaching human level" seems reasonable. I think one should just read this as her updating towards short timelines in general based on what experts say rather than her trying to make a prediction.
In the Sydney case, this was probably less a case of Sydney ending the conversation and more of the conversation being terminated to hide Sydney going off the rails.
I'm not saying it's not worth pursuing as an agenda, but I'm also not convinced it's promising enough to justify advancing math-related AI capabilities, compared to e.g. building safety guarantees into which you can plug AI capabilities once they arise anyway.
I think the "guaranteed safe AI" framework is just super speculative. Enough to basically not matter as an argument given any other salient points.
This leaves us with the baseline effect, which is that this kind of prize potentially redirects a lot of brainpower from more math-adjacent people towards thinking about AI capabilities. Even worse, I expect it will mostly attract the unreflective "full-steam-ahead" type of people.
Mostly, I'm not sure it matters at all, except maybe by slightly accelerating some inevitable development before e.g. DeepMind takes another shot at it to finish things off.
Agreed, I would love to see more careful engagement with this question.
You're putting quite a lot of weight on what "mathematicians say". Probably these people just haven't thought very hard about it?
I believe the confusion comes from assuming that the current board follows the rules rather than doing whatever is most convenient.
The old board was trying to follow the rules, and the people in question were removed (technically, they were pressured to remove themselves).
I'd agree that this is to some extent playing the respectability game, but personally I'd be very happy for Eliezer and others to risk doing this too much rather than too little, for once.