I think the "guaranteed safe AI" framework is highly speculative, enough so that it basically doesn't matter as an argument given any other salient points.
This leaves us with the baseline, which is that this kind of prize redirects potentially a lot of brainpower from more math-adjacent people towards thinking about AI capabilities. Even worse, I expect it's mostly going to attract the unreflective "full-steam-ahead" type of people.
Mostly, I'm not sure it matters at all, except maybe by slightly accelerating some inevitable development before, e.g., DeepMind takes another shot at it to finish things off.
Agreed, I would love to see more careful engagement with this question.
You're putting quite a lot of weight on what "mathematicians say". Probably these people just haven't thought very hard about it?
I believe the confusion comes from assuming the current board follows rules rather than doing whatever is most convenient.
The old board was trying to follow the rules, and the people in question were removed (technically, they were pressured to remove themselves).
I'd agree the OpenAI product line is net positive (though I'm not particularly attached to that view). Sam Altman demonstrating, in front of everyone's eyes, what kind of actions you can get away with seems problematic.
Or simply when scaling becomes too expensive.
There are a lot of problems with linking to Manifold and calling it "the expert consensus"!
I wouldn't belabor it, but you're putting quite a lot of weight on this one point.
I mean, it only suggests that they're highly correlated. I agree that it seems likely they represent the views of the average "AI expert" in this case. (I should take a look to check who was actually sampled.)
My main point here is that we probably shouldn't pay this particular prediction market much attention in place of, e.g., the survey you mention. I probably also wouldn't give the survey too much weight compared to the opinions of particularly thoughtful people, but I agree that this needs to be argued.
In general, yes - but see the above (i.e., we don't have a properly functioning prediction market on the issue).
I'm not saying it's not worth pursuing as an agenda, but I'm also not convinced it is promising enough to justify pursuing math-related AI capabilities, compared to, e.g., creating safety guarantees into which you can plug AI capabilities once they arise anyway.