(indeed the politics of our era is moving towards greater acceptance of inequality)
How certain are you of this, and how much do you think it comes down to something more like "to what extent can disempowered groups unionise against the elite?"
To be clear, by default I think AI will make unionising against the more powerful harder, but it might depend on the governance structure. Maybe if we are really careful, we can get something closer to "Direct Democracy", where individual preferences actually matter more!
[sorry, have only skimmed the post, but I feel compelled to comment.]
I feel like unless we make a lot of progress on some sort of "Science of Generalisation of Preferences", then for more abstract preferences (most non-biological needs fall into this category), even individuals who on paper have much more power than others will likely rely on vastly superintelligent AI advisors to realise those preferences, and at that point I think it is the AI advisor that is _really_ in control.
I'm not super certain of this, like, the Catholic Church definitely could decide to build a bunch of churches on some planets (though what counts as a church, in the limit?), but if they also want more complicated things like "people" "worshipping" "God" in those churches, it seems to be more and more up to the interpretation of the AI Assistants building those worship-maximising communes.
I have read through most of this post and some of the related discussion today. I just wanted to write that it was really interesting, and as far as I can tell, useful, to think through Paul's reasoning and forecasts about strategy-related questions.
If he thinks this is a good idea, I would be very glad to read a longer, more comprehensive document describing his views on strategic considerations.
Sooo this was such an intriguing idea that I did some research -- but reality appears to be more boring:
In a recent informal discussion, I believe said OPP CEO remarked that he had to give up the OpenAI board seat because his fiancée joining Anthropic created a conflict of interest. Naively, this explanation is much more likely, and I think it is much better supported by the timeline.
According to LinkedIn, the mentioned fiancée had already joined as a VP in 2018 and was promoted to a probably more senior position in 2020, and her sibling was promoted to VP in 2019.
The Anthropic split occurred in June 2021.
A new board member (who is arguably very aligned with OPP) was appointed in September 2021, probably in place of the OPP CEO.
It is unclear exactly when the OPP CEO left the board, but I would guess sometime in 2021. This seems better explained by "conflict of interest with his fiancée joining/co-founding Anthropic", and OpenAI putting another OPP-aligned board member in his place wouldn't make for very productive scheming.
I think the point was less about a problem with refugees (which should be solved in time with European coordination), and maybe more that the whole invasion is "good news" for conservative parties, as most crises are.
A lot of people brought up sanctions, and they could indeed influence the European economy and politics.
I would be curious about which sanctions in particular are likely to be implemented and what their implications would be - could a major economic setback or soaring energy prices perhaps radicalize European politics?
My guess would be that overall the whole event increases support for conservative/nationalist/populist parties - for example, even though Hungary's populist government has been trying to appear to balance "between the West and Russia" (and is thus now in an uncomfortable situation), I think they can probably spin it around to their advantage. (Perhaps even more so if they can fearmonger about refugees.)
If you think they didn't train on FrontierMath answers, why do you think having the opportunity to validate on it is such a significant advantage for OpenAI?
Couldn't they just make a validation set from their training set anyways?
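(To make that concrete, here's a minimal sketch of what I mean - the problem pool, split sizes, and helper names below are made up for illustration, not anything OpenAI actually does:)

```python
import random

def split_train_val(problems, val_fraction=0.1, seed=0):
    """Hold out a random slice of an existing problem pool as a validation set."""
    rng = random.Random(seed)
    shuffled = problems[:]          # copy so the caller's list stays untouched
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]   # (train, validation)

# Hypothetical pool of internally written hard problems:
pool = [f"hard_math_problem_{i}" for i in range(1000)]
train_set, val_set = split_train_val(pool)
print(len(train_set), len(val_set))  # 900 100
```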
In short, I don't think the capabilities externalities of a "good validation dataset" are that big, especially not counterfactually -- sure, maybe it would have taken OpenAI a bit more time to contract some mathematicians, but realistically, how much more time?
Whereas if your ToC as Epoch is "make good forecasts on AI progress", it makes sense that you want labs to report results on the dataset you've put together.
Sure, maybe you could commit to not releasing the dataset and only testing models in-house, but maybe you think you don't have the capacity in-house to elicit maximum capability from models. (Solving the ARC challenge cost O($400k) for OpenAI, which is peanuts for them but like 2-3 researcher salaries at Epoch, right?)
If I were Epoch, I would be worried about "cheating" on the results (dataset leakage).
Re: unclear dataset split: yeah, that was pretty annoying, but that's also on OpenAI comms.
I teeend to agree that orgs claiming to be safety orgs shouldn't sign NDAs preventing them from disclosing their lab partners / even details of partnerships, but this might be a tough call to make in reality.
I definitely don't see a problem with taking lab funding as a safety org. (As long as you don't claim otherwise.)