There is a contingent of people who want excellence in education (e.g. Tracing Woodgrains) and are upset about e.g. the deprioritization of math, gifted education, and SAT scores in the US. Does that not count?
Given that ~ no one really does this, I conclude that very few people are serious about moving towards a meritocracy.
This sounds like an unreasonably high bar for us humans. You could apply it to all endeavours, and conclude that "very few people are serious about <anything>". Which is true from a certain perspective, but also stretches the word "serious" far past how it's commonly understood.
I haven't read Friendship is Optimal, but from the synopsis it sounds like it's clearly and explicitly about AI doom and AI safety, whereas HPMoR is mainly about rationality and only implicitly about x-risk and AI safety?
By this I was mainly arguing against claims that this performance is "worse than a human 6-year-old".
Fair. But then also restrict it to someone who has no hands, eyes, etc.
Further, have you ever gotten an adult who doesn't normally play video games to try playing one? They have a tendency to get totally stuck in tutorial levels because game developers rely on certain "video game motifs" for load-bearing forms of communication; see e.g. this video.
So much +1 on this.
Also, I've played a ton of games, and in the last few years started helping a bit with playtesting them etc. And I found it striking how games aren't inherently intuitive, but are rather made so via strong economic incentives, endless playtests to stop players from getting stuck, etc. Games are intuitive for humans because humans spend a ton of effort to make them that way. If AIs were the primary target audience, games would be made intuitive for them.
And as a separate note, I'm not sure what the appropriate human reference class for game-playing AIs is, but I challenge the assumption that it should be people who are familiar with games, rather than, say, people picked at random from anywhere on earth.
Right now it doesn't make sense; it is better to let the current owners keep improving their AIs.
Only if alignment progress keeps up with or exceeds AI progress, and you thus expect a controllable AI you can take over to do your bidding. But isn't all the evidence pointing towards AI progress >> alignment progress?
A lot of things could happen, but something that has already happened is that official US AI policy now treats not racing towards AGI as bad, and impeding AI progress as bad. Doesn't that policy imply that AI lab nationalization is now less likely, rather than more likely, than it would've been under a Democratic president?
Conversely, your scenario assumes that the Trump administration can do whatever it wants, but that ability partially depends on it staying popular with the general public. The public may not care about AI for now, but it very much does care about economics and inflation, and if Trump's policies worsen those (e.g. via tariffs), that will severely restrict his ability to take arbitrary actions in other domains.
AI assistants are weird. Here's a Perplexity Pro search I did for an EY tweet about finding the sweet spot between utilitarianism & deontology. Perplexity Pro immediately found the correct tweet:
Eliezer Yudkowsky, a prominent figure in the rationalist community, has indeed expressed a view that suggests finding a balance between utilitarianism and deontology. In a tweet, he stated: "Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you..."
But I wondered why it didn't provide the full quote (which is just a few more words, namely "Stay there at least until you have become a god."), and I just couldn't get it to do so, even with requests like "Just quote the full tweet from here: <URL>". Instead, it invented alternative versions like this:
Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you understand why.
or this:
Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the correct place. Stay there at least until you understand why you shouldn't go any further.
I finally provided the full quote and asked it directly:
Does the following quote represent Yudkowsky's tweet with 100% accuracy?
"Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god."
And it still doubled down on the wrong version.
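(As an aside, the "100% accuracy" question has a trivially checkable answer. Here's a toy Python sketch, with made-up variable names and the quote text copied from above, just to illustrate that this is a deterministic exact-match comparison rather than a judgment call:)

```python
# Hypothetical illustration (not from the original exchange): checking whether a
# candidate quote matches the real tweet "with 100% accuracy" is just an
# exact string comparison.
actual_tweet = (
    "Go three-quarters of the way from deontology to utilitarianism and then stop. "
    "You are now in the right place. Stay there at least until you have become a god."
)
candidate = (
    "Go three-quarters of the way from deontology to utilitarianism and then stop. "
    "You are now in the right place. Stay there at least until you understand why."
)

print(candidate == actual_tweet)  # False: the candidate is not 100% accurate
```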
While the framing of treating lack of social grace as a virtue captures something true, it's too incomplete to support its strong conclusion, imo. The way I would put it is that you have correctly observed that, whatever the benefits of social grace are, it comes at a cost, and sometimes this cost is not worth paying. So in a discussion, if you decline to pay the cost of social grace, you can afford to buy other virtues instead.[1]
For example, it is socially graceful not to tell the Emperor Who Wears No Clothes that he wears no clothes. Whereas someone who lacks social grace is more likely to tell the emperor the truth.
But first of all, I disagree with the frame that lack of social grace is itself a virtue. In the case of the emperor, for example, the virtues are rather legibility and non-deception, traded off against whichever virtues the socially graceful response would've bought.
And secondly, often the virtues you can buy with social grace are worth far more than whatever you could gain by declining to be socially graceful. For example, when discussing politics with someone of an opposing ideology, you could decline to be socially graceful and tell your interlocutor to their face that you hate them and everything they stand for. This would be virtuously legible and non-deceptive, at the cost of immediately ending the conversation and thus forfeiting any chance of e.g. gains from trade, coming to a compromise, etc.
One way I've seen this cost manifest on LW is that some authors complain that there's a style of commenting here that makes it unenjoyable to post here as an author. As a result, those authors are incentivized to post less, or to post elsewhere.[2]
And as a final aside, I'm skeptical of treating Feynman as socially graceless. For one, maybe he was less deferential towards authority figures, but if he had told nothing but the truth to all the authority figures (who likely included some naked emperors) throughout his life, his career would've presumably ended long before he could've gotten his Nobel Prize. And for another, IIRC the man's physics lectures are just really fun to watch, and I'm pretty confident that a sufficiently socially graceless person would not make for a good teacher. For example, it is socially graceful not to belittle fledgling students as intellectual inferiors, even though in some ways they are just that.
Related: I wrote this comment and this follow-up where I wished that Brevity were considered a rationalist virtue, because if there's no counterbalancing virtue to trade off against other virtues like legibility and truth-seeking, then supposedly virtuous discussions are incentivized to become arbitrarily long.
The moderation log of users banned by other users is a decent proxy for the question of which authors have considered which commenters to be too costly to interact with, whether due to lack of social grace or something else.