When you start wondering if one of your heroes is playing 11D chess when they do things that run counter to your idealized notion of who they are... it probably means you've idealized them a bit too much. Eliezer is still "just a man", as the Merovingian might say.
You may also underestimate the kind of pressures to which he is subjected. Are you familiar with /r/sneerclub at all? This was a group of redditors united by contempt for Less Wrong rationalism, who collected dirt on anything to do with the rationalist movement, and even supplied some of it to a New York Times journalist. And that was before the current AI boom, in which even billionaires can feel threatened by him, to the point that pioneer MIRI donor Peter Thiel now calls him a legionnaire of the Antichrist.
Add to that 10+ years of daily debate on social media, and it shouldn't be surprising that any idea of always being diplomatic has died the death of a thousand cuts.
This seems continuous with CEV rather than being an alternative to it. CEV aims to extrapolate human values, but never specified a way to identify them. You propose that human values can be identified via a perfected evolutionary psychology, but then acknowledge that they'll need to be extrapolated in order to be extended to situations outside the ancestral distribution...
It may be hard to believe today, but for most of the 20th century it was considered standard wisdom that companies which had stood the test of time were wiser investments than inexperienced startups.
Aren't stock markets now dominated by massive institutional investors engaged in automated very-short-term speculation?
I'm confused by this. First of all, you talk about situations in which a text contains multiple people interacting, and you say that an AI, in predicting the words of one of those people, will inappropriately use information that the person would not possess in real life. But you don't give any examples of this.
Then we switch to situations in which an AI is not extrapolating a story, but is explicitly, at all times, inhabiting a particular persona (that of an AI assistant). And the claim is that this assistant will have a poor sense of where it ends and the user, or the universe, begins.
But in the scenario of an AI assistant talking with a user, the entire conversation is meant to be accessible to the assistant, so there's no information in the chat that the assistant couldn't access "in real life". So I don't even see what the mechanism of bleed-through in an AI assistant is supposed to be.
I don't have much to say except that I think it would be good to create a bloc with the proposed goals and standards, but that it would be hard to adhere to those standards and get anywhere in today's politics.
Also, if I were an American and interested in the politics of AI, I would be interested in the political stories surrounding the two movements that actually made a difference to executive-branch AI policy, namely effective altruism during the Biden years and e/acc during Trump 2.0. I think the EAs got in because the arrival of AI blindsided normie society and they were the only ones with a plan to deal with it, and then the e/accs managed to reverse that because the tech right wanted to get rich from a new technological revolution and were willing to bet on Trump.
Also, for the record, ten years ago the USA actually had a state politician who was a rationalist.
As the paper notes, this is part of Terry Tao's proposed strategy for resolving the Navier-Stokes millennium problem.
If you had a correct causal model of someone having a red experience and saying so, your model would include an actual red experience, and some reflective awareness of it, along with whatever other entities and causal relations are involved in producing the final act of speech. I expect that a sufficiently advanced neuroscience would eventually reveal the details. I find it more constructive to try to figure out what those details might be than to ponder a hypothetical completed neuroscience that vindicates illusionism.
I'm not sure what you mean, either in-universe or in the real world.
In-universe, the Culture isn't all-powerful. Periodically they have to fight a real war, and there are other civilizations and higher powers. There are also any number of ways and places where Culture citizens can go in order to experience danger and/or primitivism. Are you just saying that you wouldn't want to live out your life entirely within Culture habitats?
In the real world... I am curious what preference for the fate of human civilization you're expressing here. In one of his novels, Olaf Stapledon writes of the final and most advanced descendants of Homo sapiens (inhabiting a terraformed Neptune) that they have a continent set aside as "the Land of the Young", a genuinely dangerous wilderness area where the youth can spend the first thousand years of their lives, reproducing in miniature the adventures and the mistakes of less evolved humanity, before they graduate to "the larger and more difficult world of maturity". But Stapledon doesn't suppose that his future humanity is at the highest possible level of development and has nothing but idle recreations to perform. They have serious and sublime civilizational purposes to pursue (which are beyond the understanding of mere humans like ourselves), and in the end they are wiped out by an astronomical cataclysm. How's that sound to you?
I am not familiar with this debate, but it seems to me that "creating a happy person" is not even something we know how to accomplish by choice. You can choose to create a person, to create a new life, but you certainly can't guarantee that they will be happy. The human condition is hazardous and disappointing, with terrible events and fates scattered throughout it. For me, the hazard is great enough to make me an antinatalist.
I concede the bare possibility that (as you have suggested elsewhere) this life could just be a prelude to a bigger one that somehow makes up for what happens here, or the possibility that, after some friendly singularity, we could know enough about the being of our world that creating a definitely happy life really is an option.
But neither of these corresponds to the life that we know and endure right now, which to a great extent is still about the struggle to survive rather than the pursuit of happiness.