I don't have much to say except that I think it would be good to create a bloc with the proposed goals and standards, but that it would be hard to adhere to those standards and get anywhere in today's politics.
Also, if I were an American and interested in the politics of AI, I would want to understand the political stories surrounding the two movements that actually made a difference to executive-branch AI policy, namely effective altruism during the Biden years and e/acc during Trump 2.0. I think the EAs got in because the arrival of AI blindsided normie society and the EAs were the only ones who had a plan to deal with it, and then the e/accs managed to reverse that because the tech right wanted to get rich from a new technological revolution and were willing to bet on Trump.
Also, for the record, ten years ago the USA actually had a state politician who was a rationalist.
As the paper notes, this is part of Terry Tao's proposed strategy for resolving the Navier-Stokes millennium problem.
If you had a correct causal model of someone having a red experience and saying so, your model would include an actual red experience, and some reflective awareness of it, along with whatever other entities and causal relations are involved in producing the final act of speech. I expect that a sufficiently advanced neuroscience would eventually reveal the details. I find it more constructive to try to figure out what those details might be, than to ponder a hypothetical completed neuroscience that vindicates illusionism.
I'm not sure what you mean, either in-universe or in the real world.
In-universe, the Culture isn't all-powerful. Periodically they have to fight a real war, and there are other civilizations and higher powers. There are also any number of ways and places where Culture citizens can go in order to experience danger and/or primitivism. Are you just saying that you wouldn't want to live out your life entirely within Culture habitats?
In the real world... I am curious what preference for the fate of human civilization you're expressing here. In one of his novels, Olaf Stapledon writes of the final and most advanced descendants of Homo sapiens (inhabiting a terraformed Neptune) that they have a continent set aside as "the Land of the Young", a genuinely dangerous wilderness area where the youth can spend the first thousand years of their lives, reproducing in miniature the adventures and the mistakes of less evolved humanity, before they graduate to "the larger and more difficult world of maturity". But Stapledon doesn't suppose that his future humanity is at the highest possible level of development and has nothing but idle recreations to perform. They have serious and sublime civilizational purposes to pursue (which are beyond the understanding of mere humans like ourselves), and in the end they are wiped out by an astronomical cataclysm. How's that sound to you?
What do you want from life, that the Culture doesn't offer?
Ah, the topic that frustrates me more than any other. If only you could see some of the ripostes that I have considered writing:
"Every illusionist is declaring to the world that they can be killed, and there's no moral issue, because despite appearances, there's nobody home."
"I regret to inform you that your philosophy is actually a form of mental illness. You are prepared to deny your own existence rather than doubt whatever the assumptions were which led you in that direction."
"I wish I could punch you in the face, and then ask you, are you still sure there's no consciousness, no self, and no pain?"
"I would disbelieve in your existence before I disbelieved in my own. You should be more willing to believe in a soul, or even in magic microtubules, than whatever it is you're doing in this essay."
Illusionism and eliminativism are old themes in analytic philosophy. I suppose what's new here is that they are being dusted off in the context of AI. We don't quite see how consciousness could be a property of the brain, and we don't quite see how it would be a property of artificial intelligence either, so let's deny that it exists at all, and then we can feel like we understand reality.
It would be very Nietzschean of me to be cool about this and say: falsehoods sometimes lead to truth, so let the illusionist movement unfurl and we'll see what happens. Or I could make excuses for you: we're all human, we all have our blind spots...
But unless illusionist research ends up backing itself into a corner where it can no longer avoid acknowledging that the illusion is real, it is, as far as discovering facts about human beings goes, a program of timidity and mediocrity that leads nowhere. The subject actually needs bold new hypotheses. Maybe it's beyond the capacity of most people to produce them, but nonetheless, that's what's needed.
"What can explain all this callousness? ... people don’t generally value the lives of those they consider below them"
Maybe that's a factor. But I would be careful about presuming to understand. At the start of the industrial age, life was cheap and perilous. A third of all children died before the age of five. Imagine the response if that were true in a modern developed society! But for anyone born into such a world, an atmosphere of fatalistic resignation would set in quickly. All you can do is pray to God for mercy, and then look on aghast if the person next to you is the unlucky one.
Someone in the field of "progress studies" offers an essay in this spirit, on "How factories were made safe". The argument is that the new dangers arising from machinery and from the layout of the factory were at first not understood, in professions that had previously been handicrafts. There was an attitude that each person looked after themselves as best they could. Holistic, enterprise-level thinking about organizational safety did not exist. In this narrative, unions and management both helped to improve conditions, in a protracted process.
I'm not saying this is the whole story either. The West Virginia coal wars are pretty wild. It's just that ... states of mind can be very different across space and time. The person who has constant access to the intricate tapestry of thought and image offered by social media lives in a very different mental world from people of an age when all they had was word of mouth, the printed word, and their own senses. Live long enough, and you will even forget how it used to be in your own life, as new thoughts and conditions take hold.
Maybe the really important question is the extent to which today's elite conform to your hypothesis.
There are several ways to bring up a topic. You can make a post, make a question-post, post something on your shortform, or post something in an open thread.
If you have a detailed opinion about a topic that is a core Less Wrong interest, I'd say make a post. If you don't have much of an opinion but just want such a topic discussed, maybe you can make it into a question-post.
If the topic is one that seems atypical or off-topic for Less Wrong, but you really want to bring it up anyway, you could post about it on your shortform or on the open thread.
The gist of my advice is that for each thing you want to discuss or debate, identify which kind of post is the best place to introduce it, and then just make the post. And from there, it's out of your control. People will take an interest or they won't.
I'm confused by this. First of all, you talk about situations in which a text contains multiple persons interacting, and you say that an AI, in predicting the words of one of those persons, will inappropriately use information that this person would not possess in real life. But you don't give any examples of this.
Then we switch to situations in which an AI is not extrapolating a story, but is explicitly and permanently in a particular persona (that of an AI assistant). And the claim is that this assistant will have a poor sense of where it ends and the user, or the universe, begins.
But in the scenario of an AI assistant talking with a user, the entire conversation is meant to be accessible to the assistant, so there's no information in the chat that the assistant couldn't access "in real life". So I don't even see what the mechanism of bleed-through in an AI assistant is supposed to be.