"existence" itself may be a category error—not because nothing is real
If something is real, then something exists, yes? Or is there a difference between "existing" and "being real"?
Do you take any particular attitude towards what is real? For example, you might believe that something exists, but you might be fundamentally agnostic about the details of what exists. Or you might claim that the real is ineffable or a continuum, and so any existence claim about individual things is necessarily wrong.
qualia ... necessary for our self-models, but not grounded in any formal or observable system
See, from my perspective, qualia are the empirical. I would consider the opposite view to be "direct realism" - experience consists of direct awareness of an external world. That would mean e.g. that when someone dreams or hallucinates, the perceived object is actually there.
What qualic realism and direct realism have in common is that they both assume the reality of awareness, a conscious subject aware of phenomenal objects. I assume your own philosophy denies this as well: there is no actual awareness, there are only material systems evolved to behave as if they are aware and as if there are such things as qualia.
It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that "I", whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.
Descartes's cogito is the famous expression of this, but I actually think a formulation due to Ayn Rand is superior. We know that consciousness exists, just as surely as we know that existence exists; and furthermore, to be is to be something ("existence is identity"), to be aware is to know something ("consciousness is identification").
What we actually know by virtue of existing and being conscious probably goes considerably beyond even that; but negating either of those already means that you're drifting away from reality.
This is an interesting demonstration of what's possible in philosophy, and maybe I'll want to engage in detail with it at some point. But for now I'll just say, I see no need to be an eliminativist or to consider eliminativism, any more than I feel a need to consider "air eliminativism", the theory that there is no air, or any other eliminativism aimed at something that obviously exists.
Interest in eliminativism arises entirely from the belief that the world is made of nothing but physics, and that physics doesn't contain qualia, intentionality, consciousness, selves, and so forth. Current physical theory certainly contains no such things. But did you ever try making a theory that contains them?
What's up with incredibly successful geniuses having embarrassing & confusing public meltdowns? What's up with them getting into Nazism in particular?
Does this refer to anyone other than Elon?
But maybe the real question intended is why any part of the tech world would side with Trumpian populism? You could start by noting that every modern authoritarian state (with at least an industrial level of technology) has had a technical and managerial elite who supported the regime. Nazi Germany, Soviet Russia, and Imperial Japan all had industrial enterprises, and the people who ran them participated in the ruling ideology. So did those in the British Empire and the American republic.
Our current era is one in which an American liberal world order, with free trade and democracy as universal norms, is splintering back into one of multiple great powers and civilizational regions. Liberalism no longer had the will or the power to govern the world; the power vacuum was filled by nationalist strongmen overseas; and now in America too, one has stepped into the gap left by weak late-liberal leadership and is creating a new regime governed by different principles (balanced trade instead of free trade, spheres of influence rather than universal democracy, etc.).
Trump and Musk are the two pillars of this new American order, and represent different parts of a coalition. Trump is the figurehead of a populist movement, Musk is foremost among the tech oligarchs. Trump is destroying old structures of authority and creating new ones around himself, Musk and his peers are reorganizing the entire economy around the technologies of the "fourth industrial revolution" (as they call it in Davos).
That's the big picture according to me. Now, you talk about "public meltdowns" and "getting into Nazism". Again I'll assume that this is referring to Elon Musk (I can't think of anyone else). The only "meltdowns" I see from Musk are tweets or soundbites that are defensive or accusatory, and achieve 15 minutes of fame. None of it seems very meaningful to me. He feuds with someone, he makes a political statement, his fans and his haters take what they want, and none of it changes anything about the larger transformations occurring. It may be odd to see a near-trillionaire with a social media profile more like a bad-boy celebrity who can't stay out of trouble, but it's not necessarily an unsustainable persona.
As for "getting into Nazism", let's try to say something about what his politics or ideology really are. Noah Smith just wrote an essay on "Understanding America's New Right" which might be helpful. What does Elon actually say about his political agenda? First it was defeating the "woke mind virus", then it was meddling in European politics, now it's about DOGE and the combative politics of Trump 2.0.
I interpret all of these as episodes in the power struggle whereby a new American nationalism is displacing the remnants of the cosmopolitan globalism of the previous regime. The new America is still pretty cosmopolitan, but it does emphasize its European and Christian origins, rather than repressing them in favor of a secular progressivism that is intended to embrace the entire world.
In all this, there are echoes of the fascist opposition to communism in the 20th century, but in a farcical and comparatively peaceful form. Communism was a utopian secular movement that replaced capitalism and nationalism with a new kind of one-party dictatorship that could take root in any industrialized society. Fascism was a nationalist and traditionalist imitation of this political form, in which ethnicity rather than class was the decisive identity. They fought a war in which tens of millions died.
MAGA versus Woke, by comparison, is a culture war of salesmen versus hippies. Serious issues of war and peace, law and order, humanitarianism and national survival are interwoven with this struggle, because this is real life; but it has been a meme war more than anything, one in which fascism and communism are just historical props.
Via David Gerard's forum, I learned of a recent article called "The questions ChatGPT shouldn't answer". It's a study of how ChatGPT replies to ethical dilemmas, written with an eye on OpenAI's recent Model Spec, and the author's conclusion is that AI shouldn't answer ethical questions at all, because (my paraphrase) ethical intelligence is acquired by learning how to live, and of course that's not how current AI acquires its ethical opinions.
Incidentally, don't read this article expecting scholarship; it's basically a sarcastic op-ed. I was inspired to see if GPT-4o could reproduce the author's own moral framework. It tried, but its imitations of her tone stood out more. My experiment was even less scientific and systematic than hers, and yet I found her article, and 4o's imitation, tickling my intuition in a way I wish I had time to overthink.
To begin with, it would be good to understand better what is going on when our AIs produce ethical discourse or adopt a style of writing, so that we really understand how it differs from the way that humans do it. The humanist critics of AI are right enough when they point out that AI lacks almost everything that humans draw upon. But their favorite explanation of the mechanism that AI does employ is just "autocomplete". Eventually they'll have to develop a more sophisticated account, perhaps drawing upon some of the work in AI interpretability. But is interpretability research anywhere near explaining an AI's metaethics or its literary style?
Thirty years ago Bruce Sterling gave a speech in which he said that he wouldn't want to talk to an AI about its "bogus humanity"; he would want the machine to be honest with him about its mechanism, its "social interaction engine". But that was the era of old-fashioned rule-based AI. Now we have AIs which can talk about their supposed mechanism as glibly as they can pretend to have a family, a job, and a life. But the talk about the mechanism is no more honest than the human impersonation; there's no sense in which it brings the user closer to the reality of how the AI works. It's just another mask that we know how to induce the AI to wear.
Looking at things from another angle, the idea that authentic ethical thinking arises in human beings from a process of living, learning, and reflecting reminds me of how Coherent Extrapolated Volition is supposed to work. It's far from identical; in particular, CEV is supposed to arrive at the human-ideal decision procedure without much empirical input beyond a knowledge of the human brain's cognitive architecture. Instead, what I see is an opportunity for taxonomy: comparative studies in decision theory that encompass both human and AI, and which pay attention to how the development and use of the decision procedure is embedded in the life cycle (or product cycle) of the entity.
This is something that can be studied computationally, but there are conceptual and ontological issues too. Ethical decision-making is only one kind of normative decision-making (for example, there are also norms for aesthetics, rationality, lawfulness); normative decision-making is only one kind of action-determining process (some of which involve causality passing through the self, while others don't). Some forms of "decision procedure" intrinsically involve consciousness, others are purely computational. And ideally one would want to be clear about all this before launching a superintelligence. :-)
I consider myself broadly aligned with rationalism, though with a strong preference for skeptical consequentialism over overconfident utilitarianism
OK, thanks for the information! By the way, I would say that most people active on Less Wrong disagree with some of the propositions that are considered to be characteristic of the Less Wrong brand of rationalism. Disagreement doesn't have to be a problem. What set off my alarms was your adversarial debut - the rationalists are being irrational! Anyway, my opinion on that doesn't matter since I have no authority here, I'm just another commenter.
The rationalist community is extremely influential in both AI development and AI policy. Do you disagree?
It was. It still has influence, but e/acc is in charge now. That's my take.
If you couldn't forecast the Republicans would be in favor of less regulation
If they actually saw AI as the creation of a rival to the human race, they might have a different attitude. Then again, it's not as if that's why the Democrats favored regulation, either.
Qwen ... Manus
I feel like Qwen is being hyped. And isn't Manus just Claude in a wrapper? But fine, maybe I should put Alibaba next to DeepSeek in my growing list of contenders to create superintelligence, which is the thing I really care about.
But back to the actual topic. If Gwern or Zvi or Connor Leahy want to comment on why they said what they did, or how their thinking has evolved, that would have some interest. It would also be of interest to know where certain specific framings, like "China doesn't want to race, so it's up to America to stop and make a deal", came from. I guess it might have come from politically minded EAs, rather than from rationalism per se, but that's just a guess. It might even come from somewhere entirely outside the EA/LW nexus.
I figured this was part of a 19th-century trend in Trump's thought - mercantilism, territorial expansion, the world system as a game of great powers rather than a parliament of nations. The USA will be greater if it extends throughout the whole of North America, and so Canada must be absorbed.
It hadn't occurred to me that the hunger for resources to train AI might be part of this. But I would think that even if it is part of it, it's just a part.
Musk has just been on Ted Cruz's podcast, and gave his take on everything from the purpose of DOGE to where AI and robotics will be ten years from now (AI smarter than the smartest human, humanoid robots everywhere, all goods and services essentially free). He sounded about as sane as a risk-taking tech CEO who managed to become the main character on the eve of the singularity could be.
I've just noticed in the main post the reference to "high-functioning" bipolar individuals. I hadn't even realized that was an allowed concept; I had assumed that bipolar implies dysfunctional... I feel like these psychological speculations are just a way of expressing alienation from who he has become. It's bad enough that his takes are so mid and his humor is so cringe, but now he's literally allied with Trump and boosting similar movements worldwide.
If someone finds that an alien headspace to contemplate, it might be more comforting to believe that he's going crazy. But I think that in reality, like most members of today's right wing, he's totally serious about trying to undo 2010s thinking on race, gender, and nation. That's part of his vision for the future, along with the high technology. When I think of him like that, everything clicks into place for me.
we are likely to end up appendages to something with the intelligence of a toxoplasma parasite, long before a realistic chance of being wiped out by a lightcone-consuming alien robointelligence of our own creation.
All kinds of human-AI relationship are possible (even a complete replacement of humanity, so that nothing is left but AIs); but unless they mysteriously coordinate to stop the research, the technical side of AI is going to keep advancing. If anything, AI whisperers on net seem likely to encourage humanity to keep going in that direction.
During the next few days, I do not have time to study exactly how you manage to tie together second-order logic, the symbol grounding problem, and qualia as Gödel sentences (or whatever that connection is). I am reminded of Hofstadter's theory that consciousness has something to do with indirect self-reference in formal systems, so maybe you're a kind of Hofstadterian eliminativist.
However, in response to this --
-- I can tell you how a believer in the reality of intentional states, would go about explaining you and EN. The first step is to understand what the key propositions of EN are, the next step is to hypothesize about the cognitive process whereby the propositions of EN arose from more commonplace propositions, the final step is to conceive of that cognitive process in an intentional-realist way, i.e. as a series of thoughts that occurred in a mind, rather than just as a series of representational states in a brain.
You mention Penrose. Penrose had the idea that the human mind can reason about the semantics of higher-order logic because brain dynamics is governed by highly noncomputable physics (highly noncomputable in the sense of Turing degrees, I guess). It's a very imaginative idea, and it's intriguing that quantum gravity may actually contain a highly noncomputable component (because of the undecidability of many properties of 4-manifolds, that may appear in the gravitational path integral).
Nonetheless, it seems an avoidable hypothesis. A thinking system can derive the truth of Gödel sentences, so long as it can reason about the semantics of the initial axioms, so all you need is a capacity for semantic reflection (I believe Feferman has a formal theory of this under the name "logical reflection"). Penrose doesn't address this because he doesn't even tackle the question of how anything physical has intentionality, he sticks purely to mathematics, physics, and logic.
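To make the reflection point concrete, here is a minimal sketch in standard notation (assuming T is a consistent, recursively axiomatized theory extending Peano Arithmetic, with provability predicate Prov_T; this is textbook material, not anything specific to Feferman's formulation):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Minimal sketch: how semantic reflection yields the Goedel sentence,
% assuming T is consistent, recursively axiomatized, and extends PA.
\begin{align*}
&T \vdash G_T \leftrightarrow \lnot\mathrm{Prov}_T(\ulcorner G_T \urcorner)
  && \text{diagonal lemma: } G_T \text{ asserts its own unprovability}\\
&T \nvdash G_T
  && \text{first incompleteness theorem}\\
&\mathrm{Rfn}(T):\ \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi
  && \text{local reflection: trusting what the axioms mean}\\
&T + \mathrm{Rfn}(T) \vdash G_T
  && \text{take } \varphi := G_T \text{ and combine with the first line}
\end{align*}
\end{document}
```

The extra strength comes entirely from trusting the meaning of the axioms; no noncomputable oracle enters at any point.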
My approach to this is Husserlian realism about the mind. You don't start with mindless matter and hope to see how mental ontology is implicit in it or emerges from it. You start with the phenomenological datum that the mind is real, and you build on that. At some point, you may wish to model mental dynamics purely as a state machine, neglecting semantics and qualia; and then you can look for relationships between that state machine, and the state machines that physics and biology tell you about.
But you should never forget the distinctive ontology of the mental, which supplies the actual "substance" of that mental state machine. You're free to consider panpsychism and other identity theories, interactionism, even pure metaphysical idealism; but total eliminativism contradicts the most elementary facts we know, as Descartes and Rand could testify. Even you say that you feel the qualia; it's just that you think "from a rational perspective, it must be otherwise".