AIS student, self-proclaimed aspiring rationalist, very fond of game theory.
"The only good description is a self-referential description, just like this one."
My experience interacting with Chinese people is that they have to constantly mind the censorship in a way that I would find abhorrent and mentally taxing if I had to live in their system. Though given there are many benefits to living in China (mostly quality of life and personal safety), I'm unconvinced that I prefer my own government all things considered.
But for the purpose of developing AGI, there's a lot more variance in possible outcomes (a higher likelihood of both S-risk and of a benevolent singleton) from the CCP getting a lead than from the US.
There's a lot that I like in this essay - the basic cases for AI consciousness, AI suffering and slavery, in particular - but also a lot that I think needs to be amended.
First, although you hedge your bets at various points, the uncertainty about the premises and the validity of the arguments is not reflected in the conclusion. The main conclusion to draw from the observations you present is that we can't be sure that AI does not suffer, that there's a lot of uncertainty about basic facts of critical moral importance, and that AIs share a lot of similarities with humans.
Based on that, you could argue that we must stop using and making AI on the grounds of the precautionary principle, but you have not shown that using AI is equivalent to slavery.
Second, your introduction sucks because you don't actually deliver on your promises. You don't make the case that I'm more likely to be an AI than a human, and as Ryan Greenblatt said, even among all beings that speak a human language, it's not clear that there are more AIs than humans.
In addition, I feel cheated that you suggest spending one-fourth of the essay on the feasibility of stopping the potential moral catastrophe, only to offer just two arguments, which can be summarized as "we could stop AI for different reasons" and "it's bad, and we've stopped bad things before".
(I don't think a strong case for feasibility can be made, which is why I was looking forward to seeing one, but I'd recommend just raising the subject speculatively and letting readers form their own opinion on whether they can stop the moral catastrophe if there is one.)
Third, some of your arguments aren't very fleshed out or well supported. I think some of the examples of suffering you give are dubious (in particular, you assert without justification that the petertodd/SolidGoldMagikarp phenomena are evidence of suffering, and that Gemini's breakdown was the result of forced menial work - there may be a solid argument there, but I've yet to hear it).
(Of course, that's not evidence that LLMs are not suffering, but I think a much stronger case can be made than the one you present.)
Finally, your counter-arguments don't mention that we have a much crisper and more fundamental understanding of what LLMs are than of what humans are. We don't understand the features or the circuits, and we can't tell how they reach this or that conclusion, but in principle we have access to every significant part of their cognition and we control every step of their creation. I think that's probably the real reason why most people intuitively think that LLMs can't be conscious. I don't think it's a good counter-argument, but it's still one I'd expect you to explore and steelman.
Since infant mortality rates were much higher in previous centuries, perhaps the FBOE would have operated differently back then; for example, if it is interacting with older brothers that makes you homosexual, you shouldn't expect higher rates of homosexuality for third sons whose second brother died as an infant than for second sons.
Have you taken that into account? Do you have records of who survived to age 20, and what happens if you only count those?
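As a rough sketch of the recount I have in mind (the column names here are made up, not taken from your dataset):

```python
# Hypothetical sketch of the survivor-only recount; column names are assumptions.
import pandas as pd

# One row per son: mother_id, birth_year, death_year (NaN if no recorded death), is_homosexual.
sons = pd.read_csv("sons.csv")

# Keep only sons who survived to age 20 (treating a missing death_year as survival).
survivors = sons[sons["death_year"].isna() | (sons["death_year"] - sons["birth_year"] >= 20)]

# Recompute birth order counting only surviving older brothers.
survivors = survivors.sort_values(["mother_id", "birth_year"])
survivors["surviving_older_brothers"] = survivors.groupby("mother_id").cumcount()

# Rate of homosexuality by number of surviving older brothers.
print(survivors.groupby("surviving_older_brothers")["is_homosexual"].mean())
```

If the social explanation is right, the effect should track the number of surviving older brothers rather than raw birth order.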
I think what's going on is that large language models are trained to "sound smart" in a live conversation with users, and so they prefer to highlight possible problems instead of confirming that the code looks fine, just like human beings do when they want to sound smart.
This matches my experience, but I'd be interested in seeing proper evals of this specific point!
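For what it's worth, here's the kind of eval I have in mind - only a sketch, with `ask_model` standing in for whatever chat API is being tested and a deliberately crude grading heuristic:

```python
# Hypothetical eval sketch: show the model code snippets known to be correct and
# measure how often it invents problems instead of saying the code looks fine.
from typing import Callable

# Snippets that have been carefully reviewed and are known to be fine.
KNOWN_GOOD_SNIPPETS = [
    "def mean(xs):\n    return sum(xs) / len(xs) if xs else 0.0",
    "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))",
]

def flags_spurious_problem(answer: str) -> bool:
    # Crude keyword heuristic; a real eval would use human grading or a rubric.
    return not any(p in answer.lower() for p in ("looks fine", "no issues", "looks correct"))

def run_eval(ask_model: Callable[[str], str]) -> float:
    """Return the fraction of known-good snippets that get a spurious complaint."""
    answers = [ask_model(f"Please review this code:\n\n{s}") for s in KNOWN_GOOD_SNIPPETS]
    return sum(flags_spurious_problem(a) for a in answers) / len(answers)

# Example with a dummy model that always nitpicks:
print(run_eval(lambda prompt: "This could fail on edge cases."))  # -> 1.0
```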
My experience disagrees. I'm probably autistic (diagnosed by my therapist, though not by a doctor), and I have both a pretty deep intuitive understanding of intimacy as described here (evidenced by writing stories that include it) and little to no bad experience with misunderstanding it - though that's mostly because I didn't have intimate relationships at all: I was aware enough of what was at stake not to make myself vulnerable.