There's a huge difference between the types of cases, though. A 90% poisonous twinkie is certainly fine to call poisonous[1], but a 90% male group isn't reasonable to call male. You said "if most people who would say they are in C are not actually working that way and are deceptively presenting as C," but that seems far more like the latter than the former, because "fake" implies the entire thing is fake[2].
Though so is a 1% poisonous twinkie; perhaps the better example is that a meal that is 90% protein would be a "protein meal," without implying there is no non-protein substance present.
There is a sense in which this isn't true; if 5% of an image of a person is modified, I'd agree that the image is fake - but this is because the claim of fakeness is about the entirety of the image, as a unit. In contrast, if there were 20 people in a composite image, and 12 of them were AI-fakes and 8 were actual people, I wouldn't say the picture is "of fake people"; I'd need to say it's a mixture of fake and real people. Which seems like the relevant comparison if, as you said in another comment, you are describing "empirical clusters of people"!
If you said "mostly bullshit" or "almost always disingenuous" I wouldn't argue, though I would still question whether it's actually a majority of people in group C - I'm doubtful of that, but very unsure. But saying it is fake would usually mean it is not a real thing anyone believes, rather than meaning that the view is unusual or confused or wrong.
Closely related to: You Don't Exist, Duncan.
I'll point to a similarly pessimistic but divergent view on how to manage the likely bad transition to an AI future, which I co-authored recently:
Instead, we argue that we need a solution for preserving humanity and improving the future despite not having an easy solution of allowing gradual disempowerment coupled with single-objective beneficial AI...
The first question, one that is central to some discussions of long-term AI risk, is: how can humanity stay in control after creating smarter-than-human AI? But given the question, the answer is overdetermined. We don’t stay in control, certainly not indefinitely. If we build smarter-than-human AI, which is certainly not a good idea right now, at best we must figure out how we are ceding control. If nothing else, power-seeking AI will be the default, and will be disempowering - even if it’s not directly an existential threat. Even if we solve the problem of treachery robustly, and build an infantilizing vision of superintelligent personal assistants, over long enough time scales it’s implausible that we build that race of more intelligent systems but do not then cede any power. (And if we did, somehow, the implications of keeping systems that are increasingly intelligent in permanent bondage seem at best morally dubious.)
So, if we (implausibly) happen to be in a world of alignment-by-default, or (even more implausibly) find a solution to intent alignment and agree to create a super-nanny for humanity, what world would we want? Perhaps we use this power to collectively evolve past humanity - or perhaps the visions of pushing for transhumanism before ASI, to allow someone or some group to stay in control, are realized. Either way, what then for the humans?
Why is there so little Rat brainpower devoted to the pragmatics of how AI safety could be advanced within the global and national political contexts?*
As someone who was there, I think the portrayal of the 2020-2022 era efforts to influence policy is a strawman, but I agree that it was the first serious attempt by the community to engage politically - an effort which preceded SBF in lots of different ways - so it's tragic (and infuriating) that SBF poisoned the well by backing it and then having it collapse. And most of the reason the existential risk community did relatively little on pragmatic political action in 2022-2024 was directly because of that collapse!
Remaining in this frame of "we make our case for [X course of action] so persuasively that the world just follows our advice" does not make for a compelling political theory on any level of analysis.
...but it's not fake, it's just confused according to your expectations about the future - and yes, some people may say it dishonestly, but we should still be careful not to deny that people can believe things you disagree with, just because those beliefs conflict with your map of the territory.
That said, I don't see as much value in dichotomizing the groups as others seem to.
As I said below, I think people are ignoring many different approaches compatible with the statement, and so they are confusing the statement with a call for international laws or enforcement (as you said, "attempts to make it as a basis for laws"), which the statement does not mention. I suggested some alternatives in that comment:
"We didn't need laws to get the 1975 Alisomar moratorium on recombinant DNA research, or the email anti-abuse (SPF/DKIM/DMARC) voluntary technical standards, or the COSPAR guidelines that were embraced globally for planetary protection in space exploration, or press norms like not naming sexual assault victims - just strong consensus and moral suasion. Perhaps that's not enough here, but it's a discussion that should take place which first requires clear statement about what the overall goals should be."
I strongly support the idea that we need consensus building before looking at specific paths forward - especially since the goal is clearly far more widely shared than agreement about which strategy should be pursued.
For example, contra Dean Bell's unfair strawman, this isn't a back-door way to insist on centralized AI development, or even necessarily a position that requires binding international law! We didn't need laws to get the 1975 Asilomar moratorium on recombinant DNA research, or the email anti-abuse (SPF/DKIM/DMARC) voluntary technical standards, or the COSPAR guidelines that were embraced globally for planetary protection in space exploration, or press norms like not naming sexual assault victims - just strong consensus and moral suasion. Perhaps that's not enough here, but it's a discussion that should take place, which first requires a clear statement of what the overall goals should be.
This is also why I think the point about lab employees, and making safe pathways for them to speak out, is especially critical; current discussions about whistleblower protections don't go far enough, and while group commitments ("if N others from my company") are valuable, private speech on such topics should be even more clearly protected. And one reason for the inability to get consensus among lab employees is that there isn't currently common knowledge within labs about how many people think the goal is the wrong one, and the labs' incentives to attract investment are opposed to those that would give employees options for voice or loyalty instead of exit - which explains why, in general, only former employees have spoken out.
I wonder if seeking a general protective order banning OpenAI from further subpoenas of nonprofits without court review is warranted in this case - that seems like a good first step, and an appropriate precedent for the overwhelmingly likely later cases, given OpenAI's behavior.
I'm pointing out that the third camp, which you deny really exists, does exist, and as an aside, is materially different in important ways from the other two camps.
You say you don't think this matters for allocating funding, and you don't care about what others actually believe. I'm just not sure why either point is relevant here.