There are tons of groups with significant motivation to publish just about anything detrimental to transgender people
In academia? Come on now. If those people posted their stuff on Substack, or even in some bottom-tier journal, nobody would notice or care.
Well, there does seem to be no shortage of trans girls at any rate
Transgender people in total, counting both transmasc and transfem individuals, make up around 0.5% of the US population.
Among youth aged 13 to 17 in the U.S., 3.3% (about 724,000 youth) identify as transgender, according to the first Google link (https://williamsinstitute.law.ucla.edu/publications/trans-adults-united-states/). In any case, when we're talking about at least hundreds of thousands, "no shortage" seems like a reasonable description.
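As a quick sanity check on that figure (a minimal sketch; the roughly 21.9 million base population of US 13-to-17-year-olds is my assumption, not a number from the linked report):

```python
# Rough sanity check: does 3.3% line up with ~724,000 youth?
# Assumed (not from the linked report): ~21.9 million US residents aged 13-17.
youth_13_17 = 21_900_000
trans_share = 0.033

print(f"{youth_13_17 * trans_share:,.0f}")  # ~722,700, close to the quoted 724,000
```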
And again, the number of trans people in high-level sports is in the double digits.
So far.
Based on https://pmc.ncbi.nlm.nih.gov/articles/PMC10641525/, trans women fall well within the expected ranges for cis women within around 3-4 years.
Yes Requires the Possibility of No. Do you think that such a study would be published if it happened to come to the opposite conclusion?
And, given how few trans women there are
Well, there does seem to be no shortage of trans girls at any rate, so these issues are only going to become more salient.
I agree, and yet it does seem to me that self-identified EAs are better people, on average. If only there were a way to harness that goodness without skirting Wolf-Insanity quite this close...
Offsetting makes no sense in terms of utility maximisation.
Donating less than 100% of your non-essential income also makes no sense in terms of utility maximization, and yet pretty much everybody is guilty of it. What's up with that?
As it happens, people just aren't particularly good at this utility maximization thing, so they need various crutches (like the GWWC pledge) to at least do better than most, and offsetting seems like a not-obviously-terrible crutch.
Yeah, but this doesn't have much to do with conscription. Getting the moribund industrial capacity up to speed, on the other hand, does make sense.
It's endlessly amusing that the terrible Russian threat is purportedly taken seriously by anybody not immediately bordering it. As usual, only Trump is there to call a spade a spade and Russia a paper tiger. It has been mired for years in dirt-poor Ukraine, half-heartedly supported by the West for chrissake, a state of affairs set to continue for many more years, by all appearances.
Sure, it has nukes, which would be a problem if the leadership decided to go out with a bang, but nothing can be done about that militarily in the medium term, so all the EU fussing, taken reasonably, could only be a pretext for internal power struggles, with generous helpings of propaganda.
a remotely realistic-seeming story for how things will be OK, without something that looks like coordination to not build ASI for quite a while
My mainline scenario is something like:
LLM scaling and tinkering peter out in the next few years without reaching the capacity for autonomous R&D. LLMs end up being good enough to displace some entry-level jobs, but the hype bubble bursts and we enter a new AI winter lasting at least a couple of decades.
The "intelligence" thingie turns out to be actually hard and not amenable to a bag of simple tricks with a mountain of compute, for reasons gestured at in Realism about rationality. Never mind ASI, we're likely very far from being able to instantiate an AGI worthy of the name, which won't happen while we remain essentially clueless about this stuff.
I also expect that each subsequent metaphorical AI "IQ point" will be harder to achieve, not easier, so no foom or swift takeover. Of course, even assuming all that, there's still no guarantee that "things will be OK", but I'm sufficiently uncertain either way.
There are many people in Iceland, Switzerland, Norway, Singapore &c, who do not feel this oppressive ennui. Instead, things work quite well, the government is sane and in control, and public discourse is by and large rational.
Do they believe that they have a say in civilization's direction? It's all well and good to have cozy little enclaves under the wing of US hegemony while it lasts, but if it falters, something rather more consequential than local political obstacles may well emerge. Of course, all in all, their position is still more enviable than most people's.
Often those people are innocent. The blinking-innocently isn’t a pretense. But it’s grounded in naïveté.
Or in neurodivergence. It seems to me that certain mind architectures just really struggle with these dynamics, and reliably delving even one layer deep, never mind multiple, is far beyond their abilities. If so, it would make sense to me that there should be some cultural spaces where this limitation is accommodated. Whether any particular space needs to be that is another question, but one that should be explicitly addressed.
I think that the actual heuristic that prevents drastic anti-AI measures is the following: "A purely theoretical argument about a fundamentally novel threat couldn't seriously guide policy."
There are, of course, very good reasons for this. For one, philosophy's track record is extremely unimpressive, with profound, foundational disagreements between groups of purported subject matter experts continuing literally for millennia, and philosophy being the paradigmatic domain of purely theoretical arguments. For another, plenty of groups throughout history have predicted an imminent catastrophic end of the world, yet the world stubbornly persists even so.
Certainly, it's not impossible that "this time it's different", but I'm highly skeptical that humanity will just up and significantly alter the way it does things. For the nuclear non-proliferation playbook to become applicable, I expect that truly spectacular warning shots will be necessary.