xpym
irrational reaction to anything “sex” and it is useful to give people space to talk about gender variance without it being inherently sexual
Yeah, "sex" itself is also problematic of course, referring both to essential characteristics of individuals and to complicated social interactions.
My biggest problem with "transgender" is that it implies the desirability of grave, not-entirely-reversible hormonal/surgical transition to people who are only uncomfortable with their gender role, but have no body map issues. I'd say that decoupling (temporary, reversi...
There is evidence of a biological basis for trans identity.
It's plausible that there is a biological basis for feelings of body-map mismatch, but categorizing all of this under the rubric of "trans identity" continues to seem like a horrible civilization-wide confusion-inducing mistake to me.
The deprecated term "transsexual" also had its issues, of course (the confusion about whether the "sexual" part refers to your sex or the sex of people you are attracted to, like it does in e.g. "homosexual"), but it at least clearly pointed to the fact that it isn't entirely about "gender identity" qua social role-play.
I would still use an actual mp3 player, but ReplayGain (volume adjustment) has irrevocably spoiled me. And also, wireless earbuds have finally gotten good since then. So, phone-as-an-mp3-player it is.
For the former-maids now working in the modern service sector, this was a major step up.
Seems doubtful tbh. I think that being a maid/manservant to a one-percenter could in theory be a much better gig, but society apparently collectively decided that such jobs are inherently degrading and fundamentally conflict with the Egalitarian Spirit, and abolished them on moral grounds instead of economic ones.
Yeah, it's not the gramophone that displaced in-person socializing. TV struck first, and then the internet dealt the killing blow.
Even humans have pretty much succeeded at taking over the world.
Coalitions of humans have. It's plausible that an AI slightly smarter than us in the relevant ways might soon end up heading one, but I don't expect it to get away with acting egregiously misaligned.
The issue is that nobody is sure how things are going to go.
Well, they aren't behaving accordingly. Pessimists are super doomy, optimists expect "loving grace" around the corner, and neither side is at all discomfited by the vast gulf of confident disagreement in between.
This inclines me toward caution
A widely agreeable notion, surely, until elaborated on.
will future powerful AGI / ASI “by default” lack Approval Reward altogether?
I'd say that pessimists are similar to LLM optimists in their conviction that it would be pretty easy to match and then greatly surpass general human intelligence, trusting their own intuitions far too much. Of course, once that assumption is made, everything else straightforwardly follows.
If you define wireheading as hacking the brain to do something weird that makes you feel better
There are similarities, but the space of hardware solutions is much bigger.
stimulated their reward system which, under RTB, is unlikely to solve the problem of chronic suffering
But surely something in the vicinity should work? In any case, I'm pretty sure that most people don't want to exist in a permanent state of pure bliss, whatever it means, and wouldn't take a drug to that effect, so the problem description seems lacking. I'm not claiming to be able to produce a better one, though.
Absolute-zero-based suffering.
Does this imply that wireheading perfectly solves the problem, absent traditional Buddhist worries like reincarnation, which RTB presumably eschews?
Players naturally distinguish “legitimate” actions (swinging sword, drinking potion) from “illegitimate” ones (using console commands to spawn items). This isn’t in the game’s code—the engine doesn’t care. It’s a social distinction we impose based on our intuitions about fair play and authentic experience. We’ve collectively decided that some causal interventions are kosher and others are “cheating,” even though they’re all just bits flipping in RAM.
It's worth mentioning speedrunning here. When players decide to optimize some aspects of gameplay (e.g. g...
One might also do, say, a thought experiment with alien civilisations untouched by whites’ hands and unaware of the oppression system.
Even though their supposed oppressor classes are unlikely to look like white males, that doesn't guarantee the absence of platonic toxic whiteness & masculinity.
What #1, #2, #4 have in common is that they are harder to check experimentally unless you are immersed in the area, and that there's the potential difficulty of publishing results that threaten to invalidate the dominant narrative.
Indeed.
...
It’s usually much easier to bullshit value claims than epistemic claims.
Sure, if we compare the sets of all value claims with all epistemic claims. However, the controversial epistemic claims aren't typical; they're selected for both being difficult to verify and having obvious value implications. Consider the following "factual" claims that are hacking people's brains these days:
Nah, the weird idea is AI x-risk, something that almost nobody outside of LW-sphere takes seriously, even if some labs pay lip service to it.
I'm surprised that you're surprised. To me you've always been a go-to example of someone exceptionally good at both original seeing and taking weird ideas seriously, which isn't a well-trodden intersection.
We need an epistemic-clarity-win that’s stable at the level of a few dozen world/company leaders.
If you disagree with the premise of “we’re pretty likely to die unless the political situation changes A Lot”, well, it makes sense if you’re worried about the downside risks of the sort of thing I’m advocating for here. We might be political enemies some of the time, sorry about that.
These propositions seem in tension. I think that we're unlikely to die, but agree with you that without an "epistemic-clarity-win" your side won't get its desired polici...
general public making bad arguments
My point is that "experts disagree with each other, therefore we're justified in not taking it seriously" is a good argument, and this is what people mainly believe. If they instead offer bad object-level arguments, then sure, dismissing those is fine and proper.
Yoshua Bengio or Geoffrey Hinton do take AI doom seriously, and I agree that their attitude is reasonable (though for different reasons than you would say)
I agree that their attitude is reasonable, conditional on superintelligence being achievable in the foreseeable future. I personally think this is unlikely, but I'm far from certain.
And I think AI is exactly such a case, where conditional on AI doom being wrong, it will be for reasons that the general public mostly won’t know/care to say, and will still give bad arguments against AI doom.
Most people are clueless about AI doom, but they have always been clueless about approximately everything throughout history, and get by through having alternative epistemic strategies of delegating sense-making and decision-making to supposed experts.
Supposed experts clearly don't take AI doom seriously, considering that many of them are doing the...
My core claim here is that most people, most of the time, are going to be terrible critics of your extreme idea. They will say confused, false, or morally awful things to you, no matter what idea you have.
I think that most unpopular extreme ideas have good simple counterarguments. E.g. for Marxism it's that whenever people attempt it, famines and various extravagant atrocities follow. Of course, "real Marxism hasn't been tried" is the go-to counter-counterargument, but even if you are a true believer, it should give you pause that it has been ...
the only divided country left after Germany
China/Taiwan seem to be (slightly) more so these days, after Kim explicitly repudiated the idea of reunification.
But doesn't increasing the accuracy of DL outputs require exponentially more compute? It only "works" to the extent that labs have been able to afford exponential compute scaling so far.
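A back-of-the-envelope sketch of why, assuming the rough power-law relation between loss and training compute reported in the scaling-laws literature (the exponent below is illustrative, not a measured figure):

\[
L(C) \propto C^{-\alpha} \quad\Longrightarrow\quad \frac{L(C')}{L(C)} = \frac{1}{2} \;\Rightarrow\; \frac{C'}{C} = 2^{1/\alpha}
\]

With a small exponent like \(\alpha \approx 0.05\), halving the loss costs a factor of \(2^{1/0.05} = 2^{20} \approx 10^6\) in compute: every fixed multiplicative gain in accuracy demands an exponentially larger budget.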