outerloper has not written any posts yet.

Nothing like taking over the world. From a certain angle it’s almost the opposite of that: relinquishing some control.
The observations in my long comment suggest some different angles for talking about alignment risk. They belong to a style of discourse that is not well respected on LessWrong, and LessWrong being a space where that style gets pushed out is probably good for its health. But broader popular political/ethical discourse puts a lot of weight on these types of arguments, and they’re more effective at convincing engineers they have an external responsibility, because they push around so much social capital.
I don’t want to be too specific... (read more)
tl;dr: most AI/ML practitioners make moral decisions based on social feedback rather than systems of moral thought. Good arguments don't do much here.
Engineers and scientists, most of the time, do not want to think about ethics in the context of their work, and do so begrudgingly, to the extent that they are socially rewarded for it (and socially punished for avoiding it). See here.
I wrote in another comment about my experience, early in my research career at a FAANG AI lab, trying to talk to colleagues about larger-scale risks. Granted, we weren't working on anything much like AGI at the time in that group, but... (read 1232 more words →)
When I worked a FAANG research job, my experience was that bringing up AI alignment research was socially punishable in just about any context, with exceptions where it was relevant to the team's immediate mission, for example robustness at the scale required for medical decisions (a much smaller scale than AGI ruin, but, in the sense of errors being costly, a notably larger scale than most deep learning systems in production use at the time).
I find that in some social spaces, Rationality/EA-adjacent ones in particular, it's seen as distracting, rude, and low-status to emphasize a hobby-horse social justice issue at the expense of whatever else is being discussed.... (read 654 more words →)
This is a bit of an odd time to start debating, because I haven't explicitly stated a position, and it seems we're in agreement that that's a good thing[1]. Calling this to attention because
Speaking first to this point about culture wars: that all makes sense to me. By this argument, "trying to elevate something to being regulated by Congress by turning it into a culture war is not a reliable strategy" is probably a solid heuristic.
I wonder whether we've lost the context of my top-level comment. The scope (the "endgame") I'm... (read 363 more words →)