All of outerloper's Comments + Replies

This is a bit of an odd time to start debating, because I haven't explicitly stated a position, and it seems we're in agreement that that's a good thing[1]. Calling this to attention because

  1. You make good points.
  2. The idea you're disagreeing with diverges from anything I would endorse multiple times within its first two sentences.

Speaking first to this point about culture wars: that all makes sense to me. By this argument, "trying to elevate something to being regulated by Congress by turning it into a culture war is not a reliable strategy" is probably a solid ...

frontier64
Allying AI safety with DEI and LGBTQIA+ activism won't do AI safety any favors. Nor do I think it's a really novel idea: Effective Altruism occasionally flirts with DEI, and other people have suggested using similar tactics to get AI safety in the eyes of modern politics.

AI researchers are already linking AI safety with DEI, with the effect of limiting the appearance of risk. If someone were to read a 'risks' section in an OpenAI paper, they would come away with the impression that the biggest risk of AI is that someone could use it to make a misleading photo of a politician, or that the AI might think flight attendants are more likely to be women than men! Their risks section on DALL-E 2 reads:

The point being, DEI does not take up newcomers and lend its support to their issues. It subsumes real issues and funnels the efforts directed at solving them toward the DEI wrecking ball.
Nicholas / Heather Kross
Ah, thank you for clarification!

Nothing like taking over the world. From a certain angle it's almost the opposite of that: relinquishing some control.

The observations in my long comment suggest to me some different angles for how to talk about alignment risk. They belong to a style of discourse that is not well-respected on LessWrong, and the fact that this is a space where that style gets pushed out is probably good for the health of LessWrong. But the state of broader popular political/ethical discourse puts a lot of weight on these types of arguments, and they're more effective (because they push around s...

Nicholas / Heather Kross
EDIT: retracted in this context, see reply.

~~So... growing alignment by... merging it with far-less-important political issues... and being explicitly culture-war-y about it? Is the endgame getting non-right-wing politicians to regulate AGI development?~~

~~Because... for [structural](https://en.wikipedia.org/wiki/United_States_Electoral_College) [reasons](https://en.wikipedia.org/wiki/Gerrymandering), one side has an advantage in many culture war battles, despite being a minority of citizens at the national level. (Within a state, either "side" could have the advantage, but whichever one has the advantage tends to keep it for a while.) Turning AGI safety into a culture war thing is a bad idea, since [culture war things don't seem to get much actual progress in Congress](https://www.slowboring.com/p/the-rise-and-importance-of-secret), and when they *do* it's often something bad (see "structural reasons" above).~~

~~If this was your idea, I guess I'm glad you didn't post it, but I also think you should think harder o...~~
outerloper

tl;dr: most AI/ML practitioners make moral decisions based on social feedback rather than systems of moral thought. Good arguments don't do much here.

Engineers and scientists, most of the time, do not want to think about ethics in the context of their work, and begrudgingly do so to the extent that they are socially rewarded for it (and socially punished for avoiding it). See here.


I wrote in another comment about my experience in my early research career at a FAANG AI lab trying to talk to colleagues about la...

1Nicholas / Heather Kross
Is the ending suggestion "take over the world / a large country"? Pure curiosity, since Leverage Research seems to have wanted to do that, but... they seem to have been poorly run, to understate it.
Answer by outerloper

When I worked a FAANG research job, my experience was that it was socially punishable to bring up AI alignment research in just about any context, with exceptions where it was relevant to the team's immediate mission, for example robustness on the scale required for medical decisions (a much smaller scale than AGI ruin, but a notably larger scale, in the sense of errors being costly, than most deep learning systems in production use at the time).

I find that in some social spaces, Rationality/EA-adjacent ones in particular, it's seen as distracting, rude, and ...