Is it a satire of the 'AI ethics' position?
No, it is not actually.
What is confusing? :-)
Great to see some support for these ideas. Well, if nothing else, a union will be a good distraction for management and a drain on finances that would otherwise be spent on compute.
I do not know how I can help personally with this, but here is a link for anyone who reads this and happens to work at an AI lab: https://aflcio.org/formaunion/4-steps-form-union
Demand an immediate, indefinite pause. Demand that all other work be dropped and that you work only on alignment until it is solved. Demand that humanity live and not die.
I am using a moral appeal to elicit a practical outcome.
Right now going on strike would just get you fired, but in a year or two perhaps it could accomplish something.
Two objections:
One recommendation: Unionise.
You should consider the marginal impact that the actions of a few workers would have on the likely outcome for AI risk.
The marginal impact would be great, precisely because of the media effect: "AI researchers strike against the machines, demanding that AI labs pause."
The time for a pause is now. Advancing AI capabilities now is immoral and undemocratic.
OK then, here is another suggestion I have for the concerned people at AI labs: go on strike and demand that capability research be dropped in favour of alignment research.
It would be great to hear the objections from the downvoters.
Thank you for your words of caution @the gears to ascension, @Ruby, @Chris_Leong
Indeed, I have only recently updated on AI. I lived happily believing AGI was just nonsense, after seeing gimmick after gimmick and slow progress on anything general. It all came as a rude shock a couple of weeks ago.
I will heed your advice about consulting with others.
I am, however, of the firm opinion that AI alignment is not going to be solved any time soon. The best thing is simply to shut down progress on new capabilities indefinitely. I do not see that being done without the force of law, and politics will inevitably be at play.
Many words, but fundamentally this is the first time I have seen something that makes sense on the topic. If you make a god, prepare to be killed by him.
If Sutskever, Altman et al. want that, I wish there were a way to send them off to a parallel universe to run their experiments. I have a family and a normal life to attend to.
There is no such thing as safe AGI. It must be indefinitely delayed.
I generally agree with your commentary about the dire lack of research in this area right now, and I want to be hopeful about the solvability of alignment.
I want to propose that AI alignment is not only a problem for ML professionals. It is a problem for society as a whole, and we need to get as many people involved as possible, soon: from lawyers and lawmakers to teachers and cooks. This is so for many reasons:
I want to show what we are doing at my company: https://conjointly.com/blog/ai-alignment-research-grant/. The aim is to make social science PhDs aware of the alignment problem and get them involved in whatever way they can. Is it the right way to do it? I do not know.
I, for one, am not an LLM specialist, so I intend to make noise everywhere I can with the resources I have. This weekend I will write to every member of the Australian parliament. Next weekend, I will write to every university in the country.
So I am ready for this comment to be downvoted 😀
I realise that what I wrote did not resonate with the readers.
But I am not an inexperienced writer. I would not rate the above piece as below average in substance, precision, or style. Perhaps the main addressees were not clear (even though they are named in the last paragraph).
I am exploring a tension in this post, and I feel that this very tension has burst out into the comments and votes. It wants to be explored further, and I will take the time to write about it better.