
AI labs are on a quest to bring about a prosperous, wonderful future for all the men and women of the world, without disease or suffering, altruistically building machines that will shine knowledge, prosperity, and splendour onto the universe. Their glorious leaders are fighting against adversity, towards... and other egregious baloney, you know.

You are smart enough to see through the BS, the wishful thinking, and the self-deceit. There is no magic in the world. This thing will explode and you will be hurt, maybe worse than others, because your conscience will not forgive you, and in your last dying moment you will be engulfed by sorrow.

That is... unless you quit. 

If YOU, the reader, employed at Anthropic, Google, Baidu, Microsoft, HuggingFace, Meta, etc., quit, this will not happen. We will not become a galaxy-faring species during your lifetime either, and that's actually OK.

Don't fool yourself. You are not going to get the light cone of the universe. You know how all these rushed kerfuffles end.

You are a free person, and you can quit today.

10 comments

I'm at over 50% chance that AI will kill us all. But consider the decision to quit from a consequentialist viewpoint. Most likely the person who replaces you will be almost as good as you at capability research but will care far less than you do about AI existential risk. Humanity consequently probably has a better chance if you stay in the lab, ready for the day when, hopefully, lots of lab workers try to convince the bosses that now is the time for a pause, or at least that now is the time to shift a lot of resources from capability to alignment.
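To make the shape of that argument explicit, here is a toy expected-value sketch in Python. Every number in it is a hypothetical placeholder rather than an estimate from this thread; it only illustrates why an almost-as-good replacement who cares less can tip the consequentialist calculation toward staying.

```python
# Toy consequentialist model of "quit vs. stay" for one lab researcher.
# Every constant is a hypothetical placeholder chosen to illustrate the
# structure of the argument, not an actual estimate.

P_DOOM = 0.5                    # assumed baseline P(AI kills us all)

# Quitting: your replacement restores most of your capability output, so
# the slowdown you cause is only the productivity gap you leave behind.
YOUR_CAPABILITY_EFFECT = 0.001  # doom attributable to one researcher's output (hypothetical)
REPLACEMENT_QUALITY = 0.9       # replacement is 90% as productive (hypothetical)

# Staying: you keep some chance of successfully pushing for a pause or a
# resource shift toward alignment later.
P_ADVOCACY_WORKS = 0.05         # chance your future advocacy matters (hypothetical)
ADVOCACY_EFFECT = 0.05          # doom reduction if it does (hypothetical)

p_doom_quit = P_DOOM - YOUR_CAPABILITY_EFFECT * (1 - REPLACEMENT_QUALITY)
p_doom_stay = P_DOOM - P_ADVOCACY_WORKS * ADVOCACY_EFFECT

print(f"P(doom | quit) = {p_doom_quit:.4f}")  # 0.4999
print(f"P(doom | stay) = {p_doom_stay:.4f}")  # 0.4975
```

Under these made-up numbers staying wins, but the model's only real content is that the comparison turns on the replacement gap versus the advocacy term, which is exactly what the replies below dispute.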

The time for a pause is now. Advancing AI capabilities now is immoral and undemocratic.

OK, then, here is another suggestion I have for the concerned people at AI labs: go on strike and demand that capability research be dropped in favour of alignment research.

Your framework appears to be moral rather than practical. Right now going on strike would just get you fired, but in a year or two perhaps it could accomplish something. You should consider the marginal impact of the actions of a few workers on the likely outcome of AI risk.

I am using a moral appeal to elicit a practical outcome.

> Right now going on strike would just get you fired, but in a year or two perhaps it could accomplish something.

Two objections:

  1. I think it will not get you fired now. If you are an expensive AI researcher (or, better, a group of AI researchers), your action will create a small media storm. Firing you will not be an acceptable option, for optics reasons. (Just don't say you believe the AI is conscious.)
  2. A year or two might be a little late for that.

One recommendation: Unionise.

> You should consider the marginal impact of the actions of a few workers on the likely outcome of AI risk.

The marginal impact would be great, precisely because of the media effect: "AI researchers strike against the machines, demanding that AI labs pause."

I like the idea of an AI lab workers' union. It might be worth talking to union organizers and AI lab workers to see how practical the idea is and what steps would have to be taken. One danger, though, is that the union might end up putting salaries ahead of existential risk.

Great to see some support for these ideas. Well, if nothing else, a union would be a good distraction for management and a drain on finances that would otherwise be spent on compute.

I do not know how I can help personally with this, but here is a link for anyone who reads this and happens to work at an AI lab: https://aflcio.org/formaunion/4-steps-form-union

Demand an immediate, indefinite pause. Demand that all capability work be dropped and that you work only on alignment until it is solved. Demand that humanity live and not die.


It would be great to hear the objections from the downvoters.

In general, appeals of the form "You already agree with me that this is right, you're just lying to yourself, and that's why you don't do it" are not apt to be well received. Such appeals combine the belief that the right thing to do is obvious (which is often false) with the belief that the person you are addressing is deceiving themself (which is also often false).

Consider:

"You already know Jesus is God, you're resisting His grace because of your attachment to sin, but I know you can turn away from the darkness." Or "You already know Christianity is false, you're just staying in it because it is comfortable, I know you can face the truth of atheism."

Both come from opposite perspectives, but both are quite irritating to hear, and both include statements about the listener's internal state that are often just false. I think this kind of appeal is a result of expecting short inferential distances.