I'm actually quite confused by the content and tone of this post.
Is it a satire of the 'AI ethics' position?
I speculate that the downvotes might reflect other people being confused as well?
The post reads like a half-assed college essay where you're going through the motions of writing without things really coming together. It's heavy on structure, there's no clear thread of rhetoric progressing through it, and it's hard to get a clear sense of where you're coming from with the whole thing. The overall impression is just a list of disjointed arguments, essay over.
So I am ready for this comment to be downvoted 😀
I realise that what I wrote did not resonate with the readers.
But I am not an inexperienced writer. I would not rate the above piece as below average in substance, precision, or style. Perhaps the main addressees were not clear (even though they are named in the last paragraph).
I am exploring a tension in this post, and I feel that this very tension has burst out into the comments and votes. The tension wants to be explored further, and I will take time to write about it better.
A retort familiar to people discussing existential risk is that these conversations distract us from the “real issues” and “current harms” of AI.
On the surface this retort seems easy to debunk. And yet, here are three reasons why discussing existential risk really does distract us from the real issues and current harms:
First, the statement is factually correct. Humans have only so much attention to pay to AI. If you talk about x-risk, there is less time to talk about current harms. If you pay lawyers to draft a bill aimed at preventing x-risk and they focus on that, they may neglect to draft provisions addressing current harms. The same goes if you get hold of a politician, whose time is very limited and precious.
Second, focusing one’s mind on existential risk is just that: focusing. And focusing usually means saying “no” to other things. Imagine you are buying a ticket to the Arctic to take pictures of blooming tundra. The brochure is all about the pictures, and about which cameras are best for the most striking shots of tussock grasses. There is no word about how to get there, how to dress, what visas you need, and so on. Just pay $10,000 and focus on the tundra. You call the sales infoline and a robotic voice tells you, “You really need to consider the flowering tundra in its splendour. We will get you there.” “What visa do I need? Which country is it in?” you wonder. “Please focus on the tundra,” the highly convincing salesbot replies. That is what “focusing” on existential risk means: it means risking failure to consider other relevant factors in decisions and policies around AI.
Even if one believes in “transformative AI” and “safe AGI”, one needs a plan for getting there. Obstacles such as bias and discrimination and copyright violations and … must be addressed now, while they are still addressable, not “when we get there”.
Third, “existential risk” characterises the level of risk, not the nature of the risk. Extinction can come through many routes, including discriminatory decision-making (the Australian Robodebt scandal is a good example), or having nothing to eat because of under-employment, which is already starting to affect copywriters and digital artists. It can come from foom too. All these issues can be magnified and intertwingled as AI companies scale AI (in model sizes, use-cases, capital, hardware, influence, algorithmic improvements, etc.). Addressing and solving AI issues as and when they emerge is important. We are already behind on that.
We need to address old issues, monitor new issues and react quickly, and work on ways to prevent future issues. Discussion of existential risk (including "alignment plans" if you are writing one) must not be decoupled from remedying other harms and building a solid way forward.