Hi LW, this article has put some doubt in my mind as to whether researching AI alignment is worthwhile, or a frivolity that detracts from more pressing issues such as mitigation of harmful biases.
[Discussing AI consciousness] is a distraction, when current AI systems are increasingly pervasive, and pose countless ethical and social justice questions that deserve our urgent attention
I would like to hear some opinions from this community on the sentiment expressed in the above quote.
First, though, an introduction, or why this has prompted me to emerge from my decade-long lurker state:
I am a computational linguistics MS student approaching my final semester and its accompanying research capstone. My academic background is in...
I probably should have included the original Twitter thread that sparked the linked article, in which the author bluntly says she will no longer discuss AI consciousness/superintelligence. The two had become conflated, so thanks for pointing that out!
With regard to instrumental convergence (I just browsed the Arbital page), are you saying that the big names working on AI safety are now more focused on the incidental catastrophic harms a superintelligence would cause on its way to achieving its goals, rather than on making sure artificial intelligence will understand and care about human values?