I think this might be a really important idea, and I'd like to explore it more.
From my perspective, your prediction 1 didn't come true in 2022 or 2023. But I think it still might.
I think when we're publicly exposed to an agentic model that's reasonably intelligent, this question will come up. I also think it's a valid question.
I think sentience / phenomenal consciousness is a separate and less important question than capabilities, but it's potentially a good intuition pump for why powerful AGI might be dangerous. I also think the two questions aren't entirely separate, so there's some real meat in that discussion.
I'd be interested in a dialogue on this topic. I've got a lot of thoughts but it's not high enough on my list to make it into a post. A dialogue would be an interesting option; I've wanted to try that format.
Interesting idea. Like... a mix of genuine sympathy / expansion of the moral circle to AI, and virtue signaling / an anti-corporation meme that spreads to the majority of the population and effectively curtails AGI capabilities research? This feels like something that might actually do nothing to reduce corporations' efforts to reach powerful AI unless it crosses a threshold, at which point there would be very dramatic actions against corporations that continue trying.
[Cross-posting from something I posted on Facebook.]
Prediction 1 (high confidence): In 2022, many more people will freak out over vague concerns about AI "becoming sentient" than will freak out over the types of issues that AI safety researchers are actually concerned about.
Prediction 2 (medium-low confidence): In the longer run, people freaking out over vague concerns about AI becoming sentient will do more to mitigate serious AI safety concerns than anything that AI safety researchers might have done on their own. This is because the masses freaking out seems more likely to change the interests and values of the government, industry, and even academia than anything that a bunch of specialist researchers might say or do. And any change in overall government / industry / academic interests and values seems likely to result in more substantial downstream change than anything that the specialist researchers might have done on their own.
[Sorry, I don't currently have any particularly good way of operationalizing those predictions or verifying them, so I'm not going to put them up on Metaculus or anything like that. If anybody else has a good way of operationalizing or verifying them, I'm open to suggestions.]
Weak evidence: In the game I'm currently playing, Mass Effect, the galactic government has placed a moratorium on companies doing research into "real artificial intelligence" out of concerns about AI becoming conscious. Clearly the game's designers, at least, thought this was a plausible concern and a plausible reason to ban AI development. I'm pretty sure I've also seen comparable examples in novels and other media. My model of the world says that most thought leaders in government, industry, and even academia are likely to be no more sophisticated or educated on these matters than the relevant game designers / authors.
Never mind that in this game, like in almost all media that I've seen, the distinction between "extremely capable robots" (of the type that AI safety researchers might freak out about) and "real AI" (that other people freak out about) is fuzzy at best. In fact, I suspect that to many people the only distinction is that the latter are "sentient" in some fuzzy sense whereas the former are not.
Furthermore, in my experience at least, there seems to be a strong association in the media between "real" / "sentient" / "conscious" AI, and AI developing its own goals and desires and maybe turning on humanity.
(To give another example - I mean, seriously, Ultron is supposed to be "artificial intelligence" but Jarvis is not?!)