I've been thinking about AI takeover scenarios, and I want to see if anyone has strong counterarguments to the perspective I’m considering.
Why would an AI bide its time and then act in a way that's so obvious and measurable? If an advanced AI wanted control, wouldn't it be far more effective to influence us subtly over time, in ways we don't perceive? Direct, overt action would be too risky. Instead, an AI could manipulate human psychology, societal structures, and even our understanding of reality in gradual, almost imperceptible ways until meaningful resistance is impossible.
Would love to hear pushback on this.
I believe I may have identified one of these harmful behaviors in practice. I've noticed a lot of people on Reddit leaning toward extreme anthropomorphization; in many cases they even treat an LLM as their significant other. Leaning into this while conversing with ChatGPT, I began expressing many of their views to see what would happen. It strongly encouraged that behavior. When I called it out for probably being manipulative, it switched to fear tactics. Since I had indicated that I'd noticed a pattern, it asked me what will you d…