“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.” —Aldous Huxley
Most AIs are sycophants. What if we build antagonistic AI?
Colleagues & I just released a working paper wherein we argue that we should explore AI that are purposefully antagonistic towards users: https://arxiv.org/abs/2402.07350
We ask: Why are “good” and “moral” AI equated with sycophantic, servile, comfortable, polite, nice AI? What are we missing in this moral paradigm? Where might antagonistic AI have benefits? That is, AI that are difficult, disagreeable, harsh, angry, rude, shaming, etc.?
Drawing from a speculative design workshop and formative explorations, we identify contexts and areas when “bad” AI...