Could A Superintelligence Out-Argue A Doomer?
Imagine a hypothetical conversation between an intelligent, rational, epistemically confident AI doomer (say, Eliezer Yudkowsky) and a superintelligent AI. The goal of the superintelligence is to genuinely convince the doomer that doom is unlikely or impossible, using only persuasion - no rearranging of mind states via nanobots or similar tricks, and...
I wouldn't consider AI art an "AI harm" - I think it's a tremendous net benefit for artists, just as the digital camera and Photoshop were.