Strong statement from Bill Gates on machine superintelligence as an x-risk, in today's Reddit AMA:
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.
Thing is, along those lines, this would sorta become a simulation fic, and Yudkowsky said it wasn't that.
Didn't he also say there wouldn't be AI in it?