If Strong AI turns out not to be possible, what are our best expectations today as to why?
I'm thinking of trying my hand at writing a sci-fi story. Do you think exploring this idea has positive utility? I'm not sure myself: it looks like the idea that an intelligence explosion is a possibility could use more public exposure than it currently gets.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.
This looks like gibberish to me. Does it actually refer to something that someone could explain and/or link to, or was it meant merely as an unlabeled story idea?
It's actually pretty clever. We're taking the assertion "Every strong AI instantly kills everyone" as a premise, meaning that on any planet where Strong AI has ever been created or ever will be created, that AI always ends up killing everyone.
Anthropic reasoning is a way of answering questions about why our little piece of the universe is perfectly suited for human life. For example, "Why is it that we find ourselves on a planet in the habitable zone of a star with a good atmosphere that blocks most radiation, that gravity is not too low a...