If Strong AI turns out not to be possible, what are our best current expectations as to why?
I'm thinking of trying my hand at writing a sci-fi story. Do you think exploring this idea has positive utility? I'm not sure myself: it seems the idea that an intelligence explosion is a possibility could use more public exposure as it is.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.
I presume this is downvoted due to some inferential gap... How does one get from "no AGI" to "no humans"? Or, conversely, why does the existence of humans imply that AGI is possible?
I downvoted mainly because Eliezer is being rude. Dude didn't even link http://lesswrong.com/lw/ql/my_childhood_role_model/ or anything.