If Strong AI turns out not to be possible, what are our best expectations today as to why?
I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea has positive utility? I'm not sure myself: it looks like the idea that an intelligence explosion is possible could use more public exposure than it currently gets.
I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.
If I replicate the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon based molecules), is that an "AI"?
If I make something very, very similar, but not identical, to the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon-based molecules), is that an "AI"?
It's a terminology discussion at this point, I think.
In my original reply, my intent was: "provided that no souls or inputs from outside the universe are required to make a functioning human, then we are able to create an AI by building something functionally equivalent to a human, and therefore strong AI is possible".
Possibly, that's a borderline case.