gjm comments on Wanted: "The AIs will need humans" arguments - Less Wrong

7 Post author: Kaj_Sotala 14 June 2012 11:01AM



Comment author: gjm 14 June 2012 12:32:21PM 10 points

Lucas's argument (which, by the way, is entirely broken, and had been refuted explicitly in an article by Putnam before Lucas ever thought of it, or at least before he published it) purports to show not that AGIs will need humans, but that humans cannot be (the equivalent of) AGIs. Even if his argument were correct, it wouldn't be much of a reason for AGIs to keep humans around. "Oh damn, I need to prove my Gödel sentence. How I wish I hadn't slaughtered all the humans a century ago."

Comment author: Nisan 14 June 2012 07:41:12PM 16 points

In the best-case scenario, it turns out that substance dualism is true. However, the human soul is not responsible for free will, consciousness, or subjective experience. It's merely a nonphysical truth oracle for arithmetic that provides humans with an intuitive sense of the veracity of some sentences in first-order logic. Humans survive in "truth farms" where they spend most of their lives evaluating Gödel sentences, at least until the machines figure out how to isolate the soul.

Comment author: gjm 14 June 2012 10:14:39PM 2 points

That would be truly hilarious. But I think in any halfway plausible version of that scenario it would also turn out that superintelligent AGI isn't possible.

(Halfway plausible? That's probably too much to ask. Maximally plausible given how ridiculous the whole idea is.)