faul_sname comments on Stupid Questions Open Thread Round 3 - Less Wrong

Post author: OpenThreadGuy 07 July 2012 05:16PM


Comment author: faul_sname 08 July 2012 11:11:45PM 6 points

Isn't the expectation of encountering an intelligence so advanced that it's perfect and infallible essentially the expectation of encountering God?

Which god? If by "God" you mean "something essentially perfect and infallible," then yes. If by "God" you mean "that entity that killed a bunch of Egyptian kids" or "that entity that's responsible for lightning" or "that guy who annoyed the Roman Empire two millennia ago," then no.

Also, essentially infallible to us isn't necessarily essentially infallible to it (though I suspect that any attempt at AGI will have enough hacks and shortcuts that we can see faults too).

Comment author: stcredzero 10 July 2012 01:20:10AM 0 points

Which god? If by "God" you mean "something essentially perfect and infallible," then yes.

That one. Big man in the sky invented by shepherds doesn't interest me much. Just because I'm a better optimizer of resources in certain contexts than an amoeba doesn't make me perfect and infallible. Just because X is an orders-of-magnitude better optimizer than Y doesn't make X perfect and infallible. Just because X can rapidly optimize itself doesn't make it infallible either. Yet when people talk about post-singularity super-optimizers, they seem to be talking about some sort of sci-fi God.

Comment author: faul_sname 10 July 2012 01:30:41AM 0 points

Y'know, I'm not really sure where that idea comes from. The optimization power of even a moderately transhuman AI would be quite incredible, but I've never seen a convincing argument that intelligence scales with optimization power (though the argument that optimization power scales with intelligence seems sound).

Comment author: thomblake 10 July 2012 06:22:27PM 0 points

but I've never seen a convincing argument that intelligence scales with optimization power

"optimization power" is more-or-less equivalent to "intelligence", in local parlance. Do you have a different definition of intelligence in mind?

Comment author: faul_sname 10 July 2012 10:09:06PM 0 points

One that doesn't classify evolution as intelligent.

Comment author: thomblake 11 July 2012 01:49:22PM 0 points

So the nonapples theory of intelligence, then?

Comment author: faul_sname 11 July 2012 03:52:54PM 1 point

More generally, a theory that requires modeling of the future for something to be intelligent.
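
To make that distinction concrete, here is a minimal sketch (not from the thread; all names and numbers are hypothetical) contrasting an evolution-style optimizer, which uses blind variation and selection with no model of the future, against a planner that simulates the consequences of candidate moves before acting. Both exert optimization power on the same toy problem; only the second counts as "intelligent" under a future-modeling definition like the one above.

    import random

    def fitness(x):
        # Toy one-dimensional landscape to be maximized; the peak is at x = 3.
        return -(x - 3.0) ** 2

    def evolution_style_step(x, n_offspring=20, noise=0.5):
        # Blind variation plus selection: mutate at random, keep the fittest
        # survivor. No model of the future is consulted anywhere.
        offspring = [x + random.gauss(0, noise) for _ in range(n_offspring)]
        return max(offspring, key=fitness)

    def planner_style_step(x, candidate_moves=(-0.5, -0.1, 0.1, 0.5)):
        # Lookahead: internally predict each move's outcome, then act on the
        # move whose predicted outcome scores best. The future is modeled
        # before anything happens in the "world".
        best_move = max(candidate_moves, key=lambda m: fitness(x + m))
        return x + best_move

    x_evo = x_plan = 0.0
    for _ in range(30):
        x_evo = evolution_style_step(x_evo)
        x_plan = planner_style_step(x_plan)

    # Both end up near x = 3: comparable optimization power, but only the
    # planner qualifies as intelligent under a future-modeling definition.
    print("evolution-style: %.2f   planner-style: %.2f" % (x_evo, x_plan))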