
faul_sname comments on Stupid Questions Open Thread Round 3 - Less Wrong Discussion

Post author: OpenThreadGuy 07 July 2012 05:16PM




Comment author: faul_sname 10 July 2012 01:30:41AM, 0 points

Y'know, I'm not really sure where that idea comes from. The optimization power of even a moderately transhuman AI would be quite incredible, but I've never seen a convincing argument that intelligence scales with optimization power (though the argument that optimization power scales with intelligence seems sound).

Comment author: thomblake 10 July 2012 06:22:27PM, 0 points

> but I've never seen a convincing argument that intelligence scales with optimization power

"optimization power" is more-or-less equivalent to "intelligence", in local parlance. Do you have a different definition of intelligence in mind?

Comment author: faul_sname 10 July 2012 10:09:06PM, 0 points

One that doesn't classify evolution as intelligent.

Comment author: thomblake 11 July 2012 01:49:22PM, 0 points

So the nonapples theory of intelligence, then?

Comment author: faul_sname 11 July 2012 03:52:54PM, 1 point

More generally, a definition that requires an agent to model the future in order to count as intelligent.