
passive_fist comments on What if Strong AI is just not possible?

Post author: listic 01 January 2014 05:51PM 7 points




Comment author: passive_fist 02 January 2014 12:24:21AM 2 points

That's a good argument. What you're basically saying is that the design of the human brain occupies a sort of hill in design space that is very hard to climb out of. Now, if the utility function is "Survive as a hunter-gatherer in sub-Saharan Africa," that is a very reasonable (heck, a very likely) possibility. But evolution hasn't optimized us for things like designing algorithms. If you change the utility function to "Design a superintelligence," the landscape changes: hills start to look like valleys, and so on. What I'm saying is that there's no reason to think that we're even at a local optimum for "design a superintelligence."
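
(To make the landscape point concrete, here is a minimal sketch, not from the original thread: the same point in a design space can be a local maximum under one utility function and sit on an open slope under another. Both utility functions below are hypothetical toys chosen purely for illustration.)

```python
# Whether a point is a local optimum depends entirely on which
# utility function you score it with. Toy example, one-dimensional
# design space; all functions here are illustrative assumptions.
import numpy as np

xs = np.linspace(-3.0, 3.0, 601)   # a toy 1-D design space


def survive_savanna(x):
    """Toy 'hunter-gatherer fitness': peaked at x = 0."""
    return -x ** 2


def design_superintelligence(x):
    """Toy 'build a superintelligence' utility: monotone, no interior peak."""
    return x


def is_local_max(f, i):
    """True if xs[i] scores at least as well as both neighbours under f."""
    return f(xs[i]) >= f(xs[i - 1]) and f(xs[i]) >= f(xs[i + 1])


i0 = len(xs) // 2                  # x = 0, "where evolution left us"
print(is_local_max(survive_savanna, i0))           # True  -> a hill
print(is_local_max(design_superintelligence, i0))  # False -> just a slope
```

Under the first objective the point x = 0 is a hill; under the second, the very same point is partway up a slope, which is the "hills start to look like valleys" claim in miniature.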

Comment author: Yosarian2 02 January 2014 01:20:51PM 2 points

Sure. But let's say we adjust ourselves so we reach that local maximum (say, hypothetically speaking, we use genetic engineering to push ourselves to the point where the average human is 10% smarter than Albert Einstein, and it turns out that's about as smart as you can get with our brain architecture without developing serious problems). There's still no guarantee that even that would be enough to develop a real GAI; we can't really say how hard that is until we do it.