Will_Newsome comments on What if AI doesn't quite go FOOM? - Less Wrong
How do we know this is far off? For some very useful processes we're already close to optimal. For example, linear programming is close to the theoretical optimum, as are the improved versions of the Euclidean algorithm, and even the most efficient of those are not much more efficient than Euclid's original, which is around 2000 years old. And again, if it turns out that the complexity hierarchy does not collapse, then many of the algorithms we have today will turn out to be close to the best possible. So what makes you so certain that reaching these optimality limits is far off?
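As an aside, the Euclidean algorithm mentioned above really is tiny; this is just the standard textbook version, not anything specific from the thread:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# e.g. gcd(1071, 462) == 21
```

Modern variants (binary GCD, half-gcd for huge integers) improve the constants and the large-number asymptotics, but the core idea is unchanged after two millennia.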
I was comparing with the human brain. That is far from optimal, due to its one-size-fits-all design, ancestral nutrient-availability issues (now solved), and other design constraints.
Machine intelligence algorithms are currently well behind human levels in many areas. They will eventually wind up far ahead, so currently there is a big gap.
Comparing to the human brain is primarily connected to failure option 2, not option 3. We've had many years now to make computer systems and general algorithms that don't rely on human architecture. We know that machine intelligence is behind humans in many areas, but we also know that computers are well ahead of humans in other areas (I'm pretty sure that no human on the planet can factor 100-digit integers in a few seconds unaided). FOOMing would likely require not just an AI that is much better than humans at many of the tasks that humans are good at, but also an AI that is very good at tasks like factoring, where computers are already much better than humans. So pointing out that the human brain is very suboptimal doesn't make this a slam-dunk case. So I still don't see how you can label concerns about 3 as silly.
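For concreteness: factoring a 100-digit integer in seconds takes serious machinery (the general number field sieve), but even a toy method like Pollard's rho, sketched below purely as an illustration and not taken from the thread, already factors numbers well beyond unaided human ability:

```python
import math

def pollard_rho(n):
    """Pollard's rho: find one nontrivial factor of a composite n.

    Uses Floyd cycle detection on the pseudorandom map v -> v^2 + 1 (mod n).
    May return None on rare failure; a real implementation would retry
    with a different polynomial.
    """
    if n % 2 == 0:
        return 2
    f = lambda v: (v * v + 1) % n
    x = y = 2
    d = 1
    while d == 1:
        x = f(x)        # tortoise: one step
        y = f(f(y))     # hare: two steps
        d = math.gcd(abs(x - y), n)
    return d if d != n else None

# e.g. pollard_rho(8051) returns a nontrivial factor of 8051 = 83 * 97
```

This runs in roughly n^(1/4) steps, hopeless for 100 digits, which is exactly why factoring at that scale needed decades of dedicated algorithmic work on the machine side too.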
Cousin it's point (gah, making the correct possessive there looks really annoying, because it looks like one has typed "it's" when one should have "its") that the NP-hard problems an AI would need to deal with may be limited to instances which have high regularity seems like a much better critique.
Edit: Curious about the reason for the downvote.
It feels a little better if I write cousin_it's. Mind you, I feel 'gah' whenever I write 'its'. It's a broken hack in English grammar.