SilentCal comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM

Comments (98)

Comment author: SilentCal 23 July 2015 04:10:56PM 13 points

I'm sure there's no need to point to Robin Hanson's anti-foom writings? The best single article, IMO, is "Irreducible Detail," which essentially questions the generality of intelligence.

Comment author: jacob_cannell 24 July 2015 04:35:20AM *  7 points

Here is a key quote:

Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. (http://www.overcomingbias.com/2014/07/limits-on-generality.html)

It is true that adult human brains are built out of many domain-specific modules, but these modules develop via a very general universal learning system. The neuroscientific evidence directly contradicts the evolved modularity hypothesis, by which Hanson appears to be heavily influenced. That being said, his points about AI progress being driven by a large number of mostly independent advances still carry through.

Hanson's general analysis of the economics of AGI takeoff seems pretty sound - even if it is much more likely that neuro-AGI precedes ems.

Comment author: sentientplatypus 24 July 2015 03:28:50AM 2 points

I hadn't seen this before. Hanson's conception of intelligence actually seems much simpler and more plausible than how I had previously imagined it. I think 'intelligence' can easily act as a Semantic Stopsign because it feels like a singular entity through the experience of consciousness, but it may actually be quite modular, as Hanson suggests.

Comment author: roystgnr 27 July 2015 09:36:29PM 1 point

Intelligence must be very modular - that's what drives Moravec's paradox (problems like vision and locomotion that we have good modules for feel "easy", problems that we have to solve with "general" intelligence feel "hard"), the Wason Selection task results (people don't always have a great "general logic" module even when they could easily solve an isomorphic problem applied to a specific context), etc.

Does this greatly affect the AGI takeoff debate, though? So long as we can't create a module which is itself capable of creating modules, what we have doesn't qualify as human-equivalent AGI. But if/when we can, then it's likely that it can also create an improved version of itself, and so it's still an open question as to how fast or how far it can improve.

Comment author: [deleted] 23 July 2015 09:22:15PM *  1 point

Thank you for that Irreducible Detail article; I remember reading it before but couldn't find it later. Hanson's argument is very convincing and intuitive, and really sheds light on what intelligence might really be about. When I think about my own intelligence, it doesn't feel like I have some overarching general module doing the planning; it feels more like I have many simple heuristics, rules of thumb, and automatic behaviors that just happen to work. That fits Hanson's idea of intelligence.

I think this is the single best argument against MIRI's idea of intelligence.

Here is an interesting article in the same vein.