It was pretty well accepted at MIT's Media Lab back when my orbit took me around there periodically, a decade or so ago, that there was a huge amount of low-hanging fruit in this area... not necessarily of academic interest, but damned useful (and commercial).
Actually, I'm curious why that isn't seen as an area of significant academic interest -- designing artificial systems around being efficient parsers of extraneous data. I recall that one of the major differences between Deep Blue and Deep Fritz in the Kasparov chess matches was precisely that Fritz was designed around not probing every last possible set of playable moves; that is, Deep Fritz was "learning to forget the right things".
It seems to me that understanding this mech...
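For a sense of what "not probing every last possible move" can mean mechanically, here's a minimal sketch of alpha-beta pruning -- my assumption about the kind of selective search being described, not a claim about how Deep Fritz actually worked. The `children` and `evaluate` callables are hypothetical stand-ins for a real engine's move generator and position evaluator:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Search a game tree, skipping ("forgetting") branches that provably
    cannot change the final choice of move."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent would never allow this line: prune
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # we would never choose this line: prune
                break
        return value

# Toy tree as nested lists; leaves are scores from the maximizer's view.
tree = [[3, 5], [6, 9]]
best = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                 lambda n: n if isinstance(n, list) else [],
                 lambda n: n)
```

The interesting part is the `break`: whole subtrees are discarded without ever being examined, which is one concrete way a system can be engineered to ignore extraneous data rather than exhaustively parse it.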
From Geoff Anders of Leverage Research:
Not a surprising result, perhaps, but the details of how Geoff taught AGI danger and the reactions of his students are quite interesting.