gwern comments on Reply to Holden on 'Tool AI' - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think it's a pity that we're not focusing on what we could do to test the tool vs general AI distinction. For example, here's one near-future test: how do we humans deal with drones?
Drones are exploding in popularity, are increasing their capabilities constantly, and are coveted by countless security agencies and private groups for their tremendous use in all sorts of roles both benign and disturbing. Just like AIs would be. The tool vs general AI distinction maps very nicely onto drones as well: a tool AI corresponds to a drone being manually flown by a human pilot somewhere, while a general AI would correspond to an autonomous drone which is carrying out some mission (blast insurgents?).
So, here is a near-future test of the question 'are people likely to let tool AIs 'drive themselves' for greater efficiency?' - simply ask whether in, say, a decade there are autonomous drones carrying out tasks that today would only be carried out by piloted drones.
If in a decade we learn that autonomous drones are killing people, then we have an answer to our tool AI question: it doesn't matter because given a tool AI, people will just turn it into a general AI.
(Amdahl's law: if the human in the loop takes up 10% of the loop's time, and the AI or drone part comprises the other 90%, then even if the drone or AI becomes infinitely fast, the whole loop can never run more than 10x faster (total time falls by at most 90%)... until you hand that remaining 10% over to the AI, that is. EDIT: See also https://web.archive.org/web/20121122150219/http://lesswrong.com/lw/f53/now_i_appreciate_agency/7q4o )
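The arithmetic behind that parenthetical is easy to check. A minimal sketch (the function name is mine; the 10%/90% split is the one assumed above):

```python
# Amdahl's law: if a fixed fraction of a loop cannot be sped up,
# the overall speedup is bounded no matter how fast the rest gets.
def amdahl_speedup(serial_fraction: float, other_speedup: float) -> float:
    """Overall speedup when only the non-serial (1 - serial_fraction)
    part of the loop is accelerated by a factor of other_speedup."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / other_speedup)

# Human-in-the-loop takes 10% of the time; make the drone/AI part 10x faster:
print(amdahl_speedup(0.10, 10))   # ~5.26x overall, not 10x
# Make the drone/AI part effectively infinitely fast: the whole loop still
# tops out near 10x, i.e. at most a 90% cut in total time.
print(amdahl_speedup(0.10, 1e9))  # ~10x
```

Hence the incentive: once the machine part is fast enough, the only remaining efficiency gain comes from removing the human.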
Besides Knight Capital, HFT may provide another example of near-disaster from economic incentives forcing the removal of safety guidelines from narrow AI. From the LRB's "Be grateful for drizzle: Donald MacKenzie on high-frequency trading":
(Memoirs from US drone operators suggest that the bureaucratic organizations in charge of racking up kill-counts have become disturbingly cavalier about doing their homework on the targets they're blowing up, but thus far, anyway, they haven't made the drones fully autonomous.)