lukeprog comments on Google may be trying to take over the world - Less Wrong Discussion

22 [deleted] 27 January 2014 09:33AM

Comment author: lukeprog 27 January 2014 09:23:05PM 14 points [-]

Update: "DeepMind reportedly insisted on the board’s establishment before reaching a deal."

Comment author: lukeprog 28 January 2014 06:18:35PM *  6 points [-]

Update: DeepMind will work under Jeff Dean at Google's search team.

And, predictably:

“Things like the ethics board smack of the kind of self-aggrandizement that we are so worried about,” one machine learning researcher told Re/code. “We’re a hell of a long way from needing to worry about the ethics of AI.”

...despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.

Comment author: XiXiDu 28 January 2014 07:42:56PM *  9 points [-]

NYTimes also links to LessWrong.

Quote:

Mr. Legg noted in a 2011 Q&A with the LessWrong blog that technology and artificial intelligence could have negative consequences for humanity.

Comment author: shminux 28 January 2014 08:30:50PM *  3 points [-]

despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.

It would be quite a reach to insist that we need to worry about the ethics of the control boards that calculate how to move elevons or how far to open a throttle in order to maintain a certain course or speed. Autonomous UAVs able to open fire without a human in the loop are much more worrying.

I imagine that some of the issues the ethics board might eventually have to deal with would be related to self-agentizing tools, in Karnofsky-style terminology. For example, if a future search engine receives queries whose answers depend on other simultaneous queries, it may have to solve game-theoretic problems, like optimizing traffic flows. These may someday include life-critical decisions, like whether to direct drivers to a more congested route in order to let emergency vehicles pass unimpeded.
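The trade-off in that last example can be sketched as a toy dispatch rule (a hypothetical illustration only; all names are made up and this resembles no real routing system): ordinary drivers normally get the fastest route, but are diverted to a slower one when an emergency vehicle needs the fast route clear, even though each individual driver is worse off.

```python
def choose_route(routes, emergency_active):
    """Pick a route for an ordinary driver.

    routes: dict mapping route name -> estimated travel time (minutes).
    emergency_active: True if an emergency vehicle needs the fastest route.
    """
    fastest = min(routes, key=routes.get)
    if not emergency_active:
        return fastest
    # Divert ordinary traffic off the fastest route so the emergency
    # vehicle passes unimpeded: globally better, locally worse for
    # each diverted driver -- the game-theoretic tension in question.
    alternatives = {r: t for r, t in routes.items() if r != fastest}
    return min(alternatives, key=alternatives.get)

routes = {"highway": 12, "side_streets": 18, "ring_road": 25}
print(choose_route(routes, emergency_active=False))  # highway
print(choose_route(routes, emergency_active=True))   # side_streets
```

A real system would weigh many simultaneous queries against each other rather than apply a fixed rule, which is exactly where the ethical questions enter.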

Comment author: XiXiDu 28 January 2014 06:55:53PM 2 points [-]

They actually link to LessWrong in the article, namely to my post here.

Comment author: CellBioGuy 30 January 2014 04:15:52AM *  0 points [-]

I personally suspect the ethics board exists for more prosaic reasons. Think "don't bias the results of people's medical advice searches to favor the products of pharmaceutical companies that pay you money" rather than "don't eat the world".

EDIT: just saw other posts including quotes from the head people of the place that got bought. I still think these are the sorts of actual issues they will deal with, as opposed to the theoretical justifications.