http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html

Not going to summarize the article content, but I think this is the highest-profile publication linking to LW so far.

Also, it appears that Shane Legg, Jaan Tallinn, and others at DeepMind leveraged the acquisition and moved the friendly AI conversation to a higher level, quite possibly the highest level at Google. Interesting times, these are.


By the way, I asked Shane Legg for a follow-up, but he replied that they were not currently doing any media, so he was unable to comment further.

Here are the questions I wanted to ask him (maybe he can reply in future):

Q1. Has your opinion about risks associated with artificial general intelligence changed since 2011?

Q2. Can you comment on the creation of an internal ethics board at Google?

Q3. To what extent do people within DeepMind and Google agree with the general position of the Machine Intelligence Research Institute?

Q4. Do you believe that Google will create an artificial general intelligence?

Q5. Do you have any general comments for the LessWrong community regarding Google and their recent acquisition of DeepMind?

[-]oooo

Q6. How much influence will the ethics committee actually have? For example, are there commercial and IP clawback provisions if the committee is ignored or sidelined?

With Google's army of lawyers, I wouldn't count on this as the enforcement mechanism. BUT, this has a chance of getting Larry or Sergey involved, which could make a huge difference.

[-]Shmi

From what I gather, it is chiefly Sergey Brin who is concerned with ethical issues, and his attention is on various Google X projects. Larry Page and Eric Schmidt don't seem to care as much, if at all. That's probably one reason Google has been getting visibly eviller in the last couple of years. Unless DeepMind is a part of Google X, I would not expect the ethics board to matter.

A blog connected to the NYT also linked to the interview.

Mr. Legg noted in a 2011 Q&A with the LessWrong blog that technology and artificial intelligence could have negative consequences for humanity.

[-]Shmi

Downvoted for persistently refusing to summarize your links. Show some respect to your readers, man.

Eh? Given that he started this thread with the idea of discussing the link to LW and not so much the content of the article, it doesn't seem like much of an issue (especially when the acquisition has been discussed on LW already).

I would be more inclined to downvote because I think this is better suited for the Open Thread, but I think the same of a lot of posts.

Then be consistent and downvote the other posts, too.

I am usually fairly consistent in this and downvote most such posts when I see them; however, I didn't downvote this post because it is borderline. You are right, though: I should've downvoted this post to be consistent with my other downvotes, so I shall.

[-]Shmi

HuffPost is a pretty shallow publication, and this article is no exception: the author assumes that one can get away with Asimov-style deontological rules:

who would we trust to develop a "10 commandments" for ethical AI?

It could be metaphorical.