endoself comments on Reply to Holden on The Singularity Institute - Less Wrong

46 Post author: lukeprog 10 July 2012 11:20PM




Comment author: Jonathan_Graehl 10 July 2012 10:05:01PM — 6 points

Good point. But I don't see any evidence that anyone who was likely to create an AI soon, now won't.

Those whose profession and status lie in approximating AI largely won't change course for what must seem to them like sci-fi tropes. [1]

Or, put another way: there are working computer scientists who are religious. You can't expect reason in every area of someone's life.

[1] But in the long run, perhaps SI and others can offer dangerously smart researchers a smooth transition into high-status alternatives such as FAI or other AI risk mitigation.

Comment author: endoself 11 July 2012 12:09:52AM — 7 points

But I don't see any evidence that anyone who was likely to create an AI soon, now won't.

According to Luke, Moshe Looks (head of Google's AGI team) is now quite safety conscious, and a Singularity Institute supporter.

Comment author: lukeprog 19 October 2012 12:53:52AM — 2 points

Update: It's not really correct to say that Google has "an AGI team." Moshe Looks has been working on program induction, and this guy said that some people there are working on AI "on a large scale," but I'm not aware of any publicly visible Google project with the ambitions of, say, Novamente.