endoself comments on Reply to Holden on The Singularity Institute - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Good point. But I don't see any evidence that anyone who was likely to create an AI soon, now won't.
Those whose profession and status rest on approximating AI largely won't change course for what must seem to them like sci-fi tropes. [1]
Or, put another way, there are working computer scientists who are religious - you can't expect reason to govern every part of someone's life.
[1] But in the long run, perhaps SI and others can offer dangerously smart researchers a smooth transition into high-status alternatives such as FAI or other AI-risk mitigation work.
According to Luke, Moshe Looks (head of Google's AGI team) is now quite safety conscious, and a Singularity Institute supporter.
Update: It's not really correct to say that Google has "an AGI team." Moshe Looks has been working on program induction, and this guy said that some people are working on AI "on a large scale," but I'm not aware of any publicly visible Google project with the ambitions of, say, Novamente.