
dogiv comments on Open thread, Mar. 20 - Mar. 26, 2017 - Less Wrong Discussion

Post author: MrMind 20 March 2017 08:01AM




Comment author: dogiv 20 March 2017 06:58:18PM 0 points

The idea that friendly superintelligence would be massively useful is implicit (and often explicit) in nearly every argument in favor of AI safety efforts, certainly including those of EY and Bostrom. But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development. I am not convinced.

Your argument rests on the proposition that current AI research is so narrowly specialized that its contribution toward human-level AI is very small, so small that even the modest efforts of EAs (compared to the massive corporations working on narrow AI) could speed things up significantly. In support of that, you mainly discuss vision, and I will grant that vision is not necessary for general AI, though some form of sensory input might be. However, another major focus of corporate AI research is natural language processing, which is much more closely tied to general intelligence. It is not clear that we could call any system generally intelligent without it.

If you accept that mainstream AI research is making some progress toward human-level AI, even though that is not its main intention, then it quickly becomes clear that EA efforts would have greater marginal benefit if directed at AI safety, a problem that mainstream research largely rejects outright.

Comment author: MrMind 22 March 2017 11:07:09AM 0 points

> But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development.

This is almost the inverse Basilisk argument.