markan comments on Open thread, Mar. 20 - Mar. 26, 2017 - Less Wrong Discussion

3 Post author: MrMind 20 March 2017 08:01AM


Comment author: markan 20 March 2017 06:29:30PM 1 point [-]

I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI

Comment author: ChristianKl 22 March 2017 12:16:43PM 1 point [-]

A good metaphor is a cliff. A cliff poses a risk in that it is physically possible to drive over it. In the same way, it may be physically possible to build a very dangerous AI. But nobody wants to do that, and—in my view—it looks quite avoidable.

That sounds naive and gives the impression that you haven't taken the time to understand the AI risk concerns. You offer no argument beyond the fact that you personally don't see the problem of AI risk.

The prevailing wisdom in this community is that most AGI designs will be unsafe, and that much of that unsafety isn't obvious beforehand. Many here believe that if the value alignment problem isn't solved before human-level AGI arrives, it will mean the end of humanity.

Comment author: turchin 20 March 2017 07:12:00PM 0 points [-]

If you could show that human-level AI (HLAI) is safer than a narrow AI turning into a paper-clip maximiser, that would make a good EA case.

If you could show that the risks of synthetic biology will be extremely high unless we create HLAI in time, that would also support your point of view.

Comment author: dogiv 20 March 2017 06:58:18PM 0 points [-]

The idea that friendly superintelligence would be massively useful is implicit (and often explicit) in nearly every argument in favor of AI safety efforts, certainly including EY and Bostrom. But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development. I am not convinced.

Your argument rests on the proposition that current AI research is so specific that its contribution toward human-level AI is very small: so small that the modest efforts of EAs (compared to all the massive corporations working on narrow AI) could speed things up significantly. In support of that, you mainly discuss vision, and I will agree with you that vision is not necessary for general AI, though some form of sensory input might be. However, another major focus of corporate AI research is natural language processing, which is much more closely tied to general intelligence. It is not clear whether we could call any system generally intelligent without it.

If you accept that mainstream AI research is making some progress toward human-level AI, even though it's not the main intention, then it quickly becomes clear that EA efforts would have greater marginal benefit in working on AI safety, something that mainstream research largely rejects outright.

Comment author: MrMind 22 March 2017 11:07:09AM 0 points [-]

But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development.

This is almost the inverse Basilisk argument.