Houshalter comments on SSC Discussion: No Time Like The Present For AI Safety Work

Post author: tog 05 June 2015 02:34AM 6 points

Comment author: anon85 05 June 2015 04:09:28AM 3 points

I think point 1 is very misleading: while most people agree with it, a person might hypothetically assign a 99% chance of humanity blowing itself up before strong AI, and a < 1% chance of strong AI before the year 3000. Surely even Scott Alexander would agree that this person may not want to worry about AI right now (unless we get into Pascal's mugging arguments).

I think most of the strong AI debate comes from people holding different timelines for it. People who think strong AI is not a problem believe we are very far from it (conceptually at least, but probably also in terms of time). People who worry about AI are usually pretty confident that strong AI will happen this century.

Comment author: Houshalter 05 June 2015 10:10:31PM 5 points

In my experience the timeline is not usually the source of disagreement. Skeptics usually don't believe that an AI would want to hurt humans, or that the paperclip-maximizer scenario is likely or even possible. See, e.g., this popular reddit thread from yesterday.

I guess that would be premise 3 or 4: that goal alignment is a problem that needs to be solved.

Comment author: anon85 06 June 2015 01:45:12AM 4 points

Yeah, you're probably right. I may just be biased, because the timeline is my own main source of disagreement with the AI danger folks.