
IlyaShpitser comments on SSC Discussion: No Time Like The Present For AI Safety Work - Less Wrong Discussion

Post author: tog | 05 June 2015 02:34AM




Comment author: anon85 | 05 June 2015 04:09:28AM | 3 points

I think point 1 is very misleading: while most people agree with it, a person might hypothetically assign a 99% chance of humanity blowing itself up before strong AI, and a <1% chance of strong AI before the year 3000. Surely even Scott Alexander would agree that such a person need not worry about AI right now (unless we get into Pascal's mugging arguments).

I think most of the strong AI debate comes from people holding different timelines for it. People who think strong AI is not a problem believe we are very far from it (at least conceptually, and probably also in terms of time). People who worry about AI are usually fairly confident that strong AI will happen this century.

Comment author: IlyaShpitser | 05 June 2015 08:43:37PM | 1 point

My reading of that article is:

"I am stumping for my friends."

Comment author: knb | 07 June 2015 06:55:18AM | 0 points

So are you claiming he doesn't really believe his own argument?

Comment author: IlyaShpitser | 07 June 2015 11:12:17AM | -2 points

I am saying he wrote that article because his friends asked him to. You are asking the wrong person about Scott's beliefs.

Comment author: knb | 07 June 2015 09:41:05PM | 2 points

I wasn't asking you about his beliefs; I was asking what implication you were making. We already know what Scott says he believes; unless you doubt he is being honest, there is no reason to assume he is stumping for his friends rather than advocating his own views.

Comment author: IlyaShpitser | 09 June 2015 10:07:33AM | 1 point

I am not sure what you are asking. I don't think Scott is an evil mutant who would just cynically lie. AI risk is not one of his usual blog topics, however.
I think you are underestimating the degree to which personal truth is socially constructed, and in particular influenced by friends.

Comment author: Raemon | 14 June 2015 04:58:34PM | 1 point

He doesn't talk about AI as often as, say, psychiatry, but he does discuss it with some frequency.

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site%3Aslatestarcodex.com%20artificial%20intelligence

In particular, Meditations on Moloch makes it pretty clear that he takes AI seriously.