
XiXiDu comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion

29 Post author: XiXiDu 08 January 2012 11:46AM




Comment author: XiXiDu 09 January 2012 10:30:29AM * 3 points

If we accept XiXiDu's implied premise that these guys are particularly relevant, then their belief that things are fine is an existential risk.

How do you know who is going to have the one important insight that leads to a dangerous advance? If I write to everyone, then they have at least heard of risks from AI and might think twice when they notice something dramatic.

Also, my premise is mainly that those people are influential. After all, they have students, coworkers, and friends with whom they might talk about risks from AI. One of them might actually become interested and get involved. And I can tell you that I am in contact with one professor who told me that this is important and that he'll now research risks from AI.

You might also tell me who you think is important, and I will write to them.

Comment author: wedrifid 09 January 2012 11:49:48AM 3 points

How do you know who is going to have the one important insight that leads to a dangerous advance? If I write to everyone, then they have at least heard of risks from AI and might think twice when they notice something dramatic.

I'm not questioning the value of writing to a broad range of people, or your initiative. I'm just discounting the authority of narrow AI experts on GAI - two different fields whose names are misleadingly similar. In this case, the discount means that our estimate of existential risk need not increase much. If Pat were a respected and influential GAI researcher, it would be a far, far scarier indicator!