Daniel_Burfoot comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion

29 Post author: XiXiDu 08 January 2012 11:46AM

Comment author: Daniel_Burfoot 09 January 2012 12:57:23AM *  3 points [-]

Why? These guys think things are going to be fine. You should raise your probability estimate that humanity will survive the next century. This is great news!

Comment author: wedrifid 09 January 2012 02:42:04AM *  7 points [-]

Why? These guys think things are going to be fine. You should raise your probability estimate that humanity will survive the next century. This is great news!

Or, if you have reason to believe that things are not going to be fine, it may be appropriate to lower your estimate that humanity will survive the next century. People who are unaware of threats (or deny them) are less likely to do what is necessary to prevent them. If we accept XiXiDu's implied premise that these guys are particularly relevant, then their belief that things are fine is itself an existential risk.

(It happens that I don't accept the premise. Narrow AI is a completely different subject from GAI, and experts are notorious for overestimating the extent to which their expertise applies to loosely related areas.)

Comment author: XiXiDu 09 January 2012 10:30:29AM *  3 points [-]

If we accept XiXidu's implied premise that these guys are particularly relevant then their belief that things are fine is an existential risk.

How do you know who is going to have the one important insight that leads to a dangerous advance? If I write to everyone, then they have at least heard of risks from AI and may think twice when they notice something dramatic.

Also, my premise is mainly that those people are influential. After all, they have students, coworkers, and friends with whom they might talk about risks from AI. One of them might actually become interested and get involved. And I can tell you that I am in contact with one professor who told me that this is important and that he'll now research risks from AI.

You might also tell me whom you think is important, and I will write to them.

Comment author: wedrifid 09 January 2012 11:49:48AM 3 points [-]

How do you know who is going to have the one important insight that leads to a dangerous advance? If I write everyone then they have at least heard of risks from AI and maybe think twice when they notice something dramatic.

I'm not questioning the value of writing to a broad range of people, or your initiative. I'm just discounting the authority of narrow AI experts on GAI: two different fields whose names are misleadingly similar. In this case the discount means that our estimate of existential risk need not increase too much. If Pat were a respected and influential GAI researcher, it would be a far, far scarier indicator!

Comment author: Daniel_Burfoot 10 January 2012 03:02:06AM -1 points [-]

Or, if you have reason to believe that things are not going to be fine it may be appropriate to lower your estimate that humanity will survive the next century

Okay, but this seems to violate conservation of expected evidence. You can be depressed by the answer "we're all going to die" or, less plausibly, by the answer "everything is going to be fine", but not by both.

Comment author: wedrifid 10 January 2012 03:36:04AM 2 points [-]

Okay, but this seems to violate conservation of expected evidence.

No it doesn't.

Either you can be depressed by the answer "we're all going to die" or, less plausibly, by the answer "Everything is going to be fine", but not both.

I only suggested the latter, never the former. I'd be encouraged if the AI researchers acknowledged more risk. (Only slightly, given the lack of importance I have ascribed to these individuals elsewhere.)
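wedrifid's position is consistent with conservation of expected evidence: one possible answer ("things are fine") lowers his estimate of survival while the other ("there is real risk being acknowledged") raises it, so the probability-weighted average of the two posteriors still equals the prior. A minimal sketch, with entirely made-up probabilities chosen only to match the direction of wedrifid's updates:

```python
# Toy check of conservation of expected evidence, E[P(H|E)] = P(H).
# H = "humanity survives the century"; E = "experts answer that things are fine".
# All numbers below are illustrative assumptions, not anyone's actual estimates.

p_h = 0.9                 # hypothetical prior that humanity survives
p_e_given_h = 0.80        # experts fairly likely to be sanguine if things are fine
p_e_given_not_h = 0.95    # per wedrifid, complacency is even likelier if they aren't

# Total probability of hearing "things are fine"
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior after each possible answer (Bayes' rule)
p_h_given_e = p_e_given_h * p_h / p_e                  # "fine" -> estimate drops
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # "risky" -> estimate rises

# The two updates point in opposite directions, yet they average out exactly:
expected_posterior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
assert p_h_given_e < p_h < p_h_given_not_e
assert abs(expected_posterior - p_h) < 1e-12
```

The point is that being moved downward by a "fine" answer does not require also being moved downward by its negation; the updates just have to balance in expectation.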

Comment author: shokwave 09 January 2012 02:36:13AM 2 points [-]

If only they hadn't used such low probabilities - I could almost have believed them.

Comment author: khafra 09 January 2012 02:13:36PM 0 points [-]

Even with reasonable probabilities, it was pretty clear that Hayes was completely missing the point on a few questions, and if the other two had answered with the length and clarity he did, their point-missing might have been similarly clear.

Comment author: shokwave 10 January 2012 03:08:41AM 0 points [-]

Sure, but if it had been easier for me not to notice them missing the point, I might have been able to update further toward "no UFAI".