danieldewey comments on Muehlhauser-Wang Dialogue - Less Wrong

24 Post author: lukeprog 22 April 2012 10:40PM




Comment author: danieldewey 23 April 2012 03:22:33AM *  2 points [-]

I found this reaction enlightening. Thanks for writing it up.

Comment author: shminux 23 April 2012 03:54:08AM 0 points [-]

What is your reaction?

Comment author: danieldewey 23 April 2012 06:48:03AM 10 points [-]

I was dismayed that Pei has such a poor opinion of the Singularity Institute's arguments, and that he thinks we are not making a constructive contribution. If we want the support of the AGI community, it seems we'll have to improve our communication.

Comment author: John_Maxwell_IV 24 April 2012 04:57:21AM 7 points [-]

It might be more worthwhile to try to persuade graduate students and undergraduates who might be considering careers in AI research, since the personal cost associated with deciding that AI research is dangerous is lower for them. So less motivated cognition.

Comment author: ciphergoth 29 April 2012 08:15:17PM 0 points [-]

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it" - Upton Sinclair

Comment author: danieldewey 24 April 2012 05:35:56AM 0 points [-]

Good point!

Comment author: Bugmaster 23 April 2012 08:06:51AM *  4 points [-]

Correct me if I'm wrong, but isn't it the case that you wish to decelerate AI research? In this case, you are in fact making a destructive contribution -- from the point of view of someone like Wang, who is interested in AI research. I see nothing odd about that.

Comment author: Luke_A_Somers 23 April 2012 03:49:21PM 5 points [-]

To decelerate AI capability research and accelerate AI goal management research. An emphasis shift, not an overall decrease; if anything, an increase would be in order.

Comment author: timtyler 23 April 2012 04:37:49PM *  1 point [-]

It sounds as though you mean decelerating the bits that he is interested in and accelerating the bits that the SI is interested in. Rather as though the SI is after a bigger slice of the pie.

If you slow down capability research, then someone else is likely to become capable before you - in which case, your "goal management research" may not be so useful. How confident are you that this is a good idea?

Comment author: shminux 23 April 2012 07:29:24AM 3 points [-]

If we want the support of the AGI community, it seems we'll have to improve our communication.

Yes, this does seem to be an issue. When people in academia write something like "The “friendly AI” approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it.", the communication must be at an all-time low.

Comment author: XiXiDu 23 April 2012 09:41:15AM 5 points [-]

The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it.", the communication must be at an all-time low.

Well, of course. Imagine Eliezer had founded SI to deal with physical singularities resulting from high-energy physics experiments. Would anything that he has written convince the physics community to listen to him? No, because he simply hasn't written enough about physics to either convince them that he knows what he is talking about or to make his claims concrete enough to be criticized in the first place.

Yet he has been more specific when it comes to physics than AI. So why would the AGI community listen to him?

Comment author: Rain 23 April 2012 12:29:55PM 9 points [-]

I wouldn't be as worried if they took it upon themselves to study AI risk independently, but rather than "not listen to Eliezer", the actual event seems to be "not pay attention to AI risks" as a whole.

Comment author: XiXiDu 23 April 2012 01:56:53PM *  8 points [-]

I wouldn't be as worried if they took it upon themselves to study AI risk independently, but rather than "not listen to Eliezer", the actual event seems to be "not pay attention to AI risks" as a whole.

Think about it this way. There are a handful of people like Jürgen Schmidhuber who share SI's conception of AGI and its potential. But most AI researchers, including Pei Wang, do not buy the idea of AGI's that can quickly and vastly self-improve themselves to the point of getting out of control.

Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of a value drift. What reaction do you anticipate?

Comment author: Rain 23 April 2012 02:04:01PM *  5 points [-]

But most AI researchers, including Pei Wang, do not buy the idea of AGI's that can quickly and vastly self-improve themselves to the point of getting out of control.

To rephrase into a positive belief statement: most AI researchers, including Pei Wang, believe that AGI's are safely controllable.

Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of a value drift. What reaction do you anticipate?

"Really? Awesome! Let's get right on that." (ref. early Eliezer)

Alternatively: "<distracted> Hmm? Yes, that's interesting... it doesn't apply to my current grant / paper, so... <insert topics from current grant / paper>."

Comment author: XiXiDu 23 April 2012 03:08:06PM 1 point [-]

What reaction do you anticipate?

"Really? Awesome! Let's get right on that." (ref. early Eliezer)

Alternatively: "<distracted> Hmm? Yes, that's interesting... it doesn't apply to my current grant / paper, so... <insert topics from current grant / paper>."

I didn't expect that you would anticipate that. What I anticipate is outright ridicule of such ideas outside of science fiction novels. At least for most neuroscientists.

Comment author: Rain 23 April 2012 04:14:43PM 0 points [-]

Sure, that too.

Comment author: Kaj_Sotala 24 April 2012 05:57:45AM *  11 points [-]

Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of a value drift. What reaction do you anticipate?

One neuroscientist thought about it for a while, then said "yes, you're probably right". Then he co-authored with me a paper touching upon that topic. :-)

(Okay, probably not a very typical case.)

Comment author: wedrifid 24 April 2012 06:02:26AM *  0 points [-]

One neuroscientist thought about it for a while, then said "yes, you're probably right". Then he co-authored with me a paper touching upon that topic. :-)

Awesome reply. Which of your papers around this subject is the one with the co-author? (i.e. not so much 'citation needed' as 'citation would have really driven home the point there!')

Comment author: Kaj_Sotala 24 April 2012 08:18:51AM 0 points [-]

Edited citations to the original comment.

Comment author: timtyler 23 April 2012 10:49:31PM *  0 points [-]

But most AI researchers, including Pei Wang, do not buy the idea of AGI's that can quickly and vastly self-improve themselves to the point of getting out of control.

Well, that happening doesn't seem terribly likely. That might be what happens if civilization is daydreaming during the process - but there's probably going to be a "throttle" - and it will probably be carefully monitored - precisely in order to prevent anything untoward from happening.

Comment author: John_Maxwell_IV 24 April 2012 05:00:54AM -2 points [-]

Hey Tim, you can create another AI safety nonprofit to make sure things happen that way!

;-)

Seriously, I will donate!

Comment author: Luke_A_Somers 23 April 2012 03:51:04PM 5 points [-]

Well, of course. Imagine Eliezer would have founded SI to deal with physical singularities as a result of high-energy physics experiments.

Poor analogy. Physicists considered this possibility carefully and came up with a superfluity of totally airtight reasons to dismiss the concern.

Comment author: semianonymous 23 April 2012 12:53:35PM *  1 point [-]

I think you must first consider the simpler possibility that SIAI actually has a very bad argument and isn't making any positive contribution to saving mankind from anything. When you have very good reasons to think that isn't so (high IQ test scores don't suffice), very well verified given all the biases, you can consider the possibility that it is miscommunication.