shminux comments on Muehlhauser-Wang Dialogue - Less Wrong

24 Post author: lukeprog 22 April 2012 10:40PM




Comment author: shminux 23 April 2012 07:29:24AM 3 points [-]

If we want the support of the AGI community, it seems we'll have to improve our communication.

Yes, this does seem to be an issue. When people in academia write something like "The “friendly AI” approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it.", the communication must be at an all-time low.

Comment author: XiXiDu 23 April 2012 09:41:15AM 5 points [-]

When people in academia write "The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it.", the communication must be at an all-time low.

Well, of course. Imagine Eliezer had founded SI to deal with physical singularities resulting from high-energy physics experiments. Would anything he has written convince the physics community to listen to him? No, because he simply hasn't written enough about physics to either convince them that he knows what he is talking about or to make his claims concrete enough to be criticized in the first place.

Yet he has been more specific when it comes to physics than AI. So why would the AGI community listen to him?

Comment author: Rain 23 April 2012 12:29:55PM 9 points [-]

I wouldn't be as worried if they took it upon themselves to study AI risk independently, but rather than "not listen to Eliezer", the actual event seems to be "not pay attention to AI risks" as a whole.

Comment author: XiXiDu 23 April 2012 01:56:53PM *  8 points [-]

I wouldn't be as worried if they took it upon themselves to study AI risk independently, but rather than "not listen to Eliezer", the actual event seems to be "not pay attention to AI risks" as a whole.

Think about it this way. There are a handful of people, like Jürgen Schmidhuber, who share SI's conception of AGI and its potential. But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.

Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of a value drift. What reaction do you anticipate?

Comment author: Rain 23 April 2012 02:04:01PM *  5 points [-]

But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.

To rephrase as a positive belief statement: most AI researchers, including Pei Wang, believe that AGIs are safely controllable.

Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of a value drift. What reaction do you anticipate?

"Really? Awesome! Let's get right on that." (ref. early Eliezer)

Alternatively: "<distracted> Hmm? Yes, that's interesting... it doesn't apply to my current grant / paper, so... <insert topics from current grant / paper>."

Comment author: XiXiDu 23 April 2012 03:08:06PM 1 point [-]

What reaction do you anticipate?

"Really? Awesome! Let's get right on that." (ref. early Eliezer)

Alternatively: "<distracted> Hmm? Yes, that's interesting... it doesn't apply to my current grant / paper, so... <insert topics from current grant / paper>."

I didn't expect that you would anticipate that. What I anticipate is outright ridicule of such ideas outside of science-fiction novels, at least from most neuroscientists.

Comment author: Rain 23 April 2012 04:14:43PM 0 points [-]

Sure, that too.

Comment author: Kaj_Sotala 24 April 2012 05:57:45AM *  11 points [-]

Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of a value drift. What reaction do you anticipate?

One neuroscientist thought about it for a while, then said "yes, you're probably right". Then he co-authored with me a paper touching upon that topic. :-)

(Okay, probably not a very typical case.)

Comment author: wedrifid 24 April 2012 06:02:26AM *  0 points [-]

One neuroscientist thought about it for a while, then said "yes, you're probably right". Then he co-authored with me a paper touching upon that topic. :-)

Awesome reply. Which of your papers around this subject is the one with the co-author? (i.e. not so much 'citation needed' as 'citation would have really driven home the point there!')

Comment author: Kaj_Sotala 24 April 2012 08:18:51AM 0 points [-]

Edited citations to the original comment.

Comment author: timtyler 23 April 2012 10:49:31PM *  0 points [-]

But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.

Well, that happening doesn't seem terribly likely. It might be what happens if civilization is daydreaming during the process, but there's probably going to be a "throttle", and it will probably be carefully monitored, precisely to prevent anything untoward from happening.

Comment author: John_Maxwell_IV 24 April 2012 05:00:54AM -2 points [-]

Hey Tim, you can create another AI safety nonprofit to make sure things happen that way!

;-)

Seriously, I will donate!

Comment author: Luke_A_Somers 23 April 2012 03:51:04PM 5 points [-]

Well, of course. Imagine Eliezer would have founded SI to deal with physical singularities as a result of high-energy physics experiments.

Poor analogy. Physicists considered this possibility carefully and came up with a superfluity of totally airtight reasons to dismiss the concern.