shminux comments on Muehlhauser-Wang Dialogue - Less Wrong

Post author: lukeprog 22 April 2012 10:40PM




Comment author: shminux 23 April 2012 06:58:17AM 11 points

For example, should mankind vigorously pursue research on how to make Ron Fouchier's alteration of the H5N1 bird flu virus even more dangerous and deadly to humans, because “higher safety can only be achieved by more research on all related topics”?

Yeah, I remember reading this argument and thinking that it does not hold water. The flu virus is a well-researched area. It may yet hold some surprises, sure, but we think we know quite a bit about it, enough to tell what is dangerous and what is not. AGI research is nowhere near that stage. My comparison would be someone screaming at Dmitri Ivanovsky in 1892, "Do not research viruses until you know that this research is safe!"

My answer is that much of the research in this outline of open problems doesn't require us to know which AGI architecture will succeed first, for example the problem of representing human values coherently.

Do other AI researchers agree with your list of open problems worth researching? If you asked Dr. Wang about it, what was his reaction?

Comment author: eurg 24 April 2012 09:59:05PM 2 points

My comparison would be someone screaming at Dmitri Ivanovsky in 1892 "do not research viruses until you know that this research is safe!".

I want to second that. Also, when reading through this (and sensing the, probably imagined, strain on both parties to stay polite), the viral point was the first one that triggered the "this is clearly an attack!" reaction in my head. I was sad about that, and had hoped that luke would find another ingenious example.

Comment author: Rain 23 April 2012 12:55:51PM * 0 points

My comparison would be someone screaming at Dmitri Ivanovsky in 1892 "do not research viruses until you know that this research is safe!".

Well, bioengineered viruses are on the list of existential threats...

Comment author: Jack 23 April 2012 02:12:53PM 6 points

And there aren't naturally occurring AIs scampering around killing millions of people... It's a poor analogy.

Comment author: Rain 23 April 2012 04:16:29PM * 7 points

"Natural AI" is an oxymoron. There are lots of NIs (natural intelligences) scampering around killing millions of people.

And we're only a little over a hundred years into virus research, and far less into intelligence research. Give it another hundred.