lukeprog comments on Muehlhauser-Wang Dialogue - Less Wrong

Post author: lukeprog 22 April 2012 10:40PM

Comment author: shminux 23 April 2012 02:07:36AM 16 points

First, thank you for publishing this illuminating exchange.

I must say that Pei Wang sounds way more convincing to an uninitiated but curious and mildly intelligent layperson (that would be me). That does not mean he is right, but he sure does make sense.

When Luke goes on to make a point, I often get lost in jargon ("manifest convergent instrumental goals") or have to look up a paper that Pei (or other AGI researchers) does not hold in high regard. When Pei Wang makes an argument, it is intuitively clear and does not require going through a complex chain of reasoning outlined in the works of one Eliezer Yudkowsky and not vetted by the AI community at large. This is, of course, not a guarantee of its validity, but it sure is easier to follow.

Some of the statements are quite damning, actually: "The “friendly AI” approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it." If one were to replace AI with physics, I would tend to dismiss EY as a crank just based on this statement, assuming it is accurate.

What makes me trust Pei Wang more than Luke is common-sense statements like "to make AGI safe, to control their experience will probably be the main approach (which is what “education” is all about), but even that cannot guarantee safety." and "unless you get a right idea about what AGI is and how it can be built, it is very unlikely for you to know how to make it safe". Similarly, the SIAI position of “accelerate AI safety research and decelerate AI capabilities research so that we develop safe superhuman AGI first, rather than arbitrary superhuman AGI” rubs me the wrong way. While this does not necessarily mean it is wrong, the inability to convince outside experts that it is right is not a good sign.

This might be my confirmation bias, but I would be hard pressed to disagree with "To develop a non-trivial education theory of AGI requires a good understanding about how the system works, so if we don’t know how to build an AGI, there is no chance for us to know how to make it safe. I don’t think a good education theory can be “proved” in advance, pure theoretically. Rather, we’ll learn most of it by interacting with baby AGIs, just like how many of us learn how to educate children."

As a side point, I cannot help but wonder whether the outcome of this discussion would have been different had it been EY rather than LM involved in it.

Comment author: lukeprog 23 April 2012 06:49:18AM 4 points

What makes me trust Pei Wang more than Luke is common-sense statements like "to make AGI safe, to control their experience will probably be the main approach (which is what “education” is all about), but even that cannot guarantee safety." and "unless you get a right idea about what AGI is and how it can be built, it is very unlikely for you to know how to make it safe".

Um... but these are statements I agreed with.

I wish Pei had taken the time to read the articles I repeatedly linked to, for they were written precisely to explain why his position is misguided.

Comment author: Wei_Dai 24 April 2012 12:07:33AM 10 points

I wish Pei had taken the time to read the articles I repeatedly linked to, for they were written precisely to explain why his position is misguided.

I think you should have listed a couple of the most important articles at the beginning as necessary background reading to understand your positions and terminology (as Pei did with his papers), and then only used links very sparingly afterwards. Unless you already know your conversation partner takes you very seriously, you can't put 5 hyperlinks in an email and expect the other person to read them all. When they see that many links, they'll probably just ignore all of them. (Not to mention the signaling issues that others already pointed out.)

Comment author: shminux 23 April 2012 07:12:56AM 3 points

Hmm, maybe it is possible to summarize them in language that an AI expert would find both meaningful and convincing. What is your mental model of Dr. Wang?

Comment author: jsteinhardt 24 April 2012 12:20:24AM -1 points

Nitpick, but it's Professor Wang, not Doctor Wang.

Comment author: pedanterrific 24 April 2012 12:35:41AM 1 point

The page linked at the top of the article says Dr. Wang. And his CV says he's a Ph.D.

Comment author: jsteinhardt 24 April 2012 12:55:21AM 1 point

The title of Professor supersedes the title of Doctor, at least in the case of a PhD (I'm not sure about MD, but would assume similarly). His CV indicates pretty clearly that he is an Associate Professor at Temple University, so the correct title is Professor.

Again, I am being somewhat super-pedantic here, and I apologize for any annoyance this causes. But hopefully it will help you in your future signalling endeavors.

Also, in most situations it is okay to just go by first name or full name (without any titles); I have, I think, exclusively referred to Pei as Pei.

ETA: Although, yes, his homepage suggests that he may be okay with being addressed as Doctor. I still advocate the general strategy of avoiding titles altogether, and if you do use titles, refer to Professors as Professors (failure to do so will not offend anyone, but may make you look silly).

Comment author: pedanterrific 24 April 2012 01:15:45AM 3 points

The title of Professor supersedes the title of Doctor

...Not in my experience. Do you have some particular reason to believe this is the case in Philadelphia?

Comment author: shminux 24 April 2012 01:36:18AM 1 point

The situation in the US and Canada is quite relaxed, actually, nothing like in, say, Germany. "Dr." is a perfectly valid form of address for any faculty member.

Comment author: pedanterrific 24 April 2012 01:44:37AM 0 points

Well, at least in my experience the Professors who don't actually have doctorates tend not to appreciate having to correct you on that point. But yeah.

Comment author: Kaj_Sotala 24 April 2012 05:46:42AM 2 points

When I received the proofs for my IJMC papers, the e-mail addressed me as "dear professor Sotala" (for those who aren't aware, I don't even have a Master's degree, let alone a professorship). When I mentioned this on Facebook, some people mentioned that there are countries where it's a huge faux pas to address a professor as anything other than a professor. So since "professor" is the highest form of address, everyone tends to get called that in academic communication, just to make sure that nobody will be offended - even if the sender is 95% sure that the recipient isn't actually a professor.

Comment author: XiXiDu 23 April 2012 09:27:43AM 4 points

I wish Pei had taken the time to read the articles I repeatedly linked to, for they were written precisely to explain why his position is misguided.

The reactions I got (from a cognitive scientist and another researcher) were that Bostrom is a "sloppy thinker" (original words) and that SI's understanding of AGI is naive.

Michael Littman told me he is going to read some of the stuff too. I haven't received an answer yet, though.

Comment author: Rain 23 April 2012 12:26:13PM 1 point

Yeah, it was pretty easy for me to nod along with most of it, filing it under my "SI failure mode" bucket.

Comment author: Luke_A_Somers 23 April 2012 03:47:37PM 1 point

Please clarify.

Comment author: Rain 23 April 2012 04:13:49PM 3 points

I think AI is dangerous, that making safe AI is difficult, and that SI will likely fail in their mission. I donate to them in the hopes that this improves their chances.