ryjm comments on Muehlhauser-Wang Dialogue - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
First, thank you for publishing this illuminating exchange.
I must say that Pei Wang sounds far more convincing to an uninitiated but curious and mildly intelligent layperson (that would be me). That does not mean he is right, but he sure does make sense.
When Luke goes on to make a point, I often get lost in jargon ("manifest convergent instrumental goals") or have to look up a paper that Pei (or other AGI researchers) does not hold in high regard. When Pei Wang makes an argument, it is intuitively clear and does not require going through a complex chain of reasoning outlined in the works of one Eliezer Yudkowsky and not vetted by the AI community at large. This is, of course, no guarantee of its validity, but it sure is easier to follow.
Some of the statements are quite damning, actually: "The “friendly AI” approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it." If one were to replace AI with physics, I would tend to dismiss EY as a crank just based on this statement, assuming it is accurate.
What makes me trust Pei Wang more than Luke is common-sense statements like "to make AGI safe, to control their experience will probably be the main approach (which is what “education” is all about), but even that cannot guarantee safety." and "unless you get a right idea about what AGI is and how it can be built, it is very unlikely for you to know how to make it safe". Similarly, the SIAI position of “accelerate AI safety research and decelerate AI capabilities research so that we develop safe superhuman AGI first, rather than arbitrary superhuman AGI” rubs me the wrong way. While that does not necessarily mean it is wrong, the inability to convince outside experts that it is right is not a good sign.
This might be my confirmation bias, but I would be hard pressed to disagree with "To develop a non-trivial education theory of AGI requires a good understanding about how the system works, so if we don’t know how to build an AGI, there is no chance for us to know how to make it safe. I don’t think a good education theory can be “proved” in advance, pure theoretically. Rather, we’ll learn most of it by interacting with baby AGIs, just like how many of us learn how to educate children."
As a side point, I cannot help but wonder if the outcome of this discussion would have been different were it EY and not LM involved in it.
I think I am in the same position as you are (uninitiated but curious), and I had the same immediate reaction that Pei was more convincing. However, for me, I think this was the result of two factors.
Maybe the second point isn't entirely true, but that was what immediately stuck out after thinking about why I was drawn to Pei's arguments. Once I stopped using his status as a barometer for his arguments, it came down to (1) my own lack of knowledge and (2) the tone of the responses.
For one thing, why the hell should I understand this in the first place? This is a dialogue between two prominent AI researchers. What I would expect from such a dialogue is exactly what I would expect from sitting in on a graduate philosophy seminar or a computer science colloquium: I would be able to follow the gist of it, but not the nitty-gritty details. I would expect to hear complex arguments that would require a couple of textbooks and a dozen tabs open in my browser to follow.
But I was able to understand Pei's arguments and play with them! If solving these kinds of conceptual problems is this easy, I might try to take over the world myself.
Not that the appearance of "complexity" is necessary for a good argument (EY's essays are proof of that), but here it seems like the lack of complexity (or, as someone else said, the appeal to common sense) is a warning sign for the easily persuaded. Rereading with these things in mind illuminates the discussion a bit better.
I was actually a bit depressed by this dialogue. It seemed like an earnest (but maybe a little over the top with the LW references) attempt by lukeprog to communicate interesting ideas. I may be setting my expectations a little high, but Pei seemed to think he was engaging an undergraduate asking about sorting algorithms.
Of course, I could be completely misinterpreting things. I thought I would share my thought process after I came to the same conclusion as you did.
What? This was a dialog between Pei and lukeprog, right?
I'm curious what you mean by the appellation "prominent AI researcher" such that you would apply it to lukeprog, and whether he considers himself a member of that category.
And he thought the undergrad terribly naive for not understanding that all sorting algorithms are actually just bubble sort.
This is why I find that unless the individual is remarkably open-minded, to the point of being peculiar, it is usually pointless to try to communicate across status barriers. Status gives people the tendency, and the social incentives, to act stupid when it comes to comprehending others.
That's an incredibly sweeping statement. Are all pop-sci publications useless?
Reference.
Do you think that generalises to academics? Wouldn't a researcher who never changed their mind about anything be dismissed as a hidebound fogey?