RobinHanson comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions
From that AAAI panel's interim report:
Given this description it is hard to imagine they haven't considered the prospect of the rate of intelligence growth depending on the level of system intelligence.
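To make that dependence concrete, here is one standard toy formalization (purely illustrative; nothing like it appears in the report or this thread): let intelligence I(t) grow at a rate proportional to a power of its current level.

```latex
% Toy model (illustrative assumption): the growth rate of intelligence I
% scales as a power a of the current level I itself.
\[
\frac{dI}{dt} = k I^{a}
\quad\Longrightarrow\quad
I(t) =
\begin{cases}
I_0 e^{kt}, & a = 1 \text{ (steady exponential growth)} \\[4pt]
\bigl( I_0^{1-a} - (a-1)kt \bigr)^{-1/(a-1)}, & a > 1 \text{ (diverges at } t^{*} = \tfrac{I_0^{1-a}}{(a-1)k} \text{)}
\end{cases}
\]
```

With a = 1 you get ordinary compounding growth; with a > 1 the solution blows up in finite time. Whether one expects gradual growth or a sharp "explosion" thus turns on how strongly the improvement rate feeds back on the current level, which is exactly the dependence at issue here.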
I don't see any arguments listed, though. I know there are at least some smart people on that panel (e.g. Horvitz), so I could be wrong, but experience has taught me to be pessimistic, and pessimism says I have no particular evidence that anyone started breaking the problem down into modular pieces, as opposed to, say, stating a few snap perceptual judgments at each other and then moving on.
Why are you so optimistic about this sort of thing, Robin? You're usually more cynical about what happens when academics have no status incentive to get it right and every status incentive to dismiss the silly. We both have experience with novices encountering these problems and running straight into the brick wall of policy proposals without even trying a modular analysis. Why on this one occasion do you turn around and suppose that the case we don't know will be so unlike the cases we do know?
The point is that this is a subtle and central issue to engage, so I was suggesting that you consider describing your analysis more explicitly. Is there never any point in listening to academics on "silly" topics? Is there never any point in listening to academics who haven't explicitly told you how they've broken a problem down into modular parts, no matter how distinguished they are on related topics? Are people who have a modular-parts analysis always a more reliable source than people who don't, no matter what their other features? And so on.
I confess, it doesn't seem to me on a gut level like this is either healthy or productive to obsess about. It seems more like worrying that my status isn't high enough to do work, than actually working. If someone shows up with amazing analyses I haven't considered, I can just listen to the analyses then. Why spend time trying to guess who might have a hidden deep analysis I haven't seen, when the prior is so much in favor of them having made a snap judgment, and it's not clear why, if they've got a deep analysis, they wouldn't just present it?
I think that on a purely pragmatic level there's a lot to be said for the Traditional Rationalist concept of demanding that Authority show its work, even if it doesn't seem like what ideal Bayesians would do.
You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it. It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement. If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?
...and I've held and stated this same position pretty much from the beginning, no? E.g. http://lesswrong.com/lw/gr/the_modesty_argument/
I was under the impression that my verbal analysis matched and cleverly excused my concrete behavior.
Well (and I'm pretty sure this matches what I've been saying to you over the last few years) just because two ideal Bayesians would do something naturally, doesn't mean you can singlehandedly come closer to Bayesianism by imitating the surface behavior of agreement. I'm not sure that doing elaborate analyses to excuse your disagreement helps much either. http://wiki.lesswrong.com/wiki/Occam%27s_Imaginary_Razor
I'd spend much more time worrying about the implications of Aumann agreement, if I thought the other party actually knew my arguments, took my arguments very seriously, took the Aumann problem seriously with respect to me in particular, and in general had a sense of immense gravitas about the possible consequences of abusing their power to make me update. This begins to approach the conditions for actually doing what ideal Bayesians do. Michael Vassar and I have practiced Aumann agreement a bit; I've never literally done the probability exchange-and-update thing with anyone else. (Edit: Actually on recollection I played this game a couple of times at a Less Wrong meetup.)
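For the curious, that exchange-and-update game can be simulated directly. Below is a minimal sketch in the Geanakoplos-Polemarchakis style; all specifics (the four worlds, the event, the two partitions) are my illustrative assumptions, not anything from this thread. Two agents share a uniform common prior, each observes a private signal modeled as a partition cell, and they alternately announce posteriors for an event, each ruling out worlds inconsistent with the other's announcement, until the announcements coincide.

```python
from fractions import Fraction

# Toy setup (illustrative assumptions): four equally likely worlds, an event
# E, and one private signal per agent, modeled as a partition of the worlds.
WORLDS = (1, 2, 3, 4)
EVENT = {1, 4}
PARTITION_A = [{1, 2}, {3, 4}]
PARTITION_B = [{1, 2, 3}, {4}]

def cell(partition, w):
    """The partition cell containing world w (the agent's raw signal at w)."""
    return next(set(c) for c in partition if w in c)

def posterior(info):
    """P(EVENT | info) under the uniform common prior."""
    return Fraction(len(info & EVENT), len(info))

def refine(listener, speaker):
    """After the speaker announces a posterior, the listener (at every world)
    rules out worlds where the speaker would have announced something else."""
    return {w: {v for v in listener[w]
                if posterior(speaker[v]) == posterior(speaker[w])}
            for w in WORLDS}

def dialogue(true_world, rounds=5):
    # info_x[w] = worlds agent X considers possible if w is the actual world
    info_a = {w: cell(PARTITION_A, w) for w in WORLDS}
    info_b = {w: cell(PARTITION_B, w) for w in WORLDS}
    for _ in range(rounds):
        p_a = posterior(info_a[true_world])
        info_b = refine(info_b, info_a)   # B updates on A's announcement
        p_b = posterior(info_b[true_world])
        info_a = refine(info_a, info_b)   # A updates on B's announcement
        print(f"A says {p_a}, B says {p_b}")
        if p_a == p_b:
            return

dialogue(true_world=1)
# A says 1/2, B says 1/3   <- initial disagreement
# A says 1/2, B says 1/2   <- announcements now common knowledge; they agree
```

Run as written, the toy dialogue prints an initial disagreement (1/2 vs. 1/3) and then agreement at 1/2: B's first announcement tells A nothing new at the true world, but A's repeated 1/2 lets B rule out world 3 (where A would have announced 0), after which their posteriors coincide.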
No such condition is remotely approached by disagreeing with the AAAI panel, so I don't think I could, in real life, improve my epistemic position by pretending that they were ideal Bayesians who were fully informed about my reasons and yet disagreed with me anyway (in which case I ought to just update to match their estimates, rather than coming up with elaborate excuses to disagree with them!)
Well I disagree with you strongly that there is no point in considering the views of others if you are not sure they know the details of your arguments, or of the disagreement literature, or that those others are "rational." Guess I should elaborate my view in a separate post.
There's certainly always a point in considering specific arguments. But to be nervous merely that someone else has a different view, one ought, generally speaking, to suspect (a) that they know something you do not or at least (b) that you know no more than them (or far more rarely (c) that you are in a situation of mutual Aumann awareness and equal mutual respect for one another's meta-rationality). As far as I'm concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It's not that I have specific reason to distrust these people - the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.
I don't actually spend time obsessing about that sort of thing except when you're asking me those sorts of questions - putting so much energy into self-justification and excuses would just slow me down if Horvitz showed up tomorrow with an argument I hadn't considered.
I'll say again: I think there's much to be said for the Traditional Rationalist ideal of - once you're at least inside a science and have enough expertise to evaluate the arguments - paying attention only when people lay out their arguments on the table, rather than trying to guess authority (or arguing over who's most meta-rational). That's not saying "there's no point in considering the views of others". It's focusing your energy on the object level, where your thought time is most likely to be productive.
Is it that awful to say: "Show me your reasons"? Think of the prior probabilities!
You admit you have not done much to make it easy to show them your reasons. You have not written up your key arguments in a compact form using standard style and terminology and submitted it to standard journals. You also admit you have not contacted any of them to ask them for their reasons; Horvitz would have to "show up" for you to listen to him. This looks a lot like a status pissing contest; the obvious interpretation: since you think you are better than them, you won't ask them for their reasons, and you won't make it easy for them to understand your reasons, as that would admit they are higher status. They will instead have to acknowledge your higher status by coming to you and doing things your way. And of course they won't, since by ordinary standards they have higher status. So you ensure there will be no conversation, and with no conversation you can invoke your "traditional" (non-Bayesian) rationality standard to declare you have no need to consider their opinions.
You're being slightly silly. I simply don't expect them to pay any attention to me one way or another. As it stands, if e.g. Horvitz showed up and asked questions, I'd immediately direct him to http://singinst.org/AIRisk.pdf (the chapter I did for Bostrom), and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries. Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.
FYI, I've talked with Peter Norvig a bit. He was mostly interested in the CEV / FAI-spec part of the problem - I don't think we discussed hard takeoffs much per se. I certainly wouldn't have brushed him off if he'd started asking!
If there is a status pissing contest, they started it! ;-)
"On the latter, some panelists believe that the AAAI study was held amidst a perception of urgency by non-experts (e.g., a book and a forthcoming movie titled “The Singularity is Near”), and focus of attention, expectation, and concern growing among the general population."
I agree with them that there is much scaremongering going on in the field - but I disagree with their view that an intelligence explosion is unlikely.
Almost surely world class academic AI experts do "know something you do not" about the future possibilities of AI. To declare that topic to be your field and them to be "outside" it seems hubris of the first order.
This conversation seems to be following what appears to me to be a trend in Robin and Eliezer's disagreements (those observable by me, at least). This is one reason I would be fascinated if Eliezer did cover Robin's initial question, informed somewhat by Eliezer's interpretation.
I recall Eliezer mentioning in a tangential comment that he disagreed with Robin not just on the particular conclusion but more foundationally on how much weight should be given to certain types of evidence or argument. (Excuse my paraphrase from hazy memory, my googling failed me.) This is a difference that extends far beyond just R & E and Eliezer has hinted at insights that intrigue me.
Does Daphne Koller know more than I do about the future possibilities of object-oriented Bayes Nets? Almost certainly. And, um... there are various complicated ways I could put this... but, well, so what?
(No disrespect intended to Koller, and OOBN/probabilistic relational models/lifted Bayes/etcetera is on my short-list of things to study next.)
From that AAAI document:
"The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes".
"Radical outcomes" seems like a case of avoiding refutation by being vague. However, IMO, they will need to establish the truth of their assertion before they will get very far there. Good luck to them with that.
The AAAI interim report is really too vague to bother much with - but I suspect they are making another error.
Many robot enthusiasts pour scorn on the idea that robots will take over the world. How To Survive A Robot Uprising is a classic presentation on this theme. A hostile takeover is a pretty unrealistic scenario - but these folk often ignore the possibility of a rapid robot rise from within society driven by mutual love. One day robots will be smart, sexy, powerful and cool - and then we will want to become more like them.
Why will we witness an intelligence explosion? Because nature has a long history of favouring big creatures with brains - and because the capability to satisfy those selection pressures has finally arrived.
The process has already resulted in enormous data centres the size of factories. As I have said:
http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
Thinking about it, they are probably criticising the (genuinely dud) idea that an intelligence explosion will start suddenly at some future point with the invention of some machine - rather than gradually arising out of the growth of today's already self-improving economies and industries.
I think both ways are still open: an intelligence explosion arising from the self-improving economy as a whole, and one arising from a fringe of that process.
Did you take a look at my "The Intelligence Explosion Is Happening Now"? The point is surely a matter of history - not futurism.
Yes and you are right.
Great - thanks for your effort and input.
Re: "overall skepticism about the prospect of an intelligence explosion"...?
My guess would be that they are unfamiliar with the issues or haven't thought things through very much. Or maybe they don't have a good understanding of what that concept refers to (see link to my explanation - hopefully above). They present no useful analysis of the point - so it is hard to know why they think what they think.
The AAAI seems to have publicly come to these issues later than many in the community - and it seems to be playing catch-up.
It looks as though we will be hearing more from these folk soon:
"Futurists' report reviews dangers of smart robots"
http://www.pittsburghlive.com/x/pittsburghtrib/news/pittsburgh/s_651056.html
It doesn't sound much better than the first time around.