RobinHanson comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR 11 November 2009 03:00AM




Comment author: RobinHanson 12 November 2009 09:23:06PM *  16 points [-]

You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact that they are world-class AI experts and disagree with your conclusions?

Comment author: Eliezer_Yudkowsky 12 November 2009 09:35:34PM *  5 points [-]

I'm sorry, but I don't really have a proper lesson plan laid out - although the ongoing work of organizing LW into sequences may certainly help with that. It would depend on the specific issue and what I thought needed to be understood about that issue.

If they drew my feedback cycle of an intelligence explosion and then drew a different feedback cycle and explained why it fit the historical evidence equally well, then I would certainly sit up and take notice. It wouldn't matter if they'd done it on their own or by reading my stuff.
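The contrast between "different feedback cycles fitting the same history" can be made concrete with a toy model (an illustration of the general idea, not Yudkowsky's actual analysis): let capability I grow as dI/dt = c·I versus dI/dt = c·I². The two laws look similar while I is small, but the first gives ordinary exponential growth and the second blows up in finite time.

```python
def simulate(growth, i0=1.0, dt=0.01, steps=2000, cap=1e12):
    """Forward-Euler integration of dI/dt = growth(I), capped to avoid overflow."""
    traj = [i0]
    for _ in range(steps):
        nxt = traj[-1] + dt * growth(traj[-1])
        traj.append(min(nxt, cap))
        if nxt >= cap:  # blow-up reached; stop integrating
            break
    return traj

# dI/dt = I: ordinary exponential growth -- no singularity in finite time
linear_feedback = simulate(lambda i: i)

# dI/dt = I**2: the returns on capability grow with capability itself --
# the trajectory diverges in finite time (a "hard takeoff"-shaped curve)
superlinear_feedback = simulate(lambda i: i * i)
```

Both trajectories fit "growth so far" about equally well at small I; the disagreement is over the exponent in the feedback law, which is exactly the kind of modular question the comment says a critic would need to engage.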

E.g. Chalmers at the Singularity Summit is an example of an outsider who wandered in and started doing a modular analysis of the issues, who would certainly have earned the right of serious consideration and serious reply if, counterfactually, he had reached different conclusions about takeoff... with respect to only the parts that he gave a modular analysis of, though, not necessarily e.g. the statement that de novo AI is unlikely because no one will understand intelligence. If Chalmers did a modular analysis of that part, it wasn't clear from the presentation.

Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

Comment author: Steve_Rayhawk 16 November 2009 04:31:19AM *  15 points [-]

Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

You, or somebody anyway, could still offer a modular causal model of that snap consideration and snap judgment. For example:

  • What cached models of the planning abilities of future machine intelligences did the academics have available when they made the snap judgment?

    • What fraction of the academics are aware of any current published AI architectures which could reliably reason over plans at the level of abstraction of "implement a proxy intelligence"?

      • What fraction of them have thought carefully about when there might be future practical AI architectures that could do this?
      • What fraction use a process for answering questions about the category distinctions that will be known in the future, which uses as an unconscious default the category distinctions known in the present?
  • What false claims have been made about AI in the past? What decision rules might academics have learned to use, to protect themselves from losing prestige for being associated with false claims like those?

    • How much do those decision rules refer to modular causal analyses of the object of a claim and of the fact that people are making the claim?

    • How much do those decision rules refer to intuitions about other peoples' states of mind and social category memberships?

    • How much do those decision rules refer to intuitions about other peoples' intuitive decision rules?

    • Historically, have peoples' own abilities to do modular causal analyses been good enough to make them reliably safe from losing prestige by being associated with false claims? What fraction of academics have the intuitive impression that their own ability to do analysis isn't good enough to make them reliably safe from losing prestige by association with a false claim, so that they can only be safe if they use intuitions about the states of mind and social category memberships of a claim's proponents?

  • Of those AI academics who believe that a machine intelligence could exist which could outmaneuver humans if motivated, how do they think about the possible motivations of a machine intelligence?

    • What fraction of them think about AI design in terms of a formalism such as approximating optimal sequential decision theory under a utility function? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?

    • What fraction of them think about AI design in terms of intuitively justified decision heuristics? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?

    • What fraction of them understand enough evolutionary psychology and/or cognitive psychology to recognize moral evaluations as algorithmically caused, so that they can reject the default intuitive explanation of the cause of moral evaluations, which seems to be: "there are intrinsic moral qualities attached to objects in the world, and when any intelligent agent apprehends an object with a moral quality, the action of the moral quality on the agent's intelligence is to cause the agent to experience a moral evaluation"?

      • What combination of specializations in AI, moral philosophy, and cognitive psychology would an academic need to have, to be an "expert" whose disagreements about the material causes and implementation of moral evaluations were significant?
  • On the question of takeoff speeds, what fraction of the AI academics have a good enough intuitive understanding of decision theory to see that a point estimate or default scenario should not be substituted for a marginal posterior distribution, even in a situation where it would be socially costly in the default scenario to take actions which prevent large losses in one tail of the distribution?

    • What fraction recognized that they had a prior belief distribution over possible takeoff speeds at all?

    • What fraction understood that, regarding a variable which is underconstrained by evidence, "other people would disapprove of my belief distribution about this variable" is not an indicator for "my belief distribution about this variable puts mass in the wrong places", except insofar as there is some causal reason to expect that disapproval would be somehow correlated with falsehood?

  • What other popular concerns have academics historically needed to dismiss? What decision rules have they learned to decide whether they need to dismiss a current popular concern?

    • After they make a decision to dismiss a popular concern, what kinds of causal explanations of the existence of that concern do they make reference to, when arguing to other people that they should agree with the decision?

    • How much do the true decision rules depend on those causal explanations?

    • How much do the decision rules depend on intuitions about the concerned peoples' states of mind and social category memberships?

    • How much do the causal explanations use concepts which are implicitly defined by reference to hidden intuitions about states of mind and social category memberships?

      • Can these intuitively defined concepts carry the full weight of the causal explanations they are used to support, or does their power to cause agreement come from their ability to activate social intuitions?
  • Which people are the AI academics aware of, who have argued that intelligence explosion is a concern? What social categories do they intuit those people to be members of? What arguments are they aware of? What states of mind do they intuit those arguments to be indicators of (e.g. as in intuitively computed separating equilibria)?

    • What people and arguments did the AI academics think the other AI academics were thinking of? If only a few of the academics were thinking of people and arguments who they intuited to come from credible social categories and rational states of mind, would they have been able to communicate this to the others?
  • When the AI academics made the decision to dismiss concern about an intelligence explosion, what kinds of causal explanations of the existence of that concern did they intuitively expect that they would be able make reference to, if they later had to argue to other people that they should agree with the decision?

It is also possible to model the social process in the panel:

  • Are there factors that might make a joint statement by a panel of AI academics reflect different conclusions than they would have individually reached if they had been outsiders to the AI profession with the same AI expertise?

    • One salient consideration would be that agreeing with popular concern about an intelligence explosion would result in their funding being cut. What effects would this have had?

      • Would it have affected the order in which they became consciously aware of lines of argument that might make an intelligence explosion seem less or more deserving of concern?
      • Would it have made them associate concern about an intelligence explosion with unpopularity? In doubtful situations, unpopularity of an argument is one cue for its unjustifiability. Would they associate unpopularity with logical unjustifiability, and then lose willingness to support logically justifiable lines of argument that made an intelligence explosion seem deserving of concern, just as if they had felt those lines of argument to be logically unjustifiable, but without any actual unjustifiability?
    • There are social norms to justify taking prestige away from people who push a claim that an argument is justifiable while knowing that other prestigious people think the argument to be a marker of a non-credible social category or state of mind. How would this have affected the discussion?

    • If there were panelists who personally thought the intelligence explosion argument was plausible, and they were in the minority, would the authors of the panel's report mention it?

      • Would the authors know about it?
      • If the authors knew about it, would they feel any justification or need to mention those opinions in the report, given that the other panelists may have imposed on the authors an implicit social obligation to not write a report that would "unfairly" associate them with anything they think will cause them to lose prestige?
      • If panelists in such a minority knew that the report would not mention their opinions, would they feel any need or justification to object, given the existence of that same implicit social obligation?
  • How good are groups of people at making judgments about arguments that unprecedented things will have grave consequences?

    • How common is a reflective, causal understanding of the intuitions people use when judging popular concerns and arguments about unprecedented things, of the sort that would be needed to compute conditional probabilities like "Pr( we would decide that concern is not justified | we made our decision according to intuition X ∧ concern was justified )"?

    • How common is the ability to communicate the epistemic implications of that understanding in real-time while a discussion is happening, to keep it from going wrong?
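The conditional probability in the second-to-last bullet can be turned into a worked example. The numbers below are purely hypothetical, chosen only to show the mechanics: a heuristic that dismisses almost all unjustified concerns is still weak evidence if it also dismisses most justified ones.

```python
from fractions import Fraction as F

# Hypothetical inputs (not estimates from the discussion):
p_justified = F(1, 100)                   # prior that the concern is justified
p_dismiss_given_justified = F(9, 10)      # Pr(dismiss | intuition X, justified)
p_dismiss_given_unjustified = F(99, 100)  # Pr(dismiss | intuition X, unjustified)

# Total probability of a dismissal
p_dismiss = (p_dismiss_given_justified * p_justified
             + p_dismiss_given_unjustified * (1 - p_justified))

# Posterior that the concern was justified anyway, given that it was dismissed
p_justified_given_dismiss = p_dismiss_given_justified * p_justified / p_dismiss

# Likelihood ratio: how much evidential weight one dismissal carries
likelihood_ratio = p_dismiss_given_justified / p_dismiss_given_unjustified
```

With these illustrative numbers, the posterior (10/1099, about 0.0091) barely moves from the prior (1/100), because the dismissal's likelihood ratio is 10/11, close to 1: observing the dismissal tells us almost nothing either way.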

Comment author: RobinHanson 12 November 2009 11:34:47PM *  2 points [-]

From that AAAI panel's interim report:

Participants reviewed prior writings and thinking about the possibility of an “intelligence explosion” where computers one day begin designing computers that are more intelligent than themselves. ... There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems.

Given this description it is hard to imagine that they haven't considered the prospect of the rate of intelligence growth depending on the level of system intelligence.

Comment author: Eliezer_Yudkowsky 12 November 2009 11:43:48PM 8 points [-]

I don't see any arguments listed, though. I know there's at least some smart people on that panel (e.g. Horvitz) so I could be wrong, but experience has taught me to be pessimistic, and pessimism says I have no particular evidence that anyone started breaking the problem down into modular pieces, as opposed to, say, stating a few snap perceptual judgments at each other and then moving on.

Why are you so optimistic about this sort of thing, Robin? You're usually more cynical about what would happen when academics have no status incentive to get it right and every status incentive to dismiss the silly. We both have experience with novices encountering these problems and running straight into the brick wall of policy proposals without even trying a modular analysis. Why on this one occasion do you turn around and suppose that the case we don't know will be so unlike the cases we do know?

Comment author: RobinHanson 13 November 2009 04:06:20AM *  2 points [-]

The point is that this is a subtle and central issue to engage, so I was suggesting that you consider describing your analysis more explicitly. Is there never any point in listening to academics on "silly" topics? Is there never any point in listening to academics who haven't explicitly told you how they've broken a problem down into modular parts, no matter how distinguished they are on related topics? Are people who have a modular-parts analysis always a more reliable source than people who don't, no matter what their other features? And so on.

Comment author: Eliezer_Yudkowsky 13 November 2009 04:56:35AM 12 points [-]

I confess, it doesn't seem to me on a gut level like this is either healthy to obsess about, or productive to obsess about. It seems more like worrying that my status isn't high enough to do work, than actually working. If someone shows up with amazing analyses I haven't considered, I can just listen to the analyses then. Why spend time trying to guess who might have a hidden deep analysis I haven't seen, when the prior is so much in favor of them having made a snap judgment, and it's not clear why if they've got a deep analysis they wouldn't just present it?

I think that on a purely pragmatic level there's a lot to be said for the Traditional Rationalist concept of demanding that Authority show its work, even if it doesn't seem like what ideal Bayesians would do.

Comment author: RobinHanson 13 November 2009 01:37:43PM *  3 points [-]

You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it. It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement. If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?

Comment author: Eliezer_Yudkowsky 13 November 2009 02:17:43PM *  5 points [-]

You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it.

...and I've held and stated this same position pretty much from the beginning, no? E.g. http://lesswrong.com/lw/gr/the_modesty_argument/

It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement.

I was under the impression that my verbal analysis matched and cleverly excused my concrete behavior.

If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?

Well (and I'm pretty sure this matches what I've been saying to you over the last few years) just because two ideal Bayesians would do something naturally, doesn't mean you can singlehandedly come closer to Bayesianism by imitating the surface behavior of agreement. I'm not sure that doing elaborate analyses to excuse your disagreement helps much either. http://wiki.lesswrong.com/wiki/Occam%27s_Imaginary_Razor

I'd spend much more time worrying about the implications of Aumann agreement, if I thought the other party actually knew my arguments, took my arguments very seriously, took the Aumann problem seriously with respect to me in particular, and in general had a sense of immense gravitas about the possible consequences of abusing their power to make me update. This begins to approach the conditions for actually doing what ideal Bayesians do. Michael Vassar and I have practiced Aumann agreement a bit; I've never literally done the probability exchange-and-update thing with anyone else. (Edit: Actually on recollection I played this game a couple of times at a Less Wrong meetup.)

No such condition is remotely approached by disagreeing with the AAAI panel, so I don't think I could, in real life, improve my epistemic position by pretending that they were ideal Bayesians who were fully informed about my reasons and yet disagreed with me anyway (in which case I ought to just update to match their estimates, rather than coming up with elaborate excuses to disagree with them!)
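The "probability exchange-and-update thing" mentioned above can be sketched as a toy version of the iterated announcement process from the disagreement literature (in the style of Geanakoplos and Polemarchakis): two agents share a uniform prior but hold different information partitions, and alternately announce posteriors, with each announcement refining what the other can infer, until the announcements agree. This is a simplified editorial illustration, not code from either discussant.

```python
from fractions import Fraction

def posterior(cell, event):
    # Pr(event | information cell), under a uniform prior over states
    return Fraction(len(cell & event), len(cell))

def aumann_dialogue(part_a, part_b, event, true_state, max_rounds=10):
    """Agents alternately announce Pr(event | own info); each announcement
    reveals which of the speaker's cells are consistent with it, shrinking
    the commonly-known pool of possible states."""
    pool = set().union(*part_a)  # all states initially possible
    parts = [part_a, part_b]
    history = []
    for r in range(max_rounds):
        speaker = parts[r % 2]
        cell = next(c for c in speaker if true_state in c) & pool
        p = posterior(cell, event)
        history.append(p)
        # states consistent with having heard this announcement
        consistent = set()
        for c in speaker:
            cp = c & pool
            if cp and posterior(cp, event) == p:
                consistent |= cp
        pool &= consistent
        if len(history) >= 2 and history[-1] == history[-2]:
            break  # announced posteriors agree: stop
    return history

# Classic four-state example: A knows which half the state is in,
# B knows whether the state is 4.
announcements = aumann_dialogue(
    part_a=[{1, 2}, {3, 4}],
    part_b=[{1, 2, 3}, {4}],
    event={1, 4},
    true_state=1,
)
# announcements: 1/2, 1/3, 1/2, 1/2 -- they disagree, exchange, then agree
```

The convergence here depends on exactly the preconditions the comment names: a common prior, and each party knowing what the other's announcements mean. The argument above is that nothing like those conditions holds between the discussant and the AAAI panel.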

Comment author: RobinHanson 13 November 2009 06:12:24PM 2 points [-]

Well I disagree with you strongly that there is no point in considering the views of others if you are not sure they know the details of your arguments, or of the disagreement literature, or that those others are "rational." Guess I should elaborate my view in a separate post.

Comment author: Eliezer_Yudkowsky 13 November 2009 09:13:11PM *  9 points [-]

There's certainly always a point in considering specific arguments. But to be nervous merely that someone else has a different view, one ought, generally speaking, to suspect (a) that they know something you do not or at least (b) that you know no more than them (or far more rarely (c) that you are in a situation of mutual Aumann awareness and equal mutual respect for one another's meta-rationality). As far as I'm concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It's not that I have specific reason to distrust these people - the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.

I don't actually spend time obsessing about that sort of thing except when you're asking me those sorts of questions - putting so much energy into self-justification and excuses would just slow me down if Horvitz showed up tomorrow with an argument I hadn't considered.

I'll say again: I think there's much to be said for the Traditional Rationalist ideal of - once you're at least inside a science and have enough expertise to evaluate the arguments - paying attention only when people lay out their arguments on the table, rather than trying to guess authority (or arguing over who's most meta-rational). That's not saying "there's no point in considering the views of others". It's focusing your energy on the object level, where your thought time is most likely to be productive.

Is it that awful to say: "Show me your reasons"? Think of the prior probabilities!

Comment author: timtyler 13 November 2009 10:31:47PM *  2 points [-]

From that AAAI document:

"The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes".

"Radical outcomes" seems like a case of avoiding refutation by being vague. However, IMO, they will need to establish the truth of their assertion before they will get very far there. Good luck to them with that.

Comment author: timtyler 14 November 2009 10:41:57PM *  1 point [-]

The AAAI interim report is really too vague to bother much with - but I suspect they are making another error.

Many robot enthusiasts pour scorn on the idea that robots will take over the world. How To Survive A Robot Uprising is a classic presentation on this theme. A hostile takeover is a pretty unrealistic scenario - but these folk often ignore the possibility of a rapid robot rise from within society driven by mutual love. One day robots will be smart, sexy, powerful and cool - and then we will want to become more like them.

Comment author: timtyler 13 November 2009 10:27:39PM *  2 points [-]

Why will we witness an intelligence explosion? Because nature has a long history of favouring big creatures with brains - and because the capability to satisfy those selection pressures has finally arrived.

The process has already resulted in enormous data-centres, the size of factories. As I have said:

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

Comment author: timtyler 14 November 2009 09:12:38PM 1 point [-]

Thinking about it, they are probably criticising the (genuinely dud) idea that an intelligence explosion will start suddenly at some future point with the invention of some machine - rather than gradually arising out of the growth of today's already self-improving economies and industries.

Comment author: Thomas 14 November 2009 09:24:19PM 0 points [-]

I think both ways are still open: an intelligence explosion from a self-improving economy, and an intelligence explosion from a fringe of this process.

Comment author: timtyler 14 November 2009 09:34:49PM -1 points [-]

Did you take a look at my "The Intelligence Explosion Is Happening Now"? The point is surely a matter of history - not futurism.

Comment author: Thomas 14 November 2009 09:39:55PM 0 points [-]

Yes and you are right.

Comment author: timtyler 13 November 2009 10:22:36PM *  1 point [-]

Re: "overall skepticism about the prospect of an intelligence explosion"...?

My guess would be that they are unfamiliar with the issues or haven't thought things through very much. Or maybe they don't have a good understanding of what that concept refers to (see link to my explanation - hopefully above). They present no useful analysis of the point - so it is hard to know why they think what they think.

The AAAI seems to have publicly come to these issues later than many in the community - and to be playing catch-up.

Comment author: timtyler 14 November 2009 06:30:24PM 0 points [-]

It looks as though we will be hearing more from these folk soon:

"Futurists' report reviews dangers of smart robots"

http://www.pittsburghlive.com/x/pittsburghtrib/news/pittsburgh/s_651056.html

It doesn't sound much better than the first time around.

Comment author: JoshuaFox 16 December 2009 08:14:11AM 1 point [-]

It must be possible to engage at least some of these people in some sort of conversation to understand their positions, whether a public dialog as with Scott Aaronson or in private.

Comment author: timtyler 14 November 2009 12:26:06AM 1 point [-]

Chalmers reached some odd conclusions. Probably not as odd as his material about zombies and consciousness, though.