
MichaelVassar comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

Post author: HoldenKarnofsky 18 August 2011 11:34PM 75 points


Comment author: MichaelVassar 19 August 2011 02:17:49PM 21 points [-]

I'm pretty sure that I endorse the same method you do, and that the "EEV" approach is a straw man.
It's also the case that while I can endorse "being hesitant to embrace arguments that seem to have anti-common-sense implications (unless the evidence behind these arguments is strong)", I can't endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual.

Comment author: multifoliaterose 19 August 2011 04:17:49PM *  10 points [-]

I'm pretty sure that I endorse the same method you do, and that the "EEV" approach is a straw man.

The post doesn't highlight you as an example of someone who uses the EEV approach, and I agree that there's no evidence that you do so. That said, it doesn't seem like the EEV approach under discussion is a straw man in full generality. Some examples:

  1. As lukeprog mentions, Anna Salamon gave the impression of using the EEV approach in one of her 2009 Singularity Summit talks.

  2. One also sees this sort of thing on LW from time to time, e.g. [1], [2].

  3. As Holden mentions, the issue came up in the 2010 exchange with Giving What We Can.

Comment author: XiXiDu 19 August 2011 04:02:21PM 2 points [-]

I can't endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual.

I agree with the first sentence but don't know if the second sentence is always true. Even if my calculations show that solving friendly AI would avert the most probable cause of human extinction, I might estimate that any investigation into it will very likely turn out to be fruitless and that success is virtually impossible.

If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting the risk is possible was only 0.1%, while I estimated another existential risk to kill off humanity with a 5% probability and my confidence in averting it was 1%, shouldn't I concentrate on the less probable but solvable risk?

In other words, the question is not just how much evidence I have in favor of risks from AI but how confident I can be of mitigating it compared to other existential risks.

Could you outline your estimates of the expected value of contributing to SIAI, and of the probability that a negative Singularity can be averted as a result of work done by SIAI?

Comment author: MichaelVassar 20 August 2011 12:40:19AM 4 points [-]

In practice, when I see a chance to do high-return work on other x-risks, such as synthetic bio, I do such work. It can't always be done publicly, though. It doesn't seem at all likely to me that UFAI isn't a solvable problem, given enough capable people working hard on it for a couple of decades, and at the margin it's by far the least well-funded major x-risk. So the real question, IMHO, is simply which organization has the best chance of actually turning funds into a solution. SIAI, FHI, or build your own org; but saying it's impossible without checking is just being lazy/stingy, and is particularly non-credible from someone who isn't making a serious effort on any other x-risk either.

Comment author: timtyler 21 August 2011 08:53:24PM 2 points [-]

If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting the risk is possible was only 0.1%, while I estimated another existential risk to kill off humanity with a 5% probability and my confidence in averting it was 1%, shouldn't I concentrate on the less probable but solvable risk?

I don't think so - assuming we are trying to maximise p(save all humans).

It appears that at least one of us is making a math mistake.

Comment author: saturn 21 August 2011 09:00:12PM 2 points [-]

It's not clear whether "confidence in averting" means P(avert disaster) or P(avert disaster|disaster).

Comment author: CarlShulman 22 August 2011 03:14:09AM *  1 point [-]

I don't think so - assuming we are trying to maximise p(save all humans).

Likewise. ETA: on what I take as the default meaning of "confidence in averting" in this context, P(avert disaster|disaster otherwise impending).
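A minimal sketch of the arithmetic behind this sub-thread, using the purely hypothetical numbers from XiXiDu's question above (90% vs. 5% risk probability, 0.1% vs. 1% "confidence in averting"), showing how saturn's two readings of "confidence in averting" change the answer:

```python
# Hypothetical numbers from XiXiDu's example above (not anyone's actual estimates).
p_ai_risk, p_avert_ai = 0.90, 0.001        # P(AI disaster), "confidence in averting" it
p_other_risk, p_avert_other = 0.05, 0.01   # P(other disaster), "confidence in averting" it

# Reading 1: "confidence in averting" = P(avert | disaster otherwise impending).
# Expected reduction in extinction probability = P(disaster) * P(avert | disaster).
reduction_ai = p_ai_risk * p_avert_ai           # 0.9  * 0.001 = 0.0009
reduction_other = p_other_risk * p_avert_other  # 0.05 * 0.01  = 0.0005
print(reduction_ai > reduction_other)  # True: the AI effort gives the larger reduction

# Reading 2: "confidence in averting" = unconditional P(avert disaster).
# Then the expected reduction is that probability itself, and the comparison flips.
print(p_avert_ai > p_avert_other)      # False: 0.001 < 0.01, the other risk wins
```

Under the conditional reading, which CarlShulman takes as the default here, concentrating on the AI risk still maximises p(save all humans); only under the unconditional reading does the less probable risk come out ahead.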

Comment author: multifoliaterose 19 August 2011 04:20:22PM *  0 points [-]

I can't endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual.

Agree here. Do you think that there's a strong case for focusing directly on the FAI problem rather than working toward FAI indirectly via nuclear deproliferation [1] [2]? If so, I'd be interested in hearing more.

Comment author: MichaelVassar 20 August 2011 12:36:14AM 5 points [-]

Given enough financial resources to actually endow research chairs and make a credible commitment to researchers, and given good enough researchers, I'd definitely focus SIAI more directly on FAI.

Comment author: multifoliaterose 20 August 2011 12:55:29AM 3 points [-]

I totally understand holding off on hiring research faculty until there's more funding, but what would the researchers hypothetically do in the presence of such funding? Does anyone have any ideas for how to do Friendly AI research?

I think (but am not sure) that I would give top priority to FAI if I had the impression that there are viable paths for research that have yet to be explored (paths that are systematically more likely to reduce x-risk than to increase it), but I haven't seen a clear argument that this is the case.

Comment author: lessdazed 20 August 2011 10:56:20AM 0 points [-]

"Nuclear proliferation" were not words I was expecting to see at the end of that sentence.

I don't see how nuclear war is an existential risk. It's not capable of destroying humanity, as far as I can tell, and it would give us more time to think and less ability to do with respect to AI. Someone could put cobalt bombs affixed to rockets in hidden silos and set them so that one explodes in the atmosphere every year for a thousand years, or something like that, but I don't see how accidental nuclear war would end humanity outside of some unknown-unknown sort of effect.

As far as rebuilding goes, I'm pretty confident it wouldn't be too hard so long as information survives, and I'm pretty confident it would. We don't need to build aircraft carriers, eat cows, and keep millions of buildings with massive glass windows at 72 degrees, and we don't need to waste years upon years of productivity on school (babysitting for the very young and expensive signalling for young adults). Instead, we could try education, Polgar-sister style.

Comment author: multifoliaterose 20 August 2011 06:54:34PM *  0 points [-]

See the pair of links in the grandparent, especially the ensuing discussion in the thread linked in [2].

Comment author: lessdazed 20 August 2011 08:32:40PM 1 point [-]

A few words on my personal theory of history.

Societies begin within a competitive environment in which only a few societies survive, namely those that cooperate internally to get wealth. The wealth can be produced, exploited, realized through exchange, and/or taken outright from others. As a society succeeds, it grows, and the incentive to cheat the system of cooperation grows with it. Mutual cooperation decays as more selfish strategies become better and better, and attitudes towards outgroups soften from those that had led to ascendance. Eventually a successful enough society will have wealth and contain agents competing over its inheritance rather than creating new wealth, and people will move away from social codes benefiting the society toward codes benefiting themselves, or toward similar luxuries like believing things because they are true rather than because they are useful.

Within this model I see America as being in a late stage. The education system is a wasteful signalling game because there is so much wealth in America that fighting for a piece of the pie is a better strategy than creating wealth. Jingoism is despised; there is no national religion, history, race, or idea uniting Americans. So I see present effort as scarcely directed towards production at all. Once GNP was thought the most important economic statistic; now it is GDP. The iPad is an iconic modern achievement; like other products, it is designed so consumers can be as ignorant as possible. Nothing like it needs to be produced - all the more so reality TV, massive sugar consumption, health neglect, etc.

A new society would begin in a productive, survivalist mode, but with modern technology. Instead of producing Wiis and Xboxes, I think a post-nuclear society would go from slide rules to mass transit and solar power in no time, even with a fraction of the resources, as it would begin with our information but have an ethic shaped by its circumstances. The survivalist, cooperative, productive public ethos would be exhibited fully during an internet age, rather than after decaying logarithmically for so long.

Comment author: Alex_Altair 21 August 2011 02:17:18PM 0 points [-]

I find this quite fascinating. Thanks for your perspective!