Can we talk about changing the world? Or saving the world?

I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it. So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make. Given that existential risk is also one of the major themes of Overcoming Bias and of Eliezer's work, it's striking that we don't talk about it more here.

One reason, of course, was the bar until yesterday on talking about artificial general intelligence; another is that many here state outright that they are not concerned about their contribution to humanity. But I think a third is that many of the things we might do to address existential risk, or other issues of concern to all humanity, get us into politics, and we've all had too much of a certain kind of argument about politics online that descends into a stale rehashing of talking points and point scoring.

If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening. How can we use what we discuss here to be able to talk about politics without spiralling down the plughole?

I think it will help in several ways that we are largely a community of materialists and expected utility consequentialists. For a start, we are freed from the concept of "deserving" that dogs political arguments on inequality, on human rights, on criminal sentencing and so many other issues; while I can imagine a consequentialism that valued the "deserving" more than the "undeserving", I don't get the impression that's a popular position among materialists, because of the Phineas Gage problem. We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.

For example, framed this way inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.
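That "cost in utility that we could in principle measure" can be made concrete with a toy sketch. The logarithmic utility curve and the wealth figures below are assumptions for illustration only, not claims about any real distribution:

```python
import math

def total_utility(wealth, u=math.log):
    """Sum a utility curve over a wealth distribution."""
    return sum(u(w) for w in wealth)

# Toy numbers: the same total wealth ($1M across three people),
# distributed unequally vs. equally.
unequal = [10_000, 40_000, 950_000]
equal = [sum(unequal) / len(unequal)] * len(unequal)

# With any concave (diminishing-marginal-utility) curve, the equal
# split yields more total utility; the gap is inequality's cost.
utility_cost = total_utility(equal) - total_utility(unequal)
```

On this framing, the consequentialist defence of the market is the claim that inequality's incentive effects raise total output by more than this gap.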

However, I'm more confident of the need to talk about this question than I am of my own answers. There's very little we can do about existential risk that doesn't have to do with changing the decisions made by public servants, businesses, and/or large numbers of people, and all of these activities get us straight into the world of politics, as well as the world of going out and changing minds. There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?

160 comments

It's not obvious that the best way to reduce existential risk is to work on the problem directly. Imagine if every farmer put down his plow and came to the university to do artificial intelligence research: everyone would starve. It may well be that someone's best contribution is to continue to write software to do billing for health insurance, because that helps keep society running, which increases wealth, which in turn funds and supports people who specialize in researching risks, among other fields.

I suspect that, actually, only a small percentage of people, even of people here, could usefully learn the political truths relevant to existential risk mitigation via the kind of discussion you are proposing. Very few people are in a position to cause political change. The marginal utility gain for the average person of learning the truth on a political matter is practically zero, due to his lack of influence on the political process. The many arguments against voting apply to seeking political truth as well, and even more strongly, because it's harder to ascertain political truths than it is to vote.

Most interest in politics is IMO similar to interest in sports or movies. It... (read more)

The set of people seriously working to reduce existential risks is very small (perhaps a few hundred, depending on who and how you count). This gives strong general reason to suppose that the marginal impact of an individual can be large, in cases where the individual aims to reduce existential risks directly and is strategic/sane/rational about how (and not in cases where the individual simply goes about their business as one of billions in the larger economy).

Many LW readers are capable of understanding that there are risks, thinking through the differential impact their donations would have on different kinds of risk mitigation, and donating money in a manner that would help. Fewer, but still many, are also capable of improving the quality of thought regarding existential risks in relevant communities (e.g., in the academic departments where they study or work, or on LW or other portions of the blogosphere). And while I agree with Hal's point that most politics is used as entertainment, there is reason to suppose that improving the quality of discussion of a very-high-impact, under-researched, tiny-numbers-of-people-currently-involved topic like existential risks can improve both (a) the well-directedness of resources like mine that are already being put toward existential risks, and (b) the amount of such resources, in dollars and in brainpower.

4 · steven0461 · 15y
Would increased average wealth help risk-fighters more than risk-creators? It's not obvious to me either way. What does seem obvious is that from a utilitarian perspective society is hugely underinvesting in risk-fighting and everything else with permanent effects.
4 · MBlume · 15y
I believe Eliezer has made a strong case that Moore's Law, for example, mostly benefits the risk-producers
3 · MichaelHoward · 15y
"...Every 18 months, the minimum IQ to destroy the world drops by one point."
1 · mattnewport · 15y
That's not obvious to me, and even if it were I don't take a utilitarian perspective. If you think there is underinvestment in risk fighting you have to come up with arguments to persuade people that don't rely on a utilitarian perspective since most people don't take that perspective when making decisions. Or you can try and find ways of increasing investment that don't rely on persuading large numbers of people.
1 · steven0461 · 15y
That utilitarianism implies one should do things with permanent effects comes from the future being much bigger than the present, and the probability of affecting it being smaller but not nearly proportionally smaller. I agree with your second paragraph.
2 · mattnewport · 15y
Even granting that, it's not obvious to me that society is underinvesting in risk fighting. Many of the suggestions for countering global warming for example imply reduced economic growth. It is not obvious to me that the risks of catastrophic global warming outweigh the expected losses from reduced growth from a utilitarian perspective. Any investment in risk fighting carries an opportunity cost in a foregone investment in some other area. The right choice from a utilitarian perspective depends on judgements of expected risk vs. the expected benefits of alternative courses of action. I think the best choices are far from obvious.
1 · steven0461 · 15y
Wholly agree on global warming; the best reference I know of on extreme predictions is this. I'm thinking more of future technologies (the self-replicating and/or intelligent kind), but also of building up the general intellectual background and institutions to deal rationally with unknown unknowns.
-2 · rwallace · 15y
The assumption being made here is that actions taken with the intent of reducing existential risk will actually have the effect of reducing it rather than increasing it. This assumption seems sadly unlikely to be correct.
5 · steven0461 · 15y
"Actions taken with the intent to prevent event X make event X less likely" is going to be my default belief unless there's some strong evidence to the contrary.
9 · AnnaSalamon · 15y
Or, more particularly: "Actions taken after carefully asking what the evidence implies about the most effective means of making X less likely, and then following out the means with best expected value, make event X less likely". mattnewport's counterexamples are good, but they are examples of what happens when "intent to reduce X" is filtered through a political system that incentivizes the appearance that something will be done, that penalizes public acknowledgement of unpleasant truths, and that does not understand science. There is reason to suppose we can do better -- at least, there's reason to assign a high enough probability to "we may be able to do better" for it to be clearly worth the costs of investigating particular issue X's.
0 · mattnewport · 15y
There is reason to hope we can do better but a sobering lack of evidence that such hope is realistic. That's not a reason not to try but it seems we can agree that mere intent is far from sufficient. Even supposing that it is possible to devise a course of action that we have good reason to believe will be effective, there is still a huge gulf to cross when it comes to putting that into action given current political realities.
7 · AnnaSalamon · 15y
This depends partly on what sort of "course of action" is devised, and how many people are needed to put it into action. Francis Bacon's successful spread of the scientific method, Louis Pasteur's germ theory, whoever it was who convinced doctors to wash their hands between childbirths, the invention of the printing press, and the invention of modern fertilizers sufficient to keep larger parts of the world fed... provide historical precedents for the idea that small groups of good thinkers can sometimes have predictably positive impacts on the world without extensively and directly engaging global politics/elections/etc.
1 · AnnaSalamon · 15y
[I'd edited my previous comment just before mattnewport wrote this; I'd previously left my comment at "There is reason to suppose we can do better", then had decided that that was overstating the evidence and added the "--at least...". mattnewport probably wrote this in response to the previous version; my apologies.] As to evaluating the evidence: does anyone know where we can find data as to whether relatively well-researched charities do tend to improve poverty or other problems to which they turn their attention?
8 · MichaelVassar · 15y
givewell.net
5 · mattnewport · 15y
Alcohol prohibition, drug prohibition, the criminalization of prostitution, banking regulations designed to reduce bank failures due to excessive risk taking, bailing out automakers to prevent bankruptcy, policies designed to prevent terrorist attacks such as torturing prisoners... All are examples of actions taken with the intent to prevent X which have quite a lot of evidence to suggest that they did not make X less likely.
2 · rwallace · 15y
To what Matt said, I will add: actions taken with the intent of preventing harmful event X, often have no effect on X but greatly contribute to equally harmful event Y.
1 · Kakun · 15y
I'm not totally sure what you mean by this. With that said, it does matter very much how the government distributes its resources. While the government is admittedly inefficient, that doesn't mean that it can't be improved. Since politics determines how those resources are distributed, wouldn't becoming involved in politics be a valid and important way to gain support for your favored causes, i.e. existential risk mitigation? Declaring one method of gaining support to be automatically invalid, no matter the circumstances, won't help you.
0 · mtraven · 15y
The arguments against voting are mostly puerile, and so is this one against political judgment. See here for an alternative view.
0 · Vladimir_Nesov · 15y
Are there currently enough soldiers? What is the best way to recruit them? Existential risks is a high-payoff and generally-misunderstood issue. It looks like there is no strong community of professionals to work on it at the moment. In any case, there are existing organizations, and their merits and professional opinion should be considered before anyone commits to anything.

I agree with ciphergoth that we would probably have an easier time discussing political issues than some other communities, and I agree with HalFinney that it's probably not a very good use of our time anyway. Let's say that everyone on LessWrong agrees on a solution to some political problem. So what? We already have lots of good ideas no one will listen to. It doesn't take a long-time reader of Overcoming Bias to realize marijuana criminalization isn't working so well, but so far the efforts of groups with far more resources than ourselves have been most... (read more)

6 · steven0461 · 15y
Politics discussion by rationalists is likely to have the most impact when it's about issues that are important, but that aren't widely recognized as such and therefore have relatively few people pulling on the rope. I don't see any point in discussing the Iraq war, say.
4 · Scott Alexander · 15y
Political action by rationalists is likely to have the most impact on such topics. But there are already some such topics we know about (global existential risk, for example, or teaching rationality in schools). What do we gain by discovering several more of these and then discussing them?
1 · JGWeissman · 15y
I agree that it is not a good use of our time to discuss political issues on Less Wrong. In fact, I think it would be harmful, because it would drown out other discussion and attract people who are not prepared to discuss it rationally. However, we should discuss politics in other forums, using what we have learned here. We should be able to avoid seeing arguments as soldiers. I would like to spread rationality techniques among those who regularly participate in politics. (Though I am not sure how. Leading by example has been too subtle in my experience, and direct instruction leads to emotional defensiveness. It might be interesting to have debates moderated by a Less Wrong member, where it could be seen as their proper role to point out biases.)
1 · orthonormal · 15y
I immediately thought of the Confessors...

"We already have lots of good ideas no one will listen to."

This is my primary thought on all such sentiments. The best thing for people here to do would probably be to stop worrying about altruism and start trying to get rich. Once you're rich, your altruism will actually mean something.

Most of you are rich by historical standards, and by the standards of the world. So think carefully about just how "rich" will be "enough" to "actually mean something."

4 · Daniel_Burfoot · 15y
I'm not sure your standard of wealth is the correct one. Most modern Americans aren't wealthy enough to hire full-time servants; by that standard of wealth there are probably more wealthy people in India, and were probably more wealthy Americans per capita in the 1920s. I interpret NN's statement as follows: "the wealth distribution has a long tail, so that the majority of philanthropic impact is caused by outliers (Extremistan); it's more important to try to become an outlier yourself than to worry about whether to donate your yearly $50 to Greenpeace".
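The "Extremistan" point can be made concrete with a toy simulation. The Pareto tail parameter and sample size below are arbitrary assumptions, chosen only to show how heavy-tailed wealth lets outliers dominate totals:

```python
import random

random.seed(0)

# Assumed toy model: wealth drawn from a heavy-tailed Pareto
# distribution (shape alpha = 1.2, chosen for illustration).
wealth = sorted((random.paretovariate(1.2) for _ in range(100_000)),
                reverse=True)

# Share of all wealth held by the richest 1% of the sample.
top_one_percent = sum(wealth[:1000]) / sum(wealth)
```

On numbers like these, whether one more person gives their yearly $50 matters far less than whether one more person ends up in the tail.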
6 · Z_M_Davis · 15y
Don't you think modern household convenience machines are more useful than a servant? Think of electric lights, dishwashers, clotheswashers, personal computers, &c., &c.
1 · nazgulnarsil · 15y
Money is a stand-in for other (harder to quantify) metrics of impact on the future. Resource distribution in general would be better if it were allocated rationally, would it not? Thus we should try to take as much control of resource distribution as we can. In contrast, you're speaking from the perspective of satisfying material wants, by which standard we all already live like kings of other ages.

what traps can we defuse in advance?

We care about saving the world and we care about the truth, so sometimes we start caring too much about the ideas that we think represent those things. How can we foster detachment? How can we encourage people to consider an idea even if they don't like it, and then encourage people to relinquish an idea after it has been considered and fairly rejected?

The following paradigm has worked for me:

It's natural to be afraid of considering an idea that we know is false. Thus it is useful to occasionally practice considering id... (read more)

5 · Steve_Rayhawk · 15y
Related: Is That Your True Rejection?, Words as Mental Paintbrush Handles (arguments as paintbrush handles for emotional responses). Eliezer's counter-argument in "The Pascal's Wager Fallacy Fallacy" is an example of this mistake. Arguments from the Pascal's Wager Fallacy aren't paintbrush handles for expected utility computations, they're paintbrush handles for the fear of being tricked in confusing situations and the fear of exhibiting markers of membership in a ridiculed or rejected group.

Survey: are you motivated to improve or save the world?

This survey aims to determine if there is significant consensus or disparity. It is in response to the datapoint presented here.

If you would like to qualify or explain your response, feel free to do so as a comment to the appropriate response.

  • Note that this is a general solution to the problem of conducting a quick off-the-cuff survey on LW without affecting karma, but you need to be able to view negative scoring comments.

  • If you want to leave me with positive karma, please keep the survey neutral a

... (read more)

Upvote here if your answer is 'yes'.

(And downvote the downvote post to neutralize karma.)

1 · [anonymous] · 15y
I like your solution Byrnema! Something you could consider is adding "(and downvote the downvote post to neutralise karma)" to the two alternatives. This somewhat alleviates the problem of the non-displayed negative karma post. Edit: Pardon me, I just realised that my replying obfuscates the survey. Since people will inevitably have comments on a topic that don't qualify as either 'yes' or 'no', an extra post by the surveyor into which replies can be made would be useful.
0 · byrnema · 15y
Upvote here if your answer is 'no'. (And downvote the downvote post to neutralize karma.)
-28 · byrnema · 15y

Here's Wikipedia's list of Forbidden Words, which I think has some good examples of how language can be subtly loaded on controversial / emotionally charged issues. Diligently watching out for that sort of thing is probably one of the best things we could do to avoid political discussions degenerating.

0 · Vladimir_Nesov · 15y
That doesn't cut it. An easy-to-use, fairly effective technique, but not a game-defining one. Try enforcing that on a random crowd.
6 · steven0461 · 15y
We could consider making a list of similar guidelines that we wouldn't want to enforce generally, but that together could provide a sort of cognitive clean room to discuss super-touchy subjects in. "Never mention how someone's false beliefs could arise from flaws in their personality even when that's actually happening" seems like another important one. Probably ban sarcasm. Possibly even ban anecdotes and analogies.
7 · MBlume · 15y
If two people have a persistent disagreement of fact, eventually the inescapable conclusion is that they do not fully trust one another as rationalists. Exploring how this came to be the case is the first step to changing the situation. I think ideally what we need is a space in which we can suggest flaws in a person's personality, and still be friends the next day. Is that possible?
3 · steven0461 · 15y
Discussions among rationalists needn't involve differences of opinion; they can instead involve differences of personal impression. That said, there are real differences of opinion among rationalists. I'm not sure, however, that we need to resort to psychoanalysis to resolve them -- after all, argument screens off personality.
6 · AnnaSalamon · 15y
Great idea. I'd say the biggest useful guideline here is that on mind-killing subjects we should make a norm of only saying the pieces we actually know. That is, we should cite evidence for all conclusions, or, better still, cite the real causes of our beliefs, and we should keep our conclusions carefully to only what is almost tautologically implied by that evidence. We should be extra-precise. And we should not, really really not, bring in extraneous issues if there's any way to avoid them. When people try to talk about AI risks, say, without background, they often come up with plausible this and plausible that, and the topics and misconceptions multiply faster than one can sort them out. Whereas interested interlocutors even without much rationality background who have taken the time to sort through the sub-issues one at a time, slowly, sorting through the causes of each intuition and the sum total of evidence on that point, in my experience generally have managed useful conversations.
0 · Vladimir_Nesov · 15y
That's just generally raising the level of fallacy alert, maybe specifically around the politics-induced fallacies. It should be default behavior whenever the fallacious arguments start raining down, around any issue. A typical battle ground for x-rationality skills in action, not a special case.
1 · steven0461 · 15y
There's a difference between just being hypersensitive to bad reasoning (usually a good idea), and being hypersensitive to anything that could directly or indirectly cause emotions to flare up (usually not worth the bother).
0 · Relsqui · 14y
Molybdenumblue said it really well elsewhere: Yes, hypersensitivity is by definition uncalled for, but when attempting to communicate with human beings and encourage their reply, it's clearly useful to choose words which are less likely to invoke negative emotions. It's possible to keep the juggling balls of precision, reason, and sensitivity all in the air at the same time; that it can be difficult is not sufficient reason not to try.
0 · Vladimir_Nesov · 15y
Hence I mentioned escalation of your level of sensitivity, meaning to refer to any factors that (potentially) deteriorate constructive thinking. Being hypersensitive to bad reasoning isn't always a good idea, for example if you don't care to reeducate the interlocutor.

I think it will be very necessary to carefully frame what it would be that we might wish to accomplish as a group, and what not. I say this because I'm one of those who thinks that humanity has less than a 50% chance of surviving the next 100 years, but I have no interest in trying to avert this. I am very much in favour of humanity evolving into something a lot more rational than what it is now, and I don't really see how one can justify saying that such a race would still be 'humanity'. On the other hand, if the worry is the extinction of all rational th... (read more)

0 · byrnema · 15y
I wonder how many rationalists share this view. If a significant number, it would be worthwhile to even discuss this first, in hopes to muster a broader consensus about what the group should do or even to just be aware of the reasons for lack of agreement.

If politicians start following expected utility consequentialism, special interest groups will be able to exploit the system by manufacturing in themselves "offense" (extreme emotional disutility) at unfavored measures, forcing your maximizer to give in to their demands. To avoid this, you need a procedure for distinguishing "warranted" offense from "unwarranted" offense: some baseline of personal rights ultimately derived from something other than self-assessed emotional utility.

If you see a way around this difficulty, let m... (read more)
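The exploit described above can be sketched as a toy model; the planner, the factions, and all the payoff numbers are assumptions for illustration only:

```python
# Toy sketch: a naive planner maximizes the sum of self-reported
# utilities, so a faction can swing any decision by inflating its
# reported "offense" at the disfavored option.

def naive_planner(reports):
    """Pick the option maximizing total self-reported utility."""
    totals = {}
    for option_utilities in reports.values():
        for option, u in option_utilities.items():
            totals[option] = totals.get(option, 0) + u
    return max(totals, key=totals.get)

honest = {
    "majority": {"reform": +2, "status quo": 0},
    "faction":  {"reform": -1, "status quo": 0},
}
strategic = {
    "majority": {"reform": +2, "status quo": 0},
    "faction":  {"reform": -100, "status quo": 0},  # manufactured offense
}
```

With honest reports the planner picks "reform"; once the faction inflates its reported disutility, the same rule capitulates to "status quo". Without a way to audit reported disutility, sincere-seeming outrage is a winning strategy.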

0 · Vladimir_Nesov · 15y
I don't see the object of attack in the room. An exploration of potential utility-maximization political frameworks and their practical pitfalls would possibly be interesting, although in practice I expect this sort of institution to turn into a kind of market, not so much politician-mediated.
2 · cousin_it · 15y
I meant to attack this part of ciphergoth's post: I didn't intend to criticize any real or hypothetical political system. The same emotional exploit could easily defeat a community of rationalists independently evaluating political measures for their utility, as ciphergoth seems to propose.
0 · Vladimir_Nesov · 15y
Well, since you've easily recognized this exploit already at the hypothetical stage, this kind of vulnerability won't be a problem. Any consequentialist framework should be able to fight moral sabotage, for example by introducing laws that disincentivize it.
7 · cousin_it · 15y
Before disincentivizing, you face the problem of defining and recognizing moral sabotage. It doesn't sound trivial to me. Remember, groups don't admit to using the outrage tactic; they do it sincerely, sometimes over several generations of members. I repeat the question: how does a rationalist tell "warranted" emotional disutility from "unwarranted" in a fair way?
3 · steven0461 · 15y
Incentive effects are hugely important, but a utilitarian decision process that causes predictable harm is not a true utilitarian decision process. Your question is a tough one, but it's answerable in principle.
1 · Paul Crowley · 15y
I don't see the problem in principle with a utilitarian deciding that giving in to an instance of moral sabotage will greatly increase later incidence of moral sabotage, resulting in total disutility greater than the manufactured weeping and gnashing of teeth you face if you stand against it now.
1 · cousin_it · 15y
So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits. Well... which should I pick, then? Looks like we've run into another of those nasty recursive problems: I choose my utility function depending on what every other agent could do to exploit me, and everyone else does the same. The only natural solution might well turn out to be everyone caring about their own welfare and no one else's, to avoid "mugging by suffering". Let's model the problem mathematically and look for other solutions - I love this stuff.
4 · loqi · 15y
No, it needs a different method of maximizing expected utility. Avoiding moral sabotage doesn't reflect a preference, it's purely instrumental.
0 · cousin_it · 15y
Thanks, this clicked.
0 · Vladimir_Nesov · 15y
A related idea: moral sabotage is what happens when one player in the Ultimatum game insists on taking more than a fair share, even if what a fair share is depends on his preferences.

Come on, Ciphergoth, the problem of saving humanity would be too easy if you could convince a large number of humans to go along with your proposals! You have a harder challenge: save humanity in spite of the apathy, and in many cases intransigent opposition, of the humans.

I have a hard time believing that anyone in power is serious about saving humanity. There are so many obvious and easy things that could be done, that would clearly be enormously helpful, that no one with power is doing or even suggesting. Politics is almost entirely a signalling game.

A... (read more)

I will admit to an estimate higher than 95% that humanity or its uploads will survive the next hundred years. Many of the "apocalyptic" scenarios people are concerned about seem unlikely to wipe out all of humanity; so long as we have a breeding population, we can recover.

1 · Nick_Tarleton · 15y
No significant risk of unFriendly AI (especially since you apparently consider uploading within 100 years plausible)? Nanotech war? Even engineered disease? I'm surprised.
1 · mattnewport · 15y
The comment appears to me to be saying there is no significant risk of wiping out all of humanity, not that there is no significant risk of any of the dangers you describe causing significant harm. I think an unfriendly AI is somewhat likely for example but put a very low probability on an unfriendly AI completely wiping out humanity. The consequences could be quite unpleasant and worth working to avoid but I don't think it's an existential threat with any significant probability.
8 · Vladimir_Nesov · 15y
That's a very strange perspective. Other threats are good in that they are stupid, so they won't find you if you colonize space or live on an isolated island, or have a lucky combination of genes, or figure out a way to actively outsmart them, etc. Stupid existential risks won't methodically exterminate every human, and so there is a chance for recovery. Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet. (Indifference works this way too, it's the application of power indifferent to humankind that is methodical, e.g. Paperclip AI.)
2 · [anonymous] · 15y
It's not very strange. It's a perspective that tends to match most human intuitions. It is, however, a very wrong perspective.
0 · Nominull · 15y
Consider: humanity is an intelligence, one not particularly friendly to, say, the fieldmouse. Fieldmice are not yet extinct.
7 · MBlume · 15y
I think it is worth considering the number of species to which humanity is largely indifferent which are extinct as a result of humanity optimizing other criteria
2 · Nick_Tarleton · 15y
Humans satisfice, and not very well at that compared to what an AGI could do. If we effectively optimized for... almost any goal not referring to fieldmice... fieldmice would be extinct.
1 · Vladimir_Nesov · 15y
Humanity is weak.
0 · Nominull · 15y
Humanity is pretty damn impressive from a fieldmouse's perspective, I dare say!
0 · MBlume · 15y
Yet humanity cannot create technology on the level of a fieldmouse.
-6 · byrnema · 15y
0 · MichaelHoward · 15y
Fieldmice (outside of Douglas Adams fiction) aren't any particular threat to us in the way we might be to the Unfriendly AI. They're not likely to program another us to fight us for resources. If fieldmice were in danger of extinction we'd probably move to protect them, not that that would necessarily help them.
-1 · loqi · 15y
Not on another planet, no. But I wonder how practical a constantly accelerating seed ship will turn out to be.
-2 · mattnewport · 15y
You are assuming that mere intelligence is sufficient to give an AI an overwhelming advantage in any conflict. While I concede that is possible in theory I consider it much less likely than seems to be the norm here. This is partly because I am also skeptical about the existential dangers of self replicating nanotech, bioengineered viruses and other such technologies that an AI might attempt to use in a conflict. As long as there is any reasonable probability that an AI would lose a conflict with humans or suffer serious damage to its capacity to achieve its goals, its best course of action is unlikely to be to attempt to wipe out humanity. A paperclip maximizer for example would seem to better further its goals by heading to the asteroid belt where it could advance its goals without needing to devote large amounts of computational capacity to winning a conflict with other goal-directed agents.
2 · mattnewport · 15y
For people who've voted this down, I'd be interested in your answers to the following questions:
1) Can you envisage a scenario in which a greater-than-human-intelligence AI with goals not completely compatible with human goals would ever choose a course of action other than wiping out humanity?
2) If you answered yes to 1), what probability do you assign to such an outcome, rather than an outcome involving the complete annihilation of humanity?
3) If you answered no to 1), what makes you certain that such a scenario is not possible?
0Mario15y
I agree generally, but I think when we talk about wiping out humanity we should include the idea that if we were to lose a significant portion of our accumulated information it would be essentially the same as extinction. I don't see a difference between a stone-age-tech group of humans surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.
1Nick_Tarleton15y
See In Praise of Boredom and Sympathetic Minds: random evolved intelligent species are not guaranteed to be anything we would consider valuable.
1Nominull15y
I like humans. I think they're cute :3
0mattnewport15y
We have pretty solid evidence that a stone-age-tech group of humans can develop a technologically advanced society in a few tens of thousands of years. I imagine it would take considerably longer for squirrels to get there, and I would be much less confident they can do it at all. It may well be that human intelligence is an evolutionary accident that has only happened once in the universe.
0Mario15y
The squirrel civilization would be a pretty impressive achievement, granted. The destruction of this particular species (humans) would seemingly be a tremendous loss universally, if intelligence is a rare thing. Nonetheless, I see it as only a certain vessel in which intelligence happened to arise. I see no particular reason why intelligence should be specific to it, or why we should prefer it over other containers should the opportunity present itself. We would share more in common with an intelligent squirrel civilization than a band of gorillas, even though we would share more genetically with the latter. If I were cryogenically frozen and thawed out a million years later by the world-dominating Squirrel Confederacy, I would certainly live with them rather than seek out my closest primate relatives. EDIT: I want to expand on this slightly. Say our civilization were to be completely destroyed, and a group of humans that had no contact with us were to develop a new civilization of their own concurrent with a squirrel population doing the same on the other side of the world. If that squirrel civilization were to find some piece of our history, say the design schematics of an electric toothbrush, and adopt it as a part of their knowledge, I would say that for all intents and purposes, the squirrels are more "us" than the humans, and we would survive through the former, not the latter.
0mattnewport15y
I don't see any fundamental reason why intelligence should be restricted to humans. I think it's quite possible that intelligence arising in the universe is an extremely rare event though. If you value intelligence and think it might be an unlikely occurrence then the survival of some humans rather than no humans should surely be a much preferred outcome? I disagree that we would have more in common with the electric toothbrush wielding squirrels. I've elaborated more on that in another comment.
1Mario15y
Preferred, absolutely. I just think that the survival of our knowledge is more important than the survival of the species sans knowledge. If we are looking to save the world, I think an AI living on the moon pondering its existence should be a higher priority than a hunter-gatherer tribe stalking wildebeest. The former is our heritage, the latter just looks like us.
0Vladimir_Nesov15y
Does this imply that you are OK with a Paperclip AI wiping out humanity, since it will be an intelligent life form much more developed than we are?
0Mario15y
If I implied that, it was unintentional. All I mean is that I see no reason why we should feel a kinship toward humans as humans, as opposed to any species of people as people. If our civilization were to collapse entirely and had to be rebuilt from scratch, I don't see why the species that is doing the rebuilding is all that important -- they aren't "us" in any real sense. We can die even if humanity survives. By that same token, if the paperclip AI contains none of our accumulated knowledge, we go extinct along with the species. If the AI contains some of our knowledge and a good degree of sentience, I would argue that part of us survives despite the loss of this particular species.
3Paul Crowley15y
Bear in mind, the paperclip AI won't ever look up to the broader challenges of being a sentient being in the Universe; the only thing that will ever matter to it, until the end of time, is paperclips. I wouldn't feel in that instance that we had left behind a creature that represented our legacy, no matter how much it knows about the Beatles.
0Mario15y
OK, I can see that. In that case, maybe a better metric would be the instrumental use of our accumulated knowledge, rather than its mere possession. Living in a library doesn't mean you can read, after all.
3Paul Crowley15y
What I think you're driving at is that you want it to value the Beatles in some way. Having some sort of useful crossover between our values and its is the entire project of FAI.
1Mario15y
I'm just trying to figure out under what circumstances we could consider a completely artificial entity a continuation of our existence. As you pointed out, merely containing our knowledge isn't enough. Human knowledge is a constantly growing edifice, where each generation adds to and builds upon the successes of the past. I wouldn't expect an AI to find value in everything we have produced, just as we don't. But if our species were wiped out, I would feel comfortable calling an AI which traveled the universe occasionally writing McCartney- or Lennon-inspired songs "us." That would be survival. (I could even deal with a Ringo Starr AI, in a pinch.)
1Paul Crowley15y
I strongly suspect that that is the same thing as a Friendly AI, and therefore I still consider UFAI an existential risk.
1Vladimir_Nesov15y
The Paperclip AI will optimally use its knowledge about the Beatles to make more paperclips.
0mattnewport15y
How much of what it means to be human do you think is cultural conditioning versus innate biological tendency? I think the evidence points to a very large biologically determined element to humanity. I would expect to find more in common with a hunter gatherer in a previously undiscovered tribe, or even with a paleolithic tribesman, than with an alien intelligence or an evolved dolphin. If you read ancient Greek literature, it is easy to empathize with most of the motivations and drives of the characters even though they lived in a very different world. You could argue that our culture's direct lineage from theirs is a factor but it seems that westerners can recognize as fellow humans the minds behind ancient Chinese or Indian texts with less shared cultural heritage with our own.
1Mario15y
I don't consider our innate biological tendencies the core of our being. We are an intelligence superimposed on a particular biological creature. It may be difficult to separate the aspects of one from the other (and I don't pretend to be fully able to do so), but I think it's important that we learn which is which so that we can slowly deemphasize and discard the biological in favor of the solely rational. I'm not interested in what it means to be human, I want to know what it means to be a person. Humanity is just an accident as far as I'm concerned. It might as well have been anything else.
0loqi15y
I'm curious as to what sorts of goals you think a "solely rational" creature possesses. Do you have a particular point of disagreement with Eliezer's take on the biological heritage of our values?
0Mario15y
Oh, I don't know that. What would remain of you if you could download your mind into a computer? Who would you be if you were no longer affected by the level of serotonin or adrenaline you are producing, or if pheromones didn't affect you? Once you subtract the biological from the human, I imagine what remains to be pure person. There should be no difference between that person and one who was created intentionally or one that evolved in a different species, beyond their personal experiences (controlling for the effects of their physiology). I don't have any disagreement with Eliezer's description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever. I could be wrong, however. It may not be possible to be a person without certain biological-like reactions. I can certainly see how this would be the case for people in early learning stages of development, particularly if your goal is to mold that person into a friendly one. Even then, though, I think it would be beneficial to keep those parts to the bare minimum required to function.
1loqi15y
That depends on the resolution of the simulation. Wouldn't you agree? I think you're using the word "biological" to denote some kind of unnatural category. The reasons you see for why any of us "should" do anything almost certainly have biologically engineered goals behind them in some way or another. What of self-preservation?
1Mario15y
Not unnatural, obviously, but a contaminant to intelligence. Manure is a great fertilizer, but you wash it off before you use the vegetable.
0loqi15y
I meant this kind of unnatural category. I don't quite know what you mean by "biological" in this context. A high-resolution neurological simulation might not require any physical carbon atoms, but the simulated mind would presumably still act according to all the same "biological" drives.
-1[anonymous]15y
I'm certain.
-1mattnewport15y
I take much the same position.

My impression is that the material covered on OB/LW is more than sufficient to allow people that really understand the material to talk politics without exploding. I don't think we need any politics specific tricks for those that are likely to be helpful contributors.

This came up in the Santa Barbara LW meetup, and I felt like that group could have talked politics the right way. The implicit consensus seemed to be "Yeah, it'd probably work", though we didn't try.

Of course, with a smaller group and stronger selection pressures it is less likely to... (read more)

1Vladimir_Nesov15y
You also need to sufficiently care about the specific question to work on it, which is not a given. Less general, less popular.
0davidr15y
I'm not sure it's just a matter of rationality (which it is), but also of complexity; i.e., predicting or estimating utility for policy A vs. B can be impossible to model because of chaotic effects etc. Just because most of the mistakes we see when people argue politics are rather obvious (from a rationalist point of view) doesn't mean they are the only ones. Otherwise social science and economics would be sciences, with a capital S.
[-][anonymous]15y10

If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening.

And you say that like it is a bad thing! The possibility of creating just such a utopia sounds like a damn good motivating influence for concerted altruistic effort and existential risk mitigation to me!

0kpreid15y
I understood ciphergoth's description as “what we have been discussing being useful for nothing more than these tasks”, not a world where those tasks are all you need to deal with.
0[anonymous]15y
As did I, kpreid, and I do appreciate ciphergoth's overall message. I don't, however, accept the implicit argument (the implication clear in the loaded language) that those basic activities are inadequate, or evidence that greater political influence is necessary. I give you credit for noticing that the second, larger of my exclamations does not particularly refute ciphergoth. In fact, it was a tangent which served as filler and to lighten the contradiction somewhat. It also hints at one reason that I consider those activities of value: if something would exist in a utopia that I would accept, then chances are that making it in this banal reality is a good thing in itself.
1Paul Crowley15y
Not sure what you're driving at. I value both getting up and getting laid, though I'm not sure I appreciate the preparation for Omega so much. If you agree that we could usefully spend more time talking about concerted altruistic effort and existential risk mitigation, not least in order to change the world so that we can concentrate more on fun, then I think you agree with the thrust of the paragraph you quote.
0[anonymous]15y
I don't agree, but it is not really a disagreement worth breaking down to our respective implicit and explicit premises, arguments and conclusions and any potential conflict between the two positions.

I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it.

For the record I would put it at levels overwhelmingly higher than 95%. More like 99.999%.

5jimmy15y
You can't get away with having such extreme probabilities when a bunch of smart and rational people disagree. There are reasons why the whole Aumann agreement thing doesn't work perfectly in real life, but this is an extreme failure. If a bunch of people on LW think it's only 50% likely and you think there's only a 0.1% chance that they're right and you're wrong (which is already ridiculously low), it still brings your probability estimate down to around 99.95%. This is a 50-fold increase in the probability that the world is going to end over what you stated. Either you have some magic information that you haven't shared, or you're hugely overconfident. http://lesswrong.com/lw/9x/metauncertainty/ http://lesswrong.com/lw/3j/rationality_cryonics_and_pascals_wager/69t#comments
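The mixture arithmetic above can be checked directly. A minimal sketch, assuming taw's stated 99.999% survival figure, a 50% figure for the peers, and jimmy's illustrative 0.1% credence that the peers are right:

```python
own_estimate = 0.99999    # taw's stated P(humanity survives 100 years)
peer_estimate = 0.50      # the estimate attributed to the LW consensus
p_peers_right = 0.001     # a "ridiculously low" 0.1% credence in the peers

# Law of total probability over "who is right":
mixed = (1 - p_peers_right) * own_estimate + p_peers_right * peer_estimate

extinction_before = 1 - own_estimate  # 0.001%
extinction_after = 1 - mixed          # ~0.051%, roughly a 50-fold increase
```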
1taw15y
You cannot selectively apply Aumann agreement. If you want to count the tiny bunch of people who believe in AI foom, you must also take into account 7 billion people, many of them really smart, who definitely don't. I don't have this problem, as I don't really believe that using Aumann agreement is useful with real humans. Or you could count my awareness of insider overconfidence as magic information: http://www.overcomingbias.com/2007/07/beware-the-insi.html
0jimmy15y
This is Less Wrong we're talking about. Insider overconfidence isn't "magic information". See my top level post for a full response.
0homunq15y
Large groups of smart people are frequently wrong about the future, and overwhelmingly so about the non-immediate future. 0.1% may be low but it's not ridiculously so. (Also "they're right and you're wrong" is redundant. This has nothing to do with any set of scenario probabilities being "right". And any debate of "p=.9" "no, p=.1" is essentially silly because it misunderstands both the meaning of probability as a function of knowledge and our ability to create models which give meaningfully-accurate probabilities.)
0Vladimir_Nesov15y
Subjective probability is (in particular) a tool for elicitation of model parameters from expert human gut-feelings, which you can then use to find further probabilities and align them with other gut-feelings and decisions, gaining precision from redundancy and removing inconsistencies. The subjective probabilities don't promise to immediately align with physical frequencies, even where the notion makes sense. It is a well-studied and useful process, you'd need a substantially more constructive reference than "it's silly" (or you could just seek a reasonable interpretation).
1homunq15y
As you explain it, it's not silly. Do you have a link for a top-level post that puts this kind of caveat on probability assignments? Personally, I think that if most people here understood it that way, they'd use more qualified language when talking about subjective probability. I also think that developing and standardizing such qualified language would be a useful project.
0Vladimir_Nesov15y
It is the sense in which the term "probability" is generally understood on OB/LW, with varying levels of comprehension by specific individuals. There are many posts on probability, both as an imprecise tool and an ideal (but subjective) construction. They should probably be organized in the Bayesian probability article on the wiki. In the meantime, you are welcome to look for references in the Overcoming Bias archives. You may be interested in the following two posts, related to this discussion: "Probability is in the Mind" and "When (Not) To Use Probabilities".
3homunq15y
I myself would be disappointed if over half of LW put the probability of a single biological human (not an upload, not a reconstruction - an actual descendant with the appropriate number of ancestors alive today) alive in 100 years under 95%. I would consider that to be a gross instance of all kinds of biases. I'm not going to argue about scenarios here, just point out that any scenario which tends inevitably to wipe out humanity within one lifetime is totally unimaginable. That doesn't mean implausible, but it does mean improbable. Personally, I do not believe that any person, group of people, or human-built model to date can consistently predict the probability of defined classes of black-swan events ("something that's never happened before which causes X", where X is a defined consequence such as humanity's extinction) to within even an order of magnitude for p/(1-p). I doubt anybody can get even to within two orders of magnitude consistently. (I also doubt that this hypothesis of mine will be clearly decidable within the next 20 years, so I'm not particularly inclined to listen to philosophical arguments from people who'd like to discard it.) What I'm saying is, we should stop trying to put numbers on this without big error bars. And I've yet to see anybody propose an intelligent way to deal with probabilities like 10^(-6 +/- 4); just meta-averaging over the distribution of possible probabilities, to come up with something like 10^-3, seems to discard data and to lead to problems. However, that's the kind of probability I'd put on this lemma. ("Earth made uninhabitable by normal cosmic event and rescue plans fail" would probably put a floor somewhere above 10^-22 per year.) "The chance we're all wrong about something totally unprecedented has got to be less than 99.9%" is total hubris. Yes, totally unprecedented things happen every day. But telling yourselves stories about AGI and foom does not make these stories likely. This is not, by the way
0homunq15y
Oh, also, I'd accept that the risk of humanity being seriously hosed within 100 years, or extinct within 1000 years, is significant - say, 10^(-3 +/- 4) which meta-averages to something like 15%. ("Seriously hosed" means gigadeath events, total enslavement, or the like. Note that we're already moderately hosed and always have been, but that seriously hosed is still distinguishable.)
-1Vladimir_Nesov15y
This is an assertion of your confidence in extinction risk being below 5%. Not understanding a phenomenon, being unable to estimate its probability, doesn't give you an ability to place its probability below a strict bound. Your assertion of confidence contradicts your assertion of confusion.
3homunq15y
I have confidence that nobody here has secret information that makes human extinction much more likely - because almost no information which currently exists could have more than a marginal bearing on a result which, if likely, is a result of human (that is, intelligent) interaction. Therefore I have confidence that the difference in estimates is largely due not to information, but to models. I have confidence that inductive models - say, "how often does a random species survive any hundred-year period, correcting for initial population?" - give answers over 95%, which should be considered the default. Therefore, I have confidence that a community of people who generally give lower estimates is subject to some biases (such as narrative bias). That doesn't mean LW's wrong and I'm right. But the belief that human extinction within a century is likely clearly puts LW in a minority of humanity - even a minority of rational atheists. And the fact that there is substantial agreement within the LW community on this, when uncertainty is clearly so high that orders of magnitude of disagreement are possible, makes me suspect bias. Also, I find it funny that people will argue passionately over estimates that differ in log(p/q) from -1 to +1 (~10% to ~90%), but couldn't care less over the difference from, say, -9 to -7 (.0001% vs .000001%) or 7 to 9. This is in one sense the right attitude for people who think they can do something about it, but it ends up biasing numbers towards log(p/q)=0 [i.e. 50%], since you are more likely to get argument from somebody whose estimate is on the other side of 50% from yours.
0Vladimir_Nesov15y
The fact that we believe something unusual is only weak evidence for the validity of that unusual belief, you are right on that. And given the hypothesis that we are wrong, which is dominant while all you have is the observation that we believe something unusual, you can draw the conclusion that we are wrong because of some systematic error of judgment that makes most here claim the unusual belief. To move past this point, you have to consider the specific arguments, and decide for yourself whether to accept them. Most of the beliefs people can hold intuitively are about 50% in certainty. The beliefs far away from this point aren't useful as primitive concepts, classifying the possible events on one side or the other, as most everything is only on one side, and the human mind can't keep track of their levels of certainty. New concepts get constructed that are more native to the human mind and express the high-certainty concepts in question only in combinations, or that are supported by non-intuitive procedures for processing levels of certainty. But if the argument is dependent on the use of intuition, you aren't always capable of moving towards certainty, so you remain in doubt. This is the case for unknown unknowns, in particular.
2homunq15y
You clipped out "to within an order of magnitude". I stated that my best-guess probability for human extinction within a century was 10^(-6 +/- 4). This is a huge confusion - 9 orders of magnitude on the probability - yet still means that I have over 80% confidence that the probability is under 10^-2. There is no contradiction here. (It also means that, despite believing that extinction is probably one-in-a-million, I should treat it as more like one-in-a-thousand, because averaging over the meta-probability distribution naturally weights the high end. It would be a pity if this effect, of uncertainty inflating small probabilities, resulted in social feedback. When you hear me say "we should treat it as a .1% risk", I am implicitly stating that all models I can credit give a significantly lower risk. If your best model's risk-estimate is .01%, I am actually telling you that I think your model overestimates the risk.)
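homunq's point that meta-averaging weights the high end can be made concrete. An illustrative sketch only: treat the exponent in 10^(-6 +/- 4) as uniformly distributed on [-10, -2] (an assumed distribution for concreteness; the comments don't specify one) and compute the expected probability:

```python
import math

lo_exp, hi_exp = -10.0, -2.0  # exponent range for 10^(-6 +/- 4)

# For U ~ Uniform(lo, hi), E[10^U] = (10^hi - 10^lo) / ((hi - lo) * ln 10)
expected_p = (10**hi_exp - 10**lo_exp) / ((hi_exp - lo_exp) * math.log(10))
# ~5.4e-4: on the order of 10^-3, even though the central guess is 10^-6,
# because the mean is dominated by the high end of the exponent range
```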
0Vladimir_Nesov15y
So, where did you get those numbers from? 10^-6? 10^-2? Why not, say, 1-10^-6 instead? Gut feeling again, and that's inevitable. You either name a number, or make decisions without the help of even this feeble model, choosing directly. From what people on this site know, they believe differently from you. I have one of the lowest estimates: 30% for not killing off 90% of the population by 2100. Most of it comes from Unfriendly AI, with an estimate of 50% for AGI foom by 2070, or 70% by 2100 (an expectation of relatively low-hanging fruit; it levels off as time goes on) if nothing goes wrong with the world, and 3/4 of that to Unfriendly AI, given my understanding of how hard it is to find the right answer among many efficient world-eating possibilities, and human irrationality, making it likely that the person to invent the first mind won't think about the consequences. That's already 55% total extinction risk; add some more for biological (at least, human-inhabiting) weapons, such as an engineered pandemic (not total extinction, but easily 90%), and new possible goodies the future has to offer. It'll only get worse until it gets better. On second thought, I should lower my confidence from these explicit models, they seem too much like planning. Make that 50%.
0steven046115y
When you speak of "the probability", what information do you mean that to take into account and what information do you mean that not to take into account? What things does a rational agent need to know for the agent's subjective probability to become equal to the probability? (Not a rhetorical question.)
2homunq15y
"the probability" means something like the following: take a random selection of universe-histories starting with a state consistent with my/your observable past and proceeding 100 years forward, with no uncaused discontinuities in the laws of physics, to a compact portion of a wave function (that is "one quantum universe", modulo quantum computers which are turned on). What portion of those universes satisfy the given end state? Yes, I'm doing what I can to duck the measure problem of universes, sorry. And of course this is underdefined and unobservable. Yet it contains the basic elements: both knowledge and uncertainty about the current state of the universe, and definite laws of physics, assumed to independently exist, which strongly constrain the possible outcomes from a given initial state. On a more practical level, it seems to be the case that, given enough information and study of a class of situations, post-hoc polynomial-computable models which use non-determinism to model the effects of details that have been abstracted out can provide predictions about some salient aspects of that situation under certain constraints. For instance, the statement "42% of technological societies of intelligent biological agents with access to fissile materials destroy themselves in a nuclear holocaust" could, subject to the definitions of terms that would be necessary to build a useful model, be a true or false statement. Note that this allows for three completely different kinds of uncertainty: uncertainty about the appropriate model(s), uncertainty about the correct parameters for those model(s), and uncertainty inherent within a given model. In almost all questions involving predicting nonlinear interactions of intelligent agents, the first kind of uncertainty currently dominates. That is the kind of uncertainty I'm trying (and of course failing) to capture with the error bar in the exponent. Still, I think my failure, which at least acknowledges the overwhelming pr
0Vladimir_Nesov15y
See the posts "Priors as Mathematical Objects", "Probability is Subjectively Objective" linked from the Priors wiki article.
0homunq15y
To get the right answer, you need to make an honest effort to construct a model which is an unbiased composite of evidence-based models. Metaphorical reasoning is permitted as weak evidence, but cannot be the only sort of evidence. And you also need to be lucky. I mean, unless you have the resources to fully simulate universes, you can never know that you have the right answer. But the process above, iterated, will tend to improve your answer.
2orthonormal15y
Without even going into different specific risks, you should beware the conjunction fallacy (or, more accurately, its flip side) when assigning such a high probability. A lack of details tends to depress estimates of an event that could occur as a result of many different causes, since if you aren't visualizing a full scenario it's tempting to say there's no way for it to occur. You're effectively asserting that not only are all of the proposed risks to humanity's survival this minuscule in aggregate, but that you're also better than 99.9% confident that there won't be invented or discovered anything else that presents a plausible existential threat. How do you arrive at such confidence of that?
1Vladimir_Nesov15y
Then, as a necessary condition (leaving other risks out of the discussion for the moment), you either don't believe in the feasibility of AGI, or you believe in an objective morality which any AGI will "discover". Which one is it?
4taw15y
I don't believe in the feasibility of any scenario like AGI foom. First, I fail to see how anybody taking an outside view on AI research - which is a clear instance of a class of sciences with extraordinary claims and a very long history of failure to deliver in spite of unusually adequate funding - can think otherwise; to me it all seems like an extreme case of insider bias to assign non-negligible probabilities to scenarios like that. Virtually no sciences with these characteristics delivered what they promised (even if they delivered something useful and vaguely related). Even if AGI happens, it is extraordinarily unlikely it will be any kind of foom, again based on the outside-view argument that virtually no disruptive technologies were ever foom-like. Both extraordinarily unlikely events would have to occur before we would be exposed to the risk of AGI-caused destruction of humanity, which even in this case is far from certain.
1loqi15y
It seems like you're reversing stupidity here. What correlation does a failed prediction have with the future?
4taw15y
It's not reverse stupidity - it's "reference class forecasting", which is a more specific instance of our generic "outside view" concept. I gather data about AI research as an instance, look at other cases with similar characteristics (hyped, overpromised, and underdelivered over a very long time span), and estimate based on that. It is proven to work better than the inside view of estimating based on the details of a particular case. http://en.wikipedia.org/wiki/Reference_class_forecasting
8AnnaSalamon15y
I agree that reference class forecasting is reasonable here. I disagree that you can get anything like the 99.999% probability you claim from applying reference class forecasting to AI projects. Since rare events happen, well, rarely, it would take an exceedingly large data-set before an "outside view" or frequency-based analysis would imply that our actual expected rate should be placed as low as your stated 0.001%. (If I flip a coin with unknown weighting 20 times, and get no heads, I should conclude that heads are probably rare, but my notion of "rare" here should be on the order of 1 in 20, not 1 in 100,000.) With more precision: let's say that there's a "true probability", p, that any given project's "AI will be created by us" claim is correct. And let's model p as being identical for all projects and times. Then, if we assume a uniform prior over p, and if n AI projects that have been tried to date have failed to deliver, we should assign a probability of (n+1)/(n+2) to the chance that the next project from which AI is forecast will also fail to deliver. (You can work this out by an integral, or just plug into Laplace's rule of succession.) If people have been forecasting AI since about 1950, and if the rate of forecasts or AI projects per decade has been more or less unchanged, the above reference class forecasting model leaves us with something like a 1/[number of decades since 1950 + 2] = 1/8 probability of some "our project will make AI" forecast being correct in the next decade.
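The rule-of-succession step above can be sketched directly (a minimal sketch, assuming six decades of failed forecasts between 1950 and the time of the comment):

```python
def laplace_next_failure(n):
    """After n consecutive failures, with a uniform prior on the unknown
    per-trial success rate, the probability that the next trial also fails
    is (n + 1) / (n + 2), by Laplace's rule of succession."""
    return (n + 1) / (n + 2)

decades_of_failed_forecasts = 6  # roughly the 1950s through the 2000s
p_fail = laplace_next_failure(decades_of_failed_forecasts)  # 7/8
p_ai_next_decade = 1 - p_fail                               # 1/8
```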
2loqi15y
Oops. You're totally right. That said, I still take issue with reference class forecasting as support for this statement: Considering that the general question "is the foom scenario feasible?" doesn't have any concrete timelines attached to it, the speed and direction of AI research don't bear too heavily on it. All you can say about it based on reference class forecasting is that it's a long way away if it's both possible and requires much AI research progress. I'm not sure "disruptive technology" is the obvious category for AGI. The term basically dereferences to "engineered human-level intelligence", easily suggesting comparisons to various humans, hominids, primates, etc.
Vladimir_Nesov · -1 points · 15y
A reasonable position, so long as you remain truly ignorant of what AI is specifically about.
taw · 2 points · 15y
I don't know if inside view forecasting can ever be more reliable than outside view forecasting. It seems that insiders, as a general and very robust rule, tend to be strongly overconfident, and see all kinds of reasons why their particular instance is different and will have a better outcome than the reference class.
http://www.overcomingbias.com/2007/07/beware-the-insi.html
http://en.wikipedia.org/wiki/Reference_class_forecasting
Vladimir_Nesov · 1 point · 15y
Try applying that to physics, engineering, biology, or any other technical field. In many cases, the outside view doesn't stand a chance.

One observation and a related suggestion:

(1) We've gone off-topic regarding the demands of this post. Ciphergoth asks what traps we can defuse in advance, before we start to talk about specific ideas to do with what one does in order to change the world. However, I'm neutral about not following instructions -- perhaps Ciphergoth hasn't asked the right question after all, and we need to triangulate towards the right question.

(2) I've got no idea how to begin answering some of the other problems that are being posed. (E.g., how can we best help the world?) S... (read more)

Richard Posner on the economics of the flu epidemic:

We need an overall "catastrophe budget" that would match expenditures to the net expected benefits of particular measures targeted at particular catastrophic threats.

If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening.

Sounds like good work if you can get it. ;-)

More seriously, though, if you can't handle the getting out of bed part, it seems like taking on much bigger tasks might be off the agenda. And if more people were getting laid in the evening, we might have less violent conflict in the world.

But I'm de... (read more)

So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make.

This sentence smuggles in the assumption that we are in a position to reduce existential risk.

Two big risks are global warming and nuclear war.

The projections for large changes in climate depend on continuing growth in wealth and population in order to get the high levels of carbon dioxide emissions needed to create the change. If it really goes horribly w... (read more)

Vladimir_Nesov · 5 points · 15y
Alan, since there are in fact known existential risks, you are jumping to conclusions here without even superficial research (or you are carefully hiding that fact by ignoring the conclusions you disagree with). Robyn Dawes:
Nick_Tarleton · 4 points · 15y
Seconded. Also see:
Nick Bostrom's "Existential Risks" paper from 2002
Global Catastrophic Risks

(Agreed, though, that global warming isn't a direct existential risk, but it could spur geopolitical instability or dangerous technological development. Disagree that global thermonuclear war is very unlikely, especially considering accidents, but even that seems highly unlikely to be existential.)
homunq · 0 points · 15y
I think that the original poster was discounting low-probability non-anthropogenic risks (sun goes nova, war of the worlds) and counting as "unknown unknowns" any risk which is unimaginable (that is, involves significant new developments which would tend to limit the capacity of human (metaphorical) reasoning to assess the specific probability or consequences at this time; this includes all fooms, gray goos, etc.).

I would agree with the poster that a general attitude of readiness (that is, education, democracy, limits on overall social inequality, and precautionary attitudes to new technologies) is probably orders of magnitude more effective at dealing with such threats than any specific measures until a specific threat becomes clearer.

And I dispute the characterization that, if I'm correct about the poster's attitudes, they're "carefully hiding conclusions [they] disagree with"; a refusal to consider vague handwaving categories of possibility like gray goo in the same class as much-more-specific possibilities like nuclear holocaust may not be your attitude, but that does not make it dishonest.
mattnewport · 4 points · 15y
I agree with your characterization of the risks of global warming and nuclear war. I get the impression that people allow the reasonably high probability of a few degrees of warming or a few nuclear attacks to unduly influence their estimates of the probability of true existential risk from these sources. In both cases I'm much more receptive to discussions of harm reduction than to scaremongering about 'the end of the world as we know it'.

The twentieth century has quite a few examples of events that caused tens of millions of deaths and yet did not represent existential risks. Moderate global warming or a few nuclear detonations in or over major cities would be highly disruptive events with a high cost in human lives, and they are certainly legitimate concerns, but they are not existential risks, and talking of them as such is unhelpful in my opinion.