
Comment author: _rpd 23 February 2016 06:36:55AM 1 point [-]

I really like this distinction. The closest I've seen is discussion of existential risk from a non-anthropocentric perspective. I suppose the neologism would be panexistential risk.

Comment author: G0W51 06 March 2016 04:25:24PM 0 points [-]

Panexistential risk is a good, intuitive name.

Comment author: philh 23 February 2016 01:56:10AM 0 points [-]

G0W51 is talking about universal x-risk versus local x-risk. Global thermonuclear war would be relevant for the great filter, but doesn't endanger anyone else in the universe. Whereas if Earth creates UFAI, that's bad for everyone in our light cone.

Comment author: G0W51 23 February 2016 05:32:12AM 0 points [-]

True. Also, the Great Filter is more akin to an existential catastrophe than to an existential risk, that is, the risk of such a catastrophe.

Comment author: G0W51 20 February 2016 07:25:54PM 3 points [-]

Is there a term for a generalization of existential risk that includes the extinction of alien intelligences or the drastic decrease of their potential? Existential risk, that is, the extinction of Earth-originating intelligent life or the drastic decrease of its potential, does not sound nearly as harmful if there are alien civilizations that become sufficiently advanced in place of Earth-originating life. However, an existential risk sounds far more harmful if it compromises all intelligent life in the universe, or if there is no other intelligent life in the universe to begin with. Perhaps this would make physics experiments more concerning than other existential risks: even if their chance of causing the extinction of Earth-originating life is much smaller than that of other existential risks, their chance of eliminating all life in the universe may be higher.

Comment author: ChristianKl 15 February 2016 09:26:03PM 1 point [-]

I think the distance is often double: retiring in 40 years and expecting the intelligence explosion in 80 years.

Comment author: G0W51 20 February 2016 07:18:03PM 0 points [-]

That sounds about right.

Comment author: ChristianKl 14 February 2016 05:18:32PM *  3 points [-]

Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future?

Why do you think they are a similar distance in the future? If you take the LW median estimate for the likely arrival of the intelligence explosion, that's later than when most people are going to retire.

If you look at the general population, most people consider the intelligence explosion even less likely.

Comment author: G0W51 15 February 2016 07:58:38PM 0 points [-]

It's later, but, unless I am mistaken, the arrival of the intelligence explosion isn't that much later than when most people will retire, so I don't think that fully explains it.

Comment author: gjm 14 February 2016 11:23:18AM 4 points [-]

First: Most people haven't encountered the idea (note: watching Terminator does not constitute encountering the idea). Most who have encountered it have only a very hazy idea about it and haven't given it serious thought.

Second: Suppose you decide that both pension savings and the intelligence explosion have a real chance of making a difference to your future life. Which can you do more about? Well, you can adjust your future wealth considerably by changing how much you spend and how much you save, and the tradeoff between present and future is reasonably clear. What can you do to make it more likely that a future intelligence explosion will improve your life and less likely that it'll make it worse? Personally, I can't think of anything I can do that seems likely to have non-negligible impact, nor can I think of anything I can do for which I am confident about the sign of the impact it would have.

(Go and work for Google and hope to get on a team working on AI? Probably unachievable, not clear I could actually help, and who knows whether anything they produce will be friendly? Donate to MIRI? There's awfully little evidence that anything they're doing is actually going to be of any use, and if at some point they decide they should actually start building AI systems to experiment with their ideas, who knows? They might be dangerous. Lobby for government-imposed AI safety regulations? Unlikely to succeed, and if it did it might turn out to impede carefully done AI research more than it impedes actually dangerous AI research, not least because it turns out that one can do AI research in more than one of the world's countries. Try to build a friendly AI myself? Ha ha ha. Assassinate AI researchers? Aside from being illegal and immoral and dangerous, probably just as likely to stop someone having a crucial insight needed for friendly AI as to stop someone making something that will kill us all. Try to persuade other people to worry about unfriendly AI? OK, but they don't have any more useful things to do about it than I do. Etc.)

Incidentally, do many people actually spend much time worrying about their retirement plans? (Note: this is not the same question as "do people worry about their retirement plans?" or "are people worried about their retirement plans?".)

Comment author: G0W51 15 February 2016 07:55:07PM 0 points [-]

People could vote for government officials who have FAI research on their agenda, but currently, I think few if any politicians even know what FAI is. Why is that?

Comment author: G0W51 14 February 2016 04:49:39AM 0 points [-]

Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future? I understand that people spend less time worrying about the intelligence explosion than what would be socially optimal because the vast majority of its benefits will be in the very far future, which people care little about. However, it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future (within the next 100 years). Yet, hardly anyone worries about it. Why?

Comment author: [deleted] 22 December 2015 04:07:29AM 0 points [-]

Yes.

Epistemic rationality or instrumental rationality? If the former, what specific aspects of it are you looking to improve? If the latter, what specific goals are you looking to achieve?

In response to comment by [deleted] on Open thread, Dec. 14 - Dec. 20, 2015
Comment author: G0W51 23 December 2015 09:48:32PM 0 points [-]

I would like to improve my instrumental rationality and improve my epistemic rationality as a means to do so. Currently, my main goal is to obtain useful knowledge (mainly in college) in order to obtain resources (mainly money). I'm not entirely sure what I want to do after that, but whatever it is, resources will probably be useful for it.

Comment author: [deleted] 20 December 2015 04:38:53AM 1 point [-]

"Should" is one of those sticky words that needs context. What are your goals for using LW?

In response to comment by [deleted] on Open thread, Dec. 14 - Dec. 20, 2015
Comment author: G0W51 22 December 2015 12:06:07AM 0 points [-]

Improving my rationality. Are you looking for something more specific?

Comment author: G0W51 20 December 2015 02:43:33AM 0 points [-]

How much should you use LW, and how? Should you consistently read the articles on Main? What about discussion? What about the comments? Or should a more case-by-case system be used?
