Comment author: timtyler 14 July 2009 08:33:26PM 0 points [-]

In many of the hypothetical "disasters", civilisation doesn't end - it is just that it is no longer led by humans. That seems a practically inevitable long-term outcome to me (humans are rather obviously too primitive and slug-like to go the distance).

The classification of such outcomes as "disasters" needs a serious rethink, IMO.

Comment author: infotropism 14 July 2009 08:57:03PM 0 points [-]

hypothetical "disasters", civilisation doesn't end - it is just that it is no longer led by humans

You'd think that's actually pretty much what most of us humans care about.

Comment author: taw 13 July 2009 09:21:08AM 0 points [-]

We're much safer against even very rare natural disasters like Toba (and others that act through climate) than humanity was historically. The kind of disaster that could wipe us out gets less and less probable every decade. I'm not even sure the kind of asteroid that wiped out the dinosaurs would be enough to wipe out humanity now, given a few years of prior warning (it would kill most people, but that's not even close to eliminating all of humanity).

I seriously dispute the idea that we were very close to nuclear war. I even more seriously dispute the idea that it would have had any long-term effects on human civilization if it had happened. Even in the middle of WW2, people's life expectancy was far higher than historically typical, violent death rates were far lower, and I'd even guess that average personal freedoms compared quite well to the historical record.

Comment author: infotropism 14 July 2009 07:08:41PM 2 points [-]

Whether those catastrophes could destroy present-day humanity wasn't the point; the point was whether near misses with potential extinction events have ever occurred in our past.

Consider it this way: under your assumption that our world is more robust nowadays, what would count as a near miss today would certainly have wiped out the frailer humanity of back then; conversely, what counted as a near miss back then would not be nearly that bad nowadays. Constraining the definition of a "near miss" in that way makes it impossible to show any such near miss in our history. That is at best one step away from saying we're actually safe and shouldn't worry all that much about existential risks.

Speaking of which, arguing over the definition of an existential risk, and from that concluding that catastrophes such as a nuclear war aren't existential risks, blurs the point. Let us rephrase the question: how much would you want to avoid a nuclear war, or a supereruption, or an asteroid strike? How much effort, time, and money should we put into avoiding such catastrophes?

While it is true that a catastrophe that doesn't wipe out humanity forever isn't as bad as one that does, such an event can still be awfully bad, and deserving of our attention and efforts to prevent it. We're talking billions of human lives lost, or spent in awful conditions for decades, centuries, millennia. If that is no cause for serious worry, pray tell what is?

Comment author: Alicorn 12 July 2009 03:29:50PM *  2 points [-]

Beware generalizing from fictional evidence. "Dysgenic pressures" in particular don't seem like they're actually worth fearing in reality, given the Flynn effect, no matter how many times you've seen Idiocracy.

Comment author: infotropism 12 July 2009 04:38:54PM 1 point [-]

Though that doesn't immediately make it non-fictional evidence, dysgenic pressure (as well as the Flynn effect and the possibility of genetic engineering as counters) is also briefly mentioned in Nick Bostrom's foundational paper Existential Risks, section 5.3.

Comment author: taw 12 July 2009 10:31:04AM *  1 point [-]

You cannot use the anthropic principle here. Unless you postulate some really weird distribution of risks unlike the distribution of anything else in the universe (and by the outside view you cannot do that), then if risks were likely we would have had many near misses: either barely escaping total destruction of humanity, or events that caused widespread but not complete destruction. We have neither.

Global warming and the Iraq war are tiny problems, vastly below any potential to threaten the survival of civilization. Totalitarian regimes have very short half-lives. The threat of the Cuban missile crisis seems vastly overstated, especially considering how many proxy wars the United States and the Soviet Union fought without getting anywhere close to using nuclear weapons, and how nothing indicated an intention by either party to resort to a nuclear attack.

Communist Russia didn't go that badly by historical standards: standards of living when it ended were a lot higher than when it started, and if it shows anything, it is how remarkably resilient civilization is, restoring itself so smoothly after Stalin in such a hostile environment. You see the same pattern in China and so many other totalitarian regimes worldwide: they get softer and more civilized given time, peace, and economic prosperity. We seem very well protected here.

Comment author: infotropism 12 July 2009 11:13:04AM 1 point [-]

Well, there was possibly the Toba supereruption, which would fit the description of a near miss.

Arguably, we were also very close during the Cold War, and several times over: not to total extinction, but a nuclear war would've left us very crippled.

Comment author: infotropism 15 June 2009 05:21:06PM *  1 point [-]

Minor quibble, interesting info:

"like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low complexity description"

The particular example of the planets' orbits is actually one where such a simple rule exists: see the Titius-Bode law.
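For reference, here's a minimal sketch (in Python) of how the rule lines up with the orbits; the actual semi-major axes below are approximate round figures, quoted for illustration only:

    # Titius-Bode relation: a = 0.4 + 0.3 * k (in AU),
    # with k = 0, 1, 2, 4, 8, ... (doubling after the first term).
    # Actual semi-major axes (AU) are approximate, for comparison only.
    actual = {
        "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
        "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.58,
        "Uranus": 19.2, "Neptune": 30.1,
    }
    ks = [0, 1, 2, 4, 8, 16, 32, 64, 128]

    for (name, a), k in zip(actual.items(), ks):
        predicted = 0.4 + 0.3 * k
        print(f"{name:8s} predicted {predicted:6.2f} AU, actual {a:6.2f} AU")

The fit is quite good out to Uranus, but it famously breaks down at Neptune (predicted 38.8 AU vs. an actual ~30 AU), which is part of why the law is usually regarded as a curiosity rather than a physical regularity.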

Comment author: infotropism 27 April 2009 05:25:41PM 2 points [-]

In maybe 15 years' time, Wednesday comes to this place, or what this place has become by then. She is still a Mormon, and is welcomed. She is interested in participating, because she is open-minded enough and educated, and the community is tolerant and helpful. So she gets to learn about rationality, and is taken into the process of becoming a rationalist herself, and a productive, healthy member of the rationalist community.

My question: after a few months or years of that, does she still remain a Mormon, or a believer in the supernatural?

If yes, how does she reconcile that with the fact that a few of the priors behind religions are wrong? That religion requires self-deception, at some level, to work? Will there be some projects in which she won't be able to participate, simply because they are at odds with those beliefs?

If not, then how does she reconcile that with her past life? How will it impact her already-established relationships? How easy or difficult will it be for her to change her mind?

Comment author: Yvain 26 April 2009 10:27:24PM 5 points [-]

I specifically excluded "not committed" as an option on the political views section, because a lot of rationalists have a tendency to go towards "not committed" to signal how they're not blind followers of a party, when really they have very well defined political views.

I, for example, absolutely refuse to register with a political party, answer "independent" to any questions about my political affiliation, talk a good talk about how both parties are equally crooks, and then proceed to vote for the Democrat nine times out of ten. I would kind of like to force people like me to put "Democrat" on there so that we get more data.

I will change this if enough people agree with Vladimir.

Comment author: infotropism 27 April 2009 01:45:55PM 0 points [-]

I agree with Vladimir too; you can't always pinpoint people like that.

I'd say I'm uncommitted too. By that I mean that I agree with a lot of the ideas that come from, for instance, libertarianism, and at the same time with a lot of the ideas behind communism. Since I've never heard of a good synthesis between the two, I stand uncommitted.

Comment author: infotropism 26 April 2009 12:15:56AM *  2 points [-]

Self-fulfilling prophecies are only epistemically wrong when you fail to act upon them. Whether you fail out of cynicism, sophistication, or simply being too clever and rationalizing them away, the result will be the same.

There's a potential barrier there. You can tunnel through it, or not. Tunneling can sound magical and counterintuitive. It's not. There are definite reasons why it can work.

Sometimes, however, you don't know those reasons, but can observe that it appears to work for other people anyway. Then you may want to find a way to bootstrap the process, like pretending to yourself, or trying to copy someone else.

I can say these words but not the rule that generates them, or the rule behind the rule; one can only hope that by using the ideas, perhaps, similar machinery might be born inside you.

For instance, the party example. This might not apply to everyone, but I think the issue starts with even trying to find that optimal way to attract a partner. Do you expect to find a way to be attractive enough to score on your first try? Perhaps to really minimize the risk of being rejected?

Different people have different tastes, and even a single person may react differently to the same stimulus, depending on the conditions in which they find themselves.

Some people don't have an issue with that. They are ready to try as many times, and with as many different people, as it takes to find one who is receptive, moving on as soon as it appears it won't work with a particular person.

Not only will they eventually find someone (though some will indeed have to search longer), but they will also act with more confidence, knowing they will succeed at some point.

I know that's how it works for me, anyway. I can't take failure or rejection. So I try my very best to avoid them, and this involves pushing the effort I invest in a single encounter to its limits, which isn't as efficient as trying as many different people as needed to succeed.

Comment author: Eliezer_Yudkowsky 24 April 2009 04:21:57AM 6 points [-]

Probably not, for two reasons. One, Kantian-type reasoning: Someone has to lead the way through the transition, since the ideal would be enough people cryosuspending that they could just integrate the organ donation protocols into it. Two, and more important, there's a nonzero possibility that someone ends up wanting my brain for something interesting Before It's Over - that I wouldn't literally be out of the game.

Comment author: infotropism 24 April 2009 11:11:55AM 4 points [-]

Do you also, simply, desire to live?

Or do you mean to say that if your life didn't possess those useful qualities, then it would be better, for you, to forfeit cryonics and have your organs donated, for instance?

And I'm actually asking that question of other people here as well, those who have altruistic arguments against cryonics. Is there a utility, a value your life has to have (such as being able to contribute to something useful) in order to be cryopreserved? Because then that would serve the greatest good for the greatest number of people?

A value below which your life would be best not cryopreserved, and your body used for organ donation, or something equally destructive to you but equally beneficial to other people (and certainly more beneficial than whatever value you could create yourself if you were alive)?

Comment author: ciphergoth 23 April 2009 12:54:00PM 5 points [-]
  • I think that for the most part, where rationality is easily assessed it is already well understood; it is in extending the art to hard-to-assess areas that the material here is most valuable.
  • For all I know all of Eliezer's original work apart from his essays on rationality could be worthless.

Both of these things mean that we're assessing this material on a different basis than demonstrated efficacy.

Comment author: infotropism 23 April 2009 04:06:59PM 1 point [-]

where rationality is easily assessed it is already well understood; it is in extending the art to hard-to-assess areas that the material here is most valuable.

My question is: as well understood as it is, how much of it does any single individual here know, understand, and use on a recurring basis?

We'll want to develop more than what already exists, but we'll build that upon a firm basis, once we have it. So I wonder: how much knowledge and practice of those well-understood parts of rationality does it require of the would-be builders of the next tier? Otherwise, we run the risk of being so eager as to hurriedly build sky-high ivory towers on sand, with untrained hands.
