Vladimir_Nesov comments on The mind-killer - Less Wrong

Post author: ciphergoth 02 May 2009 04:49PM




Comment author: Vladimir_Nesov 02 May 2009 08:50:36PM, 5 points

Alan, since there are in fact known existential risks, you are jumping to conclusions here without even superficial research (or you are carefully hiding that fact by ignoring the conclusions you disagree with).

Robyn Dawes:

Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.
[...]
I have often used this edict with groups I have led - particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately.

Comment author: Nick_Tarleton 03 May 2009 05:30:47AM, 4 points

Alan, since there are in fact known existential risks, you are jumping to conclusions here without even superficial research (or you are carefully hiding that fact by ignoring the conclusions you disagree with).

Seconded. Also see:

Nick Bostrom's Existential Risks paper from 2002

Global Catastrophic Risks

(Agreed, though, that global warming isn't a direct existential risk, but it could spur geopolitical instability or dangerous technological development. Disagree that global thermonuclear war is very unlikely, especially considering accidents, but even that seems highly unlikely to be existential.)

Comment author: homunq 14 May 2009 04:32:17PM, 0 points

I think that the original poster was discounting low-probability non-anthropogenic risks (the sun goes nova, a War of the Worlds scenario) and counting as "unknown unknowns" any risk that is unimaginable; that is, any risk involving significant new developments that limit the capacity of human reasoning to assess its specific probability or consequences at this time. This includes all fooms, gray goos, etc.

I would agree with the poster that a general attitude of readiness (that is, education, democracy, limits on overall social inequality, and precautionary attitudes toward new technologies) is probably orders of magnitude more effective at dealing with such threats than any specific measures, at least until a specific threat becomes clearer.

And I dispute the characterization that, if I'm correct about the poster's attitudes, they are "carefully hiding conclusions [they] disagree with". A refusal to consider vague, hand-waving categories of possibility like gray goo in the same class as much more specific possibilities like nuclear holocaust may not be your attitude, but that does not make it dishonest.