ChristianKl comments on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult - Less Wrong Discussion

12 Post author: Algernoq 13 July 2014 05:54PM

Comment author: ChristianKl 13 July 2014 07:55:15PM 8 points [-]

Rationality doesn't guarantee correctness

That's a strawman. I don't think a majority of LW thinks that's true.

In particular, AI risk is overstated

The LW consensus on AI risk isn't that it's the biggest X-risk. If you look at the census, you will find that different community members consider different X-risks the biggest, and more people fear bioengineered pandemics than a UFAI event.

LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to drop out of their PhD program, go to "training camps" for a few months

I don't know what you mean by "training camps", but the CFAR events are four-day camps.

If by "training camp" you mean App Academy, then yes, some people might do it instead of a PhD program and then go on to work. There's a trend that companies like Google do evidence-based hiring and care less about employees' degrees than about their skills. As companies get better at evaluating the skills of potential hires, the signaling value of a degree decreases. Learning practical skills at App Academy might be more useful for some people, but it's of course not a straightforward choice.

My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.

Having a lot of surplus income that gets thrown in a brute-force way at research problems might increase X-risk instead of reducing it.