
Comment author: Lightwave2 21 September 2008 08:08:24AM 1 point

I bet the terrorists would target the LHC itself, so that after the attack there'd be nothing left to turn on.

Comment author: Lightwave2 18 September 2008 09:04:02AM 1 point

"Surely no supermind would be stupid enough to turn the galaxy into paperclips; surely, being so intelligent, it will also know what's right far better than a human being could."

Sounds like Bill Hibbard, doesn't it?

Comment author: Lightwave2 05 September 2008 12:59:25PM 0 points

There's a dilemma or a paradox here only if both agents are perfectly rational intelligences. In the case of humans vs. aliens, the logical choice would be tit-for-tat: cooperate on the first round, and on succeeding rounds do whatever your opponent did last time. The risk of losing the first round (1 million people lost) is worth taking because of the extra 98-99 million people you can potentially save if the other side also cooperates.
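
A minimal Python sketch of that strategy (tit-for-tat). The per-round payoff numbers are illustrative assumptions chosen only to echo the millions-of-lives figures above, not anything from the post:

    COOPERATE, DEFECT = "C", "D"

    def tit_for_tat(opponent_moves):
        # Cooperate first; afterwards copy whatever the opponent did last time.
        return COOPERATE if not opponent_moves else opponent_moves[-1]

    def always_defect(opponent_moves):
        return DEFECT

    def play(strategy_a, strategy_b, rounds=100):
        # Assumed per-round payoffs, in millions of lives saved:
        # mutual cooperation +1 each, mutual defection 0,
        # being exploited -1 vs. +2 for the exploiter.
        payoffs = {("C", "C"): 1, ("C", "D"): -1, ("D", "C"): 2, ("D", "D"): 0}
        seen_by_a, seen_by_b = [], []   # each side's record of the opponent's moves
        total_a = total_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            total_a += payoffs[(move_a, move_b)]
            total_b += payoffs[(move_b, move_a)]
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return total_a, total_b

    print(play(tit_for_tat, tit_for_tat))    # (100, 100): full mutual cooperation
    print(play(tit_for_tat, always_defect))  # (-1, 2): only the first round is lost

Against a defector, tit-for-tat loses only the opening round, which is the bounded downside the comment argues is worth risking.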

Comment author: Lightwave2 03 September 2008 01:50:38PM 1 point

The soldier protects your right to do any of those things, and since there are always people who want to take that right away from you, it is the soldier who stops them from doing so.

Comment author: Lightwave2 30 August 2008 11:24:56AM 0 points

Just like you wouldn't want an AI to optimize for only some of the humans, you wouldn't want an AI to optimize for only some of the values. And, as I keep emphasizing for exactly this reason, we've got a lot of values.

What if the AI emulates some/many/all human brains in order to get a complete list of our values? It could then design its own value system better than any human could.

In response to Magical Categories
Comment author: Lightwave2 26 August 2008 02:58:41PM 0 points

I wonder if you'd consider a superintelligent human to have the same flaws as a superintelligent AI (and to be just as likely to eventually destroy the world). What about a group of superintelligent humans (assuming they have to cooperate in order to act)?