PhDre comments on The Robbers Cave Experiment - Less Wrong

Post author: Eliezer_Yudkowsky 10 December 2007 06:18AM


Comment author: Justin_Corwin 10 December 2007 07:50:28AM 5 points

I have also speculated on the need for a strong exterior threat. The problem is that there isn't one that wouldn't either be solved too quickly or introduce its own polarizing problems.

A supervillain doesn't work because they lose too quickly; see Archimedes, Giorgio Rosa, et al.

Berserkers are bad because they either won't work or work too well. I can't see any way to make them a long-term stable threat without explicitly programming them to lose.

Rogue AI doesn't work either: depending on its quality and goal structure, it self-destructs, kills us too quickly, or possibly sublimes.

The best proposal I've ever heard is a rival species, something like an ant the size of a dog, whose lack of individual intelligence is offset by stealth hives, cooperation, and physical toughness. But it would be hard to engineer one.

Comment author: ericn 26 December 2010 09:11:52AM 3 points

My friend had the idea that we need a race of bunnies from another planet to infest Earth. They would be a nuisance, nothing more. They would breed and eat crops. But they would be enough trouble that we would have to work together to stop them.

Comment author: DSimon 20 November 2011 05:09:09AM 11 points

Ever heard the phrase "X is like violence; if it's not solving your problems, it's because you're not using enough of it"? This is the very first time I've heard somebody propose "problems" as the value of X.