I have not been convinced of, but am open to, the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, the more likely outcome than a paperclip maximizer is an AI which partially shares human values. That is, the dichotomy "paperclip maximizer vs. Friendly AI" seems like a false dichotomy; I imagine that the sort of AI people would actually build would be somewhere in the middle. Any recommended reading on this point is appreciated.
I believed similarly until I read Steve Omohundro's *The Basic AI Drives*. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.
It is absolutely NOT a trick question.
There are infinitely many hypotheses for what an 'Awesome Triplet' could be. Here are some example hypotheses that could be true given our initial evidence, '2 4 6 is an awesome triplet':
We cannot falsify every possible hypothesis, so we need a strategy for falsifying hypotheses, starting with the most likely. Not all hypotheses are created equal.
I want to falsify as much of the hypothesis-space as possible (where simpler hypotheses take up more space), so I design tests that do so. My first test was '3 integers in descending order', because it can falsify #1, the simplest hypothesis. From this test I find that #1 is false. My second test is designed to distinguish between #2 and #3: '3 integers in ascending order, but not successive multiples of the same number', e.g. '1 2 5'. From this test I find that #2 is still plausible, but #3 is falsified.
You can continue falsifying smaller and smaller areas of the hypothesis-space with additional tests, until you're happy with your confidence level or you're bored of testing; a rough sketch of this procedure in code is below.
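Here's a minimal Python sketch of this falsify-the-largest-region strategy. The three candidate hypotheses and the hidden rule are illustrative assumptions (my guesses at the numbered list above, which isn't reproduced here), chosen so that the two tests behave as described:

```python
# A minimal sketch of falsification-driven testing for the "awesome triplet"
# game. The candidate hypotheses and the hidden rule below are illustrative
# assumptions, not necessarily those from the original exercise.

# Each hypothesis is a predicate over a triplet of integers.
hypotheses = {
    "#1: any 3 integers":            lambda a, b, c: True,
    "#2: ascending order":           lambda a, b, c: a < b < c,
    "#3: successive multiples of n": lambda a, b, c: b == 2 * a and c == 3 * a,
}

def oracle(a, b, c):
    """Stands in for the experimenter. The hidden rule is an assumption."""
    return a < b < c

def run_test(triplet):
    """Ask the oracle about one triplet and discard every surviving
    hypothesis whose prediction disagrees with the answer."""
    answer = oracle(*triplet)
    for name, predicate in list(hypotheses.items()):
        if predicate(*triplet) != answer:
            print(f"falsified: {name}")
            del hypotheses[name]

run_test((6, 4, 2))  # descending order: the oracle says no, falsifying #1
run_test((1, 2, 5))  # ascending but not successive multiples: falsifies #3
print("still plausible:", list(hypotheses))
```

The useful property is that each test is one the surviving hypotheses disagree about, so whatever the oracle answers, something gets falsified.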
For much better coverage of this entire area, see the following posts by Eliezer:
For a good overview of additional related posts, see the list.
Edit: Learning Markdown, fixing style.
At the time of this comment, thomblake's comment above is at -3 points, and there are no replies arguing against his opinion or explaining why he is wrong. We should not downvote a comment simply because we disagree with it. Thomblake expressed an opinion that differs (I presume) from the community majority; a better response to such an opinion is to present arguments that would correct his belief. Voting based on agreement or disagreement will lead people not to express viewpoints they believe differ from the community's.
I agree with your point, but just because someone can't enumerate 299 possibilities does not mean they shouldn't reserve probability space for unknown unknowns. Put another way: in calculating these odds, you must leave room for race-ending catastrophes that you didn't even imagine. I believe this point is important, that we succumb to multiple biases in this area, and that these biases have affected the decision-making of many rationalists. I am preparing a Less Wrong post on this and related topics.
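As a toy illustration of that reservation (every number below is invented), the estimate for "some catastrophe happens" should include an explicit term for the scenarios you failed to enumerate:

```python
# Toy illustration: reserving probability mass for unknown unknowns.
# All numbers here are invented; scenarios are treated as roughly disjoint.
imagined = {
    "catastrophe A": 0.010,
    "catastrophe B": 0.005,
    "catastrophe C": 0.020,
}
unknown_unknowns = 0.030  # mass reserved for scenarios not imagined at all

p_catastrophe = sum(imagined.values()) + unknown_unknowns
print(f"P(some catastrophe) ~ {p_catastrophe:.3f}")  # 0.065, not just 0.035
```

Dropping that last term is exactly the mistake being warned against: the odds end up calibrated only against the failures you happened to think of.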
Hello from Perth! I'm 27, have a computer science background, and have been following Eliezer/Overcoming Bias/Less Wrong since finding LOGI circa 2002. I've also been thinking about how I can "position myself to make a difference", and have finally overcome my akrasia; here's what I'm doing.
I'll be attending the 2010 Machine Learning Summer School and Algorithmic Learning Theory Conference for a few reasons: