Comment author: Craig_Morgan 20 August 2010 07:03:43AM *  4 points [-]

Hello from Perth! I'm 27, have a computer science background, and have been following Eliezer/Overcoming Bias/Less Wrong since finding LOGI circa 2002. I've also been thinking about how I can "position myself to make a difference", and have finally overcome my akrasia; here's what I'm doing.

I'll be attending the 2010 Machine Learning Summer School and Algorithmic Learning Theory Conference for a few reasons:

  • To meet and get to know some people in the AI community. Marcus Hutter will be presenting his talk on Universal Artificial Intelligence at MLSS2010.
  • To immerse myself in the current topics of the AI research community.
  • To figure out whether I'm capable of contributing to that research.
  • To figure out whether contributing to that research will actually help in the building of a FAI.
Comment author: multifoliaterose 15 June 2010 12:10:43AM 0 points [-]

Vladimir, I agree with you that people should be thinking intelligence explosion, that there's a very poor level of awareness of the problem, and that the intellectual standards for discourse about this problem in the general public are poor.

I have not been convinced but am open toward the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy "paperclip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI people would actually build would be somewhere in the middle. Any recommended reading on this point is appreciated.

SIAI seems to have focused on the existential risk of "unfriendly intelligence explosion" and it's not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.

Comment author: Craig_Morgan 15 June 2010 03:35:31AM *  4 points [-]

I have not been convinced but am open toward the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy "paperclip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI people would actually build would be somewhere in the middle. Any recommended reading on this point is appreciated.

I believed similarly until I read Steve Omohundro's The Basic AI Drives. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.

Comment author: Cameron_Taylor 21 May 2009 01:49:28AM 0 points [-]

"Susceptibility to Trick Question Bias"

Comment author: Craig_Morgan 21 May 2009 05:05:37AM *  3 points [-]

It is absolutely NOT a trick question.

There are an infinite number of hypotheses for what an 'Awesome Triplet' could be. Here are some example hypotheses that could be true based on our initial evidence '2 4 6 is an awesome triplet':
1. Any three integers
2. Any three integers in ascending order
3. Three successive multiples of the same number
4. The sequence '2 4 6'
5. Three integers not contained in the set '512 231123 691 9834 91238 1'

We cannot falsify every possible hypothesis individually, so we need a strategy for falsifying hypotheses, starting from the most likely. All hypotheses are not created equal.

I want to falsify as much of the hypothesis-space as possible (where simpler hypotheses take up more space), so I design tests that do so. My first test was '3 integers in descending order', because it can falsify #1, the simplest hypothesis. I find from this test that #1 is false. My second test is designed to distinguish between #2 and #3: '3 integers in ascending order, but not successive multiples of the same number', e.g. '1 2 5'. I find from this test that #2 is still plausible, but #3 is falsified.

You can continue falsifying smaller and smaller areas of the hypothesis-space with additional tests, up until you're happy with your confidence level or you're bored of testing.
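The strategy above can be sketched in code. This is a minimal illustration, not anything from the original discussion: the hypothesis predicates and the hidden rule (assumed here to be "any three ascending integers") are my own stand-ins for the examples in the list.

```python
# Each hypothesis is a predicate on a triplet of integers.
def h1(t): return True                                   # 1. any three integers
def h2(t): return t[0] < t[1] < t[2]                     # 2. ascending order
def h3(t): return t[1] == 2 * t[0] and t[2] == 3 * t[0]  # 3. successive multiples x, 2x, 3x

oracle = h2  # the experimenter's hidden rule (an assumption for this sketch)

hypotheses = {"any integers": h1, "ascending": h2, "multiples": h3}

def run_test(triplet):
    """Ask the oracle about a triplet, then discard every hypothesis
    that disagrees with the oracle's answer."""
    answer = oracle(triplet)
    for name, h in list(hypotheses.items()):
        if h(triplet) != answer:
            del hypotheses[name]

run_test((6, 4, 2))  # descending: oracle says no, #1 says yes -> #1 falsified
run_test((1, 2, 5))  # ascending but not x,2x,3x -> #3 falsified
print(sorted(hypotheses))  # -> ['ascending']
```

Each test is chosen so that the surviving hypotheses disagree about its outcome; a test they all answer the same way (like the classic '8 10 12' confirmation probe) eliminates nothing.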

For much better coverage of this entire area, see the following posts by Eliezer:
* What is Evidence?
* The Lens That Sees Its Flaws
* How Much Evidence Does It Take?
* Occam's Razor

For a good overview of additional related posts, see the list.

Edit: Learning Markdown, fixing style.

Comment author: John_Maxwell_IV 13 May 2009 05:30:43AM *  4 points [-]

Most astronomers seem to put the odds of an asteroid strike at below 1 in 1000. I'd be interested to hear the person's other 299 ideas for race-ending catastrophes, each worthy of its own category (!).

Comment author: Craig_Morgan 13 May 2009 08:48:43AM 9 points [-]

I agree with your point, but the fact that someone can't enumerate 299 possibilities does not mean they should not reserve probability space for unknown unknowns. Put another way, in calculating these odds you must leave room for race-ending catastrophes that you didn't even imagine. I believe this point is important, that we succumb to multiple biases in this area, and that these biases have affected the decision-making of many rationalists. I am preparing a Less Wrong post on this and related topics.