All of Craig_Morgan's Comments + Replies

Hello from Perth! I'm 27, have a computer science background, and have been following Eliezer/Overcoming Bias/Less Wrong since finding LOGI circa 2002. I've also been thinking about how I can "position myself to make a difference", and have finally overcome my akrasia; here's what I'm doing.

I'll be attending the 2010 Machine Learning Summer School and Algorithmic Learning Theory Conference for a few reasons:

  • To meet and get to know some people in the AI community. Marcus Hutter will be presenting his talk on Universal Artificial Intelligence at MLSS2010.
1Daniel_Burfoot
Uf. I hope you have a large supply of coffee (or something stronger), and a high tolerance for PowerPoint presentations.

I have not been convinced but am open to the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, an AI which partially shares human values is more likely than a paperclip maximizer; that is, the dichotomy "paper clip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point?

3CarlShulman
That paper makes a convincing case that the 'generic' AI (some distribution of AI motivations weighted by our likelihood of developing them) will most prefer outcomes that rank low in our preference ordering, i.e. the free energy and atoms needed to support life as we know it or would want it will get reallocated to something else. That means that an AI given arbitrary power (e.g. because of a very hard takeoff, or easy bargaining among AIs but not humans, or other reasons) would be lethal. However, the situation seems different and more sensitive to initial conditions when we consider AIs with limited power that must trade off chances of conquest with a risk of failure and retaliation. I'm working on a write up of those issues.
1multifoliaterose
Thanks Craig, I'll check it out!

It is absolutely NOT a trick question.

There are an infinite number of hypotheses for what an 'Awesome Triplet' could be. Here are some example hypotheses that could be true based on our initial evidence '2 4 6 is an awesome triplet':

  1. Any three integers
  2. Any three integers in ascending order
  3. Three successive multiples of the same number
  4. The sequence '2 4 6'
  5. Three integers not contained in the set '512 231123 691 9834 91238 1'

We cannot falsify every possible hypothesis, so we need a strategy for falsifying hypotheses, starting with the most likely.
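The point above can be made concrete with a minimal sketch: encode some of the candidate hypotheses as predicates on a triplet and see which ones each probe triplet is consistent with. The hypothesis names and probe triplets here are illustrative, not from the original thread.

```python
# Four of the candidate hypotheses above, each as a predicate on (a, b, c).
hypotheses = {
    "any three integers": lambda a, b, c: all(isinstance(x, int) for x in (a, b, c)),
    "ascending order": lambda a, b, c: a < b < c,
    "successive multiples of n": lambda a, b, c: a != 0 and b == 2 * a and c == 3 * a,
    "exactly 2 4 6": lambda a, b, c: (a, b, c) == (2, 4, 6),
}

def consistent(triplet):
    """Return the names of the hypotheses this triplet is consistent with."""
    return [name for name, rule in hypotheses.items() if rule(*triplet)]

# The initial evidence fits every hypothesis, so it discriminates nothing:
print(consistent((2, 4, 6)))   # all four names

# A probe chosen to falsify the narrower hypotheses is far more informative:
print(consistent((6, 4, 2)))   # only "any three integers"
```

A triplet that merely fits your favored hypothesis (say, '8 16 24') confirms everything at once; a triplet designed to fail a hypothesis is what actually narrows the space.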

1[anonymous]
There is a real positive bias, and this program helps confirm it. Something that must be considered is whether the form of the test could influence the outcome for reasons other than an intrinsic positive bias. More specifically, I note that the question the participant is given resembles a style of question I have encountered many a time. In most of those cases I am expected to elicit the questioner's intended meaning, usually something specific. Were I to give the answer "actually, it could be any integers in ascending order" I would expect fewer marks, or mild disapproval for being a smart-ass. The test is set up to confirm the positive bias without eliminating the possibility of simple cultural training on the test format and initial priming. I would like to see alternative test setups, perhaps including explicitly declared random triplets and some betting. As it stands, the test strikes me as a little ironic!

At the time of this comment, thomblake's comment above is at -3 points, and there are no comments arguing against his opinion or explaining why he is wrong. We should not downvote a comment simply because we disagree with it. Thomblake expressed an opinion that (I presume) differs from the community majority. A better response to such a comment is to present arguments against it. Voting based on agreement/disagreement will lead people not to express viewpoints they believe differ from the community's.

3Z_M_Davis
While I agree that voting shouldn't be based strictly on agreement/disagreement, voting is supposed to be an indicator of comment quality, with downvotes going to poorly-argued comments that one would like to see less of. It is worth bearing in mind that the more mistaken a conclusion is, the less likely one is to encounter strong comments in support of that conclusion. If someone were to present specific, clearly-articulated arguments purporting to show that popular notions of accuracy and calibration are mistaken, that might well deserve an upvote in my book. But above, thomblake seems to be rejecting out of hand the very notion of decision-making under uncertainty, which seems to me to be absolutely fundamental to the study of rationality. (The very name Less Wrong denotes wanting beliefs that are closer to the truth, even if one knows that not everything one believes is perfectly true.) I've downvoted thomblake's comment for this reason, and I've downvoted your comment because I don't think it advances the discourse to discourage downvotes of poor comments.
2jimrandomh
This sounds great in theory, but other communities have applied that policy with terrible results. Whether I agree with something or not is the only information I have as to whether it's true/wise, and that should be the main factor determining score. Excluding disagreement as grounds for downvoting leaves only presentation, resulting in posts that are eloquent, highly rated, and wrong. Those are mental poison.

I agree with your point, but the fact that someone can't enumerate 299 possibilities does not mean they should not reserve probability space for unknown unknowns. Put another way, in calculating these odds you must leave room for race-ending catastrophes that you didn't even imagine. I believe this point is important, that we succumb to multiple biases in this area, and that these biases have affected the decision-making of many rationalists. I am preparing a Less Wrong post on this and related topics.
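A minimal sketch of what "reserving probability space" means in practice: alongside whatever catastrophes you can enumerate, keep an explicit catch-all term for the ones you can't. The risk names and all of the numbers below are made-up placeholders, not estimates from the discussion.

```python
# Hypothetical probabilities for enumerated catastrophe scenarios.
enumerated = {"risk A": 0.02, "risk B": 0.01, "risk C": 0.005}

# Assumption: a deliberately reserved catch-all for scenarios nobody
# has imagined. This number is chosen, not derived from the list above.
unknown_unknowns = 0.05

total_catastrophe = sum(enumerated.values()) + unknown_unknowns
print(round(total_catastrophe, 3))  # prints 0.085
```

The key design point is that the catch-all term is not computed from the enumerated list; omitting it silently sets the probability of unimagined catastrophes to zero, which is exactly the bias the comment warns against.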

1John_Maxwell
Hmmm... I think "something I can't think of" should qualify as a category, myself.