The Irrationality Game

38 Post author: Will_Newsome 03 October 2010 02:43AM

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs, you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational. You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average. This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion are great, but keep it civil. Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but comment voting works normally for comment replies to other comments. That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

Comments (910)

Comment author: [deleted] 28 October 2010 05:00:36AM *  51 points [-]

The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.

The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)

Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."

Comment author: Jack 31 October 2010 09:13:17AM 10 points [-]

Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.

Upvoted.

Comment author: Tuna-Fish 03 November 2010 01:20:43PM 16 points [-]

Discussing the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a quite wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.

As long as you are close enough to the ground, the curvature of the earth is very visible, even over surprisingly small distances. I did this as a child.
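
As a rough sanity check (assuming a sight line right at the water's surface and an Earth radius of roughly 6371 km): the height hidden below the horizon at distance d is approximately d^2/(2R), which comes to about 8 cm at 1 km and about 2 m at 5 km -- easily enough to hide the lower markings on a sign across the water.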

Comment author: nick012000 11 October 2010 03:32:07PM *  54 points [-]

If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the usage of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)

Comment author: Nick_Tarleton 11 October 2010 10:43:18PM 19 points [-]

If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?

Comment author: RobinZ 11 October 2010 04:57:20PM 18 points [-]

What reason do you have for assigning such high probability to time travel being possible?

Comment author: Perplexed 11 October 2010 11:18:28PM *  2 points [-]

And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation?

;)

Edit: I meant what reason do you (nick012000) have? Not you (RobinZ). Sorry for the confusion.

Comment author: RobinZ 11 October 2010 11:28:27PM *  2 points [-]

I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability.

Edit: Of course, evidence for that 95%+ would be appreciated.

Comment author: [deleted] 29 December 2012 11:04:38PM 9 points [-]

Before the universe, there had to have been something else (i.e. there couldn't have been nothing and then something). 95%. That something was conscious. 90%.

Comment author: PlaidX 03 October 2010 05:09:52AM *  106 points [-]

Flying saucers are real. They are likely not nuts-and-bolts spacecraft, but they are actual physical things, the product of a superior science, and under the control of unknown entities. (95%)

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Comment author: Will_Newsome 06 October 2010 03:39:53PM 9 points [-]

Now that there's a top comments list, could you maybe edit your comment and add a note to the effect that this was part of The Irrationality Game? No offense, but newcomers that click on Top Comments and see yours as the record holder could make some very premature judgments about the local sanity waterline.

Comment author: wedrifid 06 October 2010 04:11:13PM 3 points [-]

Given that most of the top comments are meta in one way or another it would seem that the 'top comments' list belongs somewhere other than on the front page. Can't we hide the link to it on the wiki somewhere?

Comment author: LukeStebbing 10 October 2010 02:08:17AM 2 points [-]

The majority of the top comments are quite good, and it'd be a shame to lose a prominent link to them.

Jack's open thread test, RobinZ's polling karma balancer, Yvain's subreddit poll, and all top-level comments from The Irrationality Game are the only comments that don't seem to belong, but these are all examples of using the karma system for polling (should not contribute to karma and should not be ranked among normal comments) or, uh, para-karma (should contribute to karma but should not be ranked among normal comments).

Comment author: AngryParsley 03 October 2010 06:16:25AM 4 points [-]

Just to clarify: by "unknown entities" do you mean non-human intelligent beings?

Comment author: Will_Newsome 27 December 2011 10:05:22PM 2 points [-]

I would like to announce that I have updated significantly in favor of this after examining the evidence and thinking somewhat carefully for a while (an important hint is "not nuts-and-bolts"). Props to PlaidX for being quicker than me.

Comment author: Yvain 03 October 2010 10:28:59AM 2 points [-]

I upvoted you because 95% is way high, but I agree with you that it's non-negligible. There's way too much weirdness in some of the cases to be easily explainable by mass hysteria or hoaxes or any of that stuff - and I'm glad you pointed out Fatima, because that was the one that got me thinking, too.

That having been said, I don't know what they are. Best guess is Easter eggs in the program that's simulating the universe.

Comment author: Will_Newsome 03 October 2010 09:27:54PM 3 points [-]

Prior before having learned of Fatima, roughly? Best guess at current probability?

Comment author: nick012000 11 October 2010 03:08:48PM 55 points [-]

God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)

Comment author: Swimmy 16 October 2010 08:04:12PM 5 points [-]

You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.

Comment author: wedrifid 16 October 2010 08:24:56PM 4 points [-]

Wait... was the grandparent serious? He's talking about the flaming swords of the angels being laser turrets! That's got to be tongue in cheek!

Comment author: RobinZ 21 October 2010 10:31:51PM 5 points [-]

It is possible that nick012000 is violating Rule 4 - but his past posting history contains material which I found consistent with him being serious here. It would behoove him to confirm or deny this.

Comment author: RobinZ 11 October 2010 04:55:14PM 5 points [-]

I see in your posting history that you identify as a Christian - but this story contains more details than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?

Comment author: Raemon 05 October 2010 03:46:12PM 64 points [-]

Google is deliberately taking over the internet (and by extension, the world) for the express purpose of making sure the Singularity happens under their control and is friendly. 75%

Comment author: jimrandomh 05 October 2010 05:39:44PM 15 points [-]

I wish. Google is the single most likely source of unfriendly AIs anywhere, and as far as I know they haven't done any research into friendliness.

Comment author: ata 05 October 2010 08:16:34PM *  6 points [-]

Agreed. I think they've explicitly denied that they're working on AGI, but I'm not too reassured. They could be doing it in secret, probably without much consideration of Friendliness, and even if not, they're probably among the entities most likely (along with, I'd say, DARPA and MIT) to stumble upon seed AI mostly by accident (which is pretty unlikely, but not completely negligible, I think).

Comment author: sketerpot 06 October 2010 01:38:42AM 10 points [-]

If Google were to work on AGI in secret, I'm pretty sure that somebody in power there would want to make sure it was friendly. Peter Norvig, for example, talks about AI friendliness in the third edition of AI: A Modern Approach, and he has a link to the SIAI on his home page.

Personally, I doubt that they're working on AGI yet. They're getting a lot of mileage out of statistical approaches and clever tricks; AGI research would be a lot of work for very uncertain benefit.

Comment author: Kevin 08 October 2010 09:32:58AM *  5 points [-]

Google has one employee working (sometimes) on AGI.

http://research.google.com/pubs/author37920.html

Comment author: khafra 08 October 2010 04:42:33PM *  5 points [-]

It's comforting, friendliness-wise, that one of his papers cites "personal communication with Steve Rayhawk."

Comment author: Multiheaded 08 April 2012 08:55:21AM 15 points [-]

Bioware made the companion character Anders in Dragon Age 2 specifically to encourage Anders Breivik to commit his massacre, as part of a Manchurian Candidate plot by an unknown faction that attempts to control world affairs. That faction might be somehow involved with the Simulation that we live in, or attempting to subvert it with something that looks like traditional sympathetic magic. See for yourself. (I'm not joking, I'm stunned by the deep and incredibly uncanny resemblance.)

Comment author: nwthomas 04 July 2011 09:47:22PM 30 points [-]

I have met multiple people who are capable of telepathically transmitting mystical experiences to people who are capable of receiving them. 90%.

Comment author: SimonF 05 October 2010 04:17:18PM *  36 points [-]

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s). As a corollary, AI will not go FOOM. (80% confident)

EDIT: Quote from here

Comment author: wedrifid 05 October 2010 05:02:03PM 3 points [-]

Do you apply this to yourself?

Comment author: SimonF 05 October 2010 05:13:49PM *  3 points [-]

Yes!

Humans are "designed" to act intelligently in the physical world here on earth, we have complex adaptations for this environment. I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.

Comment author: Pavitra 04 October 2010 07:23:51AM *  37 points [-]

75%: Large groups practicing Transcendental Meditation or TM-Sidhis measurably decrease crime rates.

At an additional 20% (net 15%): The effect size depends on the size of the group in a nonlinear fashion; specifically, there is a threshold at which most of the effect appears, and the threshold is at .01*pop (1% of the total population) for TM or sqrt(.01*pop) for TM-Sidhis.
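
To make the claimed thresholds concrete (just plugging numbers into the formulas above): in a city of one million people, the threshold would be .01 * 1,000,000 = 10,000 TM practitioners, but only sqrt(.01 * 1,000,000) = 100 TM-Sidhis practitioners.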

(Edited for clarity.)

(Update: I no longer believe this. New estimates: 2% for the main hypothesis, additional 50% (net 1%) for the secondary.)

Comment author: JamesAndrix 03 October 2010 09:45:50PM *  65 points [-]

Panpsychism: All matter has some kind of experience. Atoms have some kind of atomic-qualia that adds up to the things we experience. This seems obviously right to me, but stuff like this is confusing so I'll say 75%

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Comment author: Eugine_Nier 03 October 2010 10:42:40PM 15 points [-]

Can you rephrase this statement tabooing the words experience and qualia?

Comment author: orthonormal 04 October 2010 02:48:45AM 32 points [-]

If he could, he wouldn't be making that mistake in the first place.

Comment author: dyokomizo 03 October 2010 01:44:46PM 46 points [-]

There's no way to create a non-vague, predictive model of human behavior, because most human behavior is (mostly) random reaction to stimuli.

Corollary 1: most models explain after the fact and require both the subject to be aware of the model's predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.

Corollary 2: we'll spend most of our time in drama trying to understand the real reasons or the truth about our/other's behavior even when presented with evidence pointing to the randomness of our actions. After the fact we'll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.

Comment author: orthonormal 04 October 2010 02:43:14AM 5 points [-]

This (modulo the chance it was made up) is pretty strong evidence that you're wrong. I wish it was professionally ethical for psychologists to do this kind of thing intentionally.

Comment author: Blueberry 22 January 2011 02:49:56AM 10 points [-]

Here's another case:

"Let me get this straight. We had sex. I wind up in the hospital and I can't remember anything?" Alice said. There was a slight pause. "You owe me a 30-carat diamond!" Alice quipped, laughing. Within minutes, she repeated the same questions in order, delivering the punch line in the exact tone and inflection. It was always a 30-carat diamond. "It was like a script or a tape," Scott said. "On the one hand, it was very funny. We were hysterical. It was scary as all hell." While doctors tried to determine what ailed Alice, Scott and other grim-faced relatives and friends gathered at the hospital. Surrounded by anxious loved ones, Alice blithely cracked jokes (the same ones) for hours.

Comment author: AdeleneDawner 04 October 2010 02:48:49AM 3 points [-]

I wish it was professionally ethical for psychologists to do this kind of thing intentionally.

They could probably do some relevant research by talking to Alzheimer's patients - they wouldn't get anything as clear as that, I think, but I expect they'd be able to get statistically-significant data.

Comment author: [deleted] 03 October 2010 07:34:17PM 5 points [-]

How detailed of a model are you thinking of? It seems like there are at least easy and somewhat trivial predictions we could make e.g. that a human will eat chocolate instead of motor oil.

Comment author: dyokomizo 03 October 2010 07:47:20PM 3 points [-]

I would classify such kinds of predictions as vague, after all they match equally well for every human being in almost any condition.

Comment author: AdeleneDawner 03 October 2010 10:53:50PM 5 points [-]

How about a prediction that a particular human will eat bacon instead of jalapeno peppers? (I'm particularly thinking of myself, for whom that's true, and a vegetarian friend, for whom the opposite is true.)

Comment author: Douglas_Knight 04 October 2010 12:37:16AM *  3 points [-]

I think "vague" is a poor word choice for that concept. "(not) informative" is a technical term with this meaning. There are probably words which are clearer to the layman.

Comment author: Perplexed 03 October 2010 06:32:14PM 4 points [-]

Downvoted in agreement. But I think that the randomness comes from what programmers call "race conditions" in the timing of external stimuli vs internal stimuli. Still, these race conditions make prediction impossible as a practical matter.

Comment author: erratio 03 October 2010 11:06:28PM 38 points [-]

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

Comment author: LucasSloan 03 October 2010 11:52:08PM 15 points [-]

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

What does this mean? What is the difference between saying "What we call consciousness/self-awareness is just a side-effect of brain processes", which is pretty obviously true and saying that they're meaningless side effects?

Comment author: wedrifid 04 October 2010 04:43:33AM *  10 points [-]

Upvoted for 'not even being wrong'.

Comment author: NihilCredo 03 October 2010 11:39:56PM 2 points [-]

Could you expand a little on this?

Comment author: erratio 03 October 2010 11:59:14PM 5 points [-]

Sure. Here's a version of the analogy that first got me thinking about it:

If I turn on a lamp at night, it sheds both heat and light. But I wouldn't say that the point of a lamp is to produce heat, nor that the amount of heat it does or doesn't produce is relevant to its useful light-shedding properties. In the same way, consciousness is not the point of the brain and doesn't do much for us. There's a fair amount of cogsci literature suggesting that we have little if any conscious control over our actions, which reinforces this opinion. But I like feeling responsible for my actions, even if it is just an illusion, hence the low probability assignment even though it feels intuitively correct to me.

Comment author: Perplexed 05 October 2010 12:48:52AM 3 points [-]

(I'm not sure why I pushed the button to reply, but here I am so I guess I'll just make something up to cover my confusion.)

Do you also believe that we use language - speaking, writing, listening, reading, reasoning, doing arithmetic calculations, etc. - without using our consciousness?

Comment author: MattMahoney 26 April 2011 04:29:04PM 23 points [-]

There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).

Comment author: wedrifid 26 April 2011 05:57:08PM 4 points [-]

How do the votes work in this game again? "Upvote for insane", right?

Comment author: James_Miller 04 October 2010 03:34:46AM 42 points [-]

Within five years the Chinese government will have embarked on a major eugenics program designed to mass produce super-geniuses. (40%)

Comment author: Pavitra 04 October 2010 07:11:03AM 16 points [-]

I think 40% is about right for China to do something about that unlikely-sounding in the next five years. The specificity of it being that particular thing is burdensome, though; the probability is much lower than the plausibility. Upvoted.

Comment author: JoshuaZ 04 October 2010 03:38:03AM *  3 points [-]

Upvoting. If you had said 10 years or 15 years I'd find this much more plausible. But I'm very curious to hear your explanation.

Comment author: James_Miller 04 October 2010 03:58:55AM 5 points [-]

I wrote about it here:

http://www.ideasinactiontv.com/tcs_daily/2007/10/a-thousand-chinese-einsteins-every-year.html

Once we have identified genes that play a key role in intelligence, eugenics through massive embryo selection has a good chance of producing lots of super-geniuses, especially if you are willing to tolerate a high "error rate." The Chinese are actively looking for the genetic keys to intelligence. (See http://vladtepesblog.com/?p=24064) The Chinese have a long pro-eugenics history (See Imperfect Conceptions by Frank Dikötter) and I suspect have a plan to implement a serious eugenics program as soon as it becomes practical, which will likely be within the next five years.

Comment author: JoshuaZ 04 October 2010 04:12:07AM 3 points [-]

I think the main point of disagreement is the estimate that such a program would be practical in five years (hence my longer-term estimate). My impression is that actual studies of the genetic roots of intelligence are progressing but at a fairly slow pace. I'd give a much lower than 40% chance that we'll have that good an understanding in five years.

Comment author: Jack 31 October 2010 08:26:24AM 2 points [-]

Can you specify what "major" means? I would be shocked if the government wasn't already pairing high-IQ individuals like they do with very tall people to breed basketball players.

Comment author: mattnewport 03 October 2010 08:21:27PM 42 points [-]
  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).

Comment author: wedrifid 04 October 2010 04:38:46AM 3 points [-]

I want to upvote each of these points a dozen times. Then another few for the first.

A Singleton AI is not a stable equilibrium

It's the most stable equilibrium I can conceive of, i.e. more stable than if all evidence of life was obliterated from the universe.

Comment author: mattnewport 04 October 2010 04:53:52AM 2 points [-]

I guess I'm playing the game right then :)

I'm curious, do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Comment author: Mass_Driver 06 October 2010 05:59:03AM 3 points [-]

Funny you should mention it; that's exactly what I was thinking. I have a friend (also named matt, incidentally) who I strongly believe is guilty of motivated cognition about the desirability of a singleton AI (he thinks it is likely, and therefore is biased toward thinking it would be good) and so I leaped naturally to the ad hominem attack you level against yourself. :-)

Comment author: Tenek 04 October 2010 03:29:46PM 37 points [-]

The pinnacle of cryonics technology will be a time machine that can at the very least, take a snapshot of someone before they died and reconstitute them in the future. I have three living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out. (50%)

Comment author: Will_Newsome 06 October 2010 06:58:59AM 2 points [-]

This seems reasonable with the help of FAI, though I doubt CEV would do it; or are you thinking of possible non-FAI technologies?

Comment author: MrShaggy 08 October 2010 05:02:41AM 22 points [-]

Eating lots of bacon fat and sour cream can reverse heart disease. Very confident (>95%).

Comment author: RomanDavis 17 December 2010 10:58:45PM 2 points [-]

Downvoted. I've seen the evidence, too.

Comment author: MrShaggy 24 December 2010 03:43:52AM 2 points [-]

Downvoted means you agree (on this thread), correct? If so, I've wanted to see a post on rationality and nutrition for a while (on the benefits of high-animal fat diet for health and the rationality lessons behind why so many demonize that and so few know it).

Comment author: JGWeissman 08 October 2010 05:13:28AM 2 points [-]

You have to actually think your degree of belief is rational.

I doubt you are following this rule.

Comment author: MrShaggy 09 October 2010 06:09:36AM *  3 points [-]

I was worried people would think that, but if I posted links to present evidence, I ran the risk of convincing them so they wouldn't vote it up! All I've eaten in the past three weeks is: pork belly, butter, egg yolks (and a few whites), cheese, sour cream (like a tub every three days), ground beef, bacon fat (saved from cooking bacon) and such. Now, that's no proof about the medical claim, but I hope it's an indication that I'm not just bullshitting. But for a few links:

  • http://www.ncbi.nlm.nih.gov/pubmed/19179058 -- on prevention of heart disease in humans (the K2 in question is virtually found only in animal fats and meats, see http://www.westonaprice.org/abcs-of-nutrition/175-x-factor-is-vitamin-k2.html#fig4)
  • http://wholehealthsource.blogspot.com/2008/11/can-vitamin-k2-reverse-arterial.html -- shows reversal in rat studies from K2
  • http://trackyourplaque.com/ -- a clinic that uses K2 among other things to reverse heart disease

Note that I am not trying to construct a rational argument but to convince people that I do hold this belief. I do think a rational argument can be constructed, but this is not it.

Comment author: jkaufman 14 September 2011 06:44:46PM 3 points [-]

This was about a year ago: do you still hold this belief? Has eating like you described worked out?

Comment author: Eugine_Nier 03 October 2010 10:30:58PM 28 points [-]

The many worlds interpretation of Quantum Mechanics is false in the strong sense that the correct theory of everything will incorporate wave-function collapse as a natural part of itself. ~40%

Comment author: Will_Newsome 03 October 2010 03:01:34AM *  55 points [-]

This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.

We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.

(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)

Comment author: Will_Newsome 03 October 2010 06:11:06AM *  10 points [-]

I am shocked that more people believe in a 95% chance of advanced flying saucers than a 99.5% chance of not being in 'basement reality'. Really?! I still think all of you upvoters are irrational! Irrational I say!

Comment author: Nick_Tarleton 05 October 2010 08:16:38AM *  3 points [-]

99.5%

I'm surprised to hear you say this. Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

Comment author: Will_Newsome 05 October 2010 10:48:02PM 2 points [-]

Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

That is a good question. I feel like asking 'in what direction would structural uncertainty likely bend my thoughts?' leads me to think, from past trends, 'towards the world being bigger, weirder, and more complex than I'd reckoned'. This seems to push higher than 99.5%. If you keep piling on structural uncertainty, like if a lot of things I've learned since becoming a rationalist and hanging out at SIAI become unlearned, then this trend might be changed to a more scientific trend of 'towards the world being bigger, less weird, and simpler than I'd reckoned'. This would push towards lower than 99.5%.

What are your thoughts? I realize that probabilities aren't meaningful here, but they're worth naively talking about, I think. Before you consider what you can do decision theoretically you have to think about how much of you is in the hands of someone else, and what their goals might be, and whether or not you can go meta by appeasing those goals instead of your own and the like. (This is getting vaguely crazy, but I don't think that the craziness has warped my thinking too much.) Thus thinking about 'how much measure do I actually affect with these actions' is worth considering.

Comment author: LucasSloan 03 October 2010 07:09:48AM 3 points [-]

What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation" in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation" in which case I'm not sure what it means to assign a probability to the statement.

Comment author: Mass_Driver 03 October 2010 05:14:07AM 4 points [-]

Propositions about the ultimate nature of reality should never be assigned probability greater than 90% by organic humans, because we don't have any meaningful capabilities for experimentation or testing.

Comment author: Jonathan_Graehl 03 October 2010 07:38:25AM 2 points [-]

Yep. Over-reliance on anthropic arguments IMO.

Comment author: Will_Newsome 03 October 2010 08:15:21AM *  2 points [-]

Huh, querying my reasons for thinking 99.5% is reasonable, few are related to anthropics. Most of it is antiprediction about the various implications of a big universe, as well as the antiprediction that we live in such a big universe.

(ETA: edited out 'if any', I do indeed have a few arguments from anthropics, but not in the sense of typical anthropic reasoning, and none that can be easily shared or explained. I know that sounds bad. Oh well.)

Comment author: Will_Newsome 03 October 2010 05:16:01AM 2 points [-]

Pah! Real Bayesians don't need experiment or testing; Bayes transcends the epistemological realm of mere Science. We have way more than enough data to make very strong guesses.

Comment author: AlephNeil 07 October 2010 07:35:24AM *  2 points [-]

If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything. Even assuming it does, Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect.

On the other hand, if 'living in a simulation' is restricted to those scenarios where there is a two-way interaction between beings 'inside' and 'outside' the simulation then surely everything we know about science - the uniformity and universality of physical laws - suggests that this is false. At least, it wouldn't merit 99.5% confidence. (The counterarguments are essentially the same as those against the existence of a God who intervenes.)

Comment author: Will_Newsome 07 October 2010 10:28:34AM *  3 points [-]

If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything.

It's a nontrivial philosophical question whether 'means anything' means anything here. I would think 'means anything' should mean 'has decision theoretic significance'. In which case knowing that you're in a simulation could mean a lot.

First off, even if the simulators don't intervene, we still intervene on the simulators just by virtue of our existence. Decision theoretically it's still fair game, unless our utility function is bounded in a really contrived and inelegant way.

(Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).)

[S]urely everything we know about science - the uniformity and universality of physical laws - suggests that this is false.

What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?

Comment author: dilaudid 13 October 2010 07:40:08AM *  19 points [-]

There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)

Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.

Comment author: Relsqui 13 October 2010 07:54:40AM *  10 points [-]

I have eight computers here with 200 MHz processors and 256MB of RAM each. Thus, it would not benefit me to acquire a computer with a 1.6GHz processor and 2GB of RAM.

(I agree with your premise, but not your conclusion.)

Comment author: RichardKennaway 13 October 2010 07:43:36AM 3 points [-]

Did you have this in mind? Cognitive Surplus.

Comment author: homunq 14 October 2010 02:09:03AM 13 points [-]

The most advanced computer that it is possible to build with the matter and energy budget of Earth would not be capable of simulating a billion humans and their environment, such that they would be unable to distinguish their life from reality (20%). It would not be capable of adding any significant measure to their experience, given MWI. (80%, which is obscenely high for an assertion of impossibility about which we have only speculation.) Any superintelligent AIs which the future holds will spend a small fraction of their cycles on non-heuristic (self-conscious) simulation of intelligent life. (Almost meaningless without a lot of defining the measure, but ignoring that, I'll go with 60%)

NOT FOR SCORING: I have similarly weakly-skeptical views about cryonics, the imminence and speed of development/self-development of AI, how much longer Moore's law will continue, and other topics in the vaguely "singularitarian" cluster. Most of these views are probably not as out of the LW mainstream as it would appear, so I doubt I'd get more than a dozen or so karma out of any of them.

I also think that there are people cheating here, getting loads of karma for saying plausibly silly things on purpose. I didn't use this as my contrarian belief, because I suspect most LWers would agree that there are at least some cheaters among the top comments here.

Comment author: MattMahoney 26 April 2011 04:01:43PM 2 points [-]

I disagree because a simulation could program you to believe the world was real and believe it was more complex than it actually was. Upvoted for underconfidence.

Comment author: WrongBot 04 October 2010 08:55:41AM 38 points [-]

There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)

Comment author: JenniferRM 04 October 2010 10:10:42PM 6 points [-]

If I'm interpreting the terms charitably, I think I put this more like 70%... which seems like a big enough numerical spread to count as disagreement -- so upvoted!

My arguments here grow out of expectations about evolution, watching chickens interact with each other, rent seeking vs gains from trade (and game theory generally), Hobbes's Leviathan, and personal musings about Fukuyama's End Of History extrapolated into transhuman contexts, and more ideas in this vein.

It is quite likely that experiments to determine the contents of morality would themselves be unethical to carry out... but given arbitrary computing resources and no ethical constraints, I can imagine designing experiments about objective morality that would either shed light on its contents or else give evidence that no true theory exists which meets generally accepted criteria for a "theory of morality".

But even then, being able to generate evidence about the absence of an objective object level "theory of morality" would itself seem to offer a strategy for taking a universally acceptable position on the general subject... which still seems to make this an area where objective and universal methods can provide moral insights. This dodge is friendly towards ideas in Nagel's "Last Word": "If we think at all, we must think of ourselves, individually and collectively, as submitting to the order of reasons rather than creating it."

Comment author: jimrandomh 04 October 2010 01:02:38PM *  5 points [-]

This probably isn't what you had in mind, but any single complete human brain is a (or contains a) morality, and it's objectively real.

Comment author: WrongBot 04 October 2010 04:29:36PM 2 points [-]

Indeed, that was not at all what I meant.

Comment author: Will_Newsome 04 October 2010 10:20:02PM 2 points [-]

Does the morality apply to paperclippers? Babyeaters?

Comment author: nazgulnarsil 04 October 2010 07:09:13AM *  31 points [-]

the joint stock corporation is the best* system of peacefully organizing humans to achieve goals. the closer governmental structure conforms to a joint-stock system, the more peaceful and prosperous it will become (barring getting nuked by a jealous democracy). (99%)

*that humans have invented so far

Comment author: Mass_Driver 06 October 2010 05:55:18AM 5 points [-]

The proposition strikes me as either circular or wrong, depending on your definitions of "peaceful" and "prosperous."

If by "peaceful" you mean "devoid of violence," and by "violence" you essentially mean "transfers of wealth that are contrary to just laws," and by "just laws" you mean "laws that honor private property rights above all else," then you should not be surprised if joint stock corporations are the most peaceful entities the world has seen so far, because joint stock corporations are dependent on private property rights for their creation and legitimacy.

If by "prosperous" you mean "full of the kind of wealth that can be reported on an objective balance sheet," and if by "objective balance sheet" you mean "an accounting that will satisfy a plurality of diverse, decentralized and marginally involved investors," then you should likewise not be surprised if joint stock corporations increase prosperity, because joint stock corporations are designed so as to maximize just this sort of prosperity.

Unfortunately, they do it by offloading negative externalities in the form of pollution, alienation, lower wages, censored speech, and cyclical instability of investments onto individual people.

When your 'goals' are the lowest common denominator of materialistic consumption, joint stock corporations might be unbeatable. If your goals include providing a social safety net, education, immunizations, a free marketplace of ideas, biodiversity, and clean air, you might want to consider using a liberal democracy.

Using the most charitable definitions I can think of for your proposition, my estimate for the probability that a joint-stock system would best achieve a fair and honest mix of humanity's crasser and nobler goals is somewhere around 15%, and so I'm upvoting you for overconfidence.

Comment author: blogospheroid 06 October 2010 11:18:49AM 3 points [-]

Coming from the angle of competition in governance, I think you might be mixing up a lot of stuff. A joint stock corporation which is sovereign is trying to compete in the wider world for customers, i.e. willing taxpayers.

If the people desire the values you have mentioned then the joint-stock government will try to provide those cost effectively.

Clean Air and Immunizations will almost certainly be on the agenda of a city government.

Biodiversity will be important to a government which includes forests in its assets and wants to sustainably maintain the same.

A free marketplace of ideas, free education and social safety nets would purely be determined by the market for people. Is it a value important enough that people would not come to your country and would go to another? If it is, then the joint stock government would try to provide the same. If not, then they wouldn't.

Comment author: wedrifid 06 October 2010 11:30:56AM *  4 points [-]

All of this makes sense in principle.

(I'm assuming you're not thinking that any of it would actually work in practice with either humans or ideal rational agents, right?)

Comment author: blogospheroid 05 October 2010 04:57:41AM 2 points [-]

Or how I would call it, no representation without taxation. Those who contribute equity to society rule it. Everyone else contracts with the corporate in some way or another.

Comment author: Kevin 03 October 2010 07:43:11AM *  34 points [-]

It does not all add up to normality. We are living in a weird universe. (75%)

Comment author: Eugine_Nier 03 October 2010 07:53:59AM 5 points [-]

Please specify what you mean by a weird universe.

Comment author: Kevin 03 October 2010 08:13:53AM 5 points [-]

We are living in a Fun Theory universe where we find ourselves as individual or aggregate fun theoretic agents, or something else really bizarre that is not explained by naive Less Wrong rationality, such as multiversal agents playing with lots of humanity's measure.

Comment author: [deleted] 08 October 2010 04:54:18AM 3 points [-]

The more I hear about this the more intrigued I get. Could someone with a strong belief in this hypothesis write a post about it? Or at the very least throw out hints about how you updated in this direction?

Comment author: Interpolate 03 October 2010 11:20:26AM *  4 points [-]

It does not all add up to normality. We are living in a weird universe. (75%)

My initial reaction was that this is not a statement of belief but one of opinion, and to think like reality.

We are living in a Fun Theory universe where we find ourselves as individual or aggregate fun theoretic agents, or something else really bizarre that is not explained by naive Less Wrong rationality, such as multiversal agents playing with lots of humanity's measure.

I'm still not entirely sure what you mean (further elaboration would be very welcome), but going by a naive understanding I upvoted your comment based on the principle of Occam's Razor - whatever your reasons for believing this (presumably perceived inconsistencies, paradoxes etc. in the observable world, physics etc.), I doubt your conceived "weird" universe would be the simplest explanation. Additionally, that conceived weird universe, in addition to lacking epistemic/empirical grounding, begs for more explanation than the understanding (or lack thereof) of the universe/reality that's more or less shared by current scientific consensus.

If I'm understanding correctly, your argument for the existence of a "weird universe" is analogous to an argument for the existence of God (or the supernatural, for that matter): where by introducing some cosmic force beyond reason and empiricism, we eliminate the problem of there being phenomena which can't be explained by it.

Comment author: Risto_Saarelma 03 October 2010 10:10:40AM *  4 points [-]

Would "Fortean phenomena really do occur, and some type of anthropic effect keeps them from being verifiable by scientific observers" fit under this statement?

Comment author: Will_Newsome 03 October 2010 07:50:01AM *  2 points [-]

Downvoted in agreement (I happen to know generally what Kevin's talking about here, but it's really hard to concisely explain the intuition).

Comment author: [deleted] 13 April 2012 11:48:23AM *  10 points [-]

I believe that the universe exists tautologically as a mathematical entity and that from the complete mathematical description of the universe every physical law can be derived, essentially erasing the distinction of map and territory. Roughly akin to the Tegmark 4 hypothesis, and I have some very intuitively obvious arguments for this which I will post as a top-level article at some point. Virtual certainty (99.9%).

Comment author: Zetetic 17 April 2012 12:39:44AM *  2 points [-]

essentially erasing the distinction of map and territory

This idea has been implied before and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.

In more detail:

Firstly, even if you take some sort of Platonic view where we have access to all the math, you still have to properly calibrate your map to figure out what part of the territory you're in. In this case you could think of calibrating your map as applying an appropriate automorphism, so the map/territory distinction is not dissolved.

Second, the first view is wrong, because human brains do not contain or have access to anything approaching a complete mathematical description of the level 4 multiverse. At best a brain will contain a mapping of a very small part of the territory in pretty good detail, and also a relatively vague mapping that is much broader. Brains are not logically omniscient; even given a complete mathematical description of the universe, the derivations are not all going to be accessible to us.

So the map territory distinction is not dissolved, and in particular you don't somehow overcome the mind projection fallacy, which is a practical (rather than philosophical) issue that cannot be explained away by adopting a shiny new ontological perspective.

Comment author: [deleted] 03 October 2010 03:00:00AM *  36 points [-]

I think that there are better-than-placebo methods for causing significant fat loss. (60%)

ETA: apparently I need to clarify.

It is way more likely than 60% that gastric bypass surgery, liposuction, starvation, and meth will cause fat loss. I am not talking about that. I am talking about healthy diet and exercise. Can most people who want to lose weight do that deliberately, through diet and exercise? I think it's likely but not certain.

Comment author: magfrump 03 October 2010 04:30:19AM 24 points [-]

voted up because 60% seems WAAAAAYYYY underconfident to me.

Comment author: Eugine_Nier 03 October 2010 04:40:39AM 4 points [-]

Now that we're up-voting underconfidence I changed my vote.

Comment author: magfrump 03 October 2010 04:56:43AM 2 points [-]

From the OP:

Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.

Comment author: [deleted] 03 October 2010 05:57:46AM 3 points [-]

shoot... I'm just scared to bet, is all. You can tell I'm no fun at Casino Night.

Comment author: Will_Newsome 03 October 2010 06:07:49AM 5 points [-]

Ah, but betting for a proposition is equivalent to betting against its opposite. Why are you so certain that there are no better-than-placebo methods for causing significant fat loss?

But if you do change your mind, please don't change the original, as then everyone's comments would be irrelevant.

Comment author: Jonathan_Graehl 03 October 2010 07:43:14AM 4 points [-]

Absolutely right. This is an important point that many people miss. If you're uncertain about your estimated probability, or even merely risk averse, then you may want to take neither side of the implied bet. Fine, but at least figure out some odds where you feel like you should have an indifferent expectation.

Comment author: Will_Newsome 03 October 2010 03:03:50AM *  2 points [-]

Voted down for agreement! (Liposuction... do you mean dietary methods? I'd still agree with you though.)

Edit: On reflection, 60% does seem too low. Changed to upvote.

Comment author: knb 04 October 2010 10:39:24PM 16 points [-]

Life on earth was seeded, accidentally or on purpose, from outer space.

Comment author: Perplexed 03 October 2010 04:49:29AM 21 points [-]

Unless you are familiar with the work of a German patent attorney named Gunter Wachtershauser, just about everything you have read about the origin of life on earth is wrong. More specifically, there was no "prebiotic soup" providing organic nutrient molecules to the first cells or proto-cells, there was no RNA world in which self-replicating molecules evolved into cells, the Miller experiment is a red herring and the chemical processes it deals with never happened on earth until Miller came along. Life didn't invent proteins for a long time after life first originated. 500 million years or so. About as long as the time from the "Cambrian explosion" to us.

I'm not saying Wachtershauser got it all right. But I am saying that everyone else except people inspired by Wachtershauser definitely got it all wrong. (70%)

Comment author: khafra 04 October 2010 02:50:13PM 40 points [-]

Meh. What are the chances of some Germanic guy sitting around looking at patents all day coming up with a theory that revolutionizes some field of science?

Comment author: JohannesDahlstrom 03 October 2010 10:29:52PM 3 points [-]

You make the "metabolism first" school of thought sound like a minority contrarian position to the mainstream "genes first" hypothesis. I was under the impression that they were simply competing hypotheses with the jury being still out on the big question. That's how they presented the issue in my astrobiology class, anyway.

Comment author: andrewbreese 14 October 2010 08:22:54PM 12 points [-]

Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.

Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.

There are world-changing status-move tricks seen in recent history that no one of consequence uses today, and not because they wouldn't work. (88%) Top-of-the-First-World moderns should unearth, update & reapply lost status moves for managing much of the world. (74%) Wealthy, powerful rationalists should WIN! Just as other First Worlders should not retard FAI, so the developing world should not fester, struggle, agitate in ways that seriously increase existential risks.

Comment author: dfranke 13 October 2010 12:55:03PM 15 points [-]

Nothing that modern scientists are trained to regard as acceptable scientific evidence can ever provide convincing support for any theory which accurately and satisfactorily explains the nature of consciousness.

Comment author: Wrongnesslessness 13 April 2012 05:02:12PM 6 points [-]

All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

Comment author: Incorrect 13 April 2012 07:01:32PM 7 points [-]

All existence is intrinsically meaningless

I'm trying to figure out what this statement means. What would the universe look like if it were false?

Comment author: TheOtherDave 13 April 2012 08:12:54PM 6 points [-]

In context, I took it to predict something like "Above a certain limit, as a system becomes more intelligent and thus more able to discern the true nature of existence, it will become less able to motivate itself to achieve goals."

Comment author: Locaha 21 January 2014 05:09:27PM 2 points [-]

I'm trying to figure out what this statement means.

You can't. We live in an intrinsically meaningless universe, where all statements are intrinsically meaningless. :-)

Comment author: Eugine_Nier 03 October 2010 03:27:31AM 26 points [-]

Religion is a net positive force in society. Or to put it another way religious memes, (particularly ones that have survived for a long time) are more symbiotic than parasitic. Probably true (70%).

Comment author: orthonormal 04 October 2010 02:26:32AM *  8 points [-]

If you changed "is" to "has been", I'd downvote you for agreement. But as stated, I'm upvoting you because I put it at about 10%.

Comment author: Eugine_Nier 04 October 2010 02:35:45AM 4 points [-]

I'd be curious to know when you think the crossover point was.

Comment author: orthonormal 04 October 2010 02:55:55AM *  12 points [-]

Around the time of J. S. Mill, I think. The Industrial Revolution helped crystallize an elite political and academic movement which had the germs of scientific and quantitative thinking; but this movement has been far too busy fighting for its life each time it conflicts with religious mores, instead of being able to examine and improve itself. It should have developed far more productively by now if atheism had really caught on in Victorian England.

Anyway, I'm not as confident of the above as I am that we've passed the crossover point now. (Aside from the obvious political effects, the persistence of religion creates mental antibodies in atheists that make them extremely wary of anything reminiscent of some aspect of religion; this too is a source of bias that wouldn't exist were it not for religion's ubiquity.)

Comment author: Perplexed 03 October 2010 05:00:01AM 3 points [-]

I think this is ambiguous. It might be interpreted as

  • Christianity is good for its believers - they are better off to believe than to be atheist.
  • Christianity is good for Christendom - it is a positive force for majority Christian societies, as compared to if those societies were mostly atheist.
  • Christianity makes the world a better place, as compared to if all those people were non-believers in any religion.

Which of these do you mean?

Comment author: Jayson_Virissimo 03 October 2010 06:23:50PM 3 points [-]

Christianity makes the world a better place, as compared to if all those people were non-believers in any religion.

I think a better question is "would the world a better place if people who are currently Christian became their next most likely alternative belief system?". I'm going to go out on a limb here and speculate that if the median Christian lost his faith he wouldn't become a rational-empiricist.

Comment author: Eugine_Nier 03 October 2010 05:15:53AM *  3 points [-]

Christianity is good for its believers - they are better off to believe than to be atheist.

I'd change this one to:

  • Christianity is good for most of its believers - they are better off to believe than to be atheist.

~62%

Christianity is good for Christendom - it is a positive force for majority Christian societies, as compared to if those societies were mostly atheist.

~69%

Christianity makes the world a better place, as compared to if all those people were non-believers in any religion.

~58%

Edit: I case it wasn't clear the 70% refers to the disjunction of the above 3.

Comment author: Vladimir_M 03 October 2010 10:45:08AM *  26 points [-]

Although lots of people here consider it a hallmark of "rationality," assigning numerical probabilities to common-sense conclusions and beliefs is meaningless, except perhaps as a vague figure of speech. (Absolutely certain.)

Comment author: novalis 03 October 2010 10:42:43PM 8 points [-]

I want to vote you down in agreement, but I don't have enough karma.

Comment author: Alicorn 03 October 2010 02:23:43PM 8 points [-]

(Absolutely certain.)

I'm not sure whether to chide you or giggle at the self-reference. I suspect, though, that "absolutely certain" is not a confidence level.

Comment author: komponisto 03 October 2010 02:56:45PM 5 points [-]

Upvoted. Definitely can't back you on this one.

Are you sure you're not just worried about poor calibration?

Comment author: wedrifid 03 October 2010 03:02:16PM *  3 points [-]

Another upvote. That's crazy talk.

Comment author: Vladimir_M 03 October 2010 07:45:28PM *  2 points [-]

komponisto:

Are you sure you're not just worried about poor calibration?

No, my objection is fundamental. I provide a brief explanation in the comment I linked to, but I'll restate it here briefly.

The problem is that the algorithms that your brain uses to perform common-sense reasoning are not transparent to your conscious mind, which has access only to their final output. This output does not provide a numerical probability estimate, but only a rough and vague feeling of certainty. Yet in most situations, the output of your common sense is all you have. There are very few interesting things you can reason about by performing mathematically rigorous probability calculations (and even when you can, you still have to use common sense to establish the correspondence between the mathematical model and reality).

Therefore, there are only two ways in which you can arrive at a numerical probability estimate for a common-sense belief:

  1. Translate your vague feeling of certainly into a number in some arbitrary manner. This however makes the number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.

  2. Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Honestly, all this seems entirely obvious to me. I would be curious to see which points in the above reasoning are supposed to be even controversial, let alone outright false.

Comment author: komponisto 03 October 2010 10:33:09PM *  7 points [-]

Translate your vague feeling of certainly into a number in some arbitrary manner. This however makes this number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.

Disagree here. Numbers get people to convey more information about their beliefs. It doesn't matter whether you actually use numbers, or do something similar (and equivalent) like systematize the use of vague expressions. I'd be just as happy if people used a "five-star" system, or even in many cases if they just compared the belief in question to other beliefs used as reference-points.

Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Disagree here also. The probability calculation you present should represent your brain's reasoning, as revealed by introspection. This is not a perfect process, and may be subject to later refinement. But it is definitely meaningful.

For example, consider my current probability estimate of 10^(-3) that Amanda Knox killed her roommate. On my current analysis, this is obtained as follows: I start with a prior of 10^(-4) (from a general homicide rate of about 10^(-3), plus reasoning that Knox is demographically an order of magnitude less likely to kill than the typical person; the figure happens to match my intuitive sense that I'd have to meet about 10,000 similar people before I'd have any fear for my life). Then all the evidence in the case raises the probability by about an order of magnitude at most, yielding 10^(-3).

Now, this is just a rough order-of-magnitude argument. But it's already much more meaningful and useful than my just saying "I don't think she did it". It provides a way of breaking down the reasoning, so that points of disagreement can be precisely identified in an efficient manner. (If you happened to disagree, the next step would be to say something like "but surely evidence X alone raises the odds by more than a factor of ten", and then we'd iterate the process specifically on X rather than the original proposition.)

It's a very useful technique for keeping debates informative, and preventing them from turning into (pure) status signaling contests.

Comment author: mattnewport 03 October 2010 08:00:14PM 4 points [-]

It seems plausible to me that routinely assigning numerical probabilities to predictions/beliefs that can be tested and tracking these over time to see how accurate your probabilities are (calibration) can lead to a better ability to reliably translate vague feelings of certainty into numerical probabilities.

There are practical benefits to developing this ability. I would speculate that successful bookies and professional sports bettors are better at this than average for example and that this is an ability they have developed through practice and experience. Anyone who has to make decisions under uncertainty seems like they could benefit from a well developed ability to assign well calibrated numerical probability estimates to vague feelings of certainty. Investors, managers, engineers and others who must deal with uncertainty on a regular basis would surely find this ability useful.

I think a certain degree of skepticism is justified regarding the utility of various specific methods for developing this ability (things like predictionbook.com don't yet have hard evidence for their effectiveness) but it certainly seems like it is a useful ability to have and so there are good reasons to experiment with various methods that promise to improve calibration.

Comment author: Perplexed 04 October 2010 02:27:15PM 7 points [-]

assigning numerical probabilities to common-sense conclusions and beliefs is meaningless

It is risky to deprecate something as "meaningless" - a ritual, a practice, a word, an idiom. Risky because the actual meaning may be something very different than you imagine. That seems to be the case here with attaching numbers to subjective probabilities.

The meaning of attaching a number to something lies in how that number may be used to generate a second number that can then be attached to something else. There is no point in providing a number to associate with the variable 'm' (i.e. that number is meaningless) unless you simultaneously provide a number to associate with the variable 'f' and then plug both into "f=ma" to generate a third number to associate with the variable 'a', an number which you can test empirically.

Similarly, a single isolated subjective probability estimate may seem somewhat meaningless in isolation, but if you place it into a context with enough related subjective probability estimates and empirically measured frequencies, then all those probabilities and frequencies can be combined and compared using the standard formulas of Bayesian probability:

  • P(~A) = 1 - P(A)
  • P(B|A)*P(A)=P(A&B)=P(A|B)*P(B)

So, if you want to deprecate as "meaningless" my estimate that the Democrats have a 40% chance to maintain their House majority in the next election, go ahead. But you cannot then also deprecate my estimate that the Republicans have a 70% of reaching a House majority. Because the conjunction of those two probability estimates is not meaningless. It is quite respectably false.

Comment author: Vladimir_M 04 October 2010 08:15:43PM *  2 points [-]

I think you're not drawing a clear enough distinction between two different things, namely the mathematical relationships between numbers, and the correspondence between numbers and reality.

If you ask an astronomer what is the mass of some asteroid, he will presumably give you a number with a few significant digits and and uncertainty interval. If you ask him to justify this number, he will be able to point to some observations that are incompatible with the assumption that the mass is outside this interval, which follows from a mathematical argument based on our best knowledge of physics. If you ask for more significant digits, he will say that we don't know (and that beyond a certain accuracy, the question doesn't even make sense, since it's constantly losing and gathering small bits of mass). That's what it means for a number to be rigorously justified.

But now imagine that I make an uneducated guess of how heavy this asteroid might be, based on no actual astronomical observation. I do of course know that it must be heavier than a few tons or otherwise it wouldn't be noticeable from Earth as an identifiable object, and that it must be lighter than 10^20 or so tons since that's roughly the range where smaller planets are, but it's clearly nonsensical for me to express that guess with even one digit of precision. Yet I could insist on a precise guess, and claim that it's "meaningful" in a way analogous to your above justification of subjective probability estimates, by deriving various mathematical and physical implications of this fact. If you deprecate my claim that its mass is 4.5237 x 10^15kg, then you cannot also deprecate my claim that it is a sphere of radius 1km and average density 1000kg/m^3, since the conjunction of these claims is by the sheer force of mathematics false.

Therefore, I don't see how you can argue that a number is meaningful by merely noting its relationships with other numbers that follow from pure mathematics. Or am I missing something with this analogy?

Comment author: prase 05 October 2010 02:31:33PM *  4 points [-]

I have read most of the responses and still am not sure whether to upvote or not. I doubt among several (possibly overlapping) interpretations of your statement. Could you tell to what extent the following interpretations really reflect what you think?

  1. Confession of frequentism. Only sensible numerical probabilities are those related to frequencies, i.e. either frequencies of outcomes of repeated experiments, or probabilities derived from there. (Creative drawing of reference-class boundaries may be permitted.) Especially, prior probabilities are meaningless.
  2. Any sensible numbers must be produced using procedures that ultimately don't include any numerical parameters (maybe except small integers like 2,3,4). Any number which isn't a result of such a procedure is labeled arbitrary, and therefore meaningless. (Observation and measurement, of course, do count as permitted procedures. Admittedly arbitrary steps, like choosing units of measurement, are also permitted.)
  3. Degrees of confidence shall be expressed without reflexive thinking about them. Trying to establish a fixed scale of confidence levels (like impossible - very unlikely - unlikely - possible - likely - very likely - almost certain - certain), or actively trying to compare degrees of confidence in different beliefs is cheating, since such scales can be then converted into numbers using a non-numerical procedure.
  4. The question of whether somebody is well calibrated is confused for some reason. Calibrating people has no sense. Although we may take the "almost certain" statements of a person and look at how often they are true, the resulting frequency has no sense for some reason.
  5. Unlike #3, beliefs can be ordered or classified on some scale (possibly imprecisely), but assigning numerical values brings confusing connotations and should be avoided. Alternatively said, the meaning of subjective probabilities is preserved after monotonous rescaling.
  6. Although, strictly speaking, human reasoning can be modelled as a Bayesian network where beliefs have numerical strengths, human introspection is poor at assessing their values. Declared values more likely depend on anchoring than on the real strength of the belief. Speaking about numbers actually introduces noise into reasoning.
  7. Human reasoning cannot be modelled by Bayesian inference, not even in approximation.
Comment author: Vladimir_M 05 October 2010 10:42:20PM *  3 points [-]

That’s an excellent list of questions! It will help me greatly to systematize my thinking on the topic.

Before replying to the specific items you list, perhaps I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudosicence is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight. Therefore, I believe that whenever one encounters people talking about numbers of any sort that look even slightly suspicious, they should be considered guilty until proven otherwise -- and this entire business with subjective probability estimates for common-sense beliefs doesn’t come even close to clearing that bar for me.

Now to reply to your list.


(1) Confession of frequentism. Only sensible numerical probabilities are those related to frequencies, i.e. either frequencies of outcomes of repeated experiments, or probabilities derived from there. (Creative drawing of reference-class boundaries may be permitted.) Especially, prior probabilities are meaningless.

(2) Any sensible numbers must be produced using procedures that ultimately don't include any numerical parameters (maybe except small integers like 2,3,4). Any number which isn't a result of such a procedure is labeled arbitrary, and therefore meaningless. (Observation and measurement, of course, do count as permitted procedures. Admittedly arbitrary steps, like choosing units of measurement, are also permitted.)

My answer to (1) follows from my opinion about (2).

In my view, a number that gives any information about the real world must ultimately refer, either directly or via some calculation, to something that can be measured or counted (at least in principle, perhaps using a thought-experiment). This doesn’t mean that all sensible numbers have to be derived from concrete empirical measurements; they can also follow from common-sense insight and generalization. For example, reading about Newton’s theory leads to the common-sense insight that it’s a very close approximation of reality under certain assumptions. Now, if we look at the gravity formula F=m1*m2/r^2 (in units set so that G=1), the number 2 in the denominator is not a product of any concrete measurement, but a generalization from common sense. Yet what makes it sensible is that it ultimately refers to measurable reality via a well-defined formula: measure the force between two bodies of known masses at distance r, and you’ll get log(m1*m2/F)/log(r) = 2.

Now, what can we make out of probabilities from this viewpoint? I honestly can’t think of any sensible non-frequentist answer to this question. Subjectivist Bayesian phrases such as “the degree of belief” sound to me entirely ghostlike unless this “degree” is verifiable via some frequentist practical test, at least in principle. In this sense, I do confess frequentism. (Though I don’t wish to subscribe to all the related baggage from various controversies in statistics, much of which is frankly over my head.)

(3) Degrees of confidence shall be expressed without reflexive thinking about them. Trying to establish a fixed scale of confidence levels (like impossible - very unlikely - unlikely - possible - likely - very likely - almost certain - certain), or actively trying to compare degrees of confidence in different beliefs is cheating, since such scales can be then converted into numbers using a non-numerical procedure.

That depends on the concrete problem under consideration, and on the thinker who is considering it. The thinker’s brain produces an answer alongside a more or less fuzzy feeling of confidence, and the human language has the capacity to express these feelings with about the same level of fuziness as that signal. It can be sensible to compare intuitive confidence levels, if such comparison can be put to a practical (i.e. frequentist) test. Eight ordered intuitive levels of certainty might perhaps be too much, but with, say, four levels, I could produce four lists of predictions labeled “almost impossible,” “unlikely,” “likely,” and “almost certain,” such that common-sense would tell us that, with near-certainty, those in each subsequent list would turn out to be true in ever greater proportion.

If I wish to express these probabilities as numbers, however, this is not a legitimate step unless the resulting numbers can be justified in the sense discussed above under (1) and (2). This requires justification both in the sense of defining what aspect of reality they refer to (where frequentism seems like the only answer), and guaranteeing that they will be accurate under empirical tests. If they can be so justified, then we say that the intuitive estimate is “well-calibrated.” However, calibration is usually not possible in practice, and there are only two major exceptions.

The first possible path towards accurate calibration is when the same person performs essentially the same judgment many times, and from the past performance we extract the frequency with which their brain tends to produce the right answer. If this level of accuracy remains roughly constant in time, then it makes sense to attach it as the probability to that person’s future judgments on the topic. This approach treats the relevant operations in the brain as a black box whose behavior, being roughly constant, can be subjected to such extrapolation.

The second possible path is reached when someone has a sufficient level of insight about some problem to cross the fuzzy limit between common-sense thinking and an actual scientific model. Increasingly subtle and accurate thinking about a problem can result in the construction of a mathematical model that approximates reality well enough that when applied in a shut-up-and-calculate way, it yields probability estimates that will be subsequently vindicated empirically.

(Still, deciding whether the model is applicable in some particular situation remains a common-sense problem, and the probabilities yielded by the model do not capture this uncertainty. If a well-established physical theory, applied by competent people, says that p=0.9999 for some event, common sense tells me that I should treat this event as near-certain -- and, if repeated many times, that it will come out the unlikely way very close to one in 10,000 times. On the other hand, if p=0.9999 is produced by some suspicious model that looks like it might be a product of data-dredging rather than real insight about reality, common sense tells me that the event is not at all certain. But there is no way to capture this intuitive uncertainty with a sensible number. The probabilities coming from calibration of repeated judgment are subject to analogous unquantifiable uncertainty.)

There is also a third logical possibility, namely that some people in some situations have precise enough intuitions of certaintly that they can quantify them in an accurate way, just like some people can guess what time it is with remarkable precision without looking at the clock. But I see little evidence of this occurring in reality, and even if it does, these are very rare special cases.

(4) The question of whether somebody is well calibrated is confused for some reason. Calibrating people has no sense. Although we may take the "almost certain" statements of a person and look at how often they are true, the resulting frequency has no sense for some reason.

I disagree with this, as explained above. Calibration can be done successfully in the special cases I mentioned. However, in cases where it cannot be done, which includes the great majority of the actual beliefs and conclusions made by human brains, devising numerical probabilities makes no sense.

(5) Unlike #3, beliefs can be ordered or classified on some scale (possibly imprecisely), but assigning numerical values brings confusing connotations and should be avoided. Alternatively said, the meaning subjective probabilities is preserved after monotonous rescaling.

This should be clear from the answer to (3).


[Continued in a separate comment below due to excessive length.]

Comment author: komponisto 06 October 2010 06:45:20AM *  3 points [-]

I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudosicence is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight.

I'll point out here that reversed stupidity is not intelligence, and that for every possible error, there is an opposite possible error.

In my view, if someone's numbers are wrong, that should be dealt with on the object level (e.g. "0.001 is too low", with arguments for why), rather than retreating to the meta level of "using numbers caused you to err". The perspective I come from is wanting to avoid the opposite problem, where being vague about one's beliefs allows one to get away without subjecting them to rigorous scrutiny. (This, too, by the way, is a major hallmark of pseudoscience.)

But I'll note that even as we continue to argue under opposing rhetorical banners, our disagreement on the practical issue seems to have mostly evaporated; see here for instance. You also do admit in the end that fear of poor calibration is what is underlying your discomfort with numerical probabilities:

If I wish to express these probabilities as numbers, however, this is not a legitimate step unless the resulting numbers can be justified... If they can be so justified, then we say that the intuitive estimate is “well-calibrated.” However, calibration is usually not possible in practice...

As a theoretical matter, I disagree completely with the notion that probabilities are not legitimate or meaningful unless they're well-calibrated. There is such a thing as a poorly-calibrated Bayesian; it's a perfectly coherent concept. The Bayesian view of probabilities is that they refer specifically to degrees of belief, and not anything else. We would of course like the beliefs so represented to be as accurate as possible; but they may not be in practice.

If my internal "Bayesian calculator" believes P(X) = 0.001, and X turns out to be true, I'm not made less wrong by having concealed the number, saying "I don't think X is true" instead. Less embarrassed, perhaps, but not less wrong.

Comment author: Vladimir_M 05 October 2010 10:43:07PM *  3 points [-]

[Continued from the parent comment.]

(6) Although, strictly speaking, human reasoning can be modelled as a Bayesian network where beliefs have numerical strengths, human introspection is poor at assessing their values. Declared values more likely depend on anchoring than on the real strength of the belief. Speaking about numbers actually introduces noise into reasoning.

I have revised my view about this somewhat thanks to a shrewd comment by xv15. The use of unjustified numerical probabilities can sometimes be a useful figure of speech that will convey an intuitive feeling of certainty to other people more faithfully than verbal expressions. But the important thing to note here is that the numbers in such situations are mere figures of speech, i.e. expressions that exploit various idiosyncrasies of human language and thinking to transmit hard-to-convey intuitive points via non-literal meanings. It is not legitimate to use these numbers for any other purpose.

Otherwise, I agree. Except in the above-discussed cases, subjective probabilities extracted from common-sense reasoning are at best an unnecessary addition to arguments that would be just as valid and rigorous without them. At worst, they can lead to muddled and incorrect thinking based on a false impression of accuracy, rigor, and insight where there is none, and ultimately to numerological pseudoscience.

Also, we still don’t know whether and to what extent various parts of our brains involved in common-sense reasoning approximate Bayesian networks. It may well be that some, or even all of them do, but the problem is that we cannot look at them and calculate the exact probabilities involved, and these are not available to introspection. The fallacy of radical Bayesianism that is often seen on LW is in the assumption that one can somehow work around this problem so as to meaningfully attach an explicit Bayesian procedure and a numerical probability to each judgment one makes.

Note also that even if my case turns out to be significantly weaker under scrutiny, it may still be a valid counterargument to the frequently voiced position that one can, and should, attach a numerical probability to every judgment one makes.


So, that would be a statement of my position; I’m looking forward to any comments.

Comment author: jimrandomh 05 October 2010 11:58:53PM 3 points [-]

Suppose you have two studies, each of which measures and gives a probability for the same thing. The first study has a small sample size, and a not terribly rigorous experimental procedure; the second study has a large sample size, and a more thorough procedure. When called on to make a decision, you would use the probability from the larger study. But if the large study hadn't been conducted, you wouldn't give up and act like you didn't have any probability at all; you'd use the one from the small study. You might have to do some extra sanity checks, and your results wouldn't be as reliable, but they'd still be better than if you didn't have a probability at all.

A probability assigned by common-sense reasoning is to a probability that came from a small study, as a probability from a small study is to a probability from a large study. The quality of probabilities varies continuously; you get better probabilities by conducting better studies. By saying that a probability based only on common-sense reasoning is meaningless, I think what you're really trying to do is set a minimum quality level. Since probabilities that're based on studies and calculation are generally better than probabilities that aren't, this is a useful heuristic. However, it is only that, a heuristic; probabilities based on common-sense reasoning can sometimes be quite good, and they are often the only information available anywhere (and they are, therefore, the best information). Not all common-sense-based probabilities are equal; if an expert thinks for an hour and then gives a probability, without doing any calculation, then that probability will be much better than if a layman thinks about it for thirty seconds. The best common-sense probabilities are better than the worst statistical-study probabilities; and besides, there usually aren't any relevant statistical calculations or studies to compare against.

I think what's confusing you is an intuition that if someone gives a probability, you should be able to take it as-is and start calculating with it. But suppose you had collected five large studies, and someone gave you the results of a sixth. You wouldn't take that probability as-is, you'd have to combine it with the other five studies somehow. You would only use the new probability as-is if it was significantly better (larger sample, more trustworthy procedure, etc) than the ones you already had, or you didn't have any before. Now if there are no good studies, and someone gives you a probability that came from their common-sense reasoning, you almost certainly have a comparably good probability already: your own common-sense reasoning. So you have to combine it. So in a sense, those sorts of probabilities are less meaningful - you discard them when they compete with better probabilities, or at least weight them less - but there's still a nonzero amount of meaning there.

(Aside: I've been stuck for awhile on an article I'm writing called "What Probability Requires", dealing with this same topic, and seeing you argue the other side has been extremely helpful. I think I'm unstuck now; thank you for that.)

Comment author: xv15 04 October 2010 03:56:14AM 2 points [-]

I tell you I believe X with 54% certainty. Who knows, that number could have been generated in a completely bogus way. But however I got here, this is where I am. There are bets about X that I will and won't take, and guess what, that's my cutoff probability right there. And by the way, now I have communicated to you where I am, in a way that does not further compound the error.

Meaningless is a very strong word.

In the face of such uncertainty, it could feel natural to take shelter in the idea of "inherent vagueness"...but this is reality, and we place our bets with real dollars and cents, and all the uncertainty in the world collapses to a number in the face of the expectation operator.

Comment author: Vladimir_M 04 October 2010 05:43:37AM *  2 points [-]

So why stop there? If you can justify 54%, then why not go further and calculate a dozen or two more significant digits, and stand behind them all with unshaken resolve?

Comment author: wnoise 04 October 2010 10:12:59AM 5 points [-]

You can, of course. For most situations, the effort is not worth the trade-off. But making a distinction between 1%, 25%, 50%. 75%. and 99% often is.

You can (at least formally) put error bars on the quantities that go into a Bayesian calculation. The problem, of course, is that error bars are short-hand for a distribution of possible values, and it's not obvious what a distribution of probabilities means or should mean. Everything operational about probability functions is fully captured by their full set of expectation values, so this is no different than just immediately taking the mean, right?

Well, no. The uncertainties are a higher level model that not only makes predictions, but also calibrates how much these predictions are likely to move given new data.

It seems to me that this is somewhat related to the problem of logical uncertainty.

Comment author: xv15 04 October 2010 03:09:50PM 4 points [-]

Again, meaningless is a very strong word, and it does not make your case easy. You seem to be suggesting that NO number, however imprecise, has any place here, and so you do not get to refute me by saying that I have to embrace arbitrary precision.

In any case, if you offer me some bets with more significant digits in the odds, my choices will reveal the cutoff to more significant digits. Wherever it may be, there will still be some bets I will and won't take, and the number reflects that, which means it carries very real meaning.

Now, maybe I will hold the line at 54% exactly, not feeling any gain to thinking harder about the cutoff (as it gets harder AND less important to nail down further digits). Heck, maybe on some other issue I only care to go out to the nearest 10%. But so what? There are plenty of cases where I know my common sense belief probability to within 10%. That suggests such an estimate is not meaningless.

Comment author: wedrifid 04 October 2010 06:34:01AM 3 points [-]

Or, you could slide up your arbitrary and fallacious slippery slope and end up with Shultz.

Comment author: torekp 03 October 2010 04:14:00PM 2 points [-]

Upvoted, because I think you're only probably right. And you not only stole my thunder, you made it more thunderous :(

Comment author: orthonormal 04 October 2010 02:39:47AM 2 points [-]

Um, so when Nate Silver tells us he's calculated odds of 2 in 3 that Republicans will control the house after the election, this number should be discarded as noise because it's a common-sense belief that the Republicans will gain that many seats?

Comment author: Vladimir_M 04 October 2010 05:29:34AM *  2 points [-]

Boy did I hit a hornets' nest with this one!

No, of course I didn't mean anything like that. Here is how I see this situation. Silver has a model, which is ultimately a piece of mathematics telling us that some p=0.667, and for reasons of common sense, Silver believes (assuming he's being upfront with all this) that this model closely approximates reality in such a way that p can be interpreted, with reasonable accuracy, as the probability of Republicans winning a House majority this November.

Now, when you ask someone which party is likely to win this election, this person's brain will activate some algorithm that will produce an answer along with some rough level of confidence. Someone completely ignorant about politics might answer that he has no idea, and cannot say anything with any certainty. Other people will predict different results with varying (informally expressed) confidence. Silver himself, or someone else who agrees with his model, might reply that the best answer is whatever the model says (i.e. Republicans win with p=0.667), since it is completely superior to the opaque common-sense algorithms used by the brains of non-mathy political analysts. Others will have greater or lesser confidence in the accuracy of the model, and might take its results into account, with varying weight, alongside other common-sense considerations.

Ultimately, the status of this number depends on the relation between Silver's model and reality. If you believe that the model is a vast improvement over any informal common-sense considerations in predicting election results, just like Newton's theory is a vast improvement over any common-sense considerations in predicting the motions of planets, then we're not talking about a common-sense conclusion any more. On the other hand, if you believe that the model is completely out of touch with reality, then you would discard its result as noise. Finally, if you believe that it's somewhat accurate, but still not reliably superior to common sense, you might revise its conclusion using common sense.

What you believe about Silver's model, however, is still ultimately a matter of common-sense judgment, and unless you think that you have a model so good that it should be used in a shut-up-and-calculate way, your ultimate best prediction of the election results won't come with any numerical probabilities, merely a vague feeling of how confident you are.

Comment author: wedrifid 04 October 2010 06:36:45AM 4 points [-]

What you believe about Silver's model, however, is still ultimately a matter of common-sense judgment, and unless you think that you have a model so good that it should be used in a shut-up-and-calculate way, your ultimate best prediction of the election results won't come with any numerical probabilities, merely a vague feeling of how confident you are.

Want to make a bet on that?

Comment author: Eugine_Nier 03 October 2010 10:34:12PM 18 points [-]

Conditional on this universe being a simulation, the universe doing the stimulating has laws vastly different from our own. For example, it might contain more than 3 extended-spacial dimensions, or bear a similar relation to our universe as our universe does to second life. 99.999%

Comment author: wedrifid 04 October 2010 04:45:21AM 7 points [-]

Upvoted for excessive use of nines. :)

(ie. Gross overcondidence.)

Comment author: [deleted] 11 April 2012 01:22:41PM 2 points [-]

I'm supposed to downvote if I think the probability of that is >= 99.999% and upvote otherwise? I'm upvoting, but I still the probability of that is > 90%.

Comment author: Snowyowl 04 October 2010 09:23:31PM 2 points [-]

Upvoted for disagreement. The most detailed simulations our current technology is used to create (namely, large networks of computers operating in parallel) are created for research purposes, to understand our own universe better. Galaxy/star formation, protein folding, etc. are fields where we understand enough to make a simulation but not enough that such a simulation is without value. A lot of our video games have three spatial dimensions, one temporal one, and roughly Newtonian physics. Even Second Life (which you named in your post) is designed to resemble our universe in certain aspects.

Basically, I fail to see why anyone would create such a detailed simulation if it bore absolutely no resemblance to reality. Some small differences, yes (I bet quantum mechanics works differently), but I would give a ~50% chance that, conditional on our universe being a simulation, the parent universe has 3 spatial dimensions, one temporal dimension, matter and antimatter, and something that approximates to General Relativity.

Comment author: NancyLebovitz 05 October 2010 04:45:38PM 2 points [-]

This is much less than obvious-- if the parent universe has sufficient resources, it's entirely plausible that it would include detailed simulations for fun-- art or gaming or some costly motivation that we don't have.

Comment author: Apprentice 05 October 2010 07:44:25PM 13 points [-]

The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)

Comment author: Mass_Driver 06 October 2010 05:30:14AM *  12 points [-]

Far too confident.

The typical Congressperson is decent rather than cruel, honest rather than corrupt, smart rather than dumb, and dutiful rather than selfish, but the conjunction of all four positive traits probably only occurs in about 60% of Congresspeople -- most politicians have some kind of major character flaw.

I'd put the odds that "the vast majority" of Congresspeople pass all four tests, operationalized as, say, 88% of Congresspeople, at less than 10%.

Comment author: Apprentice 06 October 2010 01:50:01PM *  7 points [-]

All right, I'll try to mount a defence.

I would be modestly surprised if any member of Congress has an IQ below 100. You just need to have a bit of smarts to get elected. Even if the seat you want is safe, i.e. repeatedly won by the same party, you likely have to win a competitive primary. To win elections you need to make speeches, answer questions, participate in debates and so on. It's hard. And you'll have opponents that are ready to pounce on every mistake you make and try make a big deal out of it. Even smart people make lots of mistakes and say stupid things when put on the spot. I doubt a person of below average intelligence even has a chance.

Even George W. Bush, who's said and done a lot of stupid things and is often considered dim for a politician, likely has an IQ above 120.

As for decency and honesty, a useful rule of thumb is that most people are good. Crooked people are certainly a significant minority but most of them don't hide their crookedness very well. And you can't be visibly crooked and still win elections. Your opponents are motivated to dig up the dirt on you.

As for honestly trying to serve their country I admit that this is a bit tricky. Congresspeople certainly have a structural incentive to put the interests of their district above that of their country. But they are not completely short-sighted and neither are their constitutents. Conditions in congressional district X are very dependent on conditions in the US as a whole. So I do think congresspeople try to honestly serve both their district and their country.

Non-corruption is again a bit tricky but here I side with Matt Yglesias and Paul Waldman:

The truth, however, is that Congress is probably less corrupt than at any point in our history. Real old-fashioned corruption, of the briefcase-full-of-cash kind, is extremely rare (though it still happens, as with William Jefferson, he of the $90,000 stuffed in the freezer).

Real old-school corruption like you have in third world countries and like you used to have more of in Congress is now very rare. There's still a real debate to be had about the role of lobbyists, campaign finance law, structural incentives and so on but that's not what I'm talking about here.

Are there still some bad apples? Definitely. But I stand by my view that the vast majority are not.

Comment author: Scott78704 06 October 2010 02:50:37PM 7 points [-]

Conflating people with politicians is an egregious category error.

Comment author: magfrump 06 October 2010 11:36:59PM 2 points [-]

If by not-corrupt you meant "would consciously and earnestly object to being offered money for the explicit purpose of pursuing a policy goal that they perceived as not in the favor of their electorate or the country" and by "above-average intelligence" you meant "IQ at least 101" then I would downvote for agreement.

But if you meant "tries to assure that their actions are in the favor of their constituents and country, and monitors their information diet to this end" and "IQ above 110 and conscientiousness above average" then I maintain my upvote.

When I think of not-corrupt I think of someone who takes care not to betray people, rather than someone who does not explicitly betray them. When I think "above average intelligence" I think of someone who regularly behaves more intelligently than most, not someone who happens to be just to the right of the bell curve.

Comment author: Vladimir_M 06 October 2010 09:32:30PM *  4 points [-]

Apprentice:

The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)

Downvoted for agreement.

However, I must add that it would be extremely fallacious to conclude from this fact that the country is being run competently and not declining or even headed for disaster. This fallacy would be based on the false assumption that the country is actually run by the politicians in practice. (I am not arguing for these pessimistic conclusions, at least not in this context, but merely that given the present structure of the political system, optimistic conclusions from the above fact are generally unwarranted.)

Comment author: simplicio 07 October 2010 11:28:44PM *  9 points [-]

The distinction between "sentient" and "non-sentient" creatures is not very meaningful. What it's like for (say) a fish to be killed, is not much different from what it's like for a human to be killed. (70%)

Our (mainstream) belief to the contrary is a self-serving and self-aggrandizing rationalization.

Comment author: RobinZ 08 October 2010 02:44:41PM 2 points [-]

Allow me to provide the obligatory complaint about (mainstream) conflation of sentience and sapience, said complaint of course being a display the former but not the latter.

Comment author: Angela 21 January 2014 03:41:13PM 3 points [-]

The hard problem of consciousness will be solved within the next decade (60%).

Comment author: Academian 04 October 2010 10:04:31PM *  11 points [-]

This comment currently (at the time of reading) has at least 10 net upvotes.

Confidence: 99%.

Comment author: Perplexed 05 October 2010 04:10:20AM 6 points [-]

You realize, of course, that your confidence level is too high. Eventually, the score should cycle between +9 and +10. Which means that the correct confidence level should be 50%.

Nonetheless, it is very cute. So, I'll upvote it for overconfidence, to say nothing of currently being wrong.

Comment author: JGWeissman 05 October 2010 05:57:59AM 4 points [-]

Once it gets to 10 points, it should be voted up for underconfidence.

Comment author: magfrump 05 October 2010 08:10:45AM 3 points [-]

Except that there's a chance that it's been downvoted by someone else that's sufficient for 99% to warrant agreement rather than a statement of underconfidence (if and only if people decide that this is true!) which would be easily broken if it got up to 11 but would be far more easily broken if the confidence was set at say, 75%.

Comment author: magfrump 08 October 2010 11:04:51PM 3 points [-]

Cycle's broken! Now upvoted for underconfidence.

Comment author: Eneasz 06 October 2010 09:14:06PM 7 points [-]

Predicated on MWI being correct, and Quantum Immortality being true:

It is most advantageous for any individual (although not necessarily for society) to take as many high-risk high-reward opportunities as possible as long as the result of failure is likely to be death. 90%

Comment author: Risto_Saarelma 07 October 2010 01:10:50PM 2 points [-]

Not sure how I should vote this. Predicated on quantum immortality being true, the assertion seems almost tautological, so that'd be a downvote. The main question to me is whether quantum immortality should be taken seriously to begin with.

However, a different assertion that says that in case MWI is correct, you should assume quantum immortality works and try to give yourself anthropic superpowers by pointing a gun to your head would make for an interesting rationality game point.

Comment author: magfrump 06 October 2010 11:24:27PM 2 points [-]

Phrased more precisely: it is most advantageous for the quantum immortalist to attempt highly unlikely, high reward activity, after making a stern precommitment to commit suicide in a fast and decisive way (decapitation?) if they don't work out.

This seems like a great reason not to trust quantum immortality.

Comment author: prase 03 October 2010 10:58:04PM 8 points [-]

Many-world interpretation of quantum physics is wrong. Reasonably certain (80%).

I suppose the MWI is an artifact of our formulation of physics, where we suppose systems can be in specific states that are indexed by several sets of observables. I think there is no such thing as a state of the physical system.

Comment author: Vladimir_M 04 October 2010 12:22:01AM 4 points [-]

prase:

I think there is no such thing as a state of the physical system.

Could you elaborate by any chance? I can't really figure out what exactly you mean by this, but I suspect it is very interesting.

Comment author: prase 04 October 2010 08:44:34PM *  7 points [-]

Disclaimer: If I had something well thought through, consistent, not vague and well supported, I would be sending it to Phys.Rev. instead of using it for karma-mining in the Irrationality thread on LW. Also, I don't know your background in physics, so I will probably either unnecessarily spend some time explaining banalities, or leave something crucial unexplained, or both. And I am not sure how much of what I have written is relevant. But let me try.

The standard formulation of the quantum theory is based on the Hamiltonian formalism. In its classical variant, it relies on the phase space, which is coordinatised by dynamical variables (or observables; the latter term is more frequent in the quantum context). The observables are conventionally divided into pairs of canonical coordinates and momenta. The set of observables is called complete if their values determine the points in the phase space uniquely.

I will distinguish between two notions of state of a physical system. First, the instantaneous state corresponds to a point in the phase space. Such a state evolves, which means that as time passes, the point moves through the phase space along a trajectory. It has sense to say "the system at time t is in instantaneous state s" or "the instantaneous state s corresponds to the set of observables q". In the quantum mechanics, the instantaneous state is described by state vectors in the Schrödinger picture.

Second, the permanent state is fixed and corresponds to a parametrised curve s=s(t). It has sense to say "the system in the state s corresponds to observable values q(t)". In quantum mechanics, this is described by the state vectors in the Heisenberg picture. The quantum observables are represented by operators, and either state vectors evolve and operators remain still (Schrödinger), or operators evolve and state vectors remain still (Heisenberg). The distinction may feel a bit more subtle on the classical level, where the observables aren't "reified", so to speak, but it is still possible.

Measuring all necessary observables one determines the instantaneous state of the system. To predict the values of observables in a different instant, one needs to calculate the evolution of the instantaneous state, or equivalently to find out the permanent state.

Now there's a problem already on the classical level: the time. We know that the microscopic laws are invariant with respect to the Lorentz transformation, which mix time and space, so it has no sense to treat time and space so differently (the former as a parameter of evolution and the latter as an observable), unless one is dealing with statistical physics where time is really special. Since the Hamiltonian formalism does treat space and time differently, the Lorentz invariance isn't manifest there and the relativistic theories look awkward. So to do relativistic physics efficiently, either one leaves the Hamiltonian formulation, or turns from mechanics to field theory (where time and space are both parameters). However the Hamiltonian formulation is needed for the standard formulation of quantum theory. The move to field theory does help in the classical physics, but one has to resuscitate the crucial role of time at the moment of quantisation, and then the elegance and Lorentz invariance is lost again.

Another problem comes with general relativity. The general relativity is formulated in such a way that neither time nor spatial coordinates have any physical meaning: any coordinates can be used to address the spacetime points, and no set of coordinates is prefered by the laws of nature. This is called general covariance and has important consequences. Strictly speaking, there isn't the time in general relativity. We can consider different times measured by particular clocks, but those are clearly not different from other observables.

Nevertheless, the Hamiltonian formalism can be salvaged. It's done by adding the time (and its associated momentum, which may or may not be interpreted as energy) to the phase space. (In the field theory, one adds also the spatial coordinates, but I'll limit myself to mechanics here.) The phase space has now two dimension more. The permanent (Heisenberg) states now correspond to trajectories q(τ), where the original time t is contained in q. The parameter τ has no physical meaning and the trajectory q(τ) can be reparametrised, while the state remains the same. For most realistic systems, one can choose such a parametrisation where t=τ, but there is no need to do so. This is the relativistic Hamiltonian formalism, whose field-theoretic version is used in attempts to quantise gravity (loop gravitists do that, string theorists do not).

The relativistic Hamiltonian formalism leads to surprising simplification of the Hamilton equations (at least when written in a coordinate-independent form) and Hamilton-Jacobi equations (written in any form). The Lorentz invariance is manifest in this formalism, too. Those facts suggest that this version of the formalism is closer to the real structure of nature than the standard, time-chauvinistic Hamiltonian formalism. An important point is that the notion of instantaneous state has no sense in the relativistic Hamiltonian formalism. Time and coordinates are treated equally, and to ask "in what state the system was at moment t" has roughly as much sense as to ask "in what state the system was at point x".

(Notice that the usual talk about MWI is done using the Schrödinger picture. It looks a lot less intuitive and clear in a Heisenberg picture. To be fair, the collapse postulate in the Heisenberg picture is litterally bizarre.)

Forfeiting the right to parametrise evolution by time, one has to be sort of careful when asking questions. The question "what was the particle's position x at time t" can be answered, but it's no more a natural formulation of the question. The trajectories aren't parametrised by t, they are parametrised by τ. (But to ask "what's the position at τ" is even worse: τ is an unphysical, arbitrary, meaningless auxiliary parameter that should be elliminated from all questions of fact. Put so it may seem trivial, but untrained people tend to ask meaningless questions in general relativity precisely because they intuitively feel that the spacetime coordinates have some meaning, and it is often difficult to resolve the paradoxes they obtain from such questions.)

The natural form of a question is rather "what doublets x,t can be measured in the (permanent) state s?" But if x and t form a complete set of observables, one measurement of that doublet does determine the state s. Therefore, we can formulate an alternative question: "is it possible to measure both x1,t1 and x2,t2 on a single system?" In this formulation, the mention of state has been omitted. In practice, however, states are indexed by measurement outcomes and those two formulations are isomorphic. It may not be so in quantum theory.

In the standard Hamiltonian quantum theory (the one with time as parameter), one can measure only half of the observables compared to the classical theory - either the canonical coordinates, or the canonical momenta. Furthermore, there is no one-to-one correspondence between the state and the observable values. Nevertheless each observable has a probability distribution in any given instantaneous (Schrödinger) state. It's possible to speak about Heisenberg states, but then, the probabilities which sum up to one are given by scalar products of the state vector and the eigenvectors of observable operators taken in one specific time instant. Measurement, as it happens, is supposed to be instantaneous. This poses a problem for relativistic theories, and consistent relativistic quantum mechanics is impossible (but see my remark at the bottom).

In particular, let's ask what happens when two measurements are done. The orthodox interpretation says that during the first measurement the state collapses into the eigenstate of the measured observables, which corresponds to the observed values. We then ask for the probability of the second set of values, which can then be calculated from the new, collapsed wave function. The decoherence interpretations, and MWI in particular, tell us that (in the Schrödinger picture) during the measurement the observer's own state vector becomes correlated. In the Heisenberg picture, this translates into a statement about the observable operators. The role of time can be obscured easily in such description, but in either interpretation, there have to be planes of simultaneous events defined in the space-time to normalise the state vector. Any such definition violates Lorentz invariance, of course. (See also the second remark.)

(Comment too long, continued in a subcomment.)

Comment author: prase 04 October 2010 08:44:46PM 5 points [-]

Like in the classical mechanics, one can resort to the relativistic Hamiltonian formalism. The formalism can be adopted to use in quantum theory, but now there are no observable operators q(t) with time-dependent eigenvectors: both q and t are (commuting) operators. There are indeed wave functions ψ(q,t), but their interpretation is not obvious. For details see here (the article partly overlaps with the one which I link in the remark 2, but gets deeper into the relativistic formalism). The space-time states discussed in the article are redundant - many distinct state vectors describe the same physical situation.

So what we have: either violation of the Lorentz symmetry, or a non-transparent representation of states. Of course, all physical questions in quantum physics can be formulated as questions of the second type as described four paragraphs above. One measures the observables twice (the first measurement is called preparation), and can then ask: "What's the probability of measuring q2, when we have prepared the system into q1?" Which is equivalent to "what's the probability of measuring q1 and q2 on the same system?"

And of course, there is the path integral formulation of quantum theory, which doesn't even need to speak about state space, and is manifestly Lorentz-covariant. So it seems to me that the notion of a state of a system is redundand. The problem with collapse (which is really a problem - my original statement doesn't mean an endorsement of collapse, although some readers may perceive it as such) doesn't exist when we don't speak about the states. Of course, the state vectors are useful in some calculations. I only don't give them independent ontological status.

Remarks:

  1. The fact that the quantum mechanics and relativity don't fit together is often presented as a "feature, not bug": it points out to the necessity of field theory, which, as we know, is a more precise description of the world. In my opinion, such declarations miss the mark, as they implicitly suggest that quantumness somehow doesn't fit well with relativity and mechanics. But the problem here isn't quantumness, the problem is the standard Hamiltonian formalism which singles out time as a special parameter. This can be concealed in the classical mechanics where, like time, dynamical variables are simple numbers, but it's no longer true in quantum setting. Using the relativistic Hamiltonian formalism instead of the standard one, a Lorentz-invariant quantum mechanics can be consistently formulated.

  2. In the decoherence interpretation, a measurement is thought of as an interaction between different parts of the world - the observer and the observed system - an interaction in principle no different from all other interactions. However, it is not so easy to describe such interaction. In any sensible definition the observer must retain memory of his observation. To do that, the interaction Hamiltonian has to be non-Hermitian or time-dependent; both are physically problematic properties. Non-Hermitian interactions are better choice, as they can model dissipation, which is actually the reason for memory in real observers. Another problem with measurement comes when one needs to think about resolution, as no detector can accurately measure the position of a particle with infinite precision. A finite precision of a position measurement is a trivial problem, but when it comes to time measurement, it can really be a mess. See this for a dicussion of a realistic measurement (collapse, but easily translatable into decoherence).

Comment author: wnoise 04 October 2010 06:57:45PM 2 points [-]

Of course it is wrong, because standard quantum physics is an approximate model that only applies in certain conditions.

Wrong, of course, is not the same as "not useful", nor does "MWI is wrong" mean "there is an objective collapse".

Comment author: vvineeth4u 04 October 2010 06:01:02PM *  10 points [-]

Talent is mostly a result of hard work, passion and sheer dumb luck. It's more nurture than nature (genes). People who are called born-geniuses more often than not had better access to facilities at the right age while their neural connections were still forming. (~90%)

Update: OK. It seems I've to substantiate. Take the case of Barrack Obama. Nobody would've expected a black guy to become the US President 50 years ago. Or take the case of Bill Gates, Bill Joy or Steve Jobs. They just happened to have the right kind of technological exposure at an early age and were ready when the technology boom arrived. Or take the case of mathematicians like Fibonacci, Cardano, the Bernoulli brothers. They were smart. But there were other smart mathematicians as well. What separates them is the passion and the hard work and the time when they lived and did the work. A century earlier, they would've died in obscurity after being tried and tortured for blasphemy. Take Mozart. He didn't start making beautiful original music until he was twenty-one by when he had enough musical exposure that there was no one to match him. Take Darwin and think what he would have become if he hadn't boarded the Beagle. He would have been some pastor studying bugs and would've died in obscurity.

In short a genius is made not born. I'm not denying that good genes would help you with memory and learning, but it takes more than genes to be a genius.

Comment author: erratio 04 October 2010 07:23:52PM 8 points [-]

I was with you right up until that second sentence. And then I thought about my sister who was speaking in full sentences by 1 and had taught herself to read by 3.

Comment author: Will_Sawin 04 October 2010 06:23:31PM 6 points [-]

the level of genius of geniuses, especially the non-hardworking ones, is too high & rare to be explained entirely by this.

Comment author: Risto_Saarelma 05 October 2010 04:10:42PM *  2 points [-]

Could this be more precisely rephrased as, "for a majority of people, say 80 %, there would have been a detailed sequence of life experiences that are not extraordinarily improbable or greatly unlike what you would expect to have in a 20th century first world country, which would have resulted them becoming what is regarded as genius by adulthood"?

Comment author: Perplexed 04 October 2010 06:58:34PM 2 points [-]

Upvoting, even though I agree with the first sentence. But I disagree with the rest because I'm pretty sure that hard work and passion have a strong genetic component as well.

Comment author: gwern 07 October 2010 02:08:56AM 3 points [-]

Julian Jaynes's theory of bicameralism presented in The Origin of Consciousness in the Breakdown of the Bicameral Mind is substantially correct, and explains many engimas and religious belief in general. (25%)

Comment author: [deleted] 03 October 2010 07:23:30AM *  1 point [-]

The gaming industry is going to be a major source of funding* for AGI research projects in the next 20 years. (85%)

*By "major" I mean contributing enough to have good odds of causing actual progress. By gaming industry I include joint ventures, so long as the game company invested a nontrivial portion of the funding for the project.

EDIT: I am referring to video game companies, not casinos.

Comment author: Eugine_Nier 03 October 2010 07:33:02AM 3 points [-]

I assume you mean designing better AI opponents, as this seems to be one type of very convenient problem for AI.

Needless to say having one of these go FOOM would be very, very bad.

Comment author: Risto_Saarelma 03 October 2010 09:57:40AM 8 points [-]

Opponents can be done reasonably well with even the simple AI we have now. The killer app for gaming would be AI characters who can respond meaningfully to the player talking to them, at the level of actually generating new prewritten game plot quality responses based on the stuff the player comes up with during the game.

This is quite different from chatbots and their ilk, I'm thinking of complex, multiagent player-instigated plots such as the player convincing AI NPC A to disguise itself as AI NPC B to fool AI NPC C who is expecting to interact with B, all without the game developer having anticipated that this can be done and without the player feeling like they have gone from playing a story game to hacking AI code.

So I do see a case here. The game industry has thus far been very conservative about weird AI techniques, but since cutting edge visuals seem to be approaching diminishing returns, there could be room for a gamedev enterprise going for something very different. The big problem is that when sorta-there visuals can be pretty impressive, sorta there general NPC AI will probably look quite weird and stupid in a game plot.

Comment author: Kaj_Sotala 03 October 2010 05:46:17PM 6 points [-]

Opponents can be done reasonably well with even the simple AI we have now.

Not for games like Civilization they can't. Especially not if they're also supposed to deal with mods that add entirely new features.

Some EURISKO-type engine that could play a lot of games against itself and then come up with good strategies (and which could be rerun after each rules change) would be a huge step forward.

Comment author: NancyLebovitz 03 October 2010 07:18:06PM 2 points [-]

Needless to say having one of these go FOOM would be very, very bad.

Maybe, but the purpose of such an opponent isn't to crush humans, it's to give them as good a game as possible. The big risk might be an AI which is inveigling people into playing the game more than is good for them, leading to a world which is indistinguishable from a world in which humans are competing to invent better superstimulus games.

Comment author: [deleted] 04 October 2010 01:41:22AM 4 points [-]

It would be very bad if an opponent AI went FOOM. Or even one which optimized for certain types of "fun", say, rescue scenarios.

But consider a game AI which optimized for features found in some games today (generalized):

  • The challenges of many games require you to learn to think faster as the game progresses.
  • They often require you to know more (and learn to transfer that knowledge, part of what I would call "thinking better").
  • Through roleplaying and story, some games lead you to act the part of a person more like who you wish you were.
  • Many social games encourage you to rapidly develop skills in cooperation and teamwork, to exchange trust and empathy in and out of the game. They want you to catch up to the players who already have an advantage: those who had grown up farther together.

There are more conditions to CEV as usually stated, and they are hard to correlate with goals that any existing game designers consciously implement. They might have to be a hard pitch, "social innovations" for a "revolutionary game".

If it was done consciously, it's conceivable that AI researchers could use game funding to implement Friendly AGI.

(Has there been a post or discussion yet on designing a Game AI that implements CEV? If so, I must read it. If not, I will write it.)

Comment author: blogospheroid 05 October 2010 05:15:42AM 2 points [-]

There will be a net positive to society by measures of overall health, wealth and quality of life if the government capped reproduction at a sustainable level and distributed tradeable reproductive credits for that amount to all fertile young women. (~85% confident)

Comment author: Alicorn 05 October 2010 01:15:56PM 5 points [-]

How I evaluate this statement depends very heavily on how the policy is enforced, so I'm presently abstaining; can you elaborate on how people would be prohibited from reproducing without the auspices of one of these credits?

Comment author: wedrifid 05 October 2010 05:21:27AM 2 points [-]

The implications of that on mating payoffs are fascinating.

Comment author: timujin 12 January 2014 04:59:12AM *  2 points [-]

Eliezer Yudkowsky is evil. He trains rationalists and involves them into FAI and Xrisk for some hidden egoistic goal, other than saving the world and making people happy. Most people would not want him reach that goal, if they knew what it is. There is a grand masterplan. Money we're giving to CFAR and MIRI aren't going into AI research as much as into that masterplan. You should study rationality via means different from LW, OB and everything nearby, or nor study it at all. You shouldn't donate money when EY wants you to. ~5%, maybe?