
Irrationality Game II

13 [deleted] 03 July 2012 06:50PM

I was very interested in the discussions and opinions that grew out of the last time this was played, but digging through 800+ comments on the old thread to start a new game is annoying. I also don't want this game ruined by a potential sock puppet (whoever it may be). So here's a non-sockpuppeteered Irrationality Game, if there's still interest. If there isn't, downvote to oblivion!

The original rules:


Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion is great, but keep it civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

Enjoy!

 

Comments (380)

Sort By: Controversial
Comment author: Kaj_Sotala 04 July 2012 08:00:16AM 2 points [-]

Irrationality game

I have a suspicion that some form of moral particularism is the most sensible moral theory. 10% confidence.

Moral particularism is the view that there are no moral principles and that moral judgement can be found only as one decides particular cases, either real or imagined. This stands in stark contrast to other prominent moral theories, such as deontology or utilitarianism. In the former, it is asserted that people have a set of duties (that are to be considered or respected); in the latter, people are to respect the happiness or the preferences of others in their actions. Particularism, to the contrary, asserts that there are no overriding principles that are applicable in every case, or that can be abstracted to apply to every case.

According to particularism, most notably defended by Jonathan Dancy, moral knowledge should be understood as knowledge of moral rules of thumb, which are not principles, and of particular solutions, which can be used by analogy in new cases.

Comment author: Jack 04 July 2012 05:51:05PM 7 points [-]

Upvoted for too low a probability.

Comment author: magfrump 04 July 2012 09:29:25AM 2 points [-]

What do you mean by the "most sensible moral theory"?

And what the hell does Dancy mean if he says that there are rules of thumb that aren't principles?

I would weight this lower than .01% just because of my credence that it's incoherent.

Comment author: Kaj_Sotala 05 July 2012 10:43:24AM *  5 points [-]

Perhaps a workable restatement would be something like:

"Any attempt to formalize and extract our moral intuitions and judgements of how we should act in various situations will just produce a hopelessly complicated and inconsistent mess, whose judgements are very different from those prescribed by any form of utilitarianism, deontology, or any other ethical theory that strives to be consistent. In most cases, any attempt of using a reflective equilibrium / extrapolated volition -type approach to clarify matters will leave things essentially unchanged, except for a small fraction of individuals whose moral intuitions are highly atypical (and who tend to be vastly overrepresented on this site)."

(I don't actually know how well this describes the actual theories for particularism.)

Comment author: marchdown 04 July 2012 02:36:32AM -1 points [-]

Irrationality game

Moral intuitions are very simple. A general idea of what it means for somebody to be human is enough to severely restrict the variety of moral intuitions which you would expect it to be possible for them to have. Thus, conditioned on Adam's humanity, you would need very little additional information to get a good idea of Adam's morals, while Bob the alien would need to explain his basic preferences at length for you to model his moral judgements accurately. It follows that the tricky part of explaining moral intuitions to a machine is explaining humans, and it's not possible to cheat by formalizing morality separately.

Comment author: Eugine_Nier 04 July 2012 07:19:43AM 1 point [-]

Please attach a probability.

Comment author: asparisi 08 July 2012 07:05:21AM -1 points [-]

Irrationality Game

The Big Bang is not the beginning of the universe, nor is it even analogous to the beginning of the universe. (60% confident)

Comment author: [deleted] 16 July 2012 08:22:14PM -1 points [-]

Nonvoted. It might just be a 0 on the Real line, or analogous. I don't know the real laws of physics, but that seems sensible.

Comment author: Alejandro1 06 July 2012 07:36:55AM 2 points [-]

Irrationality Game:

The Occam argument against theism, in the forms typically used in LW invoking Kolmogorov complexity or equivalent notions, is a lousy argument: its premises and conclusions are not incorrect, but it is question-begging to the point that no intellectually sophisticated theist should move their credence significantly by it. 75%.

(It is difficult to attach meaningfully a probability to this kind of claim, which is not about hard facts. I guesstimated that in an ideally open-minded and reasoned philosophical discussion, there would be a 25% chance of me being persuaded of the contrary.)

Comment author: Grognor 14 July 2012 03:09:33AM *  0 points [-]

Irrationality game comment

The correct way to handle Pascal's Mugging and other utilitarian mathematical difficulties is to use a bounded utility function. I'm very metauncertain about this; my actual probability could be anywhere from 10% to 90%. But I guess that my probability is 70% or so.

Comment author: HonoreDB 05 July 2012 03:32:58AM 5 points [-]

Irrationality Game

Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they're cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%

Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%
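The "naive histocratic algorithm" parenthetical above can be sketched in a few lines. This is only my guess at what such an algorithm would look like — the function name, the uniform prior over predictors, and the likelihood weighting are all assumptions, not HonoreDB's specification:

```python
import math

def histocratic_aggregate(forecasts, history):
    """Bayes-weighted average of probability forecasts.

    forecasts: {predictor: p} -- probabilities for the new event.
    history:   {predictor: [(p, outcome), ...]} -- past forecasts and
               whether each predicted event actually occurred.
    Starting from a uniform prior over predictors, each predictor is
    weighted by the likelihood of the observed outcomes under their
    past forecasts, so well-calibrated predictors dominate the average.
    """
    weights = {}
    for name, past in history.items():
        log_like = sum(math.log(p if outcome else 1.0 - p)
                       for p, outcome in past)
        weights[name] = math.exp(log_like)
    total = sum(weights[name] for name in forecasts)
    return sum(weights[name] * p for name, p in forecasts.items()) / total
```

With a historically accurate predictor forecasting 0.7 and a historically inaccurate one forecasting 0.1, the aggregate lands near 0.69 rather than the unweighted average of 0.4 — the behaviour the comment describes.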

Comment author: [deleted] 05 July 2012 04:22:47PM 0 points [-]

If you think Prediction Markets are terrible, why don't you just do better and get rich from them?

Comment author: AspiringRationalist 05 July 2012 03:53:49AM 2 points [-]

Down-voted for semi-agreement.

There are simply too many irrational people with money, and as soon as participating in prediction markets becomes as popular as participating in the stock market currently is, they will add huge amounts of noise.

Comment author: Eliezer_Yudkowsky 09 July 2012 06:29:07PM 9 points [-]

The conventional reply is that noise traders improve markets by making rational prediction more profitable. This is almost certainly true for short-term noise, and my guess is that it's false for long-term noise, i.e., if prices revert in a day, noise traders improve a market, if prices take ten years to revert, the rational money seeks shorter-term gains. Prediction markets may be expected to do better because they have a definite, known date on which the dumb money loses - you can stay solvent longer than the market stays irrational.

Comment author: wedrifid 05 July 2012 03:47:55AM 18 points [-]

They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes)

Fantastic. Please tell me which markets this applies to and link to the source of the algorithm that gives me all the free money.

Comment author: HonoreDB 05 July 2012 03:57:57AM 1 point [-]

Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can't quite back it out of a prediction market's transaction history. And the amount of money to be made is small in any event because there's just not enough participation in the markets.

Comment author: Kaj_Sotala 05 July 2012 08:00:07AM *  13 points [-]

And the amount of money to be made is small in any event because there's just not enough participation in the markets.

Aren't prediction markets just a special case of financial markets? (Or vice versa.) Then if your algorithm could outperform prediction markets, it could also outperform the financial ones, where there is lots of money to be made.

In prediction markets, you are betting money on your probability estimates of various things X happening. On financial markets, you are betting money on your probability estimates of the same things X, plus your estimate of the effect of X on the prices of various stocks or commodities.

Comment author: CarlShulman 18 July 2012 01:10:46AM 1 point [-]

The IARPA expert aggregation exercises look plausible, and have supposedly done all right predicting geopolitical events. I would not be shocked if the first to use those methods on financial markets got a bit of alpha.

Comment author: RichardKennaway 05 July 2012 08:37:54AM 1 point [-]

histocratic

A new word to me. Is this what you're referring to?

Comment author: Kaj_Sotala 05 July 2012 08:02:41AM 7 points [-]

would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes)

Markets can incorporate any source or type of information that humans can understand. Which algorithm can do the same?

Comment author: [deleted] 01 August 2012 10:37:17PM *  1 point [-]

The Mona Lisa currently exposed at the Louvre Museum is actually a replica. (33%)

Comment author: AandNot-A 09 July 2012 12:17:32PM -1 points [-]

Irrationality game:

Different levels of description are just that, and are all equally "real". To speak of particles as in statistical mechanics or as in thermodynamics is as correct/real.

The same about the mind, talking as in neurochemistry or as in thoughts is as correct/real.

80% confidence

Comment author: Dallas 04 July 2012 12:41:49PM 4 points [-]

An alien civilization within the boundaries of the current observable universe has, or will have within the next 10 billion years, created a work of art which includes something directly analogous to the structure of the "dawn motif" from the beginning of Richard Strauss's Also sprach Zarathustra. (~90%)

Comment author: wedrifid 09 July 2012 09:54:55PM 0 points [-]

An alien civilization within the boundaries of the current observable universe has, or will have within the next 10 billion years, created a work of art which includes something directly analogous to the structure of the "dawn motif" from the beginning of Richard Strauss's Also sprach Zarathustra. (~90%)

I would have upvoted this even if it limited itself to "intelligent aliens exist in the current observable universe".

Comment author: FiftyTwo 06 July 2012 09:33:01PM 2 points [-]

The probability of this would seem to depend on the resolution of the Fermi paradox. If life is relatively common then it would seem to be true purely by statistics. If life is relatively rare then it would require some sort of shared aesthetic standard. Are you saying aesthetics might be universal in the same way as, say, mathematics?

Comment author: TheOtherDave 04 July 2012 03:47:55PM 5 points [-]

I'm inclined to downvote this for agreement, but haven't yet. Can you say more about what "directly analogous" means? How different from ASZ can this work of art be and still count?

Comment author: Dallas 05 July 2012 03:38:53AM 7 points [-]
  1. The art form must be linear and intend to proceed without interaction from the user.
  2. The length of the three "notes" must be in 8:8:15 ratio (in that order).
  3. The main distinguishing factor between "notes" must be in 2:3:4 ratio (in that order).
  4. The motif must be the overwhelmingly dominant "voice" when it occurs.
Comment author: faul_sname 05 July 2012 04:01:34AM 4 points [-]

Upvoted for overconfidence, not about the directly analogous art form (I suspect that even several hundred pieces of human art have that) but about there being other civilizations within the observable universe.

Though I would still give that at least 20%.

Comment author: TheOtherDave 05 July 2012 03:43:15AM 2 points [-]

Cool. Upvoted immediate parent for specificity and downvoted grandparent for agreement.

Comment author: Yvain 20 July 2012 02:37:30AM *  2 points [-]

Irrationality game comment

The importance of waste heat in the brain is generally under-appreciated. An overheated brain is a major source of mental exhaustion, akrasia, and brain fog. One easy way to increase the amount of practical intelligence we can bring to bear on complicated tasks (with or without an accompanying increase in IQ itself) is to improve cooling in the brain. This would be most effective with some kind of surgical cooling system thingy, but even simple things like being in a cold room could help.

Confidence: 30%

Comment author: [deleted] 03 February 2013 09:44:32PM 1 point [-]

INSERT THE ROD, JOHN.

Comment author: Jonathan_Graehl 11 October 2012 12:39:23AM 1 point [-]

Overheating your body enough to limit athletic performance (whether due to associated dehydration or not) is probably enough to impair the brain as well. Dehydration is known to cause headaches.

I think the effect exists. But what's the size, when you're merely sedentary + thinking + suffering a hot+humid day?

Comment author: gwern 20 July 2012 03:35:45AM *  4 points [-]

The nice thing about this one is that it's really easy to test yourself. A plastic bag to put ice or hot water into, and some computerized mental exercise like dual n-back. I know if I thought this at anywhere close to 30% I'd test it...

EDIT: see Yvain's full version: http://squid314.livejournal.com/320770.html http://squid314.livejournal.com/321233.html http://squid314.livejournal.com/321773.html

Comment author: Yvain 27 July 2012 08:40:17PM 1 point [-]

Self-experimentation seems like a really bad way to test things about mental exhaustion. It would be way too easy to placebo myself into working for a longer amount of time without a break, when testing the condition that would support my theory. Might wait until I can find a test subject.

Comment author: gwern 27 July 2012 08:50:55PM 7 points [-]

If you got a result consistent with your theory, then yes it might just be placebo effect, but is that result entirely useless; and if you got a result inconsistent with your theory, is that useless as well?

Comment author: wedrifid 08 September 2013 01:06:08AM 2 points [-]

but is that result entirely useless; and if you got a result inconsistent with your theory, is that useless as well?

"Conservation of expected uselessness!"

Comment author: [deleted] 09 July 2012 10:18:38PM *  3 points [-]

It is plausible that an existing species of dolphin or whale possesses symbolic language and oral culture at least on par with that of neolithic-era humanity. (75%)

Comment author: Alicorn 09 July 2012 11:00:21PM *  7 points [-]

Is "it is plausible" part of the statement to which you give 75% credence, or is it another way of putting said credence?

Because cetacean-language is more than 75% likely to be plausible but I think less than 75% likely to be true.

Comment author: Not-A 06 July 2012 07:29:28PM 2 points [-]

Irrationality Game:

I believe Plato (and others) were right when they said music develops some form of sensibility, some sort of compassion. I posit a link between the capacity of understanding music and understanding other people by creating accurate images of them in our head, and of how they feel. 80%

Comment author: sixes_and_sevens 03 July 2012 09:29:27PM 9 points [-]

Irrationality Game

It's possible to construct a relatively simple algorithm to distinguish superstimulatory / acrasiatic media from novel, educational or insightful content. Such an algorithm need not make use of probabilistic classifiers or machine-learning techniques that rely on my own personal tastes. The distinction can be made based on testable, objective properties of the material. (~20%)

(This is a bit esoteric. I am starting to think up aggressive tactics to curb my time-wasteful internet habits, and was idly fantasising about a browser plugin that would tell me whether the link I was about to follow was entertaining glurge or potentially valuable. In wondering how that would work, I started thinking about how I classify it. My first thought would be that it's a subjective judgement call, and a naive acid-test that distinguished the two was tantamount to magic. After thinking about it for a little longer, I've started to develop some modestly-weighted fuzzy intuitions that there is some objective property I use to classify them, and that this may map faithfully onto how other people classify them.)

Comment author: MixedNuts 11 July 2012 10:50:17AM -1 points [-]

Upvoted for underconfidence.

Comment author: faul_sname 04 July 2012 10:47:04PM 4 points [-]

Such an algorithm need not make use of probabilistic classifiers.

Upvoted for this sentence.

Comment author: John_Maxwell_IV 03 July 2012 09:16:32PM *  4 points [-]

I proposed a variation on this game, optimized for usefulness instead of novelty: the "maximal update game". Start with a one sentence summary of your conclusion, then justify it. Vote up or down the submissions of others based on the degree to which you update on the one sentence summary of the person's conclusion. (Hence no UFOs at the top, unless good arguments for them can be made.)

If anyone wants to try this game, feel free to do it in replies to this comment.

Comment author: Pavitra 04 July 2012 03:47:12AM 6 points [-]

Downvoted for agreement: you did in fact propose the specified variation.

Comment author: faul_sname 04 July 2012 10:48:57PM 1 point [-]

He didn't state his confidence level. Since his probability estimate for this is likely much higher than mine, I upvoted.

Comment author: steven0461 05 July 2012 02:05:02AM 0 points [-]

That seems worth its own thread.

Comment author: NancyLebovitz 04 July 2012 12:40:35PM 14 points [-]

Irrationality Game

Being a materialist doesn't exclude nearly as much of the magical, religious, and anomalous as most materialists believe because matter/energy is much weirder than is currently scientifically accepted.

75% certainty.

Comment author: Will_Newsome 06 July 2012 11:29:05AM 0 points [-]

Do materialists still exist? In order to vote on this am I to imagine what not-necessarily-coherent model a materialist should in some sense have given their irreversible handicap in the form of a misguided metaphysic? If so I'd vote down; if not I'd vote up.

Comment author: sixes_and_sevens 04 July 2012 04:25:40PM 3 points [-]

Upvoted, as many phenomena that get labelled "magical" or "religious" have readily-identifiable materialist causes. For those phenomena to be a consequence of esoteric physics and to have a more pedestrian materialist explanation that turns out to be incorrect, and to conform to enough of a culturally-prescribed category of magical phenomena to be labelled as such in the first place seems like a staggering collection of coincidences.

Comment author: CellBioGuy 03 February 2013 08:03:38PM 1 point [-]

Upvoted for disagreement, with the quibble that there is probably room for a lot of interesting things in the realm of human experience that, while not necessarily relating one-to-one with nonhuman physical reality, have significance within the context of human thought or social interaction and contain elements that normally get lumped into the magical or religious.

Comment author: torekp 07 July 2012 02:12:59AM 1 point [-]

matter/energy is much weirder than is currently scientifically accepted.

Nitpick: do you really mean this? Current scientific theories are pretty damn weird. But not, in your view, weird enough?

Comment author: NancyLebovitz 07 July 2012 02:32:42AM 1 point [-]

I'm pretty sure that the current theories aren't weird enough, but less sure that current theories need to be modified to include various things that people experience. However, it does seem to me that materialists are very quick to conclude that mental phenomena have straightforward physical explanations.

Comment author: [deleted] 16 July 2012 08:17:22PM -1 points [-]

May I remind you that scientists recently created and indirectly observed the elementary particle responsible for mass?

The smallest mote of the thing that makes stuff have inertia. Has. Been. Indirectly. Observed.

What.

Comment author: MileyCyrus 04 July 2012 03:49:52PM 3 points [-]

I'm having trouble understanding what you are claiming. It seems that once anything is found to exist in the actual world, people won't call it "magical" or "anomalous". When Hermione Granger uses an invisibility cloak, it's magic. When researchers at the University of Dallas use an invisibility cloak, it's science.

Comment author: NancyLebovitz 04 July 2012 04:19:44PM 2 points [-]

What I meant was that there may be more to such things as auras, ghosts, precognition, free will, etc. than current skepticism allows for, while still not having anything in the universe other than matter/energy.

Comment author: Eugine_Nier 05 July 2012 06:22:04AM -1 points [-]

Taboo "matter/energy".

Comment author: wedrifid 05 July 2012 06:40:31AM 5 points [-]

Taboo "matter/energy".

Well damn. What is left? "You know... like... the stuff that there is."

Comment author: Eugine_Nier 06 July 2012 03:17:51AM -1 points [-]

My point is that what counts as matter/energy may very well not be obvious in different theories.

Comment author: Armok_GoB 10 July 2012 01:43:45AM 1 point [-]

Algebra.

Comment author: Eliezer_Yudkowsky 09 July 2012 06:30:09PM 1 point [-]

Causes and effects.

Comment author: wedrifid 09 July 2012 09:13:48PM 1 point [-]

Causes and effects.

Good point. But this 'cause' word is still a little nebulous and seems to confuse some people. Taboo 'cause'!

Comment author: NancyLebovitz 05 July 2012 10:02:26AM 2 points [-]

Thank you. I was about to ask the same thing.

Comment author: hankx7787 05 July 2012 04:50:54PM 2 points [-]

MWI is unlikely because it is too unparsimonious (not very confident).

Comment author: RomeoStevens 05 July 2012 04:45:43AM 5 points [-]

There is no dark matter. Gravity behaves weirdly for some other reason we haven't discovered yet. (85%)

Comment author: Mitchell_Porter 05 July 2012 07:09:21AM 2 points [-]

Many such "modified gravity" theories have been proposed. The best known is "MOND", "Modified Newtonian Dynamics".

Comment author: AspiringRationalist 05 July 2012 04:02:35AM 6 points [-]

The case for atheistic reductionism is not a slam-dunk.

While atheistic reductionism is clearly simpler than any of the competing hypotheses, each added bit of complexity doubles the size of the hypothesis space. Some of these additional hypotheses will be ruled out as impossible or inconsistent with observation, but that still leaves a huge number of possible hypotheses that each take up a tiny amount of probability mass but collectively add up.

I would give atheistic reductionism a ~30% probability of being true. (I would still assign specific human religions or a specific simulation scenario approximately zero probability.)

Comment author: Pavitra 05 July 2012 07:45:17AM 0 points [-]

Assuming our MMS-prior uses a binary machine, the probability of any single hypothesis of complexity C=X is equal to the total probabilities of all hypotheses of complexity C>X.
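Spelling out the arithmetic behind this (assuming each hypothesis of complexity $C$ carries prior weight $2^{-C}$ and one hypothesis per complexity level, which is my reading of the claim), it is just the geometric series:

```latex
\sum_{c = X+1}^{\infty} 2^{-c}
  \;=\; 2^{-(X+1)} \cdot \frac{1}{1 - \tfrac{1}{2}}
  \;=\; 2^{-X}
```

so the single hypothesis at $C = X$ has exactly as much prior mass as all strictly more complex hypotheses combined.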

Comment author: prase 04 July 2012 10:25:55PM 6 points [-]

Irrationality game comment:

Imagine that we transformed the Universe using some elegant mathematical mapping (think about Fourier transform of the phase space) or that we were able to see the world through different quantum observables than we have today (seeing the world primarily in the momentum space, or even being able to experience "collapses" to eigenvectors not of x or p, but of a different, for us unobservable, operator, e.g. xp). Then, we would observe complex structures, perhaps with their own evolution and life and intelligence. That is, aliens can be all around us but remain as invisible as the Mona Lisa on a Fourier-transformed picture from the Louvre.

Probability : 15%.

Comment author: marchdown 07 July 2012 01:54:01AM 1 point [-]

This is an interesting way to look at things. I would assert a higher probability, so I'm voting up. Even a slight tweaking (x+ε, m-ε) is enough. I'm imagining a continuous family of mappings starting with identity. These would preserve the structures we already perceive while accentuating certain features.

Comment author: Manfred 05 July 2012 12:42:26AM 1 point [-]

Any blob (continuous, smooth, rapidly decreasing function) in momentum space corresponds to a blob in position space. That is, you can't get structure in one without structure in the other.

Comment author: prase 05 July 2012 01:05:16PM 4 points [-]
  1. The narrower the blob, the wider its Fourier transform. To recognise a perfectly localised blob in the momentum space one would need to measure at every place over the whole Universe.
  2. Not every structure is recognisable as such by human eye.
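Point 1 is straightforward to check numerically. In this minimal NumPy sketch (the function name and grid parameters are my own choices), the spectral width of a Gaussian blob scales as the inverse of its spatial width, so halving the blob doubles its footprint in momentum space:

```python
import numpy as np

def spectral_width(sigma, n=4096, dx=0.05):
    """Std. dev. of the power spectrum of a Gaussian blob of width sigma."""
    x = (np.arange(n) - n // 2) * dx
    blob = np.exp(-x**2 / (2 * sigma**2))
    power = np.abs(np.fft.fft(blob))**2
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    mean = np.sum(k * power) / np.sum(power)
    return np.sqrt(np.sum((k - mean)**2 * power) / np.sum(power))

# Halving the blob's spatial width doubles its width in momentum space,
# so a sharply localised momentum-space structure is spread over a huge
# region of position space, as the comment argues.
```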
Comment author: endoself 05 July 2012 12:33:03AM 1 point [-]

Upvoted for underconfidence; there are a lot of bases you can use.

Comment author: prase 05 July 2012 01:19:11PM 1 point [-]

Still, what you see in one basis is not independent of what you see in another one, and I expect an elegant mapping between the bases. There is a difference between

  • "there exists a basis in the Hilbert space in which some vaguely interesting phenomena could be observed, if we were able to perceive the associated operator the same way as we perceive position"

and

  • "there exist simple functions of observables such as momentum, particle number or field intensities defining observables which, if we could perceive them directly, would show us a world with life and civilisations and evolution"

My 15% belief is closer to the second version.

Comment author: endoself 06 July 2012 01:41:03AM 2 points [-]

Okay, that's less likely. I'd still give it higher than 15% though. The holographic principle is very suggestive of this, for instance.

It's hard to know exactly what would count in order to make an estimate, since we don't yet know the actual laws of physics. It's obvious that "position observables, but farther away" would encode the regular type of alien, but the boundary between regular aliens and weird quantum aliens could easily blur as we learn more physics.

Comment author: Andreas_Giger 03 July 2012 07:36:47PM *  18 points [-]

I'll bite:

The U.S. government deliberately provoked the attack on Pearl Harbour through diplomacy and/or fleet redeployment, and it was not by chance that the carriers of the U.S. Pacific Fleet weren't at port when the attack happened.

Very confident. (90-95%)

By the way, the reason I assume I am personally more rational about this than the LW average is that there are lots of US Americans around here, and I have sufficient evidence to believe that people tend to become less rational if a topic centrally involves a country they are emotionally involved with or whose educational system they went through.

Comment author: [deleted] 03 July 2012 10:59:36PM 4 points [-]

I have seen a few low-status conspiracy theorists advocating a position like this, and I eventually came to agree that provoking an attack from an enemy is a strategy the US has used several times over the past century. Still, my probability for this particular incident is around 75% at most.

Comment author: faul_sname 04 July 2012 05:13:28AM *  5 points [-]

Upvoted, not for the assertion, but for the confidence level (I would give it 25-75%)

Comment author: Andreas_Giger 04 July 2012 10:38:56AM 1 point [-]

Thanks; I assumed the many upvotes came from people who considered my confidence level too high, not too low, but it's nice to have someone actually confirm that.

Comment author: Vladimir_M 04 July 2012 01:21:38AM *  15 points [-]

Regarding the first part, the truth of that statement critically depends on how exactly you define "provoke." For some reasonable definitions, the statement is almost certainly true; for others, probably not.

As for the second part (the supposed intentional dispersion of the carriers), I don't think that's plausible. If anything, the U.S. would have been in a similar position, i.e. at war with Japan with guaranteed victory, even if every single ship under the U.S. flag magically got sunk on December 7, 1941. So even if there was a real conspiracy involved, it would have made no sense to add this large and risky element to it just to make the eventual victory somewhat quicker.

Also, your heuristic about bias is broken. In the Western world outside of the U.S., people are on average, if anything, only more inclined to believe the official historical narrative about WW2.

Comment author: prase 04 July 2012 10:52:54PM 5 points [-]

If anything, the U.S. would have been in a similar position, i.e. at war with Japan with guaranteed victory, even if every single ship under the U.S. flag magically got sunk on December 7, 1941.

This is suspect. The U.S. had greater industrial capacities and population than Japan, but that doesn't guarantee victory. Rebuilding the navy would take a lot of time which the Japanese could use to end their war in China. Also, it was far from clear in late 1941 whether the USSR would withstand the German assault and whether the British would not seek peace.

Comment author: Vladimir_M 05 July 2012 12:10:08AM *  2 points [-]

Even in the worst possible case, I still don't see what could prevent the U.S. from simply cranking out a new huge Pacific navy and overwhelming Japan. Yes, the production would take a few years to ramp up to full capacity, as it did in reality -- but once it did, I can't imagine what could save Japan from being overwhelmed.

Ending the war in China wouldn't have helped the Japanese at all, even if they linked with a victorious German army in the Far East. An additional land army at their disposal could not prevent the U.S. navy steamroller from eventually reaching their home islands, whereupon they would be bombed and starved into surrender. (If not for the atom bomb ending their agony even earlier.) The Japanese islands are so exposed and vulnerable to any superior naval power that they could be lost even as the world's mightiest army is watching helplessly from the Asian mainland.

The only theoretical chance I see is if Germany somehow conquered both the U.S.S.R. and Britain, and then threw all its resources on a crash program to build up a huge navy of its own and help the Japanese. But I'm not sure if they'd be able to outproduce the U.S. even in that case. (And note that this would require a vanishingly improbable long continuation of the Germans' lucky streak.)

Comment author: prase 05 July 2012 02:21:38PM *  3 points [-]

In the context of this discussion the important thing is what could be reliably predicted in 1941, so we should ignore the possible effects of the atomic bomb.

Assume that the entire U.S. navy is destroyed in January 1942. A reasonable realistic scenario, if everything went really well for Japan, may be this:

  • Germans capture Leningrad and encircle Moscow in summer 1942; Stalin is arrested in the ensuing chaos, and the new Soviet government signs an armistice with Germany, ceding large territories in the west.
  • German effort is now concentrated on expanding their naval power. Germany has half of Europe's industrial capacity at her disposal. The production of U-boats increases and Britain alone has not enough destroyers to guard the convoys.
  • Starvation, the threat of German invasion, and heavy naval losses to German submarines, leading to an inability to supply the Indian armies, make Britain accept Hitler's peace offer. Britain surrenders Gibraltar, Malta, the Channel Islands and all interests on the European mainland to Germany and Italy, cedes Singapore and Malaya to Japan, and backs out of the war.
  • China now obtains no help, no arms and no aircraft, and surrenders in 1944, becoming divided among several Japanese puppet states.
  • The U.S. is alone, still having no significant navy. Hawaii is lost to the Japanese. Germany is aggressively building new ships to improve its naval power and potentially help the Japanese in the Pacific. Roosevelt dies in early 1945, as he did historically. The Japanese offer a peace that would secure them the leading position in East Asia, and are willing to give Hawaii back.

Now in this situation, as a U.S. general, what advice would you give Truman? Would it be "let's continue a low-intensity war against both Germany and Japan until we have a strong enough navy, which may be in 1947 or 1948, then start taking one island after another, which may take two more years, and then, from island bases supplied through the U-boat-infested Pacific, start bombarding Japan until the damned fanatics realise they have no other choice than to surrender"? Or would it rather be "let's accept peace if it's offered on honourable terms"?

Comment author: Vladimir_M 05 July 2012 05:18:04PM *  2 points [-]

Even in that scenario, Japanese victory is conditional on the political decision of the U.S. government to accept the peace. My comments considered only the strategic situation under the assumption that all sides were willing to fight on with determination. And I don't think this assumption is so unrealistic: the American people were extremely unwilling to enter the war, but once they did, they would have been even less willing to accept a humiliating peace. Especially since the great Pacific naval offensive could be (and historically was) fought with very low casualties, not to mention the U.S. government's wartime control of the media, which was in many ways even more effective than the crude and heavy-handed control in totalitarian states.

Now, in your scenario, the U.S. would presumably see immediately that its first priority was navy rebuilding. (An army is useless if you can't get it off the mainland.) This means that by 1944, Americans would be cranking out even more ships than they did historically. I don't think the Axis could match that output even if they were in control of the entire Eurasia.

(The U-boats would have been a complicating factor. Their effectiveness changed dramatically with unpredictable innovations in technology and tactics. In actual history, they became useless by mid-1943, although Germans were arguably on the verge of introducing dramatically superior ones at the time of their capitulation. But in any case, the U-boat factor cuts both ways: Americans could swamp the Pacific with even greater numbers of U-boats and wreck the entire Japanese logistics, as they actually did.)

Comment author: TimS 05 July 2012 02:48:52PM 0 points [-]

Even assuming a plausible scenario in which the US couldn't defeat Germany, that doesn't have anything to do with whether we could have defeated Japan standing alone.

Historically, we know it wasn't that hard for the US - despite Japan attacking first, the US adopted a "Europe First" strategy that committed approx. 2/3 of capacity to fighting Germany. Despite this, the US defeated Japan easily - there are no major victories for Japan against the US after Pearl Harbor, and Midway was less than a year after Pearl Harbor. If the US strategy is "Japan First" (doing things like transferring the Atlantic Fleet to the Pacific), why should we expect the Pacific war would last long enough that Germany would be able to consolidate a victory in the east into driving the UK into peace and be able to intervene in the Pacific?

Also, why do you think an invasion of Hawaii was possible? The surprise strike was at the end of Japanese logistical capacity - I think the US wins if Japan tries a land invasion.

Comment author: prase 05 July 2012 03:37:05PM *  1 point [-]

If the US strategy is "Japan First" (doing things like transferring the Atlantic Fleet to the Pacific), why should we expect the Pacific war would last long enough that Germany would be able to consolidate a victory in the east into driving the UK into peace and be able to intervene in the Pacific?

Remember the context: we are in the hypothetical where all US ships (Atlantic fleet included) were magically annihilated at the end of 1941.

Comment author: TimS 05 July 2012 03:53:58PM 1 point [-]

I'm a big believer in not fighting the hypothetical, but there is no historically plausible account leading to the destruction of the Atlantic fleet. At that point, we aren't discussing facts relevant to whether FDR knew of the Pearl Harbor attack ahead of time.

The hypothetical of Pearl Harbor as the most resounding success it could possibly be (US Pacific fleet reduced to irrelevance) and Germany winning the Battle of Moscow strongly enough that it has leverage to force the UK out of the war is reasonable for discussing FDR's decision process. That's all he could reasonably have thought he was risking by allowing Pearl Harbor. As I stated elsewhere, I think FDR gets his political goals with Japan firing the first shot - there's no need for him to court a military disaster.

Comment author: Douglas_Knight 04 July 2012 04:38:03AM *  2 points [-]

Could you spell out what you mean by different definitions of "provoke"?

Anyhow, I am more concerned about the word "deliberate." The government is not a coherent actor; it does not have deliberate actions. For example, FDR explicitly rejected an oil embargo, yet oil exports stopped. Was this because his subordinates correctly interpreted his wishes? Or were they more belligerent? In Present at the Creation (p. 26), Acheson seems to say that he implemented the embargo by mistake, thinking that Japan had hidden assets that would keep the flow going. On the following page, he agrees to accept payment from a Latin American bank, but something goes awry, seemingly out of his control. DeLong asks whether FDR even knew of the embargo.

Comment author: Andreas_Giger 04 July 2012 12:57:25PM *  1 point [-]

Regarding the first part, the truth of that statement critically depends on how exactly you define "provoke."

I am more concerned about the word "deliberate."

  • Provoking: presenting someone with a multitude of bad choices, one of them being to attack you.
  • Deliberate: proceeding with an action in the hope of achieving a specific outcome.
  • Deliberately provoking: presenting someone with a multitude of bad choices, hoping they will attack you because of this.

As for the second part (the supposed intentional dispersion of the carriers), I don't think that's plausible. If anything, the U.S. would have been in a similar position, i.e. at war with Japan with guaranteed victory, even if every single ship under the U.S. flag magically got sunk on December 7, 1941. So even if there was a real conspiracy involved, it would have made no sense to add this large and risky element to it just to make the eventual victory somewhat quicker.

The carrier fleet being operational was decisive in preventing an expected Japanese invasion of Midway and Hawaii, and recapturing Hawaii from the American continent would have been very difficult, if not outright impossible. What if China had surrendered or made peace with Japan? What if Germany had captured Leningrad, Moscow, and Stalingrad? What if the Japanese nuclear weapons program had succeeded? What if public opinion had turned anti-war, as during the Vietnam War?

"Guaranteed victory" sounds like hindsight bias to me. Even if the US mainland could not have been invaded, that doesn't mean the USA could not have lost the war.

Also, your heuristic about bias is broken. In the Western world outside of the U.S., people are on average, if anything, only more inclined to believe the official historical narrative about WW2.

The point is that the "official historical narrative" is different in different countries. For example, Japan has a strong culture of ignoring Japanese war crimes, Polish textbooks rarely mention Poland taking part in the partition of Czechoslovakia, Britons are generally unaware of the fact that GB declared war on Germany and not vice versa, many French think that the surrender to Germany was an action the government did not have the authority to make, and so on.

The government is not a coherent actor; it does not have deliberate actions.

"The government" is an abstract concept. I am talking about a circle of people within the government who together had the power to provoke Japan, and to assure that the losses at Pearl Harbor were within reasonable bounds. I am not overly familiar with the way the U.S. government was organised at that time, but it seems to me that such a circle had to include either the president or high ranking intelligence officials, most likely both.

Comment author: prase 03 July 2012 09:33:31PM 8 points [-]
  1. Do you think that the U.S. government provoked an attack specifically on Pearl Harbor, or that they just wanted the Japanese to attack somewhere?
  2. Where exactly do you place the boundary of deliberate provocation? That is, does not trying too hard to prevent the attack count, or did they have to be actively persuading the Japanese and moving the fleets into easily attackable positions?
Comment author: Andreas_Giger 04 July 2012 12:05:08PM 1 point [-]

Do you think that the U.S. government provoked an attack specifically on Pearl Harbor, or that they just wanted the Japanese to attack somewhere?

I think they wanted the Japanese to attack somewhere, but they were aware of the fact that Pearl Harbor was a likely target.

Where exactly do you place the boundary of deliberate provocation? That is, does not trying too hard to prevent the attack count, or did they have to be actively persuading the Japanese and moving the fleets into easily attackable positions?

I think they were actively persuading the Japanese to commit some act of war, and were not trying too hard to prevent the specific act of war that happened.

Comment author: see 04 July 2012 05:11:00AM 9 points [-]

The "and it was not chance" bit? That requires the conspirators be non-human.

Carrier supremacy was hardly an established doctrine, much less proved in battle; orthodox belief since Mahan was that battleships were the most important ships in a fleet. The orthodox method of preserving the US Navy's power would have been to disperse battleships, not carriers. Even if the conspirators were all believers in the importance of carriers, even a minimum of caution would have led them to find an excuse to also save some of the battleships. To believe at 90% confidence that a group of senior naval officials, while engaging in a high-stakes conspiracy, also took a huge un-hedged gamble on an idea that directly contradicted the established naval dogma they were steeped in since they were midshipmen, is ludicrous.

Comment author: Andreas_Giger 04 July 2012 10:49:07AM 0 points [-]

Not really. It wasn't just "a carrier fleet" and "a battleship fleet"; it was a predominantly modern carrier fleet and an outdated battleship fleet consisting mostly of WWI designs or modifications of WWI designs. It was also the consensus that if you were going to deploy carriers, the Pacific Ocean was a more promising theatre than the Atlantic Ocean, due to (a) the weather and (b) the lack of strategically positioned land air bases in little danger of being invaded, of the kind the Atlantic offered (Newfoundland, Great Britain, West Africa, and so on). Also, the U.S. Navy could have commissioned more battleships instead of carriers, but they didn't, which means they did have plans for the carriers; most likely in the Pacific theatre. It was clear from the start that being at war with Japan would also mean being at war with Germany, so fighting only on the Pacific front was never an option.

Comment author: see 04 July 2012 05:50:10PM 4 points [-]

I didn't say they wouldn't try to save the carriers. I said they would have hedged their bets by also dispersing some of the battleships. Your 90% confidence in your whole conjunct opinion requires a greater-than-90% confidence in the proposition that while saving the carriers, the people involved, all steeped in battleship supremacy/prestige for decades, would deliberately leave all the battleships vulnerable, rather than disperse even one or two as a hedge.

Also, the U.S. Navy could have commissioned more battleships instead of carriers,

Only in violation of the Washington and First London Naval Treaties. Under those treaties, the US Navy could not have built more battleships at the time it started building, for example, the Enterprise (1934).

I note that in the period 1937-to-Pearl-Harbor, which is to say subsequent to the 1936 Second London Naval Treaty that allowed it, the US Navy started no fewer than nine new battleships (and got funding authorization for a tenth), which suggests that they still seriously believed in battleships. Otherwise, why not build carriers in their place?

Comment author: Andreas_Giger 04 July 2012 05:59:00PM *  1 point [-]

I didn't say they wouldn't try to save the carriers. I said they would have hedged their bets by also dispersing some of the battleships. Your 90% confidence in your whole conjunct opinion requires a greater-than-90% confidence in the proposition that while saving the carriers, the people involved, all steeped in battleship supremacy/prestige for decades, would deliberately leave all the battleships vulnerable, rather than disperse even one or two as a hedge.

But they did disperse some of the battleships. That's why all the battleships at Pearl Harbor were outdated classes. They didn't have that many outdated carriers, and carriers retain their value more over the course of time than battleships and battlecruisers do.

The ratio of value to tonnage of the capital ships sunk at Pearl Harbor was significantly lower than the ratio of value to tonnage of the capital ships in the surviving fleets in the Pacific Ocean and elsewhere. This was never about carriers versus battleships; it was about vessels with high value versus vessels with low value.

Comment author: see 04 July 2012 06:36:07PM 5 points [-]

Er? What battleships are you claiming were dispersed?

There were quite literally no newer battleships on active duty in the US Navy on December 7th, 1941 than the West Virginia, "outdated class" or no, sunk at Pearl Harbor along with her brand-new CXAM-1 radar. The only newer battleships in commission were the North Carolina and Washington, both of which were not yet on active duty because of delays caused by propeller issues.

Comment author: [deleted] 04 July 2012 01:46:25AM 17 points [-]

Computationalism is an incorrect model of cognition. Brains compute, but mind is not what the brain does. There is no self hiding inside your apesuit. You are the apesuit. Minds are embodied and extended, and a major reason why the research program to build synthetic intelligences has largely gone nowhere since its inception is the failure of many researchers to understand/agree with this idea.

70%

Comment author: Kindly 05 July 2012 12:07:56AM 3 points [-]

Just because I am an apesuit, doesn't mean I need to dress my synthetic intelligence in one.

Comment author: [deleted] 04 July 2012 10:05:58PM *  1 point [-]

Have you been reading this recently?

More particularly, anything that links to this post.

Comment author: Armok_GoB 04 July 2012 08:02:43PM 2 points [-]

Do you believe an upload with a simulated body would work? How high-fidelity would the simulation need to be?

Comment author: magfrump 04 July 2012 09:25:22AM -1 points [-]

I don't understand why you don't believe that computations can be "embodied and extended."

I do believe that the fact that any kind of human emulation would have to be embedded into a digital body with sensory inputs is underdiscussed here, though I'm not even sure what constitutes scientific literature on the subject so I don't want to make statements about that.

Comment author: torekp 07 July 2012 02:05:17AM 1 point [-]

Computations can be embodied and extended, but computationalism regards embodiment and extension as unworthy of interest or concern. Downvoted the parent for being probably right.

Comment author: magfrump 07 July 2012 07:58:50PM 1 point [-]

Can you provide a citation for that point?

Not knowing anything really about academic cognitive psychologists, and just being someone who identifies as a computationalist, I feel like the embodiment of a computation is still very important to ANY computation.

If the OP means that researchers underestimate the plasticity of the brain in response to its inputs and outputs, and that their research doesn't draw a circle around the right "computer" to develop a good theory of mind, then I'm extra interested to see some kind of reference to papers which attempt to isolate the brain too much.

Comment author: torekp 08 July 2012 01:04:29AM *  1 point [-]

I understand "computationalism" as referring to the philosophical Computational Theory of the Mind (wiki, Stanford Encyclopedia of Phil.). From the wiki:

Computational theories of mind are often said to require mental representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object, but must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However the two theories differ in that the representational theory claims that all mental states are representations while the computational theory leaves open that certain mental states, such as pain or depression, may not be representational and therefore may not be suitable for a computational treatment.

From the SEP:

representations have both semantic and syntactic properties, and processes of reasoning are performed in ways responsive only to the syntax of the symbols—a type of process that meets a technical definition of ‘computation’

Because computation is about syntax not semantics, the physical context - embodiment and extension - is irrelevant to computation qua computation. That is what I mean when I say that embodiment and extension are regarded as of no interest. Of course, if a philosopher is less thorough-going about computationalism, leaving pains and depression out of it for example, then embodiment may be of interest for those mental events.

However, your last paragraph throws a monkey wrench into my reasoning, because you raise the possibility of a "computer" drawn to include more territory. All I can say is, that would be unusual, and it seems more straightforward to delineate the syntactic rules of the visual system's edge-detection and blob-detection processes, for example, than of the whole organism+world system.

Comment author: Kindly 03 July 2012 10:51:20PM 24 points [-]

Irrationality game

0 and 1 are probabilities. (100%)

Comment author: Normal_Anomaly 04 July 2012 01:55:33AM 1 point [-]

Only Sith deal in absolutes!

I am very happy that the parent is currently at 0 karma.

Comment author: John_Maxwell_IV 06 July 2012 05:13:51AM 0 points [-]

This doesn't offer any anticipation about the world for me to agree or disagree with. Probability is just a formalism you use, and there's no reason for you not to define the formalism any way you want.

Comment author: asparisi 04 July 2012 12:05:21AM 11 points [-]

Upvoted not for the claim, but the ridiculously high confidence in that claim.

Comment author: Grognor 05 July 2012 11:35:55PM 1 point [-]

I'd like to point out that anyone who does not share the (claimed) Infinite Certainty should be upvoting, as this confidence level is infinitely higher than any other possible confidence level. (It's kind of like, if you agree that dividing by zero is merely an error, then any claim to infinite certainty is also an error, almost exactly the same error in fact.)

Comment author: [deleted] 05 July 2012 01:35:04AM 1 point [-]

So you are more confident in math than in hallucinating this entire interaction with an internet forum?

Comment author: Kindly 05 July 2012 01:57:59AM 2 points [-]

I'm not quite sure how to parse that, but I'll do my best. I am more confident in math than I am in my belief that arbitrary parts of my life are not hallucinations.

Comment author: [deleted] 05 July 2012 02:01:28AM 0 points [-]

Damn... You're good. Anyway, 1 and 0 aren't probabilities because Bayes' Theorem breaks down there (in the log-odds/information basis, where Bayes' Theorem is simple addition, they are positive and negative infinity). You can, however, meaningfully construct limits of probabilities. I prefer the notation (1 -) epsilon.
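The addition claim is easy to see in a few lines of Python (the helper names `logit` and `update` are mine, chosen for illustration): a Bayesian update in log-odds form is literally addition, and a probability of exactly 1 has no finite log-odds at all.

```python
import math

def logit(p):
    # Log-odds of p; blows up as p approaches 0 or 1.
    return math.log(p / (1 - p))

def update(prior_logodds, likelihood_ratio):
    # In log-odds form, a Bayesian update is simple addition.
    return prior_logodds + math.log(likelihood_ratio)

# Start at even odds, then see evidence 4x likelier under the hypothesis:
posterior_logodds = update(logit(0.5), 4.0)
posterior = 1 / (1 + math.exp(-posterior_logodds))
print(posterior)  # approximately 0.8

# Certainty has no finite log-odds:
try:
    logit(1.0)
except ZeroDivisionError:
    print("p = 1 corresponds to infinite log-odds")
```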

Comment author: Kindly 05 July 2012 02:20:24AM *  2 points [-]

Log-odds aren't what probability is; they're a way to think about probability. They happen not to work so well when the probabilities are 0 and 1; they also fail rather dramatically for probability density functions. That doesn't mean they don't have their uses.

Similarly, Bayes's Theorem breaks down because its proof assumes a nonzero probability. This isn't fixed by defining away 0 and 1, because the theorem can still return those as output, and then you end up looking silly. In many cases, declining to condition on an event with probability 0 is the only sensible thing to do: given that a d6 comes up both odd and even, what is the probability that the result is higher than 3?
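The d6 example can be made concrete in a few lines of Python (`cond_prob` is my own toy helper, not anything standard): conditioning works fine on any event with positive probability, and fails with a division by zero on the impossible event "odd and even".

```python
from fractions import Fraction

def cond_prob(event, given, outcomes):
    # P(event | given) = P(event & given) / P(given),
    # which is undefined when P(given) = 0.
    favorable = sum(1 for o in outcomes if o in event and o in given)
    possible = sum(1 for o in outcomes if o in given)
    return Fraction(favorable, possible)  # ZeroDivisionError if possible == 0

die = set(range(1, 7))
odd, even, high = {1, 3, 5}, {2, 4, 6}, {4, 5, 6}

print(cond_prob(high, odd, die))  # 1/3: only 5 is both odd and high

try:
    cond_prob(high, odd & even, die)  # "both odd and even" is the empty event
except ZeroDivisionError:
    print("cannot condition on a probability-0 event")
```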

[I tried saying some things about conditioning on sets of measure 0 here, but apparently I don't know what I'm talking about so I will retract that portion of the comment for the sake of clarity.]

Comment author: endoself 06 July 2012 01:22:02AM 2 points [-]

In more mathematical settings, you can successfully condition on events with probability 0 (for instance, if (X,Y) follow a bivariate normal distribution, you might want to know the probability distribution of Y given X=x).

You can't really do this, since the answer depends on how you take the limit. You can find a limit of conditional probabilities, but saying "the probability distribution of Y given X=x" is ambiguous. This is known as the Borel-Kolmogorov paradox.

Comment author: Kindly 06 July 2012 01:29:19AM 2 points [-]

Oops. Right, I knew there were some problems here, but I thought the way I defined it I was safe. I guess not. Thanks for keeping me honest!

Comment author: [deleted] 05 July 2012 04:15:26PM *  2 points [-]

Log-odds are perfectly isomorphic to probabilities and satisfy Cox's Theorem. Saying that log-odds are not what probabilities are is as much a non sequitur as saying 2+2 isn't a valid representation of 4.

Bayes' Theorem assumes no such thing as non-zero probability; it assumes real-numbered probabilities, as it is in fact a perfectly valid statement of real-number arithmetic in any other context. It just so happens that this arithmetic expression is undefined when certain variables are 0, and is an identity (equal to 1) when certain variables are 1. Neither case is particularly interesting.

Bayes' Theorem is interesting because it becomes propositional logic in the limit as probabilities go to 1 or 0.

Real-life applications are not my expertise, but I know my groups, categories and types. 0 and 1 are not probabilities, just as positive and negative infinity are not real numbers. This is a truth derived directly from Russell's axioms, which are the definitional basis for all modern mathematics.

When you say P(A) = 1 you are not using probabilities anymore; at best you are doing propositional logic, at worst you'll get a type error. If you want to be as sure as you can, let your credence be 1 - epsilon for an arbitrarily small positive real epsilon.

1 and 0 are not probabilities by definition

Comment author: Kindly 05 July 2012 06:27:08PM 4 points [-]

Clearly log-odds aren't perfectly isomorphic to multiplicative probabilities, since one allows probabilities of 0 and 1 and the other doesn't.

Bayes's theorem does assume nonzero probability, as you can observe by examining its proof.

  1. Pr[A & B] = Pr[B] Pr[A|B] = Pr[A] Pr[B|A] by definition of conditional probability.
  2. Pr[A|B] = Pr[A] Pr[B|A] / Pr[B] if we divide by Pr[B]. This assumes Pr[B]>0 because otherwise this operation is invalid.

You can't derive properties of probability from Russell's axioms, because these describe set theory and not probability. One standard way of deriving properties of probability is via Dutch Book arguments. These can only show that probabilities must be in the range [0,1] (including the endpoints). In fact, no finite sequence of bets you offer me can distinguish a credence of 1 from a credence of 1-epsilon for sufficiently small epsilon. (That is, for any epsilon, there's a bet that distinguishes 1-epsilon from 1, but for any sequence of bets, there's a 1-epsilon that is indistinguishable from 1.)

Here is an analogy. The well-known formula D = RT describes the relationship between distance traveled, average speed, and time. You can also express this as log(D) = log(R) + log(T) if you like, or D/R = T. In either of these formulas, setting R=0 will be an error. This doesn't mean that there's no such thing as a speed of 0, and if you think your speed is 0 you are actually traveling at a speed of epsilon for some very small value of epsilon. It just means that when you passed to these (mostly equivalent) formulations, you lost the capability to discuss speeds of 0. In fact, when we set R to 0 in the original formula, we get a more useful description of what happens: D=0 no matter the value of T. In other words, 0 is a valid speed, but you can't travel a nonzero distance with an average speed of zero, no matter how much time you allow yourself.

What is the difference between log-odds and log-speeds, that makes the former an isomorphism and the latter an imperfect description?

Finally, do you really think that someone who thinks "0 and 1 are probabilities" is a statement LW is irrational about is unaware of the "0 and 1 are not probabilities" post?

Comment author: AspiringRationalist 04 July 2012 11:49:55PM 6 points [-]

Downvoted for agreement. Trivially, P(A|A)=1 and P(A|~A)=0.

Comment author: Andreas_Giger 04 July 2012 01:11:53PM -1 points [-]

Downvoted for agreement. Of course usually it isn't rational to assign probabilities of 0 and 1, but in this case I think it is.

Comment author: JackV 04 July 2012 10:18:41AM -1 points [-]

I'd not seen Eliezer's post on "0 and 1 are not probabilities" before. It was a very interesting point. The link at the end was very amusing.

However, it seems he meant "it would be more useful to define probabilities excluding 0 and 1" (which may well be true), but phrased it as if it were a statement of fact. I think this is dangerous and almost always counterproductive -- if you mean "I think you are using these words wrong" you should say that, not give the impression you mean "that statement you made is false according to your own interpretation of those words".

Comment author: [deleted] 03 July 2012 06:51:28PM 3 points [-]

Meta-discussion Comment

Comment author: wedrifid 03 July 2012 07:35:43PM 1 point [-]

Thanks Hariant!

Comment author: FiftyTwo 06 July 2012 09:14:47PM 6 points [-]

I suspect many of the upvotes in this are being done out of an assessment of the interestingness or well-writtenness of a comment rather than disagreement. If this weren't the case I would expect boring and obviously untrue statements to be at the top; instead the top comments are interesting and more boring ones are hovering around zero.

I suspect upvoting comments you enjoy reading becomes reflexive in long time users, so overriding that instinct requires conscious system 2 effort.

Comment author: TimS 05 July 2012 12:11:52AM 1 point [-]

Just want to make sure I'm understanding the terminology. Saying I'm 10% confident of proposition X is equivalent to saying I'm 90% confident in not-X, right?

Comment author: Pavitra 05 July 2012 07:41:52AM 3 points [-]

Yes. However, since the point of the game is to display beliefs that you hold and others don't, you should choose the phrasing that makes your confidence higher than LW's. That is: if you think other LWers are 5% confident of X, then you should say you're 10% confident of X; and if you think other LWers are 15% confident of X, then you should say you're 90% confident of not-X.

Comment author: Cthulhoo 04 July 2012 09:30:16AM 20 points [-]

Irrationality Game

I believe that exposure to rationality (in the LW sense) in its current state does in general more harm than good^ to someone who's already a skeptic. 80%

^ In the sense of generating less happiness and in general less "winning".

Comment author: wedrifid 09 July 2012 09:48:56PM 0 points [-]

I believe that exposure to rationality (in the LW sense) in its current state does in general more harm than good^ to someone who's already a skeptic. 80%

I predict with about 60% probability that exposure to LW rationality benefits skeptics more and is also more likely to harm non-skeptics.

Comment author: Viliam_Bur 05 July 2012 03:44:42PM *  2 points [-]

I realized I didn't have a model of an average skeptic, so I am not sure what my opinion on this topic actually is.

My provisional model of an average skeptic is like this: "You guys at LW have a good point about religion being irrational; the math is kind of interesting, but boring; and the ideas about superhuman intelligence and quantum physics being more than just equations are completely crazy."

No harm, no benefit, tomorrow everything is forgotten.

Comment author: Athrelon 05 July 2012 07:43:44PM 1 point [-]

I roughly agree with this one. This is something that we would not see much evidence of, if true.

Downvoted.

Comment author: woodside 06 July 2012 05:33:23PM 10 points [-]

Irrationality Game:

These claims assume MWI is true.

Claim #1: Given that MWI is true, a sentient individual will be subjectively immortal. This is motivated by the idea that branches in which death occurs can be ignored and that there are always enough branches for some form of subjective consciousness to continue.

Claim #2: The vast majority of the long-term states a person will experience will be so radically different than the normal human experience that they are akin to perpetual torture.

P(Claim #1) = 60%

P(Claim #2 | Claim #1) = 99%

Comment author: Eliezer_Yudkowsky 09 July 2012 06:31:43PM 6 points [-]

Given these beliefs, you should buy cryonics at almost any price, including prices at which I would no longer personally sign up and prices at which I would no longer advocate that other people sign up. Are you signed up? If not, then I upvote the above comment because I don't believe you believe it. :)

Comment author: woodside 10 July 2012 07:11:05AM 1 point [-]

Well, I agree with you that I should buy cryonics at very high prices and I plan on doing so. For the last few years I've spent the majority of my time in places where being signed up for cryonics wouldn't make a difference (9 months out of the year on a submarine, and now overseas in a place where there aren't any cryonics companies set up).

You should probably still upvote, because the < 1/4 of the time I've spent in situations where it would matter still more than justifies it. I should also never eat an ice cream Snickers again. I'll be the first to admit I don't behave perfectly rationally. :)

Comment author: komponisto 09 July 2012 10:55:20PM 1 point [-]

The person may not believe that MWI is true; the beliefs were stated as being conditional.

Nevertheless, your argument does apply to me, since I have similar beliefs (or at least worries), and I also for the most part buy your arguments on MWI. I do plan to sign up for cryonics within the next year or so, but not at any price. This is because I don't expect to die soon enough for my short-term motivational system to be affected.

Comment author: MixedNuts 11 July 2012 11:03:10AM 5 points [-]

Multiple systems are correct about their experiences. In particular, killing a N-person system is as bad as killing N singlets. (90%)

Comment author: MixedNuts 12 July 2012 11:19:45AM 2 points [-]

From private exchange with woodside, published with authorization

woodside:

I'm leaning heavily towards viewing this as a (not necessarily destructive) mental disorder but I'm keeping an open mind because it seems obvious that multiplicity is possible in a general sense (multiple emulations could obviously be simultaneously run on a "single" piece of fast enough hardware) but it seems like there are tons of problems when you think about a human brain doing the same.

It's not surprising that most multiples (this is my gut instinct) also have non-standard sexual orientations, because so much of sexuality is tied up with hormone and chemical levels in the brain that are separate from the map of your neural connections, and these levels wouldn't appreciably change between one personality and another.

Also I'm extremely skeptical that the brain has sufficient resources from a hardware perspective to run multiple "complete" people. It seems like evolution would have pruned away that much excess processing power.

MixedNuts:

How complex is it to run extra people? Most functions are certainly shared. I don't think I've heard of a case where perceptions, reflexes or language skills differed between members, motor skills are shared more often than not, and memory is a coin toss. It'd be interesting to see if disturbances at low level (e.g. strokes) affect members differently. (Meds do, but there are lots of psychological effects here.) I'd assume that most systems have just enough differences between members to make them different people, or a little less, whence medians.

I have a pet theory that multiplicity is caused by empathy going overboard. This is suggested by fictives (characters from works of fiction appearing in a system) and a few cases of people from one system joining another.

Comment author: FiftyTwo 19 March 2013 01:36:11AM 1 point [-]

I'd say I'm reasonably confident that there is something interesting going on, but I wouldn't go as far as to say they are genuinely different people to the extent of having equal moral weight to standard human personalities.

I would guess they are closer to different patterns of accessing the same mental resources than fully different. (You could make an analogy with operating systems/programmes/user interfaces on a computer.)

Comment author: [deleted] 13 January 2013 11:33:52AM *  6 points [-]

Irrationality Game

Aaron Swartz did not actually commit suicide. (10%)

(Hat tip to Quirinus Quirrell, whoever that actually is.)

Comment author: Mitchell_Porter 04 July 2012 12:06:27AM *  41 points [-]

Irrationality Game

If we are in a simulation, a game, a "planetarium", or some other form of environment controlled by transhuman powers, then 2012 may be the planned end of the game, or end of this stage of the game, foreshadowed within the game by the Mayan calendar, and having something to do with the Voyager space probe reaching the limits of the planetarium-enclosure, the galactic center lighting up as a gas cloud falls in 30,000 years ago, or the discovery of the Higgs boson.

Since we have to give probabilities, I'll say 10%, but note well, I'm not saying there is a 10% probability that the world ends this year, I'm saying 10% conditional on us being in a transhumanly controlled environment; e.g., that if we are in a simulation, then 2012 has a good chance of being a preprogrammed date with destiny.

Comment author: OphilaDros 04 July 2012 05:15:35AM 3 points [-]

Upvoted because 10% as an estimate seems too high.

I especially can't imagine why transhuman powers would have used the end of the calendar of a long-dead civilization (one of many comparable civilizations) to foreshadow the end of their game plan.

Comment author: Mitchell_Porter 04 July 2012 11:45:51AM 3 points [-]

It's easy to invent scenarios. But the high probability estimate really derives from two things.

First, the special date from the Mayan calendar is astronomically determined, to a degree that hasn't been recognized by mainstream scholarship about Mayan culture. The precession of the equinoxes takes 26000 years. Every 6000 years or so, you have a period in which a solstice sun or an equinox sun lines up close to the galactic center, as seen from Earth. We are in such a period right now; I think the point of closest approach was in 1998. Then, if you mark time by transits of Venus (Venus was important in Mayan culture, being identified with their version of the Aztecs' Quetzalcoatl), that picks out the years 2004 and 2012. It's the December solstice which is the "galactic solstice" at this time, and 21 December 2012 will be the first December solstice after the last transit of Venus during the current period of alignment.

OK, so one might suppose that a medieval human civilization with highly developed naked-eye astronomy might see all that coming and attach a quasi-astrological significance to it. What's always bugged me is that this period in time, whose like comes around only every 6000 years, is historically so close to the dramatic technological developments of the present day.

Carl Sagan wrote a novel (Contact) in which, when humans speak to the ultra-advanced aliens, they discover that the aliens also struggle with impossible messages from beyond, because there are glyphs and messages encoded in the digits of pi. If you were setting up a universe in such a way that you wanted creatures to go through a singularity, and yet know that the universe they had now mastered was just a second-tier reality, one way to do it would certainly be to have that singularity occur simultaneously with some rare, predetermined astronomical configuration.

Nothing as dramatic as a singularity is happening yet in 2012, but it's not every day that a human probe first reaches interstellar space, the black hole at the center of the galaxy visibly lights up, and we begin to measure the properties of the fundamental field that produces mass, all of this happening within a year of an ancient, astronomically timed prophecy of world-change. It sounds like an unrealistic science-fiction plot. So perhaps one should give consideration to models which treat this as more than a coincidence.

Comment author: Khoth 04 July 2012 03:10:33PM 7 points [-]

Why pick out those events?

It's easy to see it as a coincidence when you take into account all the events that you might have counted as significant if they'd happened at the right time. How about the discovery of general relativity, the cosmic microwave background, neutrinos, the Sputnik launch, various supernovae, the Tunguska impact, etc etc?

Comment author: OphilaDros 04 July 2012 04:20:57PM 2 points [-]

Also all those dramatic technological developments of 6000 years ago, which seem minor now due to the passage of time and further advances in knowledge and technology. As no doubt the discovery of the Higgs boson or Voyager leaving the boundary of the solar system would seem in 8012 AD, if anybody even remembers these events then.

Comment author: NancyLebovitz 04 July 2012 12:33:58PM 1 point [-]

Also, even if the transhuman powers are choosing based on current end-of-the-world predictions, there's no reason why they would choose 2012 rather than any of the many past predictions.

Comment author: FiftyTwo 18 July 2012 01:11:40PM 9 points [-]

Irrationality game

Money does buy happiness. In general the rich and powerful are in fact ridiculously happy to an extent we can't imagine. The hedonic treadmill and similar theories are just a product of motivated cognition, and the wealthy and powerful have no incentive to tell us otherwise. 30%

Comment author: TimS 03 July 2012 08:37:41PM *  29 points [-]

Irrationality Game

For reasons related to Gödel's incompleteness theorems and mathematically proven minimum difficulties for certain algorithms, I believe there is an upper limit on how intelligent an agent can be. (90%)

I believe that human hardware can - in principle - be as intelligent as it is possible to be. (60%) To be clear, this doesn't actually occur in the real world we currently live in. I consider the putatively irrational assertion roughly isomorphic to asserting that AGI won't go FOOM.


If you voted already, you might not want to vote again.

Comment author: John_Maxwell_IV 06 July 2012 05:08:20AM *  0 points [-]

If particles snap to grid once you get down far enough, then there are a finite, though very large, number of ways you could configure atoms and stuff them into a limited amount of space, which trivially implies that the maximum amount of intelligence you could fit into a finite amount of space is bounded.

And of course you could also update perfectly on every piece of evidence, simulate every possibility, etc.. in this hypothetical universe. This is the theoretical maximum bound on intelligence.

If our universe can be well approximated by a snap to grid universe, or really can be well approximated by any Turing machine at all, then your statements seem trivially true.

Comment author: Eugine_Nier 07 July 2012 05:58:19AM 0 points [-]

If particles snap to grid once you get down far enough, then there are a finite, though very large, number of ways you could configure atoms and stuff them into a limited amount of space.

It's called the Bekenstein bound and it doesn't require discreteness.

Comment author: [deleted] 05 July 2012 01:38:05AM 1 point [-]

Downvoted for the first, upvoted for the second.

Physics limit how big computers can get; I have no evidence whatsoever for humans being optimal.

Comment author: faul_sname 04 July 2012 05:11:17AM *  3 points [-]

I believe there is an upper limit on how intelligent an agent can be. (90%)

What's your estimate that this value is at a level that we actually care about (i.e. not effectively infinite from our point of view)?

Comment author: Armok_GoB 04 July 2012 07:17:44PM *  38 points [-]

IRRATIONALITY GAME

Eliezer Yudovsky has access to a basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote.

Probability: improbable ( 2% )

Comment author: TheOtherDave 04 July 2012 07:58:00PM 7 points [-]

Upvoted for vast overconfidence.
Downvoted back to zero because I suspect you're not following the rules of the thread.
Also, I have no idea who "Eliezer Yudovsky" is, though it doesn't matter for either of the above.

Comment author: John_Maxwell_IV 06 July 2012 05:03:07AM 11 points [-]

If such a universal basilisk exists, wouldn't it almost by definition kill the person who discovered it?

I think it's vaguely plausible such a basilisk exists, but I also think you are suffering from the halo effect around EY. Why would he of all people know about the basilisk? He's just some blogger you read who says things as though they are Deep Wisdom so people will pay attention.

Comment author: Eliezer_Yudkowsky 09 July 2012 06:16:54PM 7 points [-]

This seems like a clear example of "You shouldn't adjust the probability that high just because you're trying to avoid overconfidence; that's privileging a complicated possibility."

Comment author: wedrifid 09 July 2012 09:45:48PM 2 points [-]

This seems like a clear example of "You shouldn't adjust the probability that high just because you're trying to avoid overconfidence; that's privileging a complicated possibility."

Has there been a post on this subject yet? Handling overconfidence in that sort of situation is complicated.

Comment author: Eliezer_Yudkowsky 09 July 2012 10:24:13PM 1 point [-]
Comment author: wedrifid 10 July 2012 12:42:24AM 1 point [-]

Thanks! I recall reading that one but didn't recall it here.

It still leaves me with some doubt about how to handle uncertainty around the extremes without being pumpable or sometimes catastrophically wrong. I suppose some of that is inevitable given hardware that is both bounded and corrupted but I rather suspect there is some benefit to learning more. There's probably a book or ten out there I could read.

Comment author: maia 06 July 2012 04:20:52PM 13 points [-]

This seems like a sarcastic Eliezer Yudkowsky Fact, not a serious Irrationality Game entry.

Comment author: faul_sname 04 July 2012 09:11:27PM 13 points [-]

Upvoted for enormous overconfidence that a universal basilisk exists.

Comment author: Armok_GoB 05 July 2012 12:36:48AM 0 points [-]

Never said it was a single universal one. And a lot of that 2% is meta uncertainty from doing the math sloppily.

The part where I think I might do better is having been on the receiving end of weaker basilisks and having some vague idea of how to construct something like it. That last part is the tricky one stopping me from sharing the evidence as it'd make it more likely a weapon like that falls into the wrong hands.

Comment author: faul_sname 05 July 2012 03:02:39AM 5 points [-]

The thing about basilisks is that they have limited capacity for causing actual death. Particularly among average people who get their cues of whether something is worrying from the social context (e.g. authority figures or their social group).

Comment author: Armok_GoB 05 July 2012 01:52:48PM 1 point [-]

Must... resist... revealing... info.... that... may... get... people.... killed.

Comment author: faul_sname 05 July 2012 02:49:22PM 3 points [-]

Please do resist. If you must tell someone, do it through private message.

Comment author: Armok_GoB 05 July 2012 07:17:13PM 1 point [-]

Yea. It's not THAT big a danger, I'm just trying to make it clear why I hold a belief not based of evidence that I can share.

Comment author: Davorak 09 July 2012 09:04:17PM *  3 points [-]

Suppose your evidence is a written work that has driven multiple people to suicide, and further that the written work was targeted at an individual and happened to kill other susceptible people who read it. I would still rate 2% as overconfident.

Specifically, the claim of universality (that "any person" can be killed by reading a short email) is overconfident. Two of your claims seem to contradict each other: "any person" and "with a few clicks" suggest that special or in-depth knowledge of the individual is unnecessary, which implies some level of universality, while you also said "Never said it was a single universal one." My impression is that you lean towards hand-crafted basilisks targeted at individuals or groups of similar individuals, but the contradiction lowered my estimate of this being correct.

Such hand-crafted basilisks would indicate the ability to model people correctly to an exceptional degree and to experiment with that model until an input can be found which causes death. I have considered other alternative explanations but found them unlikely; if you rate another as more realistic, let me know.

This ability could be used for a considerable number of tasks other than causing death: strongly influencing elections, legislation, the research directions of AI researchers or groups, and much more. If EY possessed this power, how would you expect the world to be different from one where he does not?

Comment author: Armok_GoB 29 July 2012 07:57:11PM 1 point [-]

I don't remember this post. Weird. I've updated on it though; my evidence is indeed even weaker than that, and you are absolutely correct on every point. I've updated to the point where my own estimate and my estimation of the community's estimate are indistinguishable.

Comment author: Davorak 31 July 2012 07:24:34PM *  1 point [-]

Interesting. I will be more likely to reply to messages that I feel end the conversation, like your last one on this post:

It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don't know exactly how much.

maybe 12-24 hours later, just in case the likelihood of an update has been reduced by one or both parties having had a late-night conversation or other mind-altering effects.

Comment author: Armok_GoB 09 July 2012 11:27:08PM 1 point [-]

It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don't know exactly how much.

Comment author: linkhyrule5 07 September 2013 10:53:15PM 4 points [-]

Irrationality Game:

Time travel is physically possible.

80%

Comment author: CellBioGuy 04 February 2013 05:17:24AM *  3 points [-]

Irrationality game:

Humanity has already received and recorded a radio message from another technological civilization. This was unconfirmed/unnoticed due to being very short and unrepeated, or mistaken for a transient terrestrial signal, or modulated in ways we were not looking for, or was otherwise overlooked. 25%.

What are the rules on multiple postings? I have a cluster of related (to each other, not this) ones I would love to post as a group.

Comment author: [deleted] 14 January 2013 10:07:51PM 10 points [-]

Irrationality game: The universe is, due to some non-reducible (i.e. non-physical) entity, indeterministic. 95%. That entity is the human mind (not brain). 90%.