
Irrationality Game II

13 [deleted] 03 July 2012 06:50PM

I was very interested in the discussions and opinions that grew out of the last time this was played, but find digging through 800+ comments for a new game to start on the same thread annoying. I also don't want this game ruined by a potential sock puppet (whoever they may be). So here's a non-sockpuppeteered Irrationality Game, if there's still interest. If there isn't, downvote to oblivion!

The original rules:


Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
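The voting rule above can be sketched as a tiny decision helper. This is purely illustrative (the post explicitly says to use intuition, not a precise mathy scoring system), and the 0.05 cutoff is an arbitrary assumption of mine:

```python
def irrationality_vote(their_p, your_p, threshold=0.05):
    """Return 'up' (basically disagree), 'down' (basically agree), or 'pass'.

    their_p: the probability the commenter assigned to their belief.
    your_p: your own probability for the same proposition.
    threshold: arbitrary cutoff for 'basically agree' -- an assumption;
    the rules say to guess intuitively rather than compute anything.
    """
    gap = abs(their_p - your_p)
    if gap > threshold:
        return "up"    # reward perceived irrationality: you disagree
    if gap < threshold:
        return "down"  # you basically agree
    return "pass"      # genuinely unsure


# The post's own example: they say 99.9%, you believe 90% -> upvote.
print(irrationality_vote(0.999, 0.90))   # up
print(irrationality_vote(0.999, 0.995))  # down (the post says this could go either way)
```

Note the symmetry: the helper upvotes for both overconfidence and underconfidence, matching the rule that any disagreement is valid disagreement.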

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion is great, but keep it civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

Enjoy!

 

Comments (380)

Comment author: linkhyrule5 07 September 2013 10:53:15PM 4 points [-]

Irrationality Game:

Time travel is physically possible.

80%

Comment author: wedrifid 08 September 2013 12:11:26AM 0 points [-]

Irrationality game upvote for disagreement. This is based on the confidence rather than the claim. I would also upvote if the probability given was, say, less than 1%.

Comment author: linkhyrule5 08 September 2013 01:38:48AM 0 points [-]

80% is hardly "confident"... but fair enough.

Comment author: wedrifid 08 September 2013 01:43:19AM 1 point [-]

80% is hardly "confident"... but fair enough.

I perhaps could have said "the specific probability estimate given" to be clearer about the meaning I was attempting to convey.

Comment author: CellBioGuy 04 February 2013 05:17:24AM *  3 points [-]

Irrationality game:

Humanity has already received and recorded a radio message from another technological civilization. This was unconfirmed/unnoticed due to being very short and unrepeated, or mistaken for a transient terrestrial signal, or modulated in ways we were not looking for, or was otherwise overlooked. 25%.

What are the rules on multiple postings? I have a cluster of related (to each other, not this) ones I would love to post as a group.

Comment author: [deleted] 14 January 2013 10:07:51PM 10 points [-]

irrationality game: The universe is, due to some non-reducible (i.e. non-physical) entity, indeterministic. 95% That entity is the human mind (not brain). 90%

Comment author: [deleted] 13 January 2013 11:33:52AM *  6 points [-]

Irrationality Game

Aaron Swartz did not actually commit suicide. (10%)

(Hat tip to Quirinus Quirrell, whoever that actually is.)

Comment author: [deleted] 01 August 2012 10:37:17PM *  1 point [-]

The Mona Lisa currently displayed at the Louvre Museum is actually a replica. (33%)

Comment author: Yvain 20 July 2012 02:37:30AM *  2 points [-]

Irrationality game comment

The importance of waste heat in the brain is generally under-appreciated. An overheated brain is a major source of mental exhaustion, akrasia, and brain fog. One easy way to increase the amount of practical intelligence we can bring to bear on complicated tasks (with or without an accompanying increase in IQ itself) is to improve cooling in the brain. This would be most effective with some kind of surgical cooling system thingy, but even simple things like being in a cold room could help.

Confidence: 30%

Comment author: wedrifid 08 September 2013 12:17:16AM 0 points [-]

One easy way to increase the amount of practical intelligence we can bring to bear on complicated tasks (with or without an accompanying increase in IQ itself) is to improve cooling in the brain.

To the pork futures warehouse!

Comment author: [deleted] 03 February 2013 09:44:32PM 1 point [-]

INSERT THE ROD, JOHN.

Comment author: Jonathan_Graehl 11 October 2012 12:39:23AM 1 point [-]

Overheating your body enough to limit athletic performance (whether due to associated dehydration or not) is probably enough to impair the brain as well. Dehydration is known to cause headaches.

I think the effect exists. But what's the size, when you're merely sedentary + thinking + suffering a hot+humid day?

Comment author: AlexSchell 07 August 2012 08:50:17PM 0 points [-]

Some indirect evidence from yawning, with a few references: http://www.epjournal.net/wp-content/uploads/ep0592101.pdf

Comment author: gwern 20 July 2012 03:35:45AM *  4 points [-]

The nice thing about this one is that it's really easy to test yourself. A plastic bag to put ice or hot water into, and some computerized mental exercise like dual n-back. I know if I thought this at anywhere close to 30% I'd test it...

EDIT: see Yvain's full version: http://squid314.livejournal.com/320770.html http://squid314.livejournal.com/321233.html http://squid314.livejournal.com/321773.html

Comment author: Yvain 27 July 2012 08:40:17PM 1 point [-]

Self-experimentation seems like a really bad way to test things about mental exhaustion. It would be way too easy to placebo myself into working for a longer amount of time without a break, when testing the condition that would support my theory. Might wait until I can find a test subject.

Comment author: gwern 27 July 2012 08:50:55PM 7 points [-]

If you got a result consistent with your theory, then yes it might just be placebo effect, but is that result entirely useless; and if you got a result inconsistent with your theory, is that useless as well?

Comment author: wedrifid 08 September 2013 01:06:08AM 2 points [-]

but is that result entirely useless; and if you got a result inconsistent with your theory, is that useless as well?

"Conservation of expected uselessness!"

Comment author: FiftyTwo 18 July 2012 01:11:40PM 9 points [-]

Irrationality game

Money does buy happiness. In general the rich and powerful are in fact ridiculously happy to an extent we can't imagine. The hedonic treadmill and similar theories are just a product of motivated cognition, and the wealthy and powerful have no incentive to tell us otherwise. 30%

Comment author: Grognor 14 July 2012 03:09:33AM *  0 points [-]

Irrationality game comment

The correct way to handle Pascal's Mugging and other utilitarian mathematical difficulties is to use a bounded utility function. I'm very metauncertain about this; my actual probability could be anywhere from 10% to 90%. But I guess that my probability is 70% or so.

Comment author: MixedNuts 11 July 2012 11:03:10AM 5 points [-]

Multiple systems are correct about their experiences. In particular, killing a N-person system is as bad as killing N singlets. (90%)

Comment author: FiftyTwo 19 March 2013 01:36:11AM 1 point [-]

I'd say I'm reasonably confident that there is something interesting going on, but I wouldn't go as far as to say they are genuinely different people to the extent of having equal moral weight to standard human personalities.

I would guess they are closer to different patterns of accessing the same mental resources than fully different. (You could make an analogy with operating systems/programmes/user interfaces on a computer.)

Comment author: MixedNuts 12 July 2012 11:19:45AM 2 points [-]

From private exchange with woodside, published with authorization

woodside:

I'm leaning heavily towards viewing this as a (not necessarily destructive) mental disorder but I'm keeping an open mind because it seems obvious that multiplicity is possible in a general sense (multiple emulations could obviously be simultaneously run on a "single" piece of fast enough hardware) but it seems like there are tons of problems when you think about a human brain doing the same.

It's not surprising that most multiples (this is my gut instinct) also have non-standard sexual orientations, because so much of sexuality is tied up with hormone and chemical levels in the brain that are separate from the map of your neural connections, and these levels wouldn't appreciably change between one personality and another.

Also I'm extremely skeptical that the brain has sufficient resources from a hardware perspective to run multiple "complete" people. It seems like evolution would have pruned away that much excess processing power.

MixedNuts:

How complex is it to run extra people? Most functions are certainly shared. I don't think I've heard of a case where perceptions, reflexes or language skills differed between members, motor skills are shared more often than not, and memory is a coin toss. It'd be interesting to see if disturbances at low level (e.g. strokes) affect members differently. (Meds do, but there are lots of psychological effects here.) I'd assume that most systems have just enough differences between members to make them different people, or a little less, whence medians.

I have a pet theory that multiplicity is caused by empathy going overboard. This is suggested by fictives (characters from works of fictions appearing in a system) and a few cases of people from one system joining another.

Comment author: [deleted] 09 July 2012 10:18:38PM *  3 points [-]

It is plausible that an existing species of dolphin or whale possesses symbolic language and oral culture at least on par with that of neolithic-era humanity. (75%)

Comment author: Alicorn 09 July 2012 11:00:21PM *  7 points [-]

Is "it is plausible" part of the statement to which you give 75% credence, or is it another way of putting said credence?

Because cetacean-language is more than 75% likely to be plausible but I think less than 75% likely to be true.

Comment author: TheOtherDave 09 July 2012 10:30:13PM 0 points [-]

Upvoted for overconfidence.

Comment author: AandNot-A 09 July 2012 12:17:32PM -1 points [-]

Irrationality game:

Different levels of description are just that, and are all equally "real". To speak of particles as in statistical mechanics or as in thermodynamics is as correct/real.

The same about the mind, talking as in neurochemistry or as in thoughts is as correct/real.

80% confidence

Comment author: MixedNuts 11 July 2012 09:25:14AM 0 points [-]

How, if at all, does this differ from "reductionism is true"? There are approximations made in high-level descriptions (e.g. number of particles treated as infinitely larger than its variation); are you saying they are real, or that the high-level description is true modulo these approximations? What do you mean by "real" anyway?

Tentatively downvoted because this looks like some brand of reductionism.

Comment author: asparisi 08 July 2012 07:05:21AM -1 points [-]

Irrationality Game

The Big Bang is not the beginning of the universe, nor is it even analogous to the beginning of the universe. (60% confident)

Comment author: [deleted] 16 July 2012 08:22:14PM -1 points [-]

Nonvoted. It might just be a 0 on the Real line, or analogous. I don't know the real laws of physics, but that seems sensible.

Comment author: Not-A 06 July 2012 07:29:28PM 2 points [-]

Irrationality Game:

I believe Plato (and others) were right when they said music develops some form of sensibility, some sort of compassion. I posit a link between the capacity of understanding music and understanding other people by creating accurate images of them in our head, and of how they feel. 80%

Comment author: woodside 06 July 2012 05:33:23PM 10 points [-]

Irrationality Game:

These claims assume MWI is true.

Claim #1: Given that MWI is true, a sentient individual will be subjectively immortal. This is motivated by the idea that branches in which death occurs can be ignored and that there are always enough branches for some form of subjective consciousness to continue.

Claim #2: The vast majority of the long-term states a person will experience will be so radically different than the normal human experience that they are akin to perpetual torture.

P(Claim #1) = 60%

P(Claim #2 | Claim #1) = 99%

Comment author: Eliezer_Yudkowsky 09 July 2012 06:31:43PM 6 points [-]

Given these beliefs, you should buy cryonics at almost any price, including prices at which I would no longer personally sign up and prices at which I would no longer advocate that other people sign up. Are you signed up? If not, then I upvote the above comment because I don't believe you believe it. :)

Comment author: woodside 10 July 2012 07:11:05AM 1 point [-]

Well, I agree with you that I should buy cryonics at very high prices and I plan on doing so. For the last few years I've spent the majority of my time in places where being signed up for cryonics wouldn't make a difference (9 months out of the year on a submarine, and now overseas in a place where there aren't any cryonics companies set up).

You should probably still upvote because the < 1/4 of the time I've spent in situations where it would matter still more than justify it. I should also never eat an icecream snickers again. I'll be the first to admit I don't behave perfectly rationally. :)

Comment author: MatthewBaker 11 July 2012 11:22:36PM 0 points [-]

more people have died from cryocrastinating than cryonics ;)

Comment author: komponisto 09 July 2012 10:55:20PM 1 point [-]

The person may not believe that MWI is true; the beliefs were stated as being conditional.

Nevertheless, your argument does apply to me, since I have similar beliefs (or at least worries), and I also for the most part buy your arguments on MWI. I do plan to sign up for cryonics within the next year or so, but not at any price. This is because I don't expect to die soon enough for my short-term motivational system to be affected.

Comment author: Alejandro1 06 July 2012 07:36:55AM 2 points [-]

Irrationality Game:

The Occam argument against theism, in the forms typically used in LW invoking Kolmogorov complexity or equivalent notions, is a lousy argument: its premises and conclusions are not incorrect, but it is question-begging to the point that no intellectually sophisticated theist should move their credence significantly by it. 75%.

(It is difficult to attach meaningfully a probability to this kind of claim, which is not about hard facts. I guesstimated that in an ideally open-minded and reasoned philosophical discussion, there would be a 25% chance of me being persuaded of the contrary.)

Comment author: Mestroyer 16 January 2013 06:47:57PM 0 points [-]

To the extent that it's begging anything, it's begging a choice of epistemology. If no intellectually sophisticated theist should take it seriously, what epistemology should they take seriously besides faith? If the answer is ordinary informal epistemology, when I present the Occam argument I accompany it with a justification of Occam's razor in terms of that epistemology.

Comment author: [deleted] 16 July 2012 08:20:09PM 0 points [-]

Theists are usually not rational about their theism. So there are relatively few arguments that bite.

Comment author: Alejandro1 16 July 2012 08:24:55PM 0 points [-]

Notice that I said "should move their credence", not "would". It is not a prediction about the reaction of (rational or irrational) real-life theists, but an assessment of the objective merits of the argument.

Comment author: [deleted] 17 July 2012 06:00:02AM 1 point [-]

Aaaaah. Upvoted for being wrong as a simple matter of maths.

Comment author: Alejandro1 17 July 2012 02:45:26PM *  0 points [-]

*grin * That's more like the reaction I was looking for!

I would be curious to see what is the maths you are referring to. I (think I) understand the math content of the Occam argument, and accept it as valid. Let me give an analogy for why I think the argument is useless anyway: suppose I tried the following argument against Christianity:

-If Christianity is true, God exists.

-God doesn't exist.

-Hence, Christianity is false.

The argument is valid as a matter of formal logic, and we would agree it has true premises and conclusion. However, it should (not only would, should) not persuade any Christian, because their priors for the second premise are very low, and the argument gives them no reason to update them. I contend the Occam argument is mathematically valid but question-begging and futile in a similar way. (I can explain more why I think this, if anybody is interested, but just wanted to make my position clear here).

Comment author: [deleted] 18 July 2012 09:42:46AM *  0 points [-]

The Occam argument is basically:

  • Humans are made by evolution to be approximately Occamian; this implies that Occamian reasoning is at least a local maximum of reasoning ability in our universe.

  • When we use our Occamian brains to consider the question of why the universe appears simple, we come up with the simple hypothesis that the universe is itself simple.

  • Describing the universe with maths works better than heroic epics or supernatural myths, as a matter of practical applicability and prediction power.

  • The mathematically best method of measuring simplicity is provably the one used in Solomonoff Induction/Kolmogorov complexity.

  • Quantum Mechanics and -Cosmology is one of the simplest explanations ever for the universe as we observe it.

The argument is sound, but the people are crazy. That doesn't make the argument unsound.

Comment author: hankx7787 05 July 2012 04:50:54PM 2 points [-]

MWI is unlikely because it is too unparsimonious (not very confident).

Comment author: [deleted] 16 July 2012 08:13:05PM 0 points [-]

Okay? So you weakly think reality should conform to your sensibilities? I've got a whole lot of evidence behind a heuristic that is bad news for you... Not voted anything, both out of not really knowing what you mean, and also because the true QMI (explaining among other things Born Probabilities) might be smaller than just the "brute force" decoherence of MWI (such as Mangled Worlds).

Comment author: hankx7787 17 July 2012 12:12:19AM *  1 point [-]

Well, I'm sort of hypothesizing that simplicity is not just elegance, but involves a trade-off between elegance and parsimony (vaguely similar to how algorithmic 'efficiency' involves a trade-off between time and space). What heuristic are you referring to which is bad news for this hypothesis? Also, what's QMI? I'm actually very much ignorant when it comes to quantum mechanics.

Comment author: [deleted] 17 July 2012 06:11:02AM 0 points [-]

First of all, I don't care much for some philosophical dictionary's definition of simplicity. You are going to have to specify what you mean by parsimony, and you are going to have to specify it with maths.

Here's my take:
Simplicity is the opposite of Complexity, and Complexity is the Kolmogorov kind. That is the entirety of my definition. And the universe appears to be made on very simple (as specified above) maths.

The Heuristic I am referring to is: "There are many, many, many occasions where people have expected the universe to conform to their sensibilities, and have been dead wrong." It has a lot of evidence backing it, and QM is one very counter-intuitive thing (although the maths are pretty simple), you simply aren't built to think about it.

QMI: Quantum Mechanical Interpretation

Lastly: Have you even read the QM sequence? It gives you a good grasp of what physicists are doing and also explains why everything non-MWI-like is more complex (of the Kolmogorov kind) than anything MWI-like.

Comment author: hankx7787 17 July 2012 11:32:44AM *  0 points [-]

No, I'm not defining a notion based on anyone's whim/sensibilities; I fully agree that, to be meaningful, any account of 'simplicity' must be fully formalizable (a la K-complexity). However, I expect a full account of simplicity to include both elegance and parsimony based on the following kind of intuition:

a) There is in fact "stuff" out there
b) Everything that actually exists consists of some orderly combination of this stuff, acting in an orderly manner according to the nature of the stuff
c) All other things being equal, a theory is more simple if it posits less 'stuff' to account for the phenomena
d) Some full account of simplicity should include both elegance (a la K-complexity) and this sense of parsimony in a sort of trade-off relationship, such that, for example, if (all other things being equal) theory A is 5x more elegant but 1000x less parsimonious, and theory B is correspondingly 5x less elegant but 1000x more parsimonious, we should favor theory B

My reasons for expecting there to be some formalization of simplicity which fully accounts for both of these concepts in such a way is, admittedly, somewhat based on whim/sensibility, as I cannot at this time provide such a formalization nor do I have any real evidence such a thing is possible (hence why this discussion is taking place in a thread entitled 'Irrationality game' and not in some more serious venue) - however, whim/sensibility is not inherent to the overall notion per se, i.e. I am not suggesting this notion of an elegance/parsimony trade-off is somehow true-but-not-formalizable or any such thing.

Comment author: RomeoStevens 05 July 2012 04:45:43AM 5 points [-]

There is no dark matter. Gravity behaves weirdly for some other reason we haven't discovered yet. (85%)

Comment author: Mitchell_Porter 05 July 2012 07:09:21AM 2 points [-]

Many such "modified gravity" theories have been proposed. The best known is "MOND", "Modified Newtonian Dynamics".

Comment author: AspiringRationalist 05 July 2012 04:02:35AM 6 points [-]

The case for atheistic reductionism is not a slam-dunk.

While atheistic reductionism is clearly simpler than any of the competing hypotheses, each added bit of complexity doubles the size of hypothesis space. Some of these additional hypotheses will be ruled out due to impossibility or inconsistency with observation, but that still leaves a huge number of possible hypotheses that each take up a tiny amount of probability mass but collectively add up.

I would give atheistic reductionism a ~30% probability of being true. (I would still assign specific human religions or a specific simulation scenario approximately zero probability.)

Comment author: Pavitra 05 July 2012 07:45:17AM 0 points [-]

Assuming our MMS-prior uses a binary machine, the probability of any single hypothesis of complexity C=X is equal to the total probabilities of all hypotheses of complexity C>X.
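Pavitra's claim is just the geometric series over a binary 2^-C prior: 2^-X equals the sum of 2^-c for all c > X. A quick numerical check (my own sketch; the assumption of exactly one hypothesis per complexity level is an illustrative simplification — real Solomonoff-style priors count many programs per length, which changes constants but not the identity):

```python
# Sketch: under a 2**-c prior with one hypothesis per complexity level c,
# the mass at complexity X equals the total mass at all complexities > X.
# (One-hypothesis-per-level is an assumption for illustration only.)
X = 10
mass_at_X = 2.0 ** -X
mass_above_X = sum(2.0 ** -c for c in range(X + 1, 200))  # truncated infinite tail
print(abs(mass_at_X - mass_above_X) < 1e-15)  # True: 2^-X = sum over c > X of 2^-c
```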

Comment author: HonoreDB 05 July 2012 03:32:58AM 5 points [-]

Irrationality Game

Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they're cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%
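HonoreDB's "naive histocratic algorithm" is not spelled out in the comment; here is a minimal sketch of one possible interpretation, weighting each predictor by the inverse of their past Brier score. The weighting scheme is my assumption, not HonoreDB's, and is not uniquely "Bayesian":

```python
def histocratic_estimate(estimates, past_scores):
    """Weighted average of probability estimates.

    estimates: each predictor's current probability estimate.
    past_scores: each predictor's past mean squared error (Brier score);
    lower is better. Inverse-error weighting is one naive choice among many.
    """
    weights = [1.0 / (s + 1e-9) for s in past_scores]  # better history -> more weight
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total


# Two historically accurate predictors at 0.7, one poor one at 0.1:
print(round(histocratic_estimate([0.7, 0.7, 0.1], [0.05, 0.05, 0.4]), 3))  # 0.665
```

The aggregate lands near the accurate predictors' estimate rather than at the unweighted mean of 0.5, which is the behavior the comment attributes to such algorithms.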

Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%

Comment author: MixedNuts 11 July 2012 09:33:20AM 0 points [-]

Downvoted for agreement, but prediction markets still win because they're possible to implement. (Will change to upvote if you explicitly deny that too.)

Comment author: [deleted] 05 July 2012 04:22:47PM 0 points [-]

If you think Prediction Markets are terrible, why don't you just do better and get rich from them?

Comment author: RichardKennaway 05 July 2012 08:37:54AM 1 point [-]

histocratic

A new word to me. Is this what you're referring to?

Comment author: Kaj_Sotala 05 July 2012 08:02:41AM 7 points [-]

would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes)

Markets can incorporate any source or type of information that humans can understand. Which algorithm can do the same?

Comment author: AspiringRationalist 05 July 2012 03:53:49AM 2 points [-]

Down-voted for semi-agreement.

There are simply too many irrational people with money, and as soon as it became popular to participate in prediction markets, the way it currently is to participate in the stock market, they will add huge amounts of noise.

Comment author: Eliezer_Yudkowsky 09 July 2012 06:29:07PM 9 points [-]

The conventional reply is that noise traders improve markets by making rational prediction more profitable. This is almost certainly true for short-term noise, and my guess is that it's false for long-term noise, i.e., if prices revert in a day, noise traders improve a market, if prices take ten years to revert, the rational money seeks shorter-term gains. Prediction markets may be expected to do better because they have a definite, known date on which the dumb money loses - you can stay solvent longer than the market stays irrational.

Comment author: wedrifid 05 July 2012 03:47:55AM 18 points [-]

They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes)

Fantastic. Please tell me which markets this applies to and link to the source of the algorithm that gives me all the free money.

Comment author: CarlShulman 18 July 2012 01:10:46AM 1 point [-]

The IARPA expert aggregation exercises look plausible, and have supposedly done all right predicting geopolitical events. I would not be shocked if the first to use those methods on financial markets got a bit of alpha.

Comment author: HonoreDB 05 July 2012 03:57:57AM 1 point [-]

Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can't quite back it out of a prediction market's transaction history. And the amount of money to be made is small in any event because there's just not enough participation in the markets.

Comment author: Kaj_Sotala 05 July 2012 08:00:07AM *  13 points [-]

And the amount of money to be made is small in any event because there's just not enough participation in the markets.

Aren't prediction markets just a special case of financial markets? (Or vice versa.) Then if your algorithm could outperform prediction markets, it could also outperform the financial ones, where there is lots of money to be made.

In prediction markets, you are betting money on your probability estimates of various things X happening. On financial markets, you are betting money on your probability estimates of the same things X, plus your estimate of the effect of X on the prices of various stocks or commodities.

Comment author: prase 04 July 2012 10:25:55PM 6 points [-]

Irrationality game comment:

Imagine that we transformed the Universe using some elegant mathematical mapping (think about Fourier transform of the phase space) or that we were able to see the world through different quantum observables than we have today (seeing the world primarily in the momentum space, or even being able to experience "collapses" to eigenvectors not of x or p, but of a different, for us unobservable, operator, e.g. xp). Then, we would observe complex structures, perhaps with their own evolution and life and intelligence. That is, aliens can be all around us but remain as invisible as Mona Lisa on a Fourier transformed picture from Louvre.

Probability : 15%.

Comment author: marchdown 07 July 2012 01:54:01AM 1 point [-]

This is an interesting way to look at things. I would assert a higher probability, so I'm voting up. Even a slight tweaking (x+ε, m-ε) is enough. I'm imagining a continuous family of mappings starting with identity. These would preserve the structures we already perceive while accentuating certain features.

Comment author: Manfred 05 July 2012 12:42:26AM 1 point [-]

Any blob (continuous, smooth, rapidly decreasing function) in momentum space corresponds to a blob in position space. That is, you can't get structure in one without structure in the other.

Comment author: prase 05 July 2012 01:05:16PM 4 points [-]
  1. The narrower the blob, the wider its Fourier transform. To recognise a perfectly localised blob in the momentum space one would need to measure at every place over the whole Universe.
  2. Not every structure is recognisable as such by the human eye.
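Point 1 is the standard bandwidth tradeoff, and it is easy to check numerically. The sketch below (illustrative parameters of my choosing) compares two Gaussian blobs of different widths and measures the RMS width of each in position space and in the Fourier (momentum) domain:

```python
import numpy as np

def width(signal, coords):
    """RMS width of |signal|^2 around its mean coordinate."""
    density = np.abs(signal) ** 2
    density = density / density.sum()
    mean = (coords * density).sum()
    return np.sqrt(((coords - mean) ** 2 * density).sum())

x = np.linspace(-50, 50, 4096)
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))

narrow = np.exp(-x**2 / (2 * 0.5**2))   # sigma = 0.5 in position space
wide = np.exp(-x**2 / (2 * 5.0**2))     # sigma = 5.0 in position space

narrow_k = np.fft.fftshift(np.fft.fft(narrow))
wide_k = np.fft.fftshift(np.fft.fft(wide))

# The narrower position-space blob has the wider momentum-space blob.
print(width(narrow, x) < width(wide, x))      # True
print(width(narrow_k, k) > width(wide_k, k))  # True
```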
Comment author: endoself 05 July 2012 12:33:03AM 1 point [-]

Upvoted for underconfidence; there are a lot of bases you can use.

Comment author: prase 05 July 2012 01:19:11PM 1 point [-]

Still, what you see in one basis is not independent of what you see in another one, and I expect an elegant mapping between the bases. There is a difference between

  • "there exists a basis in the Hilbert space in which some vaguely interesting phenomena could be observed, if we were able to perceive the associated operator the same way as we perceive position"

and

  • "there exist simple functions of observables such as momentum, particle number or field intensities defining observables which, if we could perceive them directly, would show us a world with life and civilisations and evolution"

My 15% belief is closer to the second version.

Comment author: endoself 06 July 2012 01:41:03AM 2 points [-]

Okay, that's less likely. I'd still give it higher than 15% though. The holographic principle is very suggestive of this, for instance.

It's hard to know exactly what would count in order to make an estimate, since we don't yet know the actual laws of physics. It's obvious that "position observables, but farther away" would encode the regular type of alien, but the boundary between regular aliens and weird quantum aliens could easily blur as we learn more physics.

Comment author: Armok_GoB 04 July 2012 07:17:44PM *  38 points [-]

IRRATIONALITY GAME

Eliezer Yudkowsky has access to a basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote.

Probability: improbable ( 2% )

Comment author: [deleted] 23 January 2013 05:17:41PM 0 points [-]

Well, this is scary enough.

Comment author: Eliezer_Yudkowsky 09 July 2012 06:16:54PM 7 points [-]

This seems like a clear example of "You shouldn't adjust the probability that high just because you're trying to avoid overconfidence; that's privileging a complicated possibility."

Comment author: [deleted] 31 July 2012 07:53:37PM 0 points [-]

Reading this comment made me slightly update my probability that the parent, or a weaker version thereof, is correct.

Comment author: Armok_GoB 09 July 2012 11:21:07PM *  0 points [-]

It may or may not be an example, but it's certainly not a clear one to me. Please explain? The entire sentence seems nonsensical; I know what the individual words mean but not how to apply them to the situation. Is this just some psychological effect because it targets a statement I personally made? It certainly doesn't feel like it but...

Edit: Figured out what I misunderstood. I modelled it as .02 positive confidence, not .98 negative confidence.

Comment author: Psy-Kosh 10 July 2012 03:22:43AM 9 points [-]

2% is way way way WAY too high for something like that. You shouldn't be afraid to assign a probability much closer to 0.

Comment author: wedrifid 09 July 2012 09:45:48PM 2 points [-]

This seems like a clear example of "You shouldn't adjust the probability that high just because you're trying to avoid overconfidence; that's privileging a complicated possibility."

Has there been a post on this subject yet? Handling overconfidence in that sort of situation is complicated.

Comment author: Eliezer_Yudkowsky 09 July 2012 10:24:13PM 1 point [-]
Comment author: wedrifid 10 July 2012 12:42:24AM 1 point [-]

Thanks! I recall reading that one but didn't remember it.

It still leaves me with some doubt about how to handle uncertainty around the extremes without being pumpable or sometimes catastrophically wrong. I suppose some of that is inevitable given hardware that is both bounded and corrupted but I rather suspect there is some benefit to learning more. There's probably a book or ten out there I could read.

Comment author: maia 06 July 2012 04:20:52PM 13 points [-]

This seems like a sarcastic Eliezer Yudkowsky Fact, not a serious Irrationality Game entry.

Comment author: John_Maxwell_IV 06 July 2012 05:03:07AM 11 points [-]

If such a universal basilisk exists, wouldn't it almost by definition kill the person who discovered it?

I think it's vaguely plausible such a basilisk exists, but I also think you are suffering from the halo effect around EY. Why would he of all people know about the basilisk? He's just some blogger you read who says things as though they are Deep Wisdom so people will pay attention.

Comment author: Armok_GoB 06 July 2012 03:08:22PM 0 points [-]

There are a bunch of tricks that let you immunize yourself to classes of basilisks without having access to the specific basilisk - sort of like vaccination: you deliberately infect yourself with a non-lethal variant first.

Eliezer has demonstrated all the skills needed to construct basilisks, is very smart, and has shown that he recognizes the danger of basilisks. I don't think that's a very common combination, but conditional on Eliezer having basilisk weapons, most others fitting that description equally well probably do as well.

Comment author: FiftyTwo 06 July 2012 09:04:00PM 8 points [-]

Wouldn't the world be observably different if everyone of EY's intellectual ability or above had access to a basilisk kill agent? And wouldn't we expect a rash of inexplicable deaths in people who are capable of constructing a basilisk but not vaccinating themselves?

Comment author: Eliezer_Yudkowsky 09 July 2012 06:19:15PM 9 points [-]

Not necessarily. If I did, in fact, possess such a basilisk, I cannot think offhand of any occasion where I would have actually used it. Robert Mugabe doesn't read my emails, it's not clear that killing him saves Zimbabwe, I have ethical inhibitions that I consider to exist for good reasons, and have you thought about what happens if somebody else glances at the computer screen afterward, and resulting events lead to many agents/groups possessing a basilisk?

Comment author: wedrifid 09 July 2012 09:34:35PM *  4 points [-]

and have you thought about what happens if somebody else glances at the computer screen afterward, and resulting events lead to many agents/groups possessing a basilisk?

It would guarantee drastic improvements in secure, trusted communication protocols and completely cure internet addiction (among the comparatively few survivors).

Comment author: Armok_GoB 06 July 2012 10:29:32PM 0 points [-]

First off, there aren't nearly enough people for it to be any kind of "rash"; secondly, they must be researching a narrow range of topics where basilisks occur; thirdly, they'd go insane and lose the basilisk creation capacity way before they got to deliberately lethal ones; and finally, anyone smart enough to be able to do that is smart enough not to do it.

Comment author: TheOtherDave 06 July 2012 09:24:27PM 16 points [-]

Are basilisks necessarily fatal? If the majority of basilisks caused insanity or the loss of intellectual capacity instead of death, I would expect to see a large group of people who considered themselves capable of constructing basilisks, but who on inspection turned out to be crazy or not nearly that bright after all.

...

Oh, shit.

Comment author: Armok_GoB 06 July 2012 10:23:43PM 0 points [-]

Yup, this is entirely correct. Learned that the hard way. Vastly so, with such weak basilisks constantly arising from random noise in the memepool, while even knowing how and having all the necessary ingredients, an Eliezer-class mind is likely needed for a lethal one.

Great practice for FAI in a way, in that as soon as you make a single misstep you've lost everything forever and won't even know it. Don't try this at home.

Comment author: FiftyTwo 06 July 2012 10:02:56PM 1 point [-]

Are basilisks necessarily fatal

The post specified fatal so I followed it.

For non-fatal basilisks we'd expect to see people flipping suddenly from highly intelligent and sane to stupid and/or crazy, specifically after researching basilisk-related topics.

Comment author: Armok_GoB 06 July 2012 10:26:16PM 2 points [-]

Yes, this can also be reversed to give a good way to see which topics are practically basilisk-construction related.

Comment author: [deleted] 07 July 2012 07:59:38AM *  0 points [-]

Yes, but you would get false positives too, such as chess (scroll down to “Real Life” -- warning: TVTropes). Edited to fix link syntax -- how come after all these months I still get it wrong this often?

Comment author: Armok_GoB 05 July 2012 02:16:17PM 0 points [-]

I am way too good at this game. :(

I really didn't expect this to go this high. All the other posts get lots of helpful comments about WHY they were wrong. If I'm really wrong, which these upvotes indicate, I really need to know WHY, so I know which connected beliefs to update as well.

Comment author: MugaSofer 22 November 2012 02:00:33AM 1 point [-]

I was about to condescendingly explain that there's simply no reason to posit such a thing, when it started making far too much sense for my liking. That said, untraceable? How?

Comment author: Armok_GoB 22 November 2012 05:21:55PM 1 point [-]

Email via proxy, some incubation time; looks like normal depression followed by suicide.

Comment author: MugaSofer 22 November 2012 07:19:05PM 1 point [-]

Of course. I was assuming a near-instant effect for some reason.

On the plus side, he doesn't seem to have used it to remove anyone blocking progress on FAI ...

Comment author: Jack 05 July 2012 07:52:16PM 12 points [-]

2% is too high a credence for belief in the existence of powers for which (as far as I know) not even anecdotal evidence exists. It's the realm of speculative fiction, well beyond the current ability of psychological and cognitive science and, one imagines, rather difficult to control.

But ascribing such a power to a specific individual who hasn't had any special connection to cutting edge brain science or DARPA and isn't even especially good at using conventional psychological weapons like 'charm' is what sends your entry into the realm of utter and astonishing absurdity.

Comment author: Will_Newsome 07 July 2012 04:42:15AM 4 points [-]

a specific individual who hasn't had any special connection to cutting edge brain science or DARPA

Not publicly, at least.

Comment author: Armok_GoB 06 July 2012 12:17:36AM 0 points [-]

2% is too high a credence for belief in the existence of powers for which (as far as I know) not even anecdotal evidence exists. It's the realm of speculative fiction, well beyond the current ability of psychological and cognitive science and, one imagines, rather difficult to control.

Many say exactly the same thing about cryonics. And lots of anecdotal evidence does exist, not of killing specifically, but of inducing a wide enough range of mental states that some within that range are known to be lethal.

So far in my experience skill at basilisks is utterly tangential to the skills you mentioned, and fits Eliezer's skill set extremely well. Further, he has demonstrated this type of ability before, for example in the AI box experiments or HPMoR.

Comment author: Jack 06 July 2012 12:39:12AM 5 points [-]

Many say exactly the same thing about cryonics.

Pointing to cryonics anytime someone says you believe in something that is the realm of speculative fiction and well beyond current science is a really, really, bad strategy for having true beliefs. Consider the generality of your response.

And lots of anecdotal evidence does exist,

Show me three.

skill at basilisks

How is this even a thing? That you have experience with?

the AI box experiments

Your best point. But not nearly enough to bring p up to 0.02.

Comment author: Armok_GoB 06 July 2012 01:19:04AM 2 points [-]

Point; it's not a strategy for arriving at truths, it's a snappy comeback at a failure mode I'm getting really tired of. The fact that something is in the realm of speculative fiction is not a valid argument in a world full of cyborgs, tablet computers, self-driving cars, and causality-defying decision theories. And yes, basilisks.

Show me three.

Um, we're talking basilisks here. SHOWING you would be a bad idea. However, to NAME a few: there's the famous Roko incident, several MLP gorefics had basilisk-like effects on some readers, and then there are techniques like http://www.youtube.com/watch?v=eNBBl6goECQ .

Yes, skill at basilisks is a thing, that I have some experience with.

Finally, not in response to anything in particular but sort of related: http://cognitiveengineer.blogspot.se/2011/11/holy-shit.html

Comment author: Multiheaded 13 August 2012 04:55:10AM *  6 points [-]

Yet another fictional story that features a rather impressive "emotional basilisk" of sorts; enough to drive people in-universe insane or suicidal, AND to cause the reader (especially one prone to agonizing over morality, obsessive thoughts, etc.) real distress. I know I felt sickened and generally wrong for a few hours, and I've heard of people who took it worse.

SCP-231. I'm not linking directly to it, please consider carefully if you want to read it. Curiosity over something intellectually stimulating but dangerous is one thing, but this one is just emotional torment for torment's sake. If you've read SCP before (I mostly dislike their stuff), you might be guessing which one I'm talking about - so no need to re-read it, dude.

Comment author: MugaSofer 22 November 2012 01:44:26AM 5 points [-]

Really? That's had basilisk-like effects? I guess these things are subjective ... torturing one girl to save humanity is treated like this vast and terrible thing, with the main risk being that one day they won't be able to bring themselves to continue - but in other stories they regularly kill tons of people in horrible ways just to find out how something works. Honestly, I'm not sure why it's so popular; there are a bunch of SCPs that could solve it (although there could be some brilliant reason why they can't - we'll never know due to redaction). But it's too popular to ever be decommissioned ... it makes the Foundation come across as lazy, not even trying to help the girl, too busy stewing in self-pity at the horrors they have to commit to actually stop committing them.

Wait, I'm still thinking about it after all this time? Hmm, perhaps there's something to this basilisk thing...

Comment author: [deleted] 16 August 2012 07:26:58PM *  4 points [-]

SCP-231. I'm not linking directly to it, please consider carefully if you want to read it. Curiosity over something intellectually stimulating but dangerous is one thing, but this one is just emotional torment for torment's sake. If you've read SCP before (I mostly dislike their stuff), you might be guessing which one I'm talking about - so no need to re-read it, dude.

Straw Utilitarian exclaims: "Ha, easy! Our world has many tortured children; adding one more is a trivial cost to pay for continued human existence." But yes, imagining being put in a position to decide on something like that caused me quite a bit of emotional distress. Trying to work out what I should do according to my ethical system (spaghetti code virtue ethics), honourable suicide and resignation seem a potentially viable option, since my consequentialism-infected brain cells yell at me for trying harebrained schemes to help the girl.

On a lighter note my favourite SCP.

The members of SCP-1845 are physiologically indistinct from normal animals of their species. However, the animals have been demonstrated to possess near-human intelligence, the ability to construct simple tools from objects in their habitat and introduced by the Foundation, and a system of government modeled on medieval European feudalism.

...

SCP-1845-1 is the "leader" of the colony and the only member of the group observed to be able to use the installed keyboard. SCP-1845-1 considers itself to be of royal heritage and identifies itself using the title "His Royal Highness, Eugenio the Second, by the Grace of God, King of the Forest, Lord of the Plains, Duke of the Grand Fir and the Undergrowth, Count of the Swamp, Margrave of ██ ███████, Warden of All the Streams and Rivers, and Lord Protector of the Cities of Man, Defender of the Faith." SCP-1845-1 identifies itself and its followers as Roman Catholics and appears to be extremely pious in its devotions - it has been observed on video praying over its meals and observing holidays and saintly feast days, and has been observed to order punishments against other members of the colony for perceived lack of piety.

...

SCP-1845-1 has asserted that it was not responsible for the "war" that led to its discovery and capture, and that it was retaliating against an uprising on the part of one of its "subjects", a Columbian black-tailed deer (Odocoileus hemionus columbianus) it identified as "Duke Baxter of the West Bay." SCP-1845-1 spoke vitriolically of said deer, describing it as "a most uncouth usurper, rogue, and Protestant" who it claimed had, "having accused them falsely of witchcraft, assassinated our Queen Consort, the Prince of █████ █████, and our other royal issue", and of turning a large portion of the nobility and peasantry against it. It insists that the deer is still at large and marshalling its forces against its nation, and that once it is released from captivity it will defeat it. No deer matching the description given by SCP-1845-1 is among the members of SCP-1845 or was found among those killed during the raid.

Ah the entry is tragically incomplete!

The Catholic faith of the animals was not surprising, since contact with SPC-3471 by agent ███ █████ and other LessWrong Computational Theology division cell members has yielded proof of Catholicism's consistency under CEV as well as indications that it represents a natural Schelling point of mammalian morality. First pausing to praise the sovereign's taste in books, the existence of Protestantism has led Dr. █████ █████ to speculate that SPC-4271 ("w-force") has become active in species besides Homo sapiens, violating the 2008 Trilateral Blogosphere Accords. He advises full military support to Eugenio the Second in stamping out the rebellion and termination of all animals currently under the rule of Duke Baxter of the West Bay.

"Kill them all. For SPC-3471 knows them that are His."

Adding:

"Nuke the site from orbit, it's the only way to be sure."

Comment author: Multiheaded 22 November 2012 01:31:02PM 0 points [-]

Trying to work out what I should do according to my ethical system (spaghetti code virtue ethics), honourable suicide and resignation seems a potentially viable option

An agent placed in similar circumstances before did just that.

Comment author: Multiheaded 17 August 2012 08:46:52AM 1 point [-]

Yep, suicide is probably what I'd do as well, personally, but the story itself is incoherent (as noted in the page discussion) and even without resorting to other SCPs there seem to be many, many alternatives to consider (at the very least they could have made the torture fully automated!). As I've said, it's constructed purely as horror/porn and not as an ethical dilemma.

BTW simply saying that "Catholicism" is consistent under something or other is quite meaningless, as "C." doesn't make for a very coherent system as seen through Papal policy and decisions of any period. Will would've had to point to a specific eminent theologian, like Aquinas, and then carefully choose where and how to expand - for now, Will isn't doing much with his "Catholicism" strictly speaking, just writing emotionally tinged bits of cosmogony and game theory.

Comment author: [deleted] 17 August 2012 09:45:45AM *  2 points [-]

Yep, suicide is probably what I'd do as well, personally, but the story itself is incoherent (as noted in the page discussion) and even without resorting to other SCPs there seem to be many, many alternatives to consider (at the very least they could have made the torture fully automated!). As I've said, it's constructed purely as horror/porn and not as an ethical dilemma.

I mentally iron-man such details when presented with such scenarios. Often it's the only way for me to keep suspension of disbelief and continue to enjoy fiction. To give a trivial fix to your nitpick: the ritual requires not only the suffering of the victim to be undiminished but also the sexual pleasure of the torturer and/or rapist to be present; automating it is therefore not viable.

BTW simply saying that "Catholicism" is consistent under something or other is quite meaningless, as "C." doesn't make for a very coherent system as seen through Papal policy and decisions of any period. Will would've had to point to a specific eminent theologian, like Aquinas, and then carefully choose where and how to expand - for now, Will isn't doing much with his "Catholicism" strictly speaking, just writing emotionally tinged bits of cosmogony and game theory.

Do not overanalyse the technobabble; it ruins suspension of disbelief. And what is an SCP without technobabble? Can I perhaps then interest you in a web based Marxist state?

Also who is this Will? I deny all knowledge of him!

Comment author: Multiheaded 20 July 2012 09:08:28PM 1 point [-]

Also, always related to any basilisk discussion:

The Funniest Joke In The World

Comment author: Multiheaded 18 July 2012 12:17:48PM 6 points [-]

I do not know with what weapons World War III will be fought, but World War IV will be fought with fairytales about talking ponies!

Comment author: Armok_GoB 18 July 2012 08:06:05PM 1 point [-]

I love you so much right now. :D

Comment author: MixedNuts 11 July 2012 10:37:52AM 5 points [-]

I have a solid basilisk-handling procedure. (Details available on demand.) You or anyone is welcome to send me any basilisk in the next 24 hours, or at any point in the future with 12 hours warning. I'll publish how many different basilisks I've received, how basilisky I found them, and nothing else.

Evidence: I wasn't particularly shaken by Roko's basilisk. I found Cupcakes a pretty funny read (thanks for the rec!). I have lots of experience blocking out obsessive/intrusive thoughts. I just watched 2girls1cup while eating. I'm good at keeping non-basilisk secrets.

Comment author: [deleted] 31 July 2012 08:11:57PM 0 points [-]

Has anyone sent you any basilisk so far?

Comment author: MixedNuts 01 August 2012 08:58:30AM 0 points [-]

No, I'm all basilisk-less and forlorn. :( I stumbled on a (probably very personal) weak basilisk on my own. Do people just not trust me or don't they have any basilisks handy?

Comment author: Mitchell_Porter 01 August 2012 11:18:03AM 0 points [-]

How do you define basilisk? What effect is it supposed to have on you?

Comment author: wedrifid 01 August 2012 11:12:20AM 0 points [-]

Do people just not trust me or don't they have any basilisks handy?

The latter. Or, if the former, they don't trust you not to just laugh at what they provide and dismiss it.

Comment author: Dorikka 10 July 2012 02:15:39AM 2 points [-]

MLP gorefics

I am amused and curious. :P Did the basilisk-sharing list ever get off the ground?

Comment author: Armok_GoB 10 July 2012 02:35:21AM 0 points [-]

Not that I know of, and it's much less interesting than it sounds. Just nausea and permanent inability to enjoy the show in a small percentage of readers of Cupcakes and the like.

Comment author: Jack 06 July 2012 01:50:18AM 7 points [-]

Point, it's not a strategy for arriving at truths, it's a snappy comeback at a failure mode I'm getting really tired of. The fact that something is in the realm of speculative fiction is not a valid argument in a world full of cyborgs, tablet computers, self driving cars, and casualty-defying decision theories. And yes, basilisks.

The argument isn't that because something is found in speculative fiction it can't be real; it's that this thing you're talking about isn't found outside of speculative fiction-- i.e. it's not real. Science can't do that yet. If you're familiar with the state of a science you have a good sense of what is and isn't possible yet. "A basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote" is very likely one of those things. I mention "speculative fiction" because a lot of people have a tendency to privilege hypotheses they find in such fiction.

Hypnotism is not the same as what you're talking about. The Roko 'basilisk' is a joke compared to what you're describing. None of these are anecdotal evidence for the power you are describing.

Comment author: Armok_GoB 06 July 2012 03:28:48PM 0 points [-]

Oh, illusion of transparency. Yea, that's at least a real argument.

There are plenty of things that individual geniuses can do that the institutions you seem to be referring to as "science" can't yet mass-produce, especially in the reference class of things like works of fiction or political speeches, which many basilisks belong to. "Science" also believes rational agents defect on the prisoner's dilemma.

Also, while proposing something like deliberate, successful government suppression would clearly be falling into the conspiracy theory failure mode, it nonetheless does seem that an extremely dangerous weapon that sounds absurd when described, works through badly understood psychology only present in humans, and is appropriately likely to be discovered by the empathic extreme high elite of intellectuals would be less likely to become public knowledge as quickly as most things.

And I kept to small-scale, not-very-dangerous pseudo-basilisks on purpose, just in case someone decides to look them up. They are more relevant than you think, though.

Comment author: Jack 07 July 2012 02:36:49AM *  7 points [-]

And I kept to small-scale, not-very-dangerous pseudo-basilisks on purpose, just in case someone decides to look them up. They are more relevant than you think, though.

I don't believe you. Look, obviously if you have secret knowledge of the existence of fatal basilisks that you're unwilling to share that's a good reason to have a higher credence than me. But I asked you for evidence (not even good evidence, just anecdotal evidence) and you gave me hypnotism and the silly Roko thing. Hinting that you have some deep understanding of basilisks that I don't is explained far better by the hypothesis that you're trying to cover for the fact that you made an embarrassingly ridiculous claim than by your actually having such an understanding. It's okay, it was the irrationality game. You can admit you were privileging the hypothesis.

"Science" also believes rational agents defect on the prisoner's dilemma.

Again, pointing to a failure of science as a justification for ignoring it when evaluating the probability of a hypothesis is a really bad thing to do. You actually have to learn things about the world in order to manipulate the world. The most talented writers in the world are capable of producing profound and significant --but nearly always temporary-- emotional reactions in the small set of people that connect with them. Equating that with

A basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote

is bizarre.

Also, while proposing something like deliberate successful government suppression would be clearly falling into the conspiracy theory failure mode, it none the less does seem like an extremely dangerous weapon, that sounds absurd when described, works through badly understood psychology only present in humans, and appropriately likely to be discovered by empathic extreme high elite of intellectuals, would be less likely to become public knowledge as quickly as most things.

A government possessing a basilisk and keeping it a secret is several orders of magnitude more likely than what you proposed. Governments have the funds and the will to both test and create weapons that kill. Also, "empathic" doesn't seem like a word that describes Eliezer well.

Anyway, I don't really think this conversation is doing anyone any good, since debating absurd possibilities has the tendency to make them seem ever more likely over time, as you'll keep running your sense-making system and come up with new and better justifications for this claim until you actually begin to think "wait, two percent seems kind of low!".

Comment author: Armok_GoB 07 July 2012 08:20:48PM 1 point [-]

Yeah, this thread is getting WAY too adversarial for my taste, dangerously so. At least we can agree on that.

Anyway, you did admit that sometimes, rarely, a really good writer can produce permanent, profound emotional reactions, and I suspect most of the disagreement here actually resides in the lethality of emotional reactions, and in my taste for wording things to sound dramatic as long as they are still true.

Comment author: faul_sname 04 July 2012 09:11:27PM 13 points [-]

Upvoted for enormous overconfidence that a universal basilisk exists.

Comment author: Armok_GoB 05 July 2012 12:36:48AM 0 points [-]

Never said it was a single universal one. And a lot of that 2% is meta-uncertainty from doing the math sloppily.

The part where I think I might do better is having been on the receiving end of weaker basilisks and having some vague idea of how to construct something like them. That last part is the tricky one, stopping me from sharing the evidence, as it'd make it more likely a weapon like that falls into the wrong hands.

Comment author: faul_sname 05 July 2012 03:02:39AM 5 points [-]

The thing about basilisks is that they have limited capacity for causing actual death. Particularly among average people who get their cues of whether something is worrying from the social context (e.g. authority figures or their social group).

Comment author: Armok_GoB 05 July 2012 01:52:48PM 1 point [-]

Must... resist... revealing... info.... that... may... get... people.... killed.

Comment author: faul_sname 05 July 2012 02:49:22PM 3 points [-]

Please do resist. If you must tell someone, do it through private message.

Comment author: Armok_GoB 05 July 2012 07:17:13PM 1 point [-]

Yea. It's not THAT big a danger; I'm just trying to make it clear why I hold a belief not based on evidence that I can share.

Comment author: Davorak 09 July 2012 09:04:17PM *  3 points [-]

I am speculating that your evidence is a written work that has driven multiple people to suicide, and further that the written work was targeted at an individual and happened to kill other susceptible people who read it. I would still rate 2% as overconfident.

Specifically, the claim of universality, that "any person" can be killed by reading a short email, is overconfident. Two of your claims seem to contradict each other: "any one" and "with a few clicks" suggest that special or in-depth knowledge of the individual is unnecessary, which implies some level of universality, while you also say "Never said it was a single universal one." My impression is that you lean towards hand-crafted basilisks targeted at individuals or groups of similar individuals, but the contradiction lowered my estimate of this being correct.

Such hand-crafted basilisks would indicate the ability to model people correctly to an exceptional degree and to experiment with that model until an input can be found which causes death. I have considered other alternative explanations but found them unlikely; if you rate another as more realistic, let me know.

Given that this ability could be used for a considerable number of tasks other than causing death (strongly influencing elections, legislation, the research directions of AI researchers or groups, and much more): if EY possessed this power, how would you expect the world to be different from one where he does not?

Comment author: Armok_GoB 29 July 2012 07:57:11PM 1 point [-]

I don't remember this post. Weird. I've updated on it though; my evidence is indeed even weaker than that, and you are absolutely correct on every point. I've updated to the point where my own estimate and my estimate of the community's estimate are indistinguishable.

Comment author: Davorak 31 July 2012 07:24:34PM *  1 point [-]

Interesting. I will be more likely to reply to messages that I feel end the conversation, like your last one on this post:

It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don't know exactly how much.

maybe 12-24 hours later, just in case the likelihood of updating has been reduced by one or both parties having had a late-night conversation or other mind-altering effects.

Comment author: Armok_GoB 09 July 2012 11:27:08PM 1 point [-]

It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don't know exactly how much.

Comment author: TheOtherDave 04 July 2012 07:58:00PM 7 points [-]

Upvoted for vast overconfidence.
Downvoted back to zero because I suspect you're not following the rules of the thread.
Also, I have no idea who "Eliezer Yudovsky" is, though it doesn't matter for either of the above.

Comment author: Dallas 04 July 2012 12:41:49PM 4 points [-]

An alien civilization within the boundaries of the current observable universe has, or will have within the next 10 billion years, created a work of art which includes something directly analogous to the structure of the "dawn motif" from the beginning of Richard Strauss's Also sprach Zarathustra. (~90%)

Comment author: wedrifid 09 July 2012 09:54:55PM 0 points [-]

An alien civilization within the boundaries of the current observable universe has, or will have within the next 10 billion years, created a work of art which includes something directly analogous to the structure of the "dawn motif" from the beginning of Richard Strauss's Also sprach Zarathustra. (~90%)

I would have upvoted this even if it limited itself to "intelligent aliens exist in the current observable universe".

Comment author: FiftyTwo 06 July 2012 09:33:01PM 2 points [-]

The probability of this would seem to depend on the resolution of the fermi paradox. If life is relatively common then it would seem to be true purely by statistics. If life is relatively rare then it would require some sort of shared aesthetic standard. Are you saying aesthetics might be universal in the same way as say mathematics?

Comment author: TheOtherDave 04 July 2012 03:47:55PM 5 points [-]

I'm inclined to downvote this for agreement, but haven't yet. Can you say more about what "directly analogous" means? How different from ASZ can this work of art be and still count?

Comment author: Dallas 05 July 2012 03:38:53AM 7 points [-]
  1. The art form must be linear and intend to proceed without interaction from the user.
  2. The length of the three "notes" must be in 8:8:15 ratio (in that order).
  3. The main distinguishing factor between "notes" must be in 2:3:4 ratio (in that order).
  4. The motif must be the overwhelmingly dominant "voice" when it occurs.
Comment author: faul_sname 05 July 2012 04:01:34AM 4 points [-]

Upvoted for overconfidence, not about the directly analogous art form (I suspect that even several hundred pieces of human art have that) but about there being other civilizations within the observable universe.

Though I would still give that at least 20%.

Comment author: TheOtherDave 05 July 2012 03:43:15AM 2 points [-]

Cool. Upvoted immediate parent for specificity and downvoted grandparent for agreement.

Comment author: NancyLebovitz 04 July 2012 12:40:35PM 14 points [-]

Irrationality Game

Being a materialist doesn't exclude nearly as much of the magical, religious, and anomalous as most materialists believe because matter/energy is much weirder than is currently scientifically accepted.

75% certainty.

Comment author: CellBioGuy 03 February 2013 08:03:38PM 1 point [-]

Upvoted for disagreement, with the quibble that there is probably room for a lot of interesting things in the realm of human experience that, while not necessarily relating one-to-one with nonhuman physical reality, have significance within the context of human thought or social interaction, and contain elements that normally get lumped into the magical or religious.

Comment author: Mestroyer 16 January 2013 06:23:31PM *  0 points [-]

Downvoted for agreement. (Retracted because I realized you were talking about in our universe, and I was thinking in principle)

Comment author: torekp 07 July 2012 02:12:59AM 1 point [-]

matter/energy is much weirder than is currently scientifically accepted.

Nitpick: do you really mean this? Current scientific theories are pretty damn weird. But not, in your view, weird enough?

Comment author: NancyLebovitz 07 July 2012 02:32:42AM 1 point [-]

I'm pretty sure that the current theories aren't weird enough, but less sure that current theories need to be modified to include various things that people experience. However, it does seem to me that materialists are very quick to conclude that mental phenomena have straightforward physical explanations.

Comment author: [deleted] 16 July 2012 08:17:22PM -1 points [-]

May I remind you that scientists recently created and indirectly observed the elementary particle responsible for mass?

The smallest mote of the thing that makes stuff have inertia. Has. Been. Indirectly. Observed.

What.

Comment author: Will_Newsome 06 July 2012 11:29:05AM 0 points [-]

Do materialists still exist? In order to vote on this am I to imagine what not-necessarily-coherent model a materialist should in some sense have given their irreversible handicap in the form of a misguided metaphysic? If so I'd vote down; if not I'd vote up.

Comment author: sixes_and_sevens 04 July 2012 04:25:40PM 3 points [-]

Upvoted, as many phenomena that get labelled "magical" or "religious" have readily-identifiable materialist causes. For those phenomena to be a consequence of esoteric physics and to have a more pedestrian materialist explanation that turns out to be incorrect, and to conform to enough of a culturally-prescribed category of magical phenomena to be labelled as such in the first place seems like a staggering collection of coincidences.

Comment author: MileyCyrus 04 July 2012 03:49:52PM 3 points [-]

I'm having trouble understanding what you are claiming. It seems that once anything is found to exist in the actual world, people won't call it "magical" or "anomalous". When Hermione Granger uses an invisibility cloak, it's magic. When researchers at the University of Dallas use an invisibility cloak, it's science.

Comment author: NancyLebovitz 04 July 2012 04:19:44PM 2 points [-]

What I meant was that there may be more to such things as auras, ghosts, precognition, free will, etc. than current skepticism allows for, while still not having anything in the universe other than matter/energy.

Comment author: Eugine_Nier 05 July 2012 06:22:04AM -1 points [-]

Taboo "matter/energy".

Comment author: wedrifid 05 July 2012 06:40:31AM 5 points [-]

Taboo "matter/energy".

Well damn. What is left? "You know... like... the stuff that there is."

Comment author: Armok_GoB 10 July 2012 01:43:45AM 1 point [-]

Algebra.

Comment author: Eliezer_Yudkowsky 09 July 2012 06:30:09PM 1 point [-]

Causes and effects.

Comment author: wedrifid 09 July 2012 09:13:48PM 1 point [-]

Causes and effects.

Good point. But this 'cause' word is still a little nebulous and seems to confuse some people. Taboo 'cause'!

Comment author: Eugine_Nier 06 July 2012 03:17:51AM -1 points [-]

My point is that what counts as matter/energy may very well not be obvious in different theories.

Comment author: NancyLebovitz 05 July 2012 10:02:26AM 2 points [-]

Thank you. I was about to ask the same thing.

Comment author: Cthulhoo 04 July 2012 09:30:16AM 20 points [-]

Irrationality Game

I believe that exposure to rationality (in the LW sense) at today's state does in general more harm than good^ to someone who's already a skeptic. 80%

^ In the sense of generating less happiness and in general less "winning".

Comment author: wedrifid 09 July 2012 09:48:56PM 0 points [-]

I believe that exposure to rationality (in the LW sense) at today's state does in general more harm than good^ to someone who's already a skeptic. 80%

I predict with about 60% probability that exposure to LW rationality benefits skeptics more and is also more likely to harm non-skeptics.

Comment author: John_Maxwell_IV 06 July 2012 05:11:12AM 0 points [-]

Could you provide support? Have you seen http://lesswrong.com/lw/7s4/poll_results_lw_probably_doesnt_cause_akrasia/, by the way?

Comment author: Athrelon 05 July 2012 07:43:44PM 1 point [-]

I roughly agree with this one. This is something that we would not see much evidence of, if true.

Downvoted.

Comment author: Viliam_Bur 05 July 2012 03:44:42PM *  2 points [-]

I realized I didn't have a model of an average skeptic, so I am not sure what my opinion on this topic actually is.

My provisional model of an average skeptic is like this: "You guys at LW have a good point about religion being irrational; the math is kind of interesting, but boring; and the ideas about superhuman intelligence and quantum physics being more than just equations are completely crazy."

No harm, no benefit, tomorrow everything is forgotten.

Comment author: Kaj_Sotala 04 July 2012 08:00:16AM 2 points [-]

Irrationality game

I have a suspicion that some form of moral particularism is the most sensible moral theory. 10% confidence.

Moral particularism is the view that there are no moral principles and that moral judgement can be found only as one decides particular cases, either real or imagined. This stands in stark contrast to other prominent moral theories, such as deontology or utilitarianism. In the former, it is asserted that people have a set of duties (that are to be considered or respected); in the latter, people are to respect the happiness or the preferences of others in their actions. Particularism, to the contrary, asserts that there are no overriding principles that are applicable in every case, or that can be abstracted to apply to every case.

According to particularism, most notably defended by Jonathan Dancy, moral knowledge should be understood as knowledge of moral rules of thumb, which are not principles, and of particular solutions, which can be used by analogy in new cases.

Comment author: Manfred 05 July 2012 12:49:48AM 0 points [-]

In the Turing machine sense, sure. In the "this is all you should know" sense, no way; have an upvote.

Comment author: Jack 04 July 2012 05:51:05PM 7 points [-]

Upvoted for too low a probability.

Comment author: magfrump 04 July 2012 09:29:25AM 2 points [-]

What do you mean by the "most sensible moral theory"?

And what the hell does Dancy mean if he says that there are rules of thumb that aren't principles?

I would weight this lower than .01% just because of my credence that it's incoherent.

Comment author: Kaj_Sotala 05 July 2012 10:43:24AM *  5 points [-]

Perhaps a workable restatement would be something like:

"Any attempt to formalize and extract our moral intuitions and judgements of how we should act in various situations will just produce a hopelessly complicated and inconsistent mess, whose judgements are very different from those prescribed by any form of utilitarianism, deontology, or any other ethical theory that strives to be consistent. In most cases, any attempt at using a reflective equilibrium / extrapolated volition -type approach to clarify matters will leave things essentially unchanged, except for a small fraction of individuals whose moral intuitions are highly atypical (and who tend to be vastly overrepresented on this site)."

(I don't actually know how well this describes the actual theories for particularism.)

Comment author: magfrump 05 July 2012 11:07:37PM 0 points [-]

I agree that your restatement is internally consistent.

I don't see how such a theory would really be "sensible," in terms of being helpful during moral dilemmas. If it turns out that moral intuitions are totally inconsistent, doesn't "think it over and then trust your gut" give the same recommendations, fit the profile of being deontological, and have the advantage of being easy to remember?

I guess if you were interested in a purely descriptive theory of morality I could conceive of this being the best way to handle things for a long time, but it still flies in the face of the idea that morality was shaped by economic pressures and should therefore have an economic shape, which I find lots of support for, so my upvote remains with my credence being maybe .5%-1%, I think about 2 decibels lower than yours.

Comment author: marchdown 04 July 2012 02:36:32AM -1 points [-]

Irrationality game

Moral intuitions are very simple. A general idea of what it means for somebody to be human is enough to severely restrict the variety of moral intuitions which you would expect it to be possible for them to have. Thus, conditioned on Adam's humanity, you would need very little additional information to get a good idea of Adam's morals, while Bob the alien would need to explain his basic preferences at length for you to model his moral judgements accurately. It follows that the tricky part of explaining moral intuitions to a machine is explaining humans, and it's not possible to cheat by formalizing morals separately.

Comment author: Eugine_Nier 04 July 2012 07:19:43AM 1 point [-]

Please attach a probability.

Comment author: marchdown 04 July 2012 09:33:48AM 0 points [-]

Fairly certain (85%—98%).

Comment author: Andreas_Giger 04 July 2012 01:21:56PM *  -2 points [-]

That is a very wide range. Downvoted you anyway.

Comment author: [deleted] 04 July 2012 01:46:25AM 17 points [-]

Computationalism is an incorrect model of cognition. Brains compute, but mind is not what the brain does. There is no self hiding inside your apesuit. You are the apesuit. Minds are embodied and extended, and a major reason why the research program to build synthetic intelligences has largely gone nowhere since its inception is the failure of many researchers to understand/agree with this idea.

70%

Comment author: Kindly 05 July 2012 12:07:56AM 3 points [-]

Just because I am an apesuit, doesn't mean I need to dress my synthetic intelligence in one.

Comment author: [deleted] 04 July 2012 10:05:58PM *  1 point [-]

Have you been reading this recently?

More particularly, anything that links to this post.

Comment author: Armok_GoB 04 July 2012 08:02:43PM 2 points [-]

Do you believe an upload with a simulated body would work? How high fidelity?

Comment author: magfrump 04 July 2012 09:25:22AM -1 points [-]

I don't understand why you don't believe that computations can be "embodied and extended."

I do believe that the fact that any kind of human emulation would have to be embedded into a digital body with sensory inputs is underdiscussed here, though I'm not even sure what constitutes scientific literature on the subject so I don't want to make statements about that.

Comment author: torekp 07 July 2012 02:05:17AM 1 point [-]

Computations can be embodied and extended, but computationalism regards embodiment and extension as unworthy of interest or concern. Downvoted the parent for being probably right.

Comment author: magfrump 07 July 2012 07:58:50PM 1 point [-]

Can you provide a citation for that point?

Not knowing anything really about academic cognitive psychologists, and just being someone who identifies as a computationalist, I feel like the embodiment of a computation is still very important to ANY computation.

If the OP means that researchers underestimate the plasticity of the brain in response to its inputs and outputs, and that their research doesn't draw a circle around the right "computer" to develop a good theory of mind, then I'm extra interested to see some kind of reference to papers which attempt to isolate the brain too much.

Comment author: torekp 08 July 2012 01:04:29AM *  1 point [-]

I understand "computationalism" as referring to the philosophical Computational Theory of the Mind (wiki, Stanford Encyclopedia of Phil.). From the wiki:

Computational theories of mind are often said to require mental representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object, but must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However the two theories differ in that the representational theory claims that all mental states are representations while the computational theory leaves open that certain mental states, such as pain or depression, may not be representational and therefore may not be suitable for a computational treatment.

From the SEP:

representations have both semantic and syntactic properties, and processes of reasoning are performed in ways responsive only to the syntax of the symbols—a type of process that meets a technical definition of ‘computation’

Because computation is about syntax not semantics, the physical context - embodiment and extension - is irrelevant to computation qua computation. That is what I mean when I say that embodiment and extension are regarded as of no interest. Of course, if a philosopher is less thorough-going about computationalism, leaving pains and depression out of it for example, then embodiment may be of interest for those mental events.

However, your last paragraph throws a monkey wrench into my reasoning, because you raise the possibility of a "computer" drawn to include more territory. All I can say is, that would be unusual, and it seems more straightforward to delineate the syntactic rules of the visual system's edge-detection and blob-detection processes, for example, than of the whole organism+world system.

Comment author: magfrump 08 July 2012 04:58:42AM 0 points [-]

I feel like we are talking past each other in a way that I do not know how to pinpoint.

Part of the problem is that I am trying to compare three things--what I believe, the original statement, and the theory of computationalism.

To try to summarize each of these in a sentence:

I believe that the entire universe essentially "is" a computation, and so minds are necessarily PARTS of computations, but these computations involve their environments. The theory of computationalism tries to understand minds as computations, separate from the environment. The OP suggests that computationalism is likely not a very good way of figuring out minds.

1) do these summaries seem accurate to you? 2) I still can't tell whether my beliefs agree or disagree with either of the other two statements. Is it clearer from an outside perspective?

Comment author: torekp 10 July 2012 02:13:53AM *  0 points [-]

Your summaries look good to me. As compared to your beliefs, standard Computational Theory of Mind is probably neither true nor false, because it's defined in the context of assumptions you reject. Without those assumptions granted, it fails to state a proposition, I think.

Comment author: magfrump 11 July 2012 04:10:16AM 0 points [-]

Without those assumptions granted, it fails to state a proposition

I am constantly surprised and alarmed by how many things end up this way.

Comment author: Mitchell_Porter 04 July 2012 12:06:27AM *  41 points [-]

Irrationality Game

If we are in a simulation, a game, a "planetarium", or some other form of environment controlled by transhuman powers, then 2012 may be the planned end of the game, or end of this stage of the game, foreshadowed within the game by the Mayan calendar, and having something to do with the Voyager space probe reaching the limits of the planetarium-enclosure, the galactic center lighting up as a gas cloud falls in 30,000 years ago, or the discovery of the higgs boson.

Since we have to give probabilities, I'll say 10%, but note well, I'm not saying there is a 10% probability that the world ends this year, I'm saying 10% conditional on us being in a transhumanly controlled environment; e.g., that if we are in a simulation, then 2012 has a good chance of being a preprogrammed date with destiny.

Comment author: OphilaDros 04 July 2012 05:15:35AM 3 points [-]

Upvoted because 10% as an estimate seems too high.

I especially can't imagine why transhuman powers would have used the end of the calendar of a long-dead civilization (one of many comparable civilizations) to foreshadow the end of their game plan.

Comment author: NancyLebovitz 04 July 2012 12:33:58PM 1 point [-]

Also, even if the transhuman powers are choosing based on current end-of-the-world predictions, there's no reason why they would choose 2012 rather than any of the many past predictions.

Comment author: Mitchell_Porter 04 July 2012 11:45:51AM 3 points [-]

It's easy to invent scenarios. But the high probability estimate really derives from two things.

First, the special date from the Mayan calendar is astronomically determined, to a degree that hasn't been recognized by mainstream scholarship about Mayan culture. The precession of the equinoxes takes 26000 years. Every 6000 years or so, you have a period in which a solstice sun or an equinox sun lines up close to the galactic center, as seen from Earth. We are in such a period right now; I think the point of closest approach was in 1998. Then, if you mark time by transits of Venus (Venus was important in Mayan culture, being identified with their version of the Aztecs' Quetzalcoatl), that picks out the years 2004 and 2012. It's the December solstice which is the "galactic solstice" at this time, and 21 December 2012 will be the first December solstice after the last transit of Venus during the current period of alignment.

OK, so one might suppose that a medieval human civilization with highly developed naked-eye astronomy might see all that coming and attach a quasi-astrological significance to it. What's always bugged me is that this period in time, whose like comes around only every 6000 years, is historically so close to the dramatic technological developments of the present day.

Carl Sagan wrote a novel (Contact) in which, when humans speak to the ultra-advanced aliens, they discover that the aliens also struggle with impossible messages from beyond, because there are glyphs and messages encoded in the digits of pi. If you were setting up a universe in such a way that you wanted creatures to go through a singularity, and yet know that the universe they had now mastered was just a second-tier reality, one way to do it would certainly be to have that singularity occur simultaneously with some rare, predetermined astronomical configuration.

Nothing as dramatic as a singularity is happening yet in 2012, but it's not every day that a human probe first reaches interstellar space, the black hole at the center of the galaxy visibly lights up, and we begin to measure the properties of the fundamental field that produces mass, all of this happening within a year of an ancient, astronomically timed prophecy of world-change. It sounds like an unrealistic science-fiction plot. So perhaps one should give consideration to models which treat this as more than a coincidence.

Comment author: Khoth 04 July 2012 03:10:33PM 7 points [-]

Why pick out those events?

It's easy to see it as a coincidence when you take into account all the events that you might have counted as significant if they'd happened at the right time. How about the discovery of general relativity, the cosmic microwave background, neutrinos, the Sputnik launch, various supernovae, the Tunguska impact, etc etc?

Comment author: Mitchell_Porter 05 July 2012 03:30:34PM 0 points [-]

I agree that in themselves, the events I listed don't much suggest that the world ends, the game reboots, or first contact occurs this year. The astronomical and historical propositions - that there's something unlikely going on with calendars and the location of modernity within the precessional cycle - are essential to the argument.

One of the central ingredients is this stuff about a near-conjunction between the December solstice sun and "the galactic center", during recent decades. One needs to specify whether "galactic center" means the central black hole, the galactic ecliptic, the "dark rift" in the Milky Way as seen from Earth, or something else, because these are all different objects and they may imply different answers to the question, "in which year does the solstice sun come closest to this object". I've just learned some more about these details, and should shortly be able to say how they impact the argument.

Comment author: Khoth 05 July 2012 04:04:54PM 2 points [-]

You're still cherry-picking. There have been loads of conjunctions and other astronomical events that have been taken as omens. You could argue that the conjunction with the galactic center is a "big" one, but there are bigger possible ones that you're ignoring because they don't match (e.g. if the sun was aligned with the CMB rest frame, that would be the one you'd use).

Comment author: OphilaDros 04 July 2012 04:20:57PM 2 points [-]

Also all those dramatic technological developments of 6000 years ago, which seem minor now due to the passage of time and further advances in knowledge and technology. As no doubt the discovery of the Higgs boson or Voyager leaving the boundary of the solar system would seem in 8012 AD, if anybody even remembers these events then.