Irrationality Game II
I was very interested in the discussions and opinions that grew out of the last time this was played, but find digging through 800+ comments for a new game to start on the same thread annoying. I also don't want this game ruined by a potential sock puppet (whoever it may be). So here's a non-sockpuppeteered Irrationality Game, if there's still interest. If there isn't, downvote to oblivion!
The original rules:
Please read the post before voting on the comments, as this is a game where voting works differently.
Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.
Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.
Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.
Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."
If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
That's the spirit of the game, but some more qualifications and rules follow.
If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.
The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.
Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.
Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?
Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.
That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!
Additional rules:
- Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
- If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post.
- Don't post propositions as comment replies to other comments. That'll make it disorganized.
- You have to actually think your degree of belief is rational. You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average. This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
- Debate and discussion is great, but keep it civil. Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
- No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
- Multiple propositions are fine, so long as they're moderately interesting.
- You are encouraged to reply to comments with your own probability estimates, but comment voting works normally for comment replies to other comments. That is, upvote for good discussion, not agreement or disagreement.
- In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!
Enjoy!
Comments (380)
Irrationality Game:
Time travel is physically possible.
80%
Irrationality game upvote for disagreement. This is based on the confidence rather than the claim. I would also upvote if the probability given was, say, less than 1%.
80% is hardly "confident"... but fair enough.
I perhaps could have said "the specific probability estimate given" to be clearer about the meaning I was attempting to convey.
Irrationality game:
Humanity has already received and recorded a radio message from another technological civilization. This was unconfirmed/unnoticed due to being very short and unrepeated, or mistaken for a transient terrestrial signal, or modulated in ways we were not looking for, or was otherwise overlooked. 25%.
What are the rules on multiple postings? I have a cluster of related (to each other, not this) ones I would love to post as a group.
irrationality game: The universe is, due to some non-reducible (i.e. non-physical) entity, indeterministic. 95% That entity is the human mind (not brain). 90%
Irrationality Game
Aaron Swartz did not actually commit suicide. (10%)
(Hat tip to Quirinus Quirrell, whoever that actually is.)
The Mona Lisa currently exhibited at the Louvre Museum is actually a replica. (33%)
Irrationality game comment
The importance of waste heat in the brain is generally under-appreciated. An overheated brain is a major source of mental exhaustion, akrasia, and brain fog. One easy way to increase the amount of practical intelligence we can bring to bear on complicated tasks (with or without an accompanying increase in IQ itself) is to improve cooling in the brain. This would be most effective with some kind of surgical cooling system thingy, but even simple things like being in a cold room could help.
Confidence: 30%
To the pork futures warehouse!
INSERT THE ROD, JOHN.
Overheating your body enough to limit athletic performance (whether due to associated dehydration or not) is probably enough to impair the brain as well. Dehydration is known to cause headaches.
I think the effect exists. But what's the size, when you're merely sedentary + thinking + suffering a hot+humid day?
Some indirect evidence from yawning, with a few references: http://www.epjournal.net/wp-content/uploads/ep0592101.pdf
The nice thing about this one is that it's really easy to test yourself. A plastic bag to put ice or hot water into, and some computerized mental exercise like dual n-back. I know if I thought this at anywhere close to 30% I'd test it...
EDIT: see Yvain's full version: http://squid314.livejournal.com/320770.html http://squid314.livejournal.com/321233.html http://squid314.livejournal.com/321773.html
Self-experimentation seems like a really bad way to test things about mental exhaustion. It would be way too easy to placebo myself into working for a longer amount of time without a break, when testing the condition that would support my theory. Might wait until I can find a test subject.
If you got a result consistent with your theory, then yes it might just be placebo effect, but is that result entirely useless; and if you got a result inconsistent with your theory, is that useless as well?
"Conservation of expected uselessness!"
Irrationality game
Money does buy happiness. In general the rich and powerful are in fact ridiculously happy to an extent we can't imagine. The hedonic treadmill and similar theories are just a product of motivated cognition, and the wealthy and powerful have no incentive to tell us otherwise. 30%
Irrationality game comment
The correct way to handle Pascal's Mugging and other utilitarian mathematical difficulties is to use a bounded utility function. I'm very metauncertain about this; my actual probability could be anywhere from 10% to 90%. But I guess that my probability is 70% or so.
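The mechanics are simple to check: with a saturating utility function, no promised payoff can contribute more than the bound, so a mugger's astronomical offer times a tiny probability yields a negligible expected utility. A minimal sketch (the exponential saturating form and the specific numbers are my own illustration, not from the comment):

```python
import math

def bounded_utility(u: float, cap: float) -> float:
    """Saturating utility: approximately u for small u, never exceeds cap."""
    return cap * (1.0 - math.exp(-u / cap))

# A Pascal's-mugging offer: astronomical payoff, minuscule probability.
p = 1e-50
payoff = 1e100

ev_unbounded = p * payoff                        # ~1e50: dominates any decision
ev_bounded = p * bounded_utility(payoff, 100.0)  # at most p * cap = 1e-48
```

With the unbounded utility the mugger's offer dominates everything; with the bound at 100 it contributes at most 10^-48 and can be safely ignored, while everyday-scale utilities are left nearly unchanged.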
Multiple systems are correct about their experiences. In particular, killing a N-person system is as bad as killing N singlets. (90%)
I'd say I'm reasonably confident that there is something interesting going on, but I wouldn't go as far as to say they are genuinely different people to the extent of having equal moral weight to standard human personalities.
I would guess they are closer to different patterns of accessing the same mental resources than fully different. (You could make an analogy with operating systems/programmes/user interfaces on a computer.)
From private exchange with woodside, published with authorization
woodside:
MixedNuts:
It is plausible that an existing species of dolphin or whale possesses symbolic language and oral culture at least on par with that of neolithic-era humanity. (75%)
Is "it is plausible" part of the statement to which you give 75% credence, or is it another way of putting said credence?
Because cetacean-language is more than 75% likely to be plausible but I think less than 75% likely to be true.
Upvoted for overconfidence.
Irrationality game:
Different levels of description are just that, and are all equally "real". To describe a gas in terms of particles, as in statistical mechanics, or in terms of bulk quantities, as in thermodynamics, is equally correct/real.
The same goes for the mind: talking in terms of neurochemistry or in terms of thoughts is equally correct/real.
80% confidence
How, if at all, does this differ from "reductionism is true"? There are approximations made in high-level descriptions (e.g. number of particles treated as infinitely larger than its variation); are you saying they are real, or that the high-level description is true modulo these approximations? What do you mean by "real" anyway?
Tentatively downvoted because this looks like some brand of reductionism.
Irrationality Game
The Big Bang is not the beginning of the universe, nor is it even analogous to the beginning of the universe. (60% confident)
Nonvoted. It might just be a 0 on the Real line, or analogous. I don't know the real laws of physics, but that seems sensible.
Irrationality Game:
I believe Plato (and others) were right when they said music develops some form of sensibility, some sort of compassion. I posit a link between the capacity of understanding music and understanding other people by creating accurate images of them in our head, and of how they feel. 80%
Irrationality Game:
These claims assume MWI is true.
Claim #1: Given that MWI is true, a sentient individual will be subjectively immortal. This is motivated by the idea that branches in which death occurs can be ignored and that there are always enough branches for some form of subjective consciousness to continue.
Claim #2: The vast majority of the long-term states a person will experience will be so radically different than the normal human experience that they are akin to perpetual torture.
P(Claim #1) = 60%
P(Claim #2 | Claim #1) = 99%
Given these beliefs, you should buy cryonics at almost any price, including prices at which I would no longer personally sign up and prices at which I would no longer advocate that other people sign up. Are you signed up? If not, then I upvote the above comment because I don't believe you believe it. :)
Well, I agree with you that I should buy cryonics at very high prices and I plan on doing so. For the last few years I've spent the majority of my time in places where being signed up for cryonics wouldn't make a difference (9 months out of the year on a submarine, and now overseas in a place where there aren't any cryonics companies set up).
You should probably still upvote because the < 1/4 of the time I've spent in situations where it would matter still more than justify it. I should also never eat an icecream snickers again. I'll be the first to admit I don't behave perfectly rationally. :)
more people have died from cryocrastinating than cryonics ;)
The person may not believe that MWI is true; the beliefs were stated as being conditional.
Nevertheless, your argument does apply to me, since I have similar beliefs (or at least worries), and I also for the most part buy your arguments on MWI. I do plan to sign up for cryonics within the next year or so, but not at any price. This is because I don't expect to die soon enough for my short-term motivational system to be affected.
Irrationality Game:
The Occam argument against theism, in the forms typically used in LW invoking Kolmogorov complexity or equivalent notions, is a lousy argument: its premises and conclusions are not incorrect, but it is question-begging to the point that no intellectually sophisticated theist should move their credence significantly by it. 75%.
(It is difficult to attach meaningfully a probability to this kind of claim, which is not about hard facts. I guesstimated that in an ideally open-minded and reasoned philosophical discussion, there would be a 25% chance of me being persuaded of the contrary.)
To the extent that it's begging anything, it's begging a choice of epistemology. If no intellectually sophisticated theist should take it seriously, what epistemology should they take seriously besides faith? If the answer is ordinary informal epistemology, when I present the Occam argument I accompany it with a justification of Occam's razor in terms of that epistemology.
Theists are usually not rational about their theism. So there are relatively few arguments that bite.
Notice that I said "should move their credence", not "would". It is not a prediction about the reaction of (rational or irrational) real-life theists, but an assessment of the objective merits of the argument.
Aaaaah. Upvoted for being wrong as a simple matter of maths.
*grin * That's more like the reaction I was looking for!
I would be curious to see what is the maths you are referring to. I (think I) understand the math content of the Occam argument, and accept it as valid. Let me give an analogy for why I think the argument is useless anyway: suppose I tried the following argument against Christianity:
The argument is valid as a matter of formal logic, and we would agree it has true premises and conclusion. However, it should (not only would, should) not persuade any Christian, because their priors for the second premise are very low, and the argument gives them no reason to update them. I contend the Occam argument is mathematically valid but question-begging and futile in a similar way. (I can explain more why I think this, if anybody is interested, but just wanted to make my position clear here).
The Occam argument is basically:
Humans are made by evolution to be approximately Occamian; this implies that Occamian reasoning is at least a local maximum of reasoning ability in our universe.
When we use our Occamian brains to consider the question of why the universe appears simple, we come up with the simple hypothesis that the universe is itself simple.
Describing the universe with maths works better than heroic epics or supernatural myths, as a matter of practical applicability and prediction power.
The mathematically best method of measuring simplicity is provably the one used in Solomonoff Induction/Kolmogorov complexity.
Quantum Mechanics and Quantum Cosmology together form one of the simplest explanations ever for the universe as we observe it.
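The fourth point refers to a quantity that is uncomputable in the exact sense, but the intuition behind points 3 and 4 can be illustrated with a computable stand-in: the length of a compressed encoding upper-bounds a string's description length, and lawful data compresses far better than noise. A toy sketch (using zlib as the stand-in is my choice, not part of the argument):

```python
import random
import zlib

def compressed_length(data: bytes) -> int:
    """Length of a zlib-compressed encoding: a computable upper-bound
    proxy for description length (true Kolmogorov complexity is
    uncomputable)."""
    return len(zlib.compress(data, 9))

random.seed(0)
regular = b"0101" * 250                                    # simple, lawful
noise = bytes(random.randrange(256) for _ in range(1000))  # patternless

# The lawful 1000-byte string compresses to a few bytes; the random
# one barely compresses at all.
```

This is the sense in which "describing the universe with maths works better": a short lawful description that regenerates the observations is exactly what a good compression is.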
The argument is sound, but the people are crazy. That doesn't make the argument unsound.
MWI is unlikely because it is too unparsimonious (not very confident).
Okay? So you weakly think reality should conform to your sensibilities? I've got a whole lot of evidence behind a heuristic that is bad news for you... Not voted anything, both out of not really knowing what you mean, and also because the true QMI (explaining among other things Born Probabilities) might be smaller than just the "brute force" decoherence of MWI (such as Mangled Worlds).
Well, I'm sort of hypothesizing that simplicity is not just elegance, but involves a trade-off between elegance and parsimony (vaguely similar to how algorithmic 'efficiency' involves a trade-off between time and space). What heuristic are you referring to which is bad news for this hypothesis? Also, what's QMI? I'm actually very much ignorant when it comes to quantum mechanics.
First of all, I don't care much for some philosophical dictionary's definition of simplicity. You are going to have to specify what you mean by parsimony, and you are going to have to specify it with maths.
Here's my take:
Simplicity is the opposite of Complexity, and Complexity is the Kolmogorov kind. That is the entirety of my definition. And the universe appears to be made on very simple (as specified above) maths.
The Heuristic I am referring to is: "There are many, many, many occasions where people have expected the universe to conform to their sensibilities, and have been dead wrong." It has a lot of evidence backing it, and QM is one very counter-intuitive thing (although the maths are pretty simple), you simply aren't built to think about it.
QMI: Quantum Mechanical Interpretation
Lastly: Have you even read the QM sequence? It gives you a good grasp of what physicists are doing and also explains why everything non-MWI-like is more complex (of the Kolmogorov kind) than anything MWI-like.
No, I'm not defining a notion based on anyone's whim/sensibilities; I fully agree that, to be meaningful, any account of 'simplicity' must be fully formalizable (a la K-complexity). However, I expect a full account of simplicity to include both elegance and parsimony based on the following kind of intuition:
a) There is in fact "stuff" out there
b) Everything that actually exists consists of some orderly combination of this stuff, acting in an orderly manner according to the nature of the stuff
c) All other things being equal, a theory is more simple if it posits less 'stuff' to account for the phenomena
d) Some full account of simplicity should include both elegance (a la K-complexity) and this sense of parsimony in a sort of trade-off relationship, such that, for example, all other things being equal, if theory A is 5x more elegant but 1000x less parsimonious than theory B, we should favor theory B
My reasons for expecting there to be some formalization of simplicity which fully accounts for both of these concepts in such a way is, admittedly, somewhat based on whim/sensibility, as I cannot at this time provide such a formalization nor do I have any real evidence such a thing is possible (hence why this discussion is taking place in a thread entitled 'Irrationality game' and not in some more serious venue) - however, whim/sensibility is not inherent to the overall notion per se, i.e. I am not suggesting this notion of an elegance/parsimony trade-off is somehow true-but-not-formalizable or any such thing.
There is no dark matter. Gravity behaves weirdly for some other reason we haven't discovered yet. (85%)
Many such "modified gravity" theories have been proposed. The best known is "MOND", "Modified Newtonian Dynamics".
The case for atheistic reductionism is not a slam-dunk.
While atheistic reductionism is clearly simpler than any of the competing hypotheses, each added bit of complexity doubles the size of hypothesis space. Some of these additional hypotheses will be ruled out due to impossibility or inconsistency with observation, but that still leaves a huge number of possible hypotheses that each take up a tiny amount of probability mass, but they add up.
I would give atheistic reductionism a ~30% probability of being true. (I would still assign specific human religions or a specific simulation scenario approximately zero probability.)
Assuming our MMS-prior uses a binary machine, the probability of any single hypothesis of complexity C=X is equal to the total probabilities of all hypotheses of complexity C>X.
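Under the simplifying assumption of one hypothesis per complexity level, each weighted 2^-C (a cruder scheme than a true Solomonoff prior, which sums over all programs of each length; this simplification is mine, not the comment's), the stated equality is just the geometric series 2^-X = Σ_{C>X} 2^-C:

```python
X = 10  # complexity of the single hypothesis, in bits

single = 2.0 ** -X                                # its prior mass
tail = sum(2.0 ** -C for C in range(X + 1, 200))  # mass of everything more complex

# Equal, up to truncating the infinite sum at C = 199.
```

So each extra bit halves a hypothesis's prior, yet the more-complex hypotheses collectively match the simpler one, which is exactly the "they add up" point above.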
Irrationality Game
Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they're cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%
Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%
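The comment leaves the "naive histocratic algorithm" unspecified; one minimal reading (the function names and the inverse-Brier weighting scheme are my own guesses at what's meant, not anything from the comment) is a past-performance-weighted average:

```python
def brier(p: float, outcome: int) -> float:
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (p - outcome) ** 2

def histocratic_forecast(history, current):
    """history: {forecaster: [(prob, outcome), ...]} of resolved predictions.
    current: {forecaster: prob} for the open question.
    Returns an average weighted by inverse mean Brier score, so forecasters
    with a better track record count for more."""
    weights = {
        name: 1.0 / (sum(brier(p, o) for p, o in past) / len(past) + 1e-9)
        for name, past in history.items()
    }
    total = sum(weights[name] for name in current)
    return sum(weights[name] * current[name] for name in current) / total
```

A well-calibrated forecaster's view then dominates a coin-flipper's, which is the claimed advantage over both simple averaging and market prices.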
Downvoted for agreement, but prediction markets still win because they're possible to implement. (Will change to upvote if you explicitly deny that too.)
If you think Prediction Markets are terrible, why don't you just do better and get rich from them?
A new word to me. Is this what you're referring to?
Markets can incorporate any source or type of information that humans can understand. Which algorithm can do the same?
Down-voted for semi-agreement.
There are simply too many irrational people with money, and as soon as it becomes as popular to participate in prediction markets as it currently is to participate in the stock market, they will add huge amounts of noise.
The conventional reply is that noise traders improve markets by making rational prediction more profitable. This is almost certainly true for short-term noise, and my guess is that it's false for long-term noise, i.e., if prices revert in a day, noise traders improve a market, if prices take ten years to revert, the rational money seeks shorter-term gains. Prediction markets may be expected to do better because they have a definite, known date on which the dumb money loses - you can stay solvent longer than the market stays irrational.
Fantastic. Please tell me which markets this applies to and link to the source of the algorithm that gives me all the free money.
The IARPA expert aggregation exercises look plausible, and have supposedly done all right predicting geopolitical events. I would not be shocked if the first to use those methods on financial markets got a bit of alpha.
Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can't quite back it out of a prediction market's transaction history. And the amount of money to be made is small in any event because there's just not enough participation in the markets.
Aren't prediction markets just a special case of financial markets? (Or vice versa.) Then if your algorithm could outperform prediction markets, it could also outperform the financial ones, where there is lots of money to be made.
In prediction markets, you are betting money on your probability estimates of various things X happening. On financial markets, you are betting money on your probability estimates of the same things X, plus your estimate of the effect of X on the prices of various stocks or commodities.
Irrationality game comment:
Imagine that we transformed the Universe using some elegant mathematical mapping (think about Fourier transform of the phase space) or that we were able to see the world through different quantum observables than we have today (seeing the world primarily in the momentum space, or even being able to experience "collapses" to eigenvectors not of x or p, but of a different, for us unobservable, operator, e.g. xp). Then, we would observe complex structures, perhaps with their own evolution and life and intelligence. That is, aliens can be all around us but remain as invisible as the Mona Lisa on a Fourier transformed picture from the Louvre.
Probability: 15%.
This is an interesting way to look at things. I would assert a higher probability, so I'm voting up. Even a slight tweaking (x+ε, m-ε) is enough. I'm imagining a continuous family of mappings starting with identity. These would preserve the structures we already perceive while accentuating certain features.
Any blob (continuous, smooth, rapidly decreasing function) in momentum space corresponds to a blob in position space. That is, you can't get structure in one without structure in the other.
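This is the uncertainty-principle intuition, and it can be checked directly on a toy grid: the discrete Fourier transform of a smooth, localized Gaussian is itself concentrated in a handful of low frequencies rather than spread into fine structure. A pure-Python sketch (the grid size and blob width are arbitrary choices of mine):

```python
import cmath
import math

def dft(xs):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(xs)
    return [sum(xs[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
sigma = 2.0
# A smooth, rapidly decreasing "blob" in position space.
blob = [math.exp(-((n - N // 2) ** 2) / (2 * sigma ** 2)) for n in range(N)]

power = [abs(c) ** 2 for c in dft(blob)]
total = sum(power)
# Nearly all the energy sits in the lowest frequencies (|k| < 16,
# counting the wrap-around at N).
low = sum(power[k] for k in list(range(16)) + list(range(N - 16, N)))
```

Narrowing the blob in one space spreads it out in the other, but a smooth localized bump never transforms into structure that is invisible at coarse scales, which is the point being made against the hidden-aliens picture.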
Upvoted for underconfidence; there are a lot of bases you can use.
Still, what you see in one basis is not independent of what you see in another one, and I expect an elegant mapping between the bases. There is a difference between
and
My 15% belief is closer to the second version.
Okay, that's less likely. I'd still give it higher than 15% though. The holographic principle is very suggestive of this, for instance.
It's hard to know exactly what would count in order to make an estimate, since we don't yet know the actual laws of physics. It's obvious that "position observables, but farther away" would encode the regular type of alien, but the boundary between regular aliens and weird quantum aliens could easily blur as we learn more physics.
IRRATIONALITY GAME
Eliezer Yudkowsky has access to a basilisk kill agent that allows him, with a few clicks, to untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Death Note.
Probability: improbable (2%)
Well, this is scary enough.
This seems like a clear example of "You shouldn't adjust the probability that high just because you're trying to avoid overconfidence; that's privileging a complicated possibility."
Reading this comment made me slightly update my probability that the parent, or a weaker version thereof, is correct.
It may or may not be an example, but it's certainly not a clear one to me. Please explain? The entire sentence seems nonsensical; I know what the individual words mean but not how to apply them to the situation. Is this just some psychological effect because it targets a statement I personally made? It certainly doesn't feel like it but...
Edit: Figured out what I misunderstood. I modelled as .02 positive confidence not .98 negative confidence.
2% is way way way WAY too high for something like that. You shouldn't be afraid to assign a probability much closer to 0.
Has there been a post on this subject yet? Handling overconfidence in that sort of situation is complicated.
http://lesswrong.com/lw/u6/horrible_lhc_inconsistency/
Thanks! I recall reading that one but had forgotten it.
It still leaves me with some doubt about how to handle uncertainty around the extremes without being pumpable or sometimes catastrophically wrong. I suppose some of that is inevitable given hardware that is both bounded and corrupted but I rather suspect there is some benefit to learning more. There's probably a book or ten out there I could read.
This seems like a sarcastic Eliezer Yudkowsky Fact, not a serious Irrationality Game entry.
If such a universal basilisk exists, wouldn't it almost by definition kill the person who discovered it?
I think it's vaguely plausible such a basilisk exists, but I also think you are suffering from the halo effect around EY. Why would he of all people know about the basilisk? He's just some blogger you read who says things as though they are Deep Wisdom so people will pay attention.
There are a bunch of tricks that lets you immunize yourself to classes of basilisks, without having access to the specific basilisk- sort of like vaccination, you deliberately infect yourself with a non-lethal variant first.
Eliezer has demonstrated all the skills needed to construct basilisks, is very smart, and has shown that he recognizes the danger of basilisks. I don't think that's a very common combination, but conditional on Eliezer having basilisk weapons, most others fitting that description equally well probably do as well.
Wouldn't the world be observably different if everyone of EY's intellectual ability or above had access to a basilisk kill agent? And wouldn't we expect a rash of inexplicable deaths in people who are capable of constructing a basilisk but not vaccinating themselves?
Not necessarily. If I did, in fact, possess such a basilisk, I cannot think offhand of any occasion where I would have actually used it. Robert Mugabe doesn't read my emails, it's not clear that killing him saves Zimbabwe, I have ethical inhibitions that I consider to exist for good reasons, and have you thought about what happens if somebody else glances at the computer screen afterward, and resulting events lead to many agents/groups possessing a basilisk?
It would guarantee drastic improvements in secure, trusted communication protocols and completely cure internet addiction (among the comparatively few survivors).
First off, there aren't nearly enough people for it to be any kind of "rash"; secondly, they must be researching a narrow range of topics where basilisks occur; thirdly, they'd go insane and lose the basilisk creation capacity way before they got to deliberately lethal ones; and finally, anyone smart enough to be able to do that is smart enough not to do it.
Are basilisks necessarily fatal? If the majority of basilisks caused insanity or the loss of intellectual capacity instead of death, I would expect to see a large group of people who considered themselves capable of constructing basilisks, but who on inspection turned out to be crazy or not nearly that bright after all.
...
Oh, shit.
Yup, this is entirely correct. Learned that the hard way. Vastly so, with such weak basilisks constantly arising from random noise in the memepool, while even knowing how and having all the necessary ingredients, an Eliezer-class mind is likely needed for a lethal one.
Great practice for FAI in a way, in that as soon as you make a single misstep you've lost everything forever and won't even know it. Don't try this at home.
The post specified fatal so I followed it.
For non-fatal basilisks we'd expect to see people flipping suddenly from highly intelligent and sane, to stupid and/or crazy. Specifically after researching basilisk related topics.
Yes, this can also be reversed for a good way to see what topics are practically basilisk construction related.
Yes, but you would get false positives too, such as chess (scroll down to "Real Life" -- warning: TVTropes). Edited to fix link syntax -- how come after all these months I still get it wrong this often?
I am way too good at this game. :(
I really didn't expect this to go this high. All the other posts get lots of helpful comments about WHY they were wrong. If I'm really wrong, which these upvotes indicate, I really need to know WHY, so I know which connected beliefs to update as well.
I was about to condescendingly explain that there's simply no reason to posit such a thing, when it started making far too much sense for my liking. That said, untraceable? How?
Email via proxy, some incubation time, looks like normal depression followed by suicide.
Of course. I was assuming a near-instant effect for some reason.
On the plus side, he doesn't seem to have used it to remove anyone blocking progress on FAI ...
2% is too high a credence for belief in the existence of powers for which (as far as I know) not even anecdotal evidence exists. It's the realm of speculative fiction, well beyond the current ability of psychological and cognitive science and, one imagines, rather difficult to control.
But ascribing such a power to a specific individual who hasn't had any special connection to cutting edge brain science or DARPA and isn't even especially good at using conventional psychological weapons like 'charm' is what sends your entry into the realm of utter and astonishing absurdity.
Not publicly, at least.
Many say exactly the same thing about cryonics. And lots of anecdotal evidence does exist, not of killing specifically, but of inducing a wide enough range of mental states that some within that range are known to be lethal.
So far in my experience skill at basilisks is utterly tangential to the skills you mentioned, and fits Eliezer's skill set extremely well. Further, he has demonstrated this type of ability before, for example in the AI box experiments or HPMoR.
Pointing to cryonics anytime someone says you believe in something that is the realm of speculative fiction and well beyond current science is a really, really, bad strategy for having true beliefs. Consider the generality of your response.
Show me three.
How is this even a thing? That you have experience with?
Your best point. But not nearly enough to bring p up to 0.02.
Point taken: it's not a strategy for arriving at truths, it's a snappy comeback at a failure mode I'm getting really tired of. The fact that something is in the realm of speculative fiction is not a valid argument in a world full of cyborgs, tablet computers, self-driving cars, and causality-defying decision theories. And yes, basilisks.
Um, we're talking basilisks here. SHOWING you'd be a bad idea. However, to NAME a few, there's the famous Roko incident, several MLP gorefics had basilisk like effects on some readers, and then there's techniques like http://www.youtube.com/watch?v=eNBBl6goECQ .
Yes, skill at basilisks is a thing, that I have some experience with.
Finally, not in response to anything in particular but sort of related: http://cognitiveengineer.blogspot.se/2011/11/holy-shit.html
Yet another fictional story that features a rather impressive "emotional basilisk" of sorts; enough both to drive people in-universe insane or suicidal AND to make the reader (especially one prone to agonizing over morality, obsessive thoughts, etc.) feel real distress. I know I felt sickened and generally wrong for a few hours, and I've heard of people who took it worse.
SCP-231. I'm not linking directly to it, please consider carefully if you want to read it. Curiosity over something intellectually stimulating but dangerous is one thing, but this one is just emotional torment for torment's sake. If you've read SCP before (I mostly dislike their stuff), you might be guessing which one I'm talking about - so no need to re-read it, dude.
Really? That's had basilisk-like effects? I guess these things are subjective ... torturing one girl to save humanity is treated like this vast and terrible thing, with the main risk being that one day they won't be able to bring themselves to continue -- but in other stories they regularly kill tons of people in horrible ways just to find out how something works. Honestly, I'm not sure why it's so popular; there are a bunch of SCPs that could solve it (although there could be some brilliant reason why they can't -- we'll never know due to redaction). But it's too popular to ever be decommissioned ... it makes the Foundation come across as lazy, not even trying to help the girl, too busy stewing in self-pity at the horrors they have to commit to actually stop committing them.
Wait, I'm still thinking about it after all this time? Hmm, perhaps there's something to this basilisk thing...
Straw Utilitarian exclaims: "Ha, easy! Our world has many tortured children; adding one more is a trivial cost to pay for continued human existence." But yes, imagining myself being put in a position to decide on something like that caused me quite a bit of emotional distress. Trying to work out what I should do according to my ethical system (spaghetti-code virtue ethics), honourable suicide and resignation seem a potentially viable option, since my consequentialism-infected brain cells yell at me for trying harebrained schemes to help the girl.
On a lighter note my favourite SCP.
Ah, the entry is tragically incomplete!
The Catholic faith of the animals was not surprising, since agent ███ █████ and other LessWrong Computational Theology division cell members in contact with SCP-3471 have received proof of Catholicism's consistency under CEV, as well as indications it represented a natural Schelling point of mammalian morality. First pausing to praise the sovereign's taste in books, Dr. █████ █████ speculates that the existence of Protestantism means SCP-4271 ("w-force") has become active in species besides Homo sapiens, violating the 2008 Trilateral Blogosphere Accords. He advises full military support to Eugenio the Second in stamping out the rebellion, and termination of all animals currently under the rule of Duke Baxter of the West Bay.
Adding:
An agent placed in similar circumstances before did just that.
Yep, suicide is probably what I'd do as well, personally, but the story itself is incoherent (as noted in the page discussion) and even without resorting to other SCPs there seem to be many, many alternatives to consider (at the very least they could have made the torture fully automated!). As I've said, it's constructed purely as horror/porn and not as an ethical dilemma.
BTW simply saying that "Catholicism" is consistent under something or other is quite meaningless, as "C." doesn't make for a very coherent system as seen through Papal policy and decisions of any period. Will would've had to point to a specific eminent theologian, like Aquinas, and then carefully choose where and how to expand - for now, Will isn't doing much with his "Catholicism" strictly speaking, just writing emotionally tinged bits of cosmogony and game theory.
I mentally iron-man such details when presented with such scenarios. Often it's the only way for me to keep suspension of disbelief and continue to enjoy fiction. To give a trivial fix to your nitpick: the ritual requires not only the suffering of the victim to be undiminished, but also the sexual pleasure of the torturer and/or rapist to be present; automating it is therefore not viable.
Do not overanalyse the technobabble; it ruins suspension of disbelief. And what is an SCP without technobabble? Can I perhaps then interest you in a web-based Marxist state?
Also who is this Will? I deny all knowledge of him!
Also, always related to any basilisk discussion:
The Funniest Joke In The World
I do not know with what weapons World War III will be fought, but World War IV will be fought with fairytales about talking ponies!
I love you so much right now. :D
I have a solid basilisk-handling procedure. (Details available on demand.) You or anyone is welcome to send me any basilisk in the next 24 hours, or at any point in the future with 12 hours warning. I'll publish how many different basilisks I've received, how basilisky I found them, and nothing else.
Evidence: I wasn't particularly shaken by Roko's basilisk. I found Cupcakes a pretty funny read (thanks for the rec!). I have lots of experience blocking out obsessive/intrusive thoughts. I just watched 2girls1cup while eating. I'm good at keeping non-basilisk secrets.
Has anyone sent you any basilisk so far?
No, I'm all basilisk-less and forlorn. :( I stumbled on a (probably very personal) weak basilisk on my own. Do people just not trust me or don't they have any basilisks handy?
How do you define basilisk? What effect is it supposed to have on you?
The latter. Or, if the former, they don't trust you not to just laugh at what they provide and dismiss it.
I am amused and curious. :P Did the basilisk-sharing list ever get off the ground?
Not that I know of, and it's much less interesting than it sounds. Just nausea and a permanent inability to enjoy the show in a small percentage of readers of Cupcakes and the like.
The argument isn't that because something is found in speculative fiction it can't be real; it's that this thing you're talking about isn't found outside of speculative fiction-- i.e. it's not real. Science can't do that yet. If you're familiar with the state of a science you have a good sense of what is and isn't possible yet. "A basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote" is very likely one of those things. I mention "speculative fiction" because a lot of people have a tendency to privilege hypotheses they find in such fiction.
Hypnotism is not the same as what you're talking about. The Roko 'basilisk' is a joke compared to what you're describing. None of these are anecdotal evidence for the power you are describing.
Oh, illusion of transparency. Yeah, that's at least a real argument.
There are plenty of things that individual geniuses can do that the institutions you seem to be referring to as "science" can't yet mass-produce, especially in the reference class of things like works of fiction or political speeches, which many basilisks belong to. "Science" also believes rational agents defect in the prisoner's dilemma.
Also, while proposing something like deliberate, successful government suppression would clearly be falling into the conspiracy-theory failure mode, it nonetheless does seem that an extremely dangerous weapon -- one that sounds absurd when described, works through badly understood psychology present only in humans, and is likely to be discovered only by an empathic extreme high elite of intellectuals -- would be less likely to become public knowledge as quickly as most things.
And I kept to small-scale, not-very-dangerous pseudo-basilisks on purpose, just in case someone decides to look them up. They are more relevant than you think, though.
I don't believe you. Look, obviously if you have secret knowledge of the existence of fatal basilisks that you're unwilling to share that's a good reason to have a higher credence than me. But I asked you for evidence (not even good evidence, just anecdotal evidence) and you gave me hypnotism and the silly Roko thing. Hinting that you have some deep understanding of basilisks that I don't is explained far better by the hypothesis that you're trying to cover for the fact that you made an embarrassingly ridiculous claim than by your actually having such an understanding. It's okay, it was the irrationality game. You can admit you were privileging the hypothesis.
Again, pointing to a failure of science as a justification for ignoring it when evaluating the probability of a hypothesis is a really bad thing to do. You actually have to learn things about the world in order to manipulate the world. The most talented writers in the world are capable of producing profound and significant --but nearly always temporary-- emotional reactions in the small set of people that connect with them. Equating that with
is bizarre.
A government possessing a basilisk and keeping it a secret is several orders of magnitude more likely than what you proposed. Governments have the funds and the will to both test and create weapons that kill. Also, "empathic" doesn't seem like a word that describes Eliezer well.
Anyway, I don't really think this conversation is doing anyone any good, since debating absurd possibilities has the tendency to make them seem even more likely over time: you'll keep running your sense-making system and coming up with new and better justifications for the claim until you actually begin to think "wait, two percent seems kind of low!".
Yeah, this thread is getting WAY too adversarial for my taste, dangerously so. At least we can agree on that.
Anyway, you did admit that sometimes, rarely, a really good writer can produce permanent, profound emotional reactions, and I suspect most of the disagreement here actually resides in the lethality of emotional reactions, and in my taste for wording things to sound dramatic as long as they are still true.
Upvoted for enormous overconfidence that a universal basilisk exists.
Never said it was a single universal one. And a lot of that 2% is meta-uncertainty from doing the math sloppily.
The part where I think I might do better is having been on the receiving end of weaker basilisks and having some vague idea of how to construct something like it. That last part is the tricky one stopping me from sharing the evidence as it'd make it more likely a weapon like that falls into the wrong hands.
The thing about basilisks is that they have limited capacity for causing actual death. Particularly among average people who get their cues of whether something is worrying from the social context (e.g. authority figures or their social group).
Must... resist... revealing... info.... that... may... get... people.... killed.
Please do resist. If you must tell someone, do it through private message.
Yeah. It's not THAT big a danger; I'm just trying to make it clear why I hold a belief not based on evidence that I can share.
I'm speculating that your evidence is a written work that has driven multiple people to suicide, and further that the written work was targeted at an individual and happened to kill other susceptible people who read it. I would still rate 2% as overconfident.
Specifically, the claim of universality -- that "any person" can be killed by reading a short email -- is overconfident. Two of your claims seem to contradict each other: "any person" and "with a few clicks" suggest that special or in-depth knowledge of the individual is unnecessary, which implies some level of universality; but then there's "Never said it was a single universal one." My impression is that you lean towards hand-crafted basilisks targeted at individuals or groups of similar individuals, but the contradiction lowered my estimate of this being correct.
Such hand-crafted basilisks indicate the ability to correctly model people to an exceptional degree and to experiment with said model until an input can be found which causes death. I have considered other alternative explanations but found them unlikely; if you rate another as more realistic, let me know.
Given this, the ability could be used for a considerable number of tasks other than causing death: strongly influencing elections, legislation, the research directions of AI researchers or groups, and much more. If EY possessed this power, how would you expect the world to be different from one where he does not?
I don't remember this post. Weird. I've updated on it though; my evidence is indeed even weaker than that, and you are absolutely correct on every point. I've updated to the point where my own estimate and my estimation of the community's estimate are indistinguishable.
Interesting, I will be more likely to reply to messages that I feel end the conversation like your last one on this post:
maybe 12-24 hours later, just in case the likelihood of an update has been reduced by one or both parties having had a late-night conversation or other mind-altering effects.
It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don't know exactly how much.
Upvoted for vast overconfidence.
Downvoted back to zero because I suspect you're not following the rules of the thread.
Also, I have no idea who "Eliezer Yudovsky" is, though it doesn't matter for either of the above.
An alien civilization within the boundaries of the current observable universe has, or will have within the next 10 billion years, created a work of art which includes something directly analogous to the structure of the "dawn motif" from the beginning of Richard Strauss's Also sprach Zarathustra. (~90%)
I would have upvoted this even if it limited itself to "intelligent aliens exist in the current observable universe".
The probability of this would seem to depend on the resolution of the Fermi paradox. If life is relatively common, then it would seem to be true purely by statistics. If life is relatively rare, then it would require some sort of shared aesthetic standard. Are you saying aesthetics might be universal in the same way as, say, mathematics?
I'm inclined to downvote this for agreement, but haven't yet. Can you say more about what "directly analogous" means? How different from ASZ can this work of art be and still count?
Upvoted for overconfidence, not about the directly analogous art form (I suspect that even several hundred pieces of human art have that) but about there being other civilizations within the observable universe.
Though I would still give that at least 20%.
Cool. Upvoted immediate parent for specificity and downvoted grandparent for agreement.
Irrationality Game
Being a materialist doesn't exclude nearly as much of the magical, religious, and anomalous as most materialists believe because matter/energy is much weirder than is currently scientifically accepted.
75% certainty.
Upvoted for disagreement, with the quibble that there is probably room for a lot of interesting things in the realm of human experience that, while not necessarily relating one-to-one with nonhuman physical reality, have significance within the context of human thought or social interaction and contain elements that normally get lumped into the magical or religious.
Downvoted for agreement. (Retracted because I realized you were talking about in our universe, and I was thinking in principle)
Nitpick: do you really mean this? Current scientific theories are pretty damn weird. But not, in your view, weird enough?
I'm pretty sure that the current theories aren't weird enough, but less sure that current theories need to be modified to include various things that people experience. However, it does seem to me that materialists are very quick to conclude that mental phenomena have straightforward physical explanations.
May I remind you that scientists recently created and indirectly observed the elementary particle responsible for mass?
The smallest mote of the thing that makes stuff have inertia. Has. Been. Indirectly. Observed.
What.
Do materialists still exist? In order to vote on this am I to imagine what not-necessarily-coherent model a materialist should in some sense have given their irreversible handicap in the form of a misguided metaphysic? If so I'd vote down; if not I'd vote up.
Upvoted, as many phenomena that get labelled "magical" or "religious" have readily-identifiable materialist causes. For those phenomena to be a consequence of esoteric physics and to have a more pedestrian materialist explanation that turns out to be incorrect, and to conform to enough of a culturally-prescribed category of magical phenomena to be labelled as such in the first place seems like a staggering collection of coincidences.
I'm having trouble understanding what you are claiming. It seems that once anything is found to exist in the actual world, people won't call it "magical" or "anomalous". When Hermione Granger uses an invisibility cloak, it's magic. When researchers at the University of Dallas use an invisibility cloak, it's science.
What I meant was that there may be more to such things as auras, ghosts, precognition, free will, etc. than current skepticism allows for, while still not having anything in the universe other than matter/energy.
Taboo "matter/energy".
Well damn. What is left? "You know... like... the stuff that there is."
Algebra.
Causes and effects.
Good point. But this 'cause' word is still a little nebulous and seems to confuse some people. Taboo 'cause'!
My point is that what counts as matter/energy may very well not be obvious in different theories.
Thank you. I was about to ask the same thing.
Irrationality Game
I believe that exposure to rationality (in the LW sense), in its current state, does in general more harm than good^ to someone who's already a skeptic. 80%
^ In the sense of generating less happiness and in general less "winning".
I predict with about 60% probability that exposure to LW rationality benefits skeptics more and is also more likely to harm non-skeptics.
Could you provide support? Have you seen http://lesswrong.com/lw/7s4/poll_results_lw_probably_doesnt_cause_akrasia/, by the way?
I roughly agree with this one. This is something that we would not see much evidence of, if true.
Downvoted.
I realized I didn't have a model of an average skeptic, so I am not sure what my opinion on this topic actually is.
My provisional model of an average skeptic is like this: "You guys at LW have a good point about religion being irrational; the math is kind of interesting, but boring; and the ideas about superhuman intelligence and quantum physics being more than just equations are completely crazy."
No harm, no benefit, tomorrow everything is forgotten.
Irrationality game
I have a suspicion that some form of moral particularism is the most sensible moral theory. 10% confidence.
In the Turing machine sense, sure. In the "this is all you should know" sense, no way; have an upvote.
Upvoted for too low a probability.
What do you mean by the "most sensible moral theory"?
And what the hell does Dancy mean if he says that there are rules of thumb that aren't principles?
I would weight this lower than .01% just because of my credence that it's incoherent.
Perhaps a workable restatement would be something like:
"Any attempt to formalize and extract our moral intuitions and judgements of how we should act in various situations will just produce a hopelessly complicated and inconsistent mess, whose judgements are very different from those prescribed by any form of utilitarianism, deontology, or any other ethical theory that strives to be consistent. In most cases, any attempt at using a reflective equilibrium / extrapolated volition -type approach to clarify matters will leave things essentially unchanged, except for a small fraction of individuals whose moral intuitions are highly atypical (and who tend to be vastly overrepresented on this site)."
(I don't actually know how well this describes the actual theories for particularism.)
I agree that your restatement is internally consistent.
I don't see how such a theory would really be "sensible," in terms of being helpful during moral dilemmas. If it turns out that moral intuitions are totally inconsistent, doesn't "think it over and then trust your gut" give the same recommendations, fit the profile of being deontological, and have the advantage of being easy to remember?
I guess if you were interested in a purely descriptive theory of morality, I could conceive of this being the best way to handle things for a long time. But it still flies in the face of the idea that morality was shaped by economic pressures and should therefore have an economic shape, which I find lots of support for. So my upvote remains, with my credence being maybe .5%-1%, I think about 2 decibels lower than yours.
Irrationality game
Moral intuitions are very simple. A general idea of what it means for somebody to be human is enough to severely restrict the variety of moral intuitions which you would expect it to be possible for them to have. Thus, conditioned on Adam's humanity, you would need very little additional information to get a good idea of Adam's morals, while Bob the alien would need to explain his basic preferences at length for you to model his moral judgements accurately. It follows that the tricky part of explaining moral intuitions to a machine is explaining humans, and it's not possible to cheat by formalizing morality separately.
Please attach a probability.
Fairly certain (85%—98%).
That is a very wide range. Downvoted you anyway.
Computationalism is an incorrect model of cognition. Brains compute, but mind is not what the brain does. There is no self hiding inside your apesuit. You are the apesuit. Minds are embodied and extended, and a major reason why the research program to build synthetic intelligences has largely gone nowhere since its inception is the failure of many researchers to understand/agree with this idea.
70%
Just because I am an apesuit, doesn't mean I need to dress my synthetic intelligence in one.
Have you been reading this recently?
More particularly, anything that links to this post.
Do you believe an upload with a simulated body would work? How high-fidelity?
I don't understand why you don't believe that computations can be "embodied and extended."
I do believe that the fact that any kind of human emulation would have to be embedded into a digital body with sensory inputs is underdiscussed here, though I'm not even sure what constitutes scientific literature on the subject so I don't want to make statements about that.
Computations can be embodied and extended, but computationalism regards embodiment and extension as unworthy of interest or concern. Downvoted the parent for being probably right.
Can you provide a citation for that point?
Not knowing anything really about academic cognitive psychologists, and just being someone who identifies as a computationalist, I feel like the embodiment of a computation is still very important to ANY computation.
If the OP means that researchers underestimate the plasticity of the brain in response to its inputs and outputs, and that their research doesn't draw a circle around the right "computer" to develop a good theory of mind, then I'm extra interested to see some kind of reference to papers which attempt to isolate the brain too much.
I understand "computationalism" as referring to the philosophical Computational Theory of the Mind (wiki, Stanford Encyclopedia of Phil.). From the wiki:
From the SEP:
Because computation is about syntax not semantics, the physical context - embodiment and extension - is irrelevant to computation qua computation. That is what I mean when I say that embodiment and extension are regarded as of no interest. Of course, if a philosopher is less thorough-going about computationalism, leaving pains and depression out of it for example, then embodiment may be of interest for those mental events.
However, your last paragraph throws a monkey wrench into my reasoning, because you raise the possibility of a "computer" drawn to include more territory. All I can say is, that would be unusual, and it seems more straightforward to delineate the syntactic rules of the visual system's edge-detection and blob-detection processes, for example, than of the whole organism+world system.
I feel like we are talking past each other in a way that I do not know how to pinpoint.
Part of the problem is that I am trying to compare three things--what I believe, the original statement, and the theory of computationalism.
To try to summarize each of these in a sentence:
I believe that the entire universe essentially "is" a computation, and so minds are necessarily PARTS of computations, but these computations involve their environments. The theory of computationalism tries to understand minds as computations, separate from the environment. The OP suggests that computationalism is likely not a very good way of figuring out minds.
1) do these summaries seem accurate to you? 2) I still can't tell whether my beliefs agree or disagree with either of the other two statements. Is it clearer from an outside perspective?
Your summaries look good to me. As compared to your beliefs, standard Computational Theory of Mind is probably neither true nor false, because it's defined in the context of assumptions you reject. Without those assumptions granted, it fails to state a proposition, I think.
I am constantly surprised and alarmed by how many things end up this way.
Irrationality Game
If we are in a simulation, a game, a "planetarium", or some other form of environment controlled by transhuman powers, then 2012 may be the planned end of the game, or the end of this stage of the game, foreshadowed within the game by the Mayan calendar, and having something to do with the Voyager space probe reaching the limits of the planetarium enclosure, the galactic center lighting up as a gas cloud falls in (30,000 years ago, as seen from Earth now), or the discovery of the Higgs boson.
Since we have to give probabilities, I'll say 10%, but note well, I'm not saying there is a 10% probability that the world ends this year, I'm saying 10% conditional on us being in a transhumanly controlled environment; e.g., that if we are in a simulation, then 2012 has a good chance of being a preprogrammed date with destiny.
Upvoted because 10% as an estimate seems too high.
I especially can't imagine why transhuman powers would have used the end of the calendar of a long-dead civilization (one of many comparable civilizations) to foreshadow the end of their game plan.
Also, even if the transhuman powers are choosing based on current end-of-the-world predictions, there's no reason why they would choose 2012 rather than any of the many past predictions.
It's easy to invent scenarios. But the high probability estimate really derives from two things.
First, the special date from the Mayan calendar is astronomically determined, to a degree that hasn't been recognized by mainstream scholarship about Mayan culture. The precession of the equinoxes takes 26000 years. Every 6000 years or so, you have a period in which a solstice sun or an equinox sun lines up close to the galactic center, as seen from Earth. We are in such a period right now; I think the point of closest approach was in 1998. Then, if you mark time by transits of Venus (Venus was important in Mayan culture, being identified with their version of the Aztecs' Quetzalcoatl), that picks out the years 2004 and 2012. It's the December solstice which is the "galactic solstice" at this time, and 21 December 2012 will be the first December solstice after the last transit of Venus during the current period of alignment.
OK, so one might suppose that a medieval human civilization with highly developed naked-eye astronomy might see all that coming and attach a quasi-astrological significance to it. What's always bugged me is that this period in time, whose like comes around only every 6000 years, is historically so close to the dramatic technological developments of the present day.
Carl Sagan wrote a novel (Contact) in which, when humans speak to the ultra-advanced aliens, they discover that the aliens also struggle with impossible messages from beyond, because there are glyphs and messages encoded in the digits of pi. If you were setting up a universe in such a way that you wanted creatures to go through a singularity, and yet know that the universe they had now mastered was just a second-tier reality, one way to do it would certainly be to have that singularity occur simultaneously with some rare, predetermined astronomical configuration.
Nothing as dramatic as a singularity is happening yet in 2012, but it's not every day that a human probe first reaches interstellar space, the black hole at the center of the galaxy visibly lights up, and we begin to measure the properties of the fundamental field that produces mass, all of this happening within a year of an ancient, astronomically timed prophecy of world-change. It sounds like an unrealistic science-fiction plot. So perhaps one should give consideration to models which treat this as more than a coincidence.
Why pick out those events?
It's easy to see it as a coincidence when you take into account all the events that you might have counted as significant if they'd happened at the right time. How about the discovery of general relativity, the cosmic microwave background, neutrinos, the Sputnik launch, various supernovae, the Tunguska impact, etc etc?
I agree that in themselves, the events I listed don't much suggest that the world ends, the game reboots, or first contact occurs this year. The astronomical and historical propositions - that there's something unlikely going on with calendars and the location of modernity within the precessional cycle - are essential to the argument.
One of the central ingredients is this stuff about a near-conjunction between the December solstice sun and "the galactic center", during recent decades. One needs to specify whether "galactic center" means the central black hole, the galactic ecliptic, the "dark rift" in the Milky Way as seen from Earth, or something else, because these are all different objects and they may imply different answers to the question, "in which year does the solstice sun come closest to this object". I've just learned some more about these details, and should shortly be able to say how they impact the argument.
You're still cherry-picking. There have been loads of conjunctions and other astronomical events that have been taken as omens. You could argue that the conjunction with the galactic center is a "big" one, but there are bigger possible ones that you're ignoring because they don't match (e.g. if the sun was aligned with the CMB rest frame, that would be the one you'd use).
Also, consider all those dramatic technological developments of 6000 years ago, which seem minor now due to the passage of time and further advances in knowledge and technology. As no doubt the discovery of the Higgs boson or Voyager leaving the boundary of the solar system will seem in 8012 AD. If anybody even remembers these events then.