Irrationality Game: Less Wrong is simply my Tyler Durden—a dissociated digital personality concocted by my unconscious mind to be everything I need it to be to cope with Camusian absurdist reality. 95%.
is quite safe if done properly
That's the thing -- it's basically an issue of idiot-proofing. Many things are "safe if done properly" and still are not a good idea because people in general are pretty bad at doing things properly.
Flush toilets are idiot-proof to a remarkable degree. Composting human manure, I have my doubts.
Irrationality game: Every thing which exists has subjective experience (80%). This includes things such as animals, plants, rocks, ideas, mathematics, the universe, and any subcomponent of an aforementioned system.
Irrationality game:
Most posthuman societies will have a violent death rate much higher than humans ever had. Most posthumans who will ever live will die in wars. 95%
Interesting. So, you have Robin Hanson's belief that we won't get a strong singleton; but you lack his belief that emulated minds will be able to evaluate each other's abilities with enough confidence that trade (taking into account the expected value of fighting) will be superior to fighting? That's quite the idiosyncratic position, especially for 95% confidence.
You (the reader) do not exist.
EDIT: That was too punchy and not precise. The reasoning behind the statement:
Most things which think they are me are horribly confused gasps of consciousness. Rational agents should believe the chances are small that their experiences are remotely genuine.
EDIT 2: After thinking about shminux's comment, I have to retract my original statement about you readers not existing. Even if I'm a hopelessly confused Boltzmann brain, the referent "you" might still well exist. At minimum I have to think about existence more. Sorry!
Irrationality Game: I am something ontologically distinct from my body; I am much simpler and I am not located in the same spacetime. 50%
EDIT: Upon further reflection, my probability assignment would be better represented as the range between 30% and 50%, after factoring in general uncertainty due to confusion. I doubt this will make a difference to the voting though. ;)
Irrationality game - there is a provident, superior entity that is in no way infinite (I wonder if people here would call that God. As a "superman theist" I had to put "odds of God (as defined in question)" at 5% but identify as strongly theist in the last census)
Edit: forgot odds. 80%
The universe is finite, and not much bigger than the region we observe. There is no multiverse (in particular Many Worlds Interpretation is incorrect and SIA is incorrect). There have been a few (< million) intelligent civilisations before human beings but none of them managed to expand into space, which explains Fermi's paradox. This also implies a mild form of the "Doomsday" argument (we are fairly unlikely to expand ourselves) but not a strong future filter (if instead millions or billions of civilisations had existed, but none of them expanded, there would be a massive future filter). Probability: 90%.
Irrationality Game: One can reliably and predictably make $1M/year, and it's not that difficult. (Confidence: 75%)
Irrationality game:
There are other 'technological civilizations' (in the sense of intelligent living things that have learned to manipulate matter in a complicated way) in the observable universe: 99%
There are other 'technological civilizations' in our own galaxy: 75% with most of the probability mass in regimes where there are somewhere between dozens and thousands.
Conditional on these existing: Despite some being very old, they are limited by the hostile nature of the universe and the realities of practical manipulation of matter and energy to never co...
Irrationality game:
Nice idea. This way I can safely test whether the Baseline of my opinion on LW topics is as contrarian as I think.
My proposition:
On The Simulation Argument I go for "(1) the human species is very likely to go extinct before reaching a “posthuman” stage" (80%)
Correspondingly on The Great Filter I go for failure to reach "9. Colonization explosion" (80%).
This is not because I think that humanity is going to self-annihilate soon (though this is a possibility).
Irrationality game: people are happier when living in traditional social structures, and value being part of their traditions[1]. The public existence of "weird" relationships (homosexuality, polyamory, BDSM, ...) is actively harmful to most people; the open practice of them is a net negative for world utility. Morally good actions include condemnation and censorship of such things.
[1] Or rather what they believe are their traditions; these beliefs may not be particularly well-correlated with reality.
Irrationality Game: Currently, understanding history or politics is a better avenue than studying AI or decision theory for dealing with existential risk. This is not because of the risk of total nuclear annihilation, but because of the possibility of political changes that result in setbacks to or an accelerated use and understanding of AI. 70%
I'm 99% confident that dust specks in 3^^^3 eyes result in less lost utility than 50 years of torturing one person.
Just as a curiosity, this was the most downvoted comment in the original thread:
For a large majority of people who read this, learning a lot about how to interact with other human beings genuinely and in a way that inspires comfort and pleasure on both sides is of higher utility than learning a lot about either AI or IA. ~90%
(-44 points)
Irrationality game:
Most progress in medicine in the next 50 years won't come from advances in molecular biology and drugs designed to target specific biochemical pathways, but from other paradigms.
Probability: 75%
Irrationality game: The straightforward view of the nature of the universe is fundamentally flawed. 90%
By "fundamentally flawed", I mean things like:
Irrationality game: The Great Stagnation is actually occurring, and it is mostly due to fossil fuel depletion rather than (say) leftist politics or dysgenics. (60%)
Irrationality game: most opposition to wireheading comes from seeing it as weird and/or counterintuitive in the same way that most non-LWers see cryonics/immortalism as weird. Claiming to have multiple terminal values is an attempt to justify this aversion. 75%
Irrationality Game: We need a way to give feedback on irrationality game entries that the troll toll won't mess with. (98%)
[pollid:643]
Irrationality Game:
Everyone alive in developed nations today will die a fairly standard biological death by age:
150: 75%
250: 95%
(This latter figure accounts for the possibility that the stories of the odd Chinese monk living to age 200+ after eating only wild herbs from age 10 on up are actually true and not exaggerations, or of someone religiously sticking to an unreasonably effective calorie-restriction regime combined with some interesting metabolic rejiggering in the coming decade or two.)
The majority (90+%) of people born in developed nations today will die a fairly standard biological death by age:
120: 85%
150: 99%
Irrationality Game:
Politics (in particular, large governments such as those of the US, China, and Russia) is a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that it was because of government interference, rather than existential risk or some other problem.
I got a bit distracted by the "anthropic reasoning is wrong" discussion below, and missed adding something important. The conclusion that "we would not expect to see the world as we in fact see it" holds in a big universe regardless of the approach taken to anthropic reasoning. It's worth spelling that out in some detail.
Suppose I don't want to engage in any form of anthropic reasoning or observation sampling hypothesis. Then the large universe model leaves me unable to predict anything much at all about my observations. I might perhaps be in a small civilisation, but then I might be in a simulation, or a Boltzmann Brain, or mad, or a galactic emperor, or a worm, or a rock, or a hydrogen molecule. I have no basis for assigning significant probability to any of these - my predictions are all over the place. So I certainly can't expect to observe that I'm an intelligent observer in a small civilisation confined to its home planet.
Suppose I adopt a "Copernican" hypothesis - I'm just at a random point in space. Well now, the usual big and small universe hypotheses predict that I'm most likely going to be somewhere in intergalactic or interstellar space, so that's not a great predictive success. The universe model which most predicts my observations looks frankly weird... instead of a lot of empty space, it is a dense mass of "computronium" running lots of simulations of different observers, and I'm one of them. Even then I can't expect to be in a simulation of a small civilisation, since the sim could be of just about anything. Again, not a great predictive success.
Suppose I adopt SIA reasoning. Then I should just ignore the finite universes, since they contribute zero prior probability. Or if I've decided for some reason to keep all my universe hypotheses finite, then I should ignore all but the largest ones (ones with 3^^^3 or more galaxies). Among the infinite-or-enormous universes, they nearly all have expanded civilisations, and so under SIA, nearly all predict that I'm going to be in a big civilisation. The only ones which predict otherwise include a "universal doom" - the probability that a small civilisation ever expands off its home world is zero, or negligibly bigger than zero. That's a massive future filter. So SIA and big universes can - just about - predict my observations, but only if there is this super-strong filter. Again, that has low prior probability, and is not what I should expect to see.
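The SIA weighting described here can be sketched numerically. The hypotheses, priors, and observer counts below are purely illustrative assumptions (not figures from the comment); the point is only that SIA multiplies each prior by its observer count, so hypotheses with vastly more observers dominate the posterior:

```python
# Toy SIA calculation over finite universe hypotheses.
# SIA reweights each hypothesis's prior in proportion to the total
# number of observers it contains, then renormalises.

hypotheses = {
    # name: (prior, total observers) -- all numbers are illustrative
    "small universe, few civilisations": (0.5, 1e10),
    "big universe, expanded civilisations": (0.4, 1e40),
    "big universe, universal doom filter": (0.1, 1e12),
}

def sia_posterior(hyps):
    weights = {name: prior * n for name, (prior, n) in hyps.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

post = sia_posterior(hypotheses)
for name, p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.3g}")
```

Even with a modest prior, the hypothesis with ~1e40 observers swamps the others, which is why SIA effectively ignores all but the largest universes unless a strong future filter caps the observer count.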
Suppose I adopt SSA reasoning. I need to specify the reference class, and it is a bit hard to know which one to use. In a big universe, different reference classes will lead to very different predictions: picking out small civilisations, large civilisations, AIs, SIMs, emperors and so on (plus worms, rocks and hydrogen for the wackier reference classes). As I don't know which to use, my predictions get smeared out across the classes, and are consequently vague. Again, I can't expect to be in a small civilisation on its home planet.
By contrast, look at the small universe models with only a few civilisations. A fair chunk of these models have modest future filters so none of the civilisations expand. For those models, SSA looks in quite good shape, as there is quite a wide choice of reference classes that all lead to the same prediction. Provided the reference class predicts I am an intelligent observer at all then it must predict I am in a small civilisation confined to its home planet (because all civilisations are like that). Of course there are the weird classes which predict I'm a worm and so on - nothing we can do about those - but among the sensible classes we get a hit.
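The SSA contrast between the two universe models can be sketched the same way. Within a fixed reference class of intelligent observers, SSA asks what fraction of the class lives in a small, home-planet-bound civilisation; the observer counts below are illustrative assumptions:

```python
# Toy SSA comparison: for each universe model, what fraction of the
# reference class "all intelligent observers" lives in a small,
# home-planet-bound civilisation? Counts are illustrative.

models = {
    "small universe, no expansion": {
        "small-civ observers": 1e10,
        "big-civ observers": 0,
    },
    "big universe, some expansion": {
        "small-civ observers": 1e10,
        "big-civ observers": 1e40,
    },
}

def ssa_prob_small_civ(counts):
    total = sum(counts.values())
    return counts["small-civ observers"] / total if total else 0.0

for name, counts in models.items():
    print(name, ssa_prob_small_civ(counts))
```

In the small-universe model the prediction is 1.0 for any reference class that selects intelligent observers at all, whereas in the big-universe model it is vanishingly small, which is the asymmetry the comment is pointing at.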
So this is where I'm coming from. The only model which leads me to expect to see what I actually see is a small universe model, with a modest future filter. Within that model, I will need to adopt some sort of SSA-reasoning to get a prediction, but I don't have to know in advance which reference class to use: any reference class which selects an intelligent observer predicts roughly what I see. None of the other models or styles of reasoning lead to that prediction.
The 'Irrationality Game' posts in discussion came before my time here, but I had a very good time reading the bits written in the comments section. I also had a number of thoughts I would've liked to post and get feedback on, but I knew that, buried in such old threads, not much would come of them. So I asked around, and feedback has suggested that people would be open to a reboot!
I hereby again quote the original rules:
I would suggest placing *related* propositions in the same comment, but wildly different ones might deserve separate comments for keeping threads separate.
Make sure you put "Irrationality Game" as the first two words of a post containing a proposition to be voted upon in the game's format.
Here we go!
EDIT: It was pointed out in the meta-thread below that this could be done with polls rather than karma, so as to discourage playing-to-win and to get around the hiding of downvoted comments. If anyone resurrects this game in the future, please do so under that system. If you wish to test a poll format in this thread, feel free to do so, but continue voting as normal for entries that are not in poll format.