Irrationality Game: Less Wrong is simply my Tyler Durden—a dissociated digital personality concocted by my unconscious mind to be everything I need it to be to cope with Camusian absurdist reality. 95%.
is quite safe if done properly
That's the thing -- it's basically an issue of idiot-proofing. Many things are "safe if done properly" and still are not a good idea because people in general are pretty bad at doing things properly.
Flush toilets are idiot-proof to a remarkable degree. Composting human manure, I have my doubts.
Irrationality game: Every thing which exists has subjective experience (80%). This includes things such as animals, plants, rocks, ideas, mathematics, the universe and any sub component of an aforementioned system.
Irrationality game:
Most posthuman societies will have a violent death rate much higher than humans have ever had. Most posthumans who will ever live will die in wars. 95%
Interesting. So, you have Robin Hanson's belief that we won't get a strong singleton; but you lack his belief that emulated minds will be able to evaluate each other's abilities with enough confidence that trade (taking into account the expected value of fighting) will be superior to fighting? That's quite the idiosyncratic position, especially for 95% confidence.
You (the reader) do not exist.
EDIT: That was too punchy and not precise. The reasoning behind the statement:
Most things which think they are me are horribly confused gasps of consciousness. Rational agents should believe the chances are small that their experiences are remotely genuine.
EDIT 2: After thinking about shminux's comment, I have to retract my original statement about you readers not existing. Even if I'm a hopelessly confused Boltzmann brain, the referent "you" might still well exist. At minimum I have to think about existence more. Sorry!
Irrationality Game: I am something ontologically distinct from my body; I am much simpler and I am not located in the same spacetime. 50%
EDIT: Upon further reflection, my probability assignment would be better represented as the range between 30% and 50%, after factoring in general uncertainty due to confusion. I doubt this will make a difference to the voting though. ;)
Irrationality game - there is a provident, superior entity that is in no way infinite (I wonder if people here would call that God. As a "superman theist" I had to put "odds of God (as defined in question)" at 5% but identify as strongly theist in the last census)
Edit: forgot odds. 80%
The universe is finite, and not much bigger than the region we observe. There is no multiverse (in particular Many Worlds Interpretation is incorrect and SIA is incorrect). There have been a few (< million) intelligent civilisations before human beings but none of them managed to expand into space, which explains Fermi's paradox. This also implies a mild form of the "Doomsday" argument (we are fairly unlikely to expand ourselves) but not a strong future filter (if instead millions or billions of civilisations had existed, but none of them expanded, there would be a massive future filter). Probability: 90%.
Irrationality Game: One can reliably and predictably make $1M / year, and it's not that difficult. (Confidence: 75%)
Irrationality game:
There are other 'technological civilizations' (in the sense of intelligent living things that have learned to manipulate matter in a complicated way) in the observable universe: 99%
There are other 'technological civilizations' in our own galaxy: 75% with most of the probability mass in regimes where there are somewhere between dozens and thousands.
Conditional on these existing: Despite some being very old, they are limited by the hostile nature of the universe and the realities of practical manipulation of matter and energy to never co...
Irrationality game:
Nice idea. This way I can safely test whether my baseline opinion on LW topics is as contrarian as I think.
My proposition:
On The Simulation Argument I go for "(1) the human species is very likely to go extinct before reaching a “posthuman” stage" (80%)
Correspondingly on The Great Filter I go for failure to reach "9. Colonization explosion" (80%).
This is not because I think that humanity is going to self-annihilate soon (though this is a possibility).
Irrationality game: people are happier when living in traditional social structures, and value being part of their traditions[1]. The public existence of "weird" relationships (homosexuality, polyamory, BDSM, ...) is actively harmful to most people; the open practice of them is a net negative for world utility. Morally good actions include condemnation and censorship of such things.
[1] Or rather what they believe are their traditions; these beliefs may not be particularly well-correlated with reality.
Irrationality Game: Currently, understanding history or politics is a better avenue than studying AI or decision theory for dealing with existential risk. This is not because of the risk of total nuclear annihilation, but because of the possibility of political changes that result in setbacks to or an accelerated use and understanding of AI. 70%
I'm 99% confident that dust specks in 3^^^3 eyes result in less lost utility than 50 years of torturing one person.
Just as a curiosity, this was the most downvoted comment in the original thread:
For a large majority of people who read this, learning a lot about how to interact with other human beings genuinely and in a way that inspires comfort and pleasure on both sides is of higher utility than learning a lot about either AI or IA. ~90%
(-44 points)
Irrationality game:
Most progress in medicine in the next 50 years won't come from advances in molecular biology and the production of drugs designed to target specific biochemical pathways, but from other paradigms.
Probability: 75%
Irrationality game: The straightforward view of the nature of the universe is fundamentally flawed. 90%
By "fundamentally flawed", I mean things like:
Irrationality game: The Great Stagnation is actually occurring, and it is mostly due to fossil fuel depletion rather than (say) leftist politics or dysgenics. (60%)
Irrationality game: most opposition to wireheading comes from seeing it as weird and/or counterintuitive in the same way that most non-LWers see cryonics/immortalism as weird. Claiming to have multiple terminal values is an attempt to justify this aversion. 75%
Irrationality Game: We need a way to give feedback on irrationality game entries that the troll toll won't mess with. (98%)
[pollid:643]
Irrationality Game:
Everyone alive in developed nations today will die a fairly standard biological death by age:
150: 75%
250: 95%
(This latter figure accounts for the possibility that the stories of the odd Chinese monk living to age 200+ after only eating wild herbs from age 10 on up are actually true and not exaggerations, or of someone sticking religiously to unreasonably effective calorie restriction regimes combined with some interesting metabolic rejiggering in the coming decade or two.)
The majority (90+%) of people born in developed nations today will die a fairly standard biological death by age:
120: 85%
150: 99%
Irrationality Game:
Politics (in particular, large governments such as the US, China, and Russia) are a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that it was because of government interference, rather than existential risk or some other problem.
As a result, the effective discount falls off as 2^-K(t), where K(t) is the Kolmogorov complexity of t, which is only slightly faster than 1/t.
It is about 1/t x 1/log t x 1/log log t etc. for most values of t (taking base 2 logarithms). There are exceptions for very regular values of t.
Incidentally, I've been thinking about a similar weighting approach towards anthropic reasoning, and it seems to avoid a strong form of the Doomsday Argument (one where we bet heavily against our civilisation expanding). Imagine listing all the observers (or observer moments) in order of appearance since the Big Bang (use cosmological proper time). Then assign a prior probability 2^-K(n) to being the nth observer (or moment) in that sequence.
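For concreteness, here is a minimal sketch of that typical-case approximation to 2^-K(n), i.e. 1/(n x log n x log log n x ...) with base-2 logarithms. The weights are unnormalised, and very regular values of n would really get extra weight that this ignores:

```python
import math

def approx_weight(n):
    """Typical-case approximation to 2^-K(n): multiply 1/n by 1/log2(n),
    1/log2(log2(n)), ... for as long as the iterated log stays above 1.
    Unnormalised; regular n (powers of two, etc.) would get more weight."""
    w = 1.0 / n
    x = float(n)
    while True:
        x = math.log2(x)
        if x <= 1:
            break
        w /= x
    return w

# The weight falls off only slightly faster than 1/n:
for n in (10, 10**3, 10**6, 60 * 10**9):
    print(n, approx_weight(n), 1.0 / n)
```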
Now let's test this distribution against my listed hypotheses above:
1. No other civilisations exist or have existed in the universe apart from us.
Fit to observations: Not too bad. After including the various log terms in 2^-K(n), the probability of my having an observer rank n between 60 billion and 120 billion (we don't know it more precisely than that) seems to be about 1/log(60 billion) x 1/log(36), or roughly 1/200, since log of 60 billion is about 36 (a rough numeric check follows after this hypothesis).
Still, the hypothesis seems a bit dodgy. How could there be exactly one civilisation over such a large amount of space and time? Perhaps the evolution of intelligence is just extraordinarily unlikely, a rare fluke that only happened once. But then the fact that the "fluke" actually happened at all makes this hypothesis a poor fit. A better hypothesis is that the chance of intelligence evolving is high enough to ensure that it will appear many times in the universe: Earth-now is just the first time it has happened. If observer moments were weighted uniformly, we would rule that out (we'd be very unlikely to be first), but with the 2^-K(n) weighting, there is rather high probability of being a smaller n, and so being in the first civilisation. So this hypothesis does actually work. One drawback is that living 13.8 billion years after the Big Bang, and with only 5% of stars still to form, we may simply be too late to be the first among many. If there were going to be many civilisations, we'd expect a lot of them to have already arrived.
Predictions for Future of Humanity: No doomsday prediction at all; the probability of my n falling in the range 60-120 billion is the same sum over 2^-K(n) regardless of how many people arrive after me. This looks promising.
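A rough sanity check of that ~1/200 figure, treating the 60-120 billion window as one doubling of n and using the same log-product approximation (my own back-of-envelope numbers):

```python
import math

N = 60e9  # lower end of the rank window; the window runs up to 2N = 120 billion
# Summing the leading 1/n factor over one doubling of n gives ln 2 ~ 0.69, and the
# remaining log factors are roughly constant across the window, so the window's
# total (unnormalised) weight is about ln(2) / (log2(N) * log2(log2(N))).
p = math.log(2) / (math.log2(N) * math.log2(math.log2(N)))
print(p, "i.e. roughly 1 in", round(1 / p))  # about 0.004, the same order as 1/200
```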
2. A few have existed apart from us, but none have expanded (yet)
Fit to observations: Pretty good, e.g. if the average number of observers per civilisation is less than 1 trillion. In this case, I can't know what my n is (since I don't know exactly how many civilisations existed before human beings, or how many observers they each had). What I can infer is that my relative rank within my own civilisation will look like it fell at random between 1 and the average population of a civilisation. If that average population is less than 1 trillion, there will be a probability of > 1 in 20 of seeing a relative rank like my current one (see the quick check after this hypothesis).
Predictions for Future of Humanity: There must be a fairly low probability of expanding, since other civilisations before us didn't expand. If there were 100 of them, our own estimated probability of expanding would be less than 0.01, and so on. But notice that we can't infer anything in particular about whether our own civilisation will expand: if it does expand (against the odds), then there will be a very large number of observer moments after us, but these will fall further down the tail of the Kolmogorov distribution. The probability of my rank n being where it is (at a number before the expansion) doesn't change. So I shouldn't bet against expansion at odds much different from 100:1.
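A quick check of that "> 1 in 20" figure, assuming (for illustration) that my relative rank is roughly uniform on [1, average civilisation population]:

```python
avg_population = 1e12                # the 1 trillion threshold used above
rank_low, rank_high = 60e9, 120e9    # a relative rank "like my current one"
p = (rank_high - rank_low) / avg_population
print(p)  # 0.06, i.e. a bit better than 1 in 20
```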
3. A few have existed, and a few have expanded, but we can't see them (yet)
Fit to observations: Poor. Since some civilisations have already expanded, my own n must be very high (e.g. up in the trillions of trillions). But then most values of n which are that high and near to my own rank will correspond to observers inside one of the expanded civilisations. Since I don't know my own n, I can't expect it to just happen to fall inside one of the small civilisations. My observations look very unlikely under this model.
Predictions for Future of Humanity: Similar to 2
4. Lots have existed, but none have expanded (very strong future filter)
Fit to observations: Mixed. It can be made to fit if the average number of observers per civilisation is less than 1 trillion; this is for reasons similar to 2. While that gives a reasonable degree of fit, the prior likelihood of such a strong filter seems low.
Predictions for Future of Humanity: Very pessimistic, because of the strong universal filter.
5. Lots have existed, and a few have expanded (still a strong future filter), but we can't see the expanded ones (yet)
Fit to observations: Poor. Things could still fit if the average population of a civilisation is less than a trillion. But that requires that the small, unexpanded, civilisations massively outnumber the big, expanded ones: so much so that most of the population is in the small ones. This requires an extremely strong future filter. Again, the prior likelihood of this strength of filter seems very low.
Predictions for Future of Humanity: Extremely pessimistic, because of the strong universal filter.
6. Lots have existed, and lots have expanded, so the universe is full of expanded civilisations; we don't see that, but that's because we are in a zoo or simulation of some sort.
Fit to observations: Poor: even worse than in case 5. Most values of n close to my own (enormous) value of n will be in one of the expanded civilisations. The most likely case seems to be that I'm in a simulation; but still there is no reason at all to suppose the simulation would look like this.
Predictions for Future of Humanity: Uncertain. A significant risk is that someone switches our simulation off, before we get a chance to expand and consume unavailable amounts of simulation resources (e.g. by running our own simulations in turn). This switch-off risk is rather hard to estimate. Most simulations will eventually get switched off, but the Kolmogorov weighting may put us into one of the earlier simulations, one which is running when lots of resources are still available, and doesn't get turned off for a long time.
The 'Irrationality Game' posts in discussion came before my time here, but I had a very good time reading the bits written in the comments section. I also had a number of thoughts I would have liked to post and get feedback on, but I knew that, being buried in such old threads, they wouldn't come to much. So I asked around, and feedback from people has suggested that they would be open to a reboot!
I hereby again quote the original rules:
I would suggest placing *related* propositions in the same comment, but wildly different ones might deserve separate comments for keeping threads separate.
Make sure you put "Irrationality Game" as the first two words of a post containing a proposition to be voted upon in the game's format.
Here we go!
EDIT: It was pointed out in the meta-thread below that this could be done with polls rather than karma, so as to discourage playing-to-win and to get around the hiding of downvoted comments. If anyone resurrects this game in the future, please do so under that system. If you wish to test a poll format in this thread, feel free to do so, but continue voting as normal for those that are not in poll format.