Irrationality Game: Less Wrong is simply my Tyler Durden, a dissociated digital personality concocted by my unconscious mind to be everything I need it to be to cope with Camusian absurdist reality. 95%.
is quite safe if done properly
That's the thing -- it's basically an issue of idiot-proofing. Many things are "safe if done properly" and still are not a good idea because people in general are pretty bad at doing things properly.
Flush toilets are idiot-proof to a remarkable degree. Composting human manure, I have my doubts.
Irrationality game: Every thing which exists has subjective experience (80%). This includes things such as animals, plants, rocks, ideas, mathematics, the universe, and any subcomponent of an aforementioned system.
Irrationality game:
Most posthuman societies will have a violent death rate much higher than humans have ever had. Most posthumans who will ever live will die in wars. 95%
Interesting. So, you have Robin Hanson's belief that we won't get a strong singleton; but you lack his belief that emulated minds will be able to evaluate each other's abilities with enough confidence that trade (taking into account the expected value of fighting) will be superior to fighting? That's quite the idiosyncratic position, especially for 95% confidence.
You (the reader) do not exist.
EDIT: That was too punchy and not precise. The reasoning behind the statement:
Most things which think they are me are horribly confused gasps of consciousness. Rational agents should believe the chances are small that their experiences are remotely genuine.
EDIT 2: After thinking about shminux's comment, I have to retract my original statement about you readers not existing. Even if I'm a hopelessly confused Boltzmann brain, the referent "you" might still well exist. At minimum I have to think about existence more. Sorry!
Irrationality Game: I am something ontologically distinct from my body; I am much simpler and I am not located in the same spacetime. 50%
EDIT: Upon further reflection, my probability assignment would be better represented as the range between 30% and 50%, after factoring in general uncertainty due to confusion. I doubt this will make a difference to the voting though. ;)
Irrationality game - there is a provident, superior entity that is in no way infinite. (I wonder if people here would call that God. As a "superman theist" I had to put "odds of God (as defined in question)" at 5% but identified as strongly theist in the last census.)
Edit: forgot odds. 80%
The universe is finite, and not much bigger than the region we observe. There is no multiverse (in particular Many Worlds Interpretation is incorrect and SIA is incorrect). There have been a few (< million) intelligent civilisations before human beings but none of them managed to expand into space, which explains Fermi's paradox. This also implies a mild form of the "Doomsday" argument (we are fairly unlikely to expand ourselves) but not a strong future filter (if instead millions or billions of civilisations had existed, but none of them expanded, there would be a massive future filter). Probability: 90%.
Irrationality Game: One can reliably and predictably make $1M / year, and it's not that difficult. (Confidence: 75%)
Irrationality game:
There are other 'technological civilizations' (in the sense of intelligent living things that have learned to manipulate matter in a complicated way) in the observable universe: 99%
There are other 'technological civilizations' in our own galaxy: 75% with most of the probability mass in regimes where there are somewhere between dozens and thousands.
Conditional on these existing: Despite some being very old, they are limited by the hostile nature of the universe and the realities of practical manipulation of matter and energy to never co...
Irrationality game:
Nice idea. This way I can safely test whether the baseline of my opinion on LW topics is as contrarian as I think.
My proposition:
On The Simulation Argument I go for "(1) the human species is very likely to go extinct before reaching a “posthuman” stage" (80%)
Correspondingly on The Great Filter I go for failure to reach "9. Colonization explosion" (80%).
This is not because I think that humanity is going to self-annihilate soon (though this is a possibility).
Irrationality game: people are happier when living in traditional social structures, and value being part of their traditions[1]. The public existence of "weird" relationships (homosexuality, polyamory, BDSM, ...) is actively harmful to most people; the open practice of them is a net negative for world utility. Morally good actions include condemnation and censorship of such things.
[1] Or rather what they believe are their traditions; these beliefs may not be particularly well-correlated with reality.
Irrationality Game: Currently, understanding history or politics is a better avenue than studying AI or decision theory for dealing with existential risk. This is not because of the risk of total nuclear annihilation, but because of the possibility of political changes that result in setbacks to or an accelerated use and understanding of AI. 70%
I'm 99% confident that dust specks in 3^^^3 eyes result in less lost utility than 50 years of torturing one person.
Just as a curiosity, this was the most downvoted comment in the original thread:
For a large majority of people who read this, learning a lot about how to interact with other human beings genuinely and in a way that inspires comfort and pleasure on both sides is of higher utility than learning a lot about either AI or IA. ~90%
(-44 points)
Irrationality game:
Most progress in medicine in the next 50 years won't come from advances in molecular biology and the production of drugs designed to target specific biochemical pathways, but from other paradigms.
Probability: 75%
Irrationality game: The straightforward view of the nature of the universe is fundamentally flawed. 90%
By "fundamentally flawed", I mean things like:
Irrationality game: The Great Stagnation is actually occurring, and it is mostly due to fossil fuel depletion rather than (say) leftist politics or dysgenics. (60%)
Irrationality game: most opposition to wireheading comes from seeing it as weird and/or counterintuitive in the same way that most non-LWers see cryonics/immortalism as weird. Claiming to have multiple terminal values is an attempt to justify this aversion. 75%
Irrationality Game: We need a way to give feedback on irrationality game entries that the troll toll won't mess with. (98%)
[pollid:643]
Irrationality Game:
Everyone alive in developed nations today will die a fairly standard biological death by age:
150: 75%
250: 95%
(This latter figure accounts for the possibility that the stories of the odd Chinese monk living to age 200+ after eating only wild herbs from age 10 onward are actually true and not exaggerations, or of someone religiously sticking to an unreasonably effective calorie-restriction regime combined with some interesting metabolic rejiggering in the coming decade or two.)
The majority (90+%) of people born in developed nations today will die a fairly standard biological death by age:
120: 85%
150: 99%
Irrationality Game:
Politics (in particular, the actions of large governments such as the US, China, and Russia) is a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that it was because of government interference, rather than existential risk or some other problem.
No, it can be located absolutely anywhere. However, you're right that the light cones with vertex close to the Big Bang will probably have large weight due to their low K-complexity.
Ah, I see what you're getting at. If the vertex is at the Big Bang, then the shortest programs basically simulate a history of the observable universe. Just start from a description of the laws of physics and some (low entropy) initial conditions, then read in random bits whenever there is an increase in entropy. (For technical reasons the programs will also need to simulate a slightly larger region just outside the light cone, to predict what will cross into it).
If the vertex lies elsewhere, the shortest programs will likely still simulate starting from the Big Bang, then "truncate" i.e. shift the vertex to a new point (s, t) and throw away anything outside the reduced light cone. So I suspect that this approach gives a weighting rather like 2^-K(s,t) for light-cones which are offset from the Big Bang. Probably most of the weight comes from programs which shift in t but not much in s.
The temporal discount here can be fast e.g. exponential.
That's what I thought you meant originally: this would ensure that the utility in any given light cone is bounded, and hence that the expected utility converges.
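A minimal sketch of why a fast (exponential) temporal discount gives this convergence, assuming per-time-step utilities are bounded by $u_{\max}$ and a discount factor $\gamma \in (0,1)$ (both symbols are my own notation, not from the thread):

```latex
% Discounted utility inside one light cone is bounded:
U \;=\; \sum_{\tau \ge 0} \gamma^{\tau}\, u(\tau),
\qquad |U| \;\le\; \sum_{\tau \ge 0} \gamma^{\tau} u_{\max}
\;=\; \frac{u_{\max}}{1-\gamma}.

% Weighting cones with vertex (s,t) by 2^{-K(s,t)}, the Kraft
% inequality \sum_{s,t} 2^{-K(s,t)} \le 1 then bounds the expectation:
\Bigl|\,\mathbb{E}[U]\,\Bigr|
\;=\; \Bigl|\sum_{s,t} 2^{-K(s,t)}\, U_{s,t}\Bigr|
\;\le\; \frac{u_{\max}}{1-\gamma}.
```

So with both the exponential discount and the complexity weighting in place, the sum over light cones converges absolutely.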
...given that a super-strong future filter looks very unlikely, most of the probability will be concentrated on models where there are only a few civilisations to start with.
This looks correct, but it is different from your initial argument. In particular there's no reason to believe MWI is wrong or anything like that.
I disagree. If models like MWI and/or eternal inflation are taken seriously, then they imply the existence of a huge number of civilisations (spread across multiple branches or multiple inflating regions), and a huge number of expanded civilisations (unless the chance of expansion is exactly zero). Observers should then predict that they will be in one of the expanded civilisations. (Or in UDT terms, they should take bets that they are in such a civilisation). Since our observations are not like that, this forces us into simulation conclusions (most people making our observations are in sims, so that's how we should bet). The problem is still that there is a poor fit to observations: yes we could be in a sim, and it could look like this, but on the other hand it could look like more or less anything.
Incidentally, there are versions of inflation and many worlds which don't run into that problem. You can always take a "local" view of inflation (see for instance these papers), and a "modal" interpretation of many worlds (see here). Combined, these views imply that all that actually exists is within one branch of a wave function constructed over one observable universe. These "cut-down" interpretations make either the same physical predictions as the "expansive" interpretations, or better predictions, so I can't see any real reason to believe in the expansive versions.
So I suspect that this approach gives a weighting rather like 2^-K(s,t) for light-cones which are offset from the Big Bang.
In some sense it does, but we must be wary of technicalities. In initial-singularity models I'm not sure it makes sense to speak of a "light cone with vertex in the singularity", and it certainly doesn't make sense to speak of a privileged point in space. In eternal inflation models there is no singularity, so it might make sense to speak of the "Big Bang" point in space-time; however, it is slightly "fuzzy".
...I
The 'Irrationality Game' posts in discussion came before my time here, but I had a very good time reading the bits written in the comments section. I also had a number of thoughts I would've liked to post and get feedback on, but I knew that, buried in such old threads, not much would come of them. So I asked around, and feedback from people has suggested that they would be open to a reboot!
I hereby again quote the original rules:
I would suggest placing *related* propositions in the same comment, but wildly different ones might deserve separate comments for keeping threads separate.
Make sure you put "Irrationality Game" as the first two words of a post containing a proposition to be voted upon in the game's format.
Here we go!
EDIT: It was pointed out in the meta-thread below that this could be done with polls rather than karma, so as to discourage playing-to-win and getting around the hiding of downvoted comments. If anyone resurrects this game in the future, please do so under that system. If you wish to test a poll format in this thread feel free to do so, but continue voting as normal for those that are not in poll format.