One useful definition of Bayesian vs. Frequentist that I've found is the following. Suppose you run an experiment: you have a hypothesis and you gather some data. The frequentist asks, "Assuming my hypothesis is true, how probable is data like this?" The Bayesian asks, "Given this data, how probable is my hypothesis?"
I'm not sure whether this view holds up to criticism, but if it does, I sure find the latter much more interesting than the former.
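To make the contrast concrete, here's a toy example with made-up numbers (a fair-vs-biased coin, 8 heads in 10 flips):

```python
from math import comb

# Data: 8 heads in 10 flips. Hypothesis H: the coin is fair (p = 0.5).
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Frequentist question: how probable is this data, given H?
likelihood_fair = binom_pmf(8, 10, 0.5)            # ~0.044

# Bayesian question: how probable is H, given this data?
# This needs a prior and an alternative; assume a 50/50 prior
# between "fair" and "biased towards heads with p = 0.8".
likelihood_biased = binom_pmf(8, 10, 0.8)          # ~0.302
posterior_fair = likelihood_fair / (likelihood_fair + likelihood_biased)
print(likelihood_fair, posterior_fair)             # ~0.044, ~0.127
```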
This has been the most fun, satisfying survey I've ever been part of :) Thanks for posting this. Can't wait to see the results!
One question I'd find interesting is closely related to the probability of life in the universe. Namely: if we were to meet a randomly sampled spacefaring lifeform, what are the chances that its intelligence would be similar enough to ours, both in its "ways" and in general level of smarts, for us to communicate meaningfully?
Given that I enjoyed taking part in this, may I suggest that more frequent and in-depth surveys on specialized topics might be worth doing?
Maybe we've finally reached the point where there's no work left to be done.
If so, this is superb! This is the end goal. A world in which there is no work left to be done, so we can all enjoy our lives, free from the requirement to work.
The thought that work is desirable has been hammered into our heads so hard that it sounds like a really, really dubious proposition, but actually a world where nobody has to work is the ultimate goal. Not one in which everyone works. That world sucks. That's the world in which 85% of us live today.
I first read this about two years ago, and it has been an invaluable tool. I'm sure it has saved countless hours of pointless arguments around the world.
When I realise that an argument is suffering from an inconsistency in how we each interpret a specific word, applying this tool instantly transforms arguments that actually are about the meaning of the word, making them a lot more productive (it turns out it can be unobvious that the actual disagreement is about what a specific word means). In other cases it simply helps us get back on track instead of getting distracted by the meaning of a word that is actually beside the point.
It does occasionally take a while to convince the other party that I'm not trying to fool or trick them when I ask that we apply this method. Another observation: the article on Empty Labels has transformed my attitude towards the meaning of words, so when it turns out we disagree about meanings, I instantly lose interest, which can confuse the other party.
Addressed by making a few edits to the "Solution" section. Thank you!
All fair points. I did want to post this to Main, but decided against it in the end; I didn't know I could move it there afterwards. Will work on the title after I've fixed the error pointed out by VincentYu.
I've reviewed the language of the original statement and it seems that the puzzle is set in essentially the real world with two major givens, i.e. facts in which you have 100% confidence.
Given #1: Omega was correct on the last 100 occurrences.
Given #2: Box B is already empty or already full.
There is no leeway left for quantum effects, or for your choice affecting in any way what's in box B. You cannot make box B full by consciously choosing to one-box. The puzzle says so, after all.
If you read it like this, then I don't see why you would possibly one-box: Given #2 already implies the solution. The 100 successful predictions must have been achieved either through a very low probability event or through a trick, e.g. by offering the bet only to those people whose answers you can already predict, say by reading their LessWrong posts.
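To put a rough number on "very low probability", assuming each prediction is an independent 50/50 guess:

```python
# Chance that 100 independent two-choice predictions all come out
# correct by pure luck, at 50/50 odds each.
p_fluke = 0.5 ** 100
print(p_fluke)  # ~7.9e-31
```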
If you don't read it like this, then we're back to the "gooey vagueness" problem, and I will once again insist that the puzzle needs to be fully defined before it can be attempted. For example, by removing both givens and instead specifying exactly what you know about those past 100 occurrences. Were they definitely not done on plants? Was there sampling bias? Am I considering this puzzle as an outside observer, or am I imagining myself as part of that universe? In the latter case I have to put some doubt into everything, as I could be hallucinating. These things matter.
With such clarifications, the puzzle becomes a matter of your confidence in the past statistics vs. your confidence about the laws of physics precluding your choice from actually influencing what's in box B.
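To make that trade-off concrete, here's a sketch of the expected-value arithmetic, using the standard Newcomb payoffs ($1,000 in box A, $1,000,000 in box B if one-boxing was predicted), where p is your confidence that Omega's prediction matches your actual choice:

```python
# Expected payoffs as a function of p, your confidence that Omega's
# prediction matches your actual choice (standard Newcomb payoffs assumed).
def ev_one_box(p):
    return p * 1_000_000                 # box B is full iff one-boxing was predicted

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000   # box A always pays; B is full iff Omega erred

for p in (0.5, 0.999):
    print(p, ev_one_box(p), ev_two_box(p))
# p = 0.5   -> 500,000 vs 501,000: two-boxing wins (no real predictive power)
# p = 0.999 -> 999,000 vs   2,000: one-boxing wins (the statistics dominate)
```

If physics really does preclude your choice from influencing box B, p collapses to whatever Omega could achieve by guessing, and two-boxing wins; if you trust the statistics, one-boxing wins.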
I'm not sure I understand correctly, but let me phrase the question differently: what sort of confidence do we have in "99.9%" being an accurate value for Omega's success rate?
From your previous comment I gather the confidence is absolute. This removes one complication while leaving the core of the paradox intact. I'm just pointing out that this isn't very clear in the original specification of the paradox, and that clearing it up is useful.
To explain why it's important, let me indeed think of an AI, as hairyfigment suggested. Suppose someone says they have let 100 previous AIs flip a fair coin 100 times each, and it came out heads every single time, because they have magic powers that make it so. This someone presents me with video evidence of this feat.
If faced with this in the real world, an AI coded by me would still bet close to 50% on tails if offered to flip its own fair coin against this person, because I have strong evidence that this someone is a cheat, and their video evidence is fake. Just something I know from a huge amount of background information that was not explicitly part of this scenario.
However, when discussing such scenarios, it is sometimes useful to assume a hypothetical world unlike the real one. For example, we could state that this someone has actually performed the feat, and that there is absolutely no doubt about that. That's impossible in our real world, but it's useful for the sake of discussing Bayesianism. Surely any Bayesian's AI would expect heads with high probability in this hypothetical universe.
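Here's a sketch of that update, with a made-up prior (the 10^-20 is purely illustrative): even if "magic powers" start out absurdly improbable, 100 AIs x 100 flips = 10,000 guaranteed-genuine heads overwhelms the prior. In the real world, by contrast, P(evidence | faked video) is also near 1, so the likelihood ratio is roughly 1 and the prior wins.

```python
import math

# Log10 odds of "magic powers" vs "fair coins", after observing
# 10,000 heads that are taken as genuine (the hypothetical world).
log10_prior_odds = -20                            # made-up prior: magic at 10^-20 odds
log10_likelihood_ratio = 10_000 * math.log10(2)   # P(data | magic) / P(data | fair coins)
log10_posterior_odds = log10_prior_odds + log10_likelihood_ratio
print(log10_posterior_odds)  # ~2990: the AI should now expect heads
```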
So, are we looking at "Omega in the real world where someone I don't even know tells me they are really damn good at predicting the future", or "Omega in some hypothetical world where they are actually known with absolute certainty to be really good at predicting the future"?
While I disagree that one-boxing still wins, I'm most interested in seeing both the "no future peeking" rule and Omega's actual success rate defined as givens. It's important that I can rely on the 99.9% value, rather than wondering whether it was perhaps inferred from their past 100 correct predictions (which could, with a non-negligible probability, have been a fluke).
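For instance, assuming a uniform prior over Omega's true accuracy (my arithmetic, not part of the original puzzle), 100 correct predictions in a row is weaker evidence for "99.9%" than it sounds:

```python
# Posterior over Omega's accuracy after 100/100 correct predictions,
# starting from a uniform prior: Beta(101, 1), whose CDF is x**101.
n = 100
p_next_correct = (n + 1) / (n + 2)              # Laplace's rule: ~0.990, not 0.999
p_accuracy_at_least_999 = 1 - 0.999 ** (n + 1)  # P(true accuracy >= 99.9%): ~0.096
print(p_next_correct, p_accuracy_at_least_999)
```

Even taking the 100-for-100 record entirely at face value, there's only about a 10% chance that the true accuracy is 99.9% or better.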
Indeed, terse "explanations" that handwave more than explain are a pet peeve of mine. They can be outright confusing and cause more harm than good IMO. See this question on phrasing explanations in physics for some examples.