See also The Valley of Bad Rationality.
How so? Don't people find it comforting to believe that there are universes where they survive against impossible odds?
Mere survival doesn't sound all that great. Surviving in a way that is comforting is a very small target in the general space of survival.
By saying "clubs", I communicate the message that my friend would be better off betting $1 on a random club than $2 on the seven of diamonds (or betting $1 on a random heart or spade), which is true, so I don't really consider that lying.
If, less conveniently, my friend takes what I say to literally mean the suit of the top card, but I can still get them not to bet $2 on the wrong card, then I bite the bullet and lie.
and the number of possible models for T rounds is exponential in T
??? Here n is the number of other people betting. It's a constant.
If you wanted to, you could create "super-people" that mix and match the bets of other people depending on the round. Then the number of super-people grows exponentially in T, and without further assumptions you can't hope to be competitive with such "super-people". If that's what you're saying, then I agree with that.
And I agree with the broader point that in general you need to make structural assumptions to make progress. The thing that's awesome about the regret bound is that it does well even in the presence of correlated, non-i.i.d., maybe even adversarial data, and even if the "true hypothesis" isn't in the family of models we consider.
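To make that last claim concrete, here is a minimal sketch (the loss sequence, parameter tuning, and expert setup are my own illustration, not from the discussion above) of Hedge / multiplicative weights staying within its average regret guarantee on correlated, non-i.i.d., adversarially alternating losses:

```python
import math
import random

def hedge(losses, eta):
    """Hedge / multiplicative weights over n experts with losses in [0, 1].

    losses[t][i] is expert i's loss in round t. Returns the learner's
    expected total loss and the best single expert's total loss."""
    n = len(losses[0])
    w = [1.0] * n
    learner = 0.0
    for round_losses in losses:
        total = sum(w)
        probs = [wi / total for wi in w]
        learner += sum(p * l for p, l in zip(probs, round_losses))
        # Exponentially down-weight each expert in proportion to its loss.
        w = [wi * math.exp(-eta * l) for wi, l in zip(w, round_losses)]
    best = min(sum(expert_losses) for expert_losses in zip(*losses))
    return learner, best

T, n = 10000, 4
eta = math.sqrt(8 * math.log(n) / T)  # standard tuning for the sqrt bound
random.seed(0)
# Correlated, non-i.i.d. data: experts 0 and 1 alternate being wrong each
# round, experts 2 and 3 take arbitrary random losses.
losses = [[float((t + i) % 2) if i < 2 else random.random() for i in range(n)]
          for t in range(T)]
learner, best = hedge(losses, eta)
avg_regret = (learner - best) / T
bound = math.sqrt(math.log(n) / (2 * T))  # holds for ANY loss sequence
print(f"average regret {avg_regret:.5f} <= bound {bound:.5f}")
```

The guarantee holds even though no expert here is a "true hypothesis" for the data; the learner is merely promised to do nearly as well as the best of the four.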
and the number of possible models for T rounds is exponential in T
??? Here n is the number of other people betting. It's a constant.
Within a single application of online learning, n is a constant, but that doesn't mean we can't look at the consequences of it taking particular values, even values that vary with other parameters. But you seem to be agreeing with the main points: if you use all possible models (or "super-people"), the regret bound is meaningless; and in order to shrink the collection of models enough that the bound is meaningful, while still keeping a good model worth performing almost as well as, you need structural assumptions.
even if the "true hypothesis" isn't in the family of models we consider
I agree you don't need a model that is right every round, but you do need a model that is right in a lot of rounds. You don't need a perfect model, but you do need a model that is as correct as you want your end results to be.
maybe even adversarial data
I think truly adversarial data gives a result that is within the regret bounds, as guaranteed, but still uselessly inaccurate, because the data is adversarial against the whole collection of models (unless the collection is so large that you aren't really bounding regret).
The Finnish military uses personality tests on everyone to look for the leader types amongst their conscripts. Everyone with half a brain could game them either to shorten their stay or to get picked as a leader candidate. It's amazing how these kinds of useless testing rituals stick.
Everyone with half a brain could game them either to shorten their stay or to get picked as a leader candidate.
Maybe that's the test.
Regarding myth 5 and online learning, I don't think the average regret bound is as awesome as you claim. The bound is sqrt((log n) / T). But if there are really no structural assumptions, then you should be considering all possible models, and the number of possible models for T rounds is exponential in T, so the bound ends up being 1, which is the worst possible average regret using any strategy. With no assumptions of structure, there is no meaningful guarantee on the real accuracy of the method.
The thing that is awesome about the bounds guarantee is that if you assume some structure, and choose a subset of possible models based on that structure, you know you get increased accuracy if your structural assumptions hold.
So this method doesn't really avoid relying on structural assumptions; it just punts the question of which structural assumption to make to the choice of models to run the method over. This is much the same as Bayesian methods putting the structural assumptions in the prior, and choosing a collection of models is an approximation of choosing a prior, though a less powerful one: instead of assigning models probabilities in a continuous range, it just either includes a model or doesn't.
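To put rough numbers on the point about the bound degenerating (a sketch of my own; I take log base e here, so the "all possible models" bound comes out near 0.83 rather than exactly 1, but the qualitative point — it stops shrinking with T — is the same):

```python
import math

def avg_regret_bound(n, T):
    """sqrt((log n) / T): the average-regret guarantee for n models, T rounds."""
    return math.sqrt(math.log(n) / T)

T = 10000
structured = avg_regret_bound(16, T)      # small, hand-picked model class
all_models = avg_regret_bound(2 ** T, T)  # one model per binary outcome sequence

print(f"n = 16:  bound {structured:.4f}")   # shrinks toward 0 as T grows
print(f"n = 2^T: bound {all_models:.4f}")   # sqrt(ln 2) ~ 0.83 for every T
```

With a fixed model class the guarantee vanishes as T grows, but with one model per possible outcome sequence the bound is a constant near the worst possible value, no matter how large T gets.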
That's a reasonable thing to do, but can you obtain something like De Finetti's justification of probability via Dutch books that way?
The obvious steelman of dialogue participant A would keep the coin hidden but ready to inspect, so that A can offer bets having credible ignorance of the outcomes and B isn't justified in updating on A offering the bet.
This seems to be overloading the term "side effects". The functional programming concept of side effects (which it says its functions shouldn't have) is a function changing the global state of the program that invokes it other than by returning a value. It makes no claims about these other concepts: a program being affected by analysis of the function's source code independent of invoking it, or the function running on morally relevant causal structure.
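For concreteness, here is a minimal sketch (in Python rather than a functional language; the function names are mine) of the distinction the functional-programming sense of "side effect" actually draws:

```python
call_log = []

def pure_add(a, b):
    """No side effects: the result depends only on the inputs, and
    nothing outside the function's own scope is touched."""
    return a + b

def logging_add(a, b):
    """Has a side effect: it mutates global state (call_log) in
    addition to returning its value."""
    call_log.append((a, b))
    return a + b

assert pure_add(2, 3) == 5 and call_log == []   # no trace left behind
logging_add(2, 3)
assert call_log == [(2, 3)]  # invoking it changed the program's state
```

Whether someone analyzes pure_add's source code without running it, or what physical substrate it runs on, changes nothing about whether it has side effects in this sense.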
Cryonics donation fund for Kim Suozzi established by Society for Venturism
Following the news that Kim Suozzi has terminal brain cancer and wants to be cryopreserved, many of us have donated to help her out, while others, including me, planned to donate when CI set up a fund to receive donations on her behalf. Now the Society for Venturism has set up a fund, and it is time for us to follow through on those plans. (Unless you are really insisting that the fund be managed by CI specifically.)
(ETA: Kim has posted on this herself.)
I use whole life insurance. If you use term insurance, you should have a solid plan for an alternate funding source to replace your insurance at the end of the term.
I believe the Efficient Market Hypothesis is correct enough that reliably getting good results from buying term insurance and investing the premium difference would be a lot of work, if it is possible at all.