See also The Valley of Bad Rationality.
How so? Don't people find it comforting to believe that there are universes where they survive against impossible odds?
Mere survival doesn't sound all that great. Surviving in a way that is comforting is a very small target in the general space of survival.
By saying "clubs", I communicate the message that my friend would be better off betting $1 on a random club than $2 on the seven of diamonds (or betting $1 on a random heart or spade), which is true, so I don't really consider that lying.
If, less conveniently, my friend takes what I say to literally mean the suit of the top card, but I can still get them not to bet $2 on the wrong card, then I bite the bullet and lie.
there's a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there's a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
And there's a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don't think the connotations of "fully general counterargument" are appropriate here. "Fully general" means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates it from other theories, and the counterargument is not "fully general".
and the number of possible models for T rounds is exponential in T
??? Here n is the number of other people betting. It's a constant.
If you wanted to, you could create "super-people" that mix and match the bets of other people depending on the round. Then the number of super-people grows exponentially in T, and without further assumptions you can't hope to be competitive with such "super-people". If that's what you're saying, then I agree with that.
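As a sketch of that counting argument (the numbers are purely illustrative): each "super-person" is a length-T sequence of per-round choices among the n base bettors, so there is one per sequence and their count is n^T.

```python
# Counting the hypothetical "super-people": each one picks some base
# bettor to copy in each round, so there is one per choice sequence.
n = 5    # number of base bettors (hypothetical)
T = 20   # number of betting rounds (hypothetical)

n_super = n ** T  # one super-person per length-T sequence of choices
print(n_super)    # 95367431640625 -- grows exponentially in T
```

Doubling T squares the count, so no fixed model class can contain all of them for long horizons.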
And I agree with the broader point that in general you need to make structural assumptions to make progress. The thing that's awesome about the regret bound is that it does well even in the presence of correlated, non-i.i.d., maybe even adversarial data, and even if the "true hypothesis" isn't in the family of models we consider.
and the number of possible models for T rounds is exponential in T
??? Here n is the number of other people betting. It's a constant.
Within a single application of online learning, n is a constant, but that doesn't mean we can't look at the consequences of it taking particular values, even values that vary with other parameters. But you seem to be agreeing with the main points: that if you use all possible models (or "super-people") the regret bound is meaningless, and that in order to shrink the set of models so the bound is meaningful, while still including a good model worth performing almost as well as, you need structural assumptions.
even if the "true hypothesis" isn't in the family of models we consider
I agree you don't need a model that is right every round, but you do need a model that is right in a lot of rounds. You don't need a perfect model, but you need one that is as correct as you want your end results to be.
maybe even adversarial data
I think truly adversarial data gives a result that is within the regret bounds, as guaranteed, but still uselessly inaccurate because the data is adversarial against the collection of models (unless the collection is so large you aren't really bounding regret).
The Finnish military uses personality tests on everyone to look for the leader types amongst their conscripts. Everyone with half a brain could game them either to shorten their stay or to get picked as a leader candidate. It's amazing how these kinds of useless testing rituals stick.
Everyone with half a brain could game them either to shorten their stay or to get picked as a leader candidate.
Maybe that's the test.
Regarding myth 5 and the online learning, I don't think the average regret bound is as awesome as you claim. The bound is sqrt((log n) / T). But if there are really no structural assumptions, then you should be considering all possible models, and the number of possible models for T rounds is exponential in T, so the bound ends up being 1, which is the worst possible average regret using any strategy. With no assumptions of structure, there is no meaningful guarantee on the real accuracy of the method.
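To make the arithmetic concrete, here is a sketch assuming the standard average-regret bound sqrt(ln(n) / T) for exponential-weights-style algorithms with per-round losses in [0, 1] (the specific numbers are illustrative):

```python
import math

def avg_regret_bound(n_models: int, T: int) -> float:
    """Average-regret bound sqrt(ln(n) / T) for n models over T rounds."""
    return math.sqrt(math.log(n_models) / T)

T = 100

# A small, structured model class: the bound shrinks as T grows.
print(avg_regret_bound(10, T))  # ~0.15

# "All possible models": one per binary prediction sequence, n = 2**T.
# Then ln(n) = T * ln(2), so the bound is sqrt(ln 2) ~ 0.83 -- a constant
# near the worst possible per-round loss of 1, no matter how large T gets.
print(avg_regret_bound(2 ** T, T))
```

The second case shows the point: once the model class grows exponentially with T, the log n in the numerator cancels the T in the denominator and the guarantee stops improving with more data.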
The thing that is awesome about the bounds guarantee is that if you assume some structure, and choose a subset of possible models based on that structure, you know you get increased accuracy if your structural assumptions hold.
So this method doesn't really avoid relying on structural assumptions, it just punts the question of which structural assumption to make to the choice of models to run the method over. This is pretty much the same as Bayesian methods putting the structural assumptions in the prior, and it seems that choosing a collection of models is an approximation of choosing a prior, though less powerful because instead of assigning models probabilities in a continuous range, it just either includes the model or doesn't.
That's a reasonable thing to do, but can you obtain something like De Finetti's justification of probability via Dutch books that way?
The obvious steelman of dialogue participant A would keep the coin hidden but ready to inspect, so that A can offer bets having credible ignorance of the outcomes and B isn't justified in updating on A offering the bet.
Except he isn't doing that. He's misrepresenting people's arguments (due to misunderstanding?), tearing his strawman apart, and then "explaining" the poor quality of this argument by declaring that his opponents are lying about their beliefs, and their actual beliefs consist of simple deontological rules.
What bothers me is when they do this and they pretend they’re making a value-free statement about respecting the rights of others. “Oh, well, we’re a liberal democracy and people should be able to do whatever they like with their own bodies, but I’m just worried about people being euthanized against their will, and that would be a violation of consent, and a good liberal democracy like us wouldn’t want to violate consent, nosirree!”
No. You do not care how many people are kept alive without their consent, just like you do not care how many people work McJobs without their consent, or how many people feel pressured into going to social gatherings they don’t want to attend. You care about consent solely when it serves the purpose of your sacred values. You would gladly violate the consent of a billion people on some unrelated issue rather than risk a single consent violation of your own personal pet project.
... and obviously, an arbitrary set of deontological rules is not an argument, so he no longer has to actually disprove it.
I'm starting to think I need to write a larger deconstruction of his post, actually, but I hope you see what I mean. (Thank Azathoth that Yvain is such a clear writer and thinker so I can show this so simply with quotes like this. Although I suppose he wouldn't have as many of us caring what he writes if it wasn't worth reading.)
Yvain says that people claim to be using one simple deontological rule "Don't violate consent" when in fact they are using a complicated collection of rules of the form "Don't violate consent in this specific domain" while not following other rules of that form.
And yet, you accuse him of strawmanning their argument to be simple.
I use whole life insurance. If you use term insurance, you should have a solid plan for an alternate funding source to replace your insurance at the end of the term.
I believe the Efficient Market Hypothesis is correct enough that reliably getting good results from buying term insurance and investing the premium difference would be a lot of work, if it is possible at all.