Oh, I agree; Brahms was quite old-fashioned. But Phil specifically said that Brahms was writing music in a style that "had gone out of fashion before he was born", which I think is clearly not true.
I'm not sure Brahms was as unoriginal as you make out. In any case, his style is certainly no older than (say) mid-to-late Beethoven; Beethoven died in 1827 and I'm pretty sure that when Brahms was born in 1833 mid-to-late Beethoven wasn't "out of style" in any useful sense.
J S Bach's music was widely regarded as old-fashioned in his own lifetime, but if anyone's canon[1] he is.
Saint-Saëns was born later than Brahms but his music isn't any more "modern" than Brahms's, and he was both popular and well regarded by critics (not as well regarded as Brahms, but that's perfectly well explained by his not being quite as good).
[1] Pun not intended but deliberately left in once noticed.
I think the following is perfectly credible and consistent with Brahms's success:
- To be regarded as a great artist, one must do impressive things.
- Great skill is impressive. So is great novelty, at least if accompanied by sufficient skill.
- So famous artists will tend to be skilled and original, but the exact proportions of skill and originality will vary.
- Brahms was exceptionally skilled and (for a first-rank composer) relatively unoriginal. In comparison, e.g., maybe John Cage was exceptionally original and relatively unskilled.
- That's all.
There's really only a puzzle if you insist on saying that success generally has nothing to do with skill, which sounds impressively cynical but really doesn't seem particularly likely to be correct.
If I were making music in the style of someone who died six years before I was born, people would probably think I was out of style. I'm not sure if this is the historical fallacy I don't have a name for, where we gloss over differences in a few decades because they're less salient to us than the differences between the 1990s and the 1960s, or if musical styles just change more quickly now.
What do you mean by "neutral"? Odds are you wouldn't have found the name "Google" neutral if you had just heard it for the first time.
I spent a long time associating Amazon with "something in South America, so it's probably not accessible to me" before the company was as ultra-famous as it is now.
It seems to me that roughly similar capabilities are useful for mining asteroids and for deflecting asteroids with known impact trajectories (i.e. step 2 in the Don't Die plan), which puts this into the class of opportunities where succeeding gives you more than just not dying (like AGI).
On the other hand, asteroid mining technologies have some risks of their own, although this only reaches "existential" if somebody starts mining the big ones.
The largest nuclear weapon was the Tsar Bomba: 50 megatonnes of TNT, roughly equivalent to a 3.3-million-tonne impactor. Asteroids larger than this are thought to number in the tens of millions, and at the time of writing only 1.1 million had been provisionally identified. Asteroid shunting at or beyond this scale is by definition a trans-nuclear technology, which means that at some point the necessary level of trust becomes unprecedented.
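As a sanity check on that equivalence (my own back-of-envelope; the impact speed of ~11.2 km/s, roughly Earth's escape velocity, is an assumption, and real impactors often hit faster, which would shrink the equivalent mass):

```python
# Kinetic energy E = (1/2) m v^2, solved for mass: m = 2E / v^2
TNT_J_PER_MEGATONNE = 4.184e15   # joules per megatonne of TNT
yield_mt = 50.0                  # Tsar Bomba yield, megatonnes
v = 11_200.0                     # assumed impact speed, m/s (~escape velocity)

energy_j = yield_mt * TNT_J_PER_MEGATONNE
mass_kg = 2.0 * energy_j / v**2
print(f"equivalent impactor: {mass_kg / 1e3:.2e} tonnes")
```

This lands at roughly 3.3 million tonnes, so the figure quoted above checks out under that velocity assumption.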
Nobody is saying EY invented Cromwell's Rule, that's not the issue.
The issue is that "0 and 1 are not useful subjective certainties for a Bayesian agent" is a very different statement than "0 and 1 are not probabilities at all".
You're right, I misread your sentence about "his personal preferences" as referring to the whole claim, rather than specifically the part about what's "mentally healthy". I don't think we disagree on the object level here.
It seems that EY's position boils down to:
Pragmatically speaking, the real question for people who are not AI programmers is whether it makes sense for human beings to go around declaring that they are infinitely certain of things. I think the answer is that it is far mentally healthier to go around thinking of things as having 'tiny probabilities much larger than one over googolplex' than to think of them being 'impossible'.
And that's a weak claim. EY's ideas of what is "mentally healthier" are, basically, his personal preferences. I, for example, don't find any mental health benefits in thinking about one over googolplex probabilities.
Cromwell's Rule is not EY's invention, and relatively uncontroversial for empirical propositions (as opposed to tautologies or the like).
If you don't accept treating probabilities as beliefs and vice versa, then this whole conversation is just a really long and unnecessarily circuitous way to say "remember that you can be wrong about stuff".
Eliezer isn't arguing with the mathematics of probability theory. He is saying that in the subjective sense, people don't actually have absolute certainty.
Errr... as I read EY's post, he is certainly talking about the mathematics of probability (or about the formal framework in which we operate on probabilities) and not about some "subjective sense".
The claim of "people don't actually have absolute certainty" looks iffy to me, anyway. The immediate two questions that come to mind are (1) How do you know? and (2) Not even a single human being?
If we're asking what the author "really meant" rather than just what would be correct, it's on record.
The argument for why zero and one are not probabilities is not, "All objects which are special cases should be cast out of mathematics, so get rid of the real zero because it requires a special case in the field axioms", it is, "ceteris paribus, can we do this without the special case?" and a bit of further intuition about how 0 and 1 are the equivalents of infinite probabilities, where doing our calculations without infinities when possible is ceteris paribus regarded as a good idea by certain sorts of mathematicians.

E.T. Jaynes in "Probability Theory: The Logic of Science" shows how many probability-theoretic errors are committed by people who assume limits directly into their calculations, without first showing the finite calculation and then finally taking its limit. It is not unreasonable to wonder when we might get into trouble by using infinite odds ratios.

Furthermore, real human beings do seem to often do very badly on account of claiming to be infinitely certain of things so it may be pragmatically important to be wary of them.
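The odds-ratio form of Bayes' theorem makes the "0 and 1 are infinities" intuition concrete: a prior of exactly 0 or 1 corresponds to infinite odds, which no finite likelihood ratio can move. A minimal sketch (the likelihood ratios here are made up purely for illustration):

```python
def update(prior_p, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    if prior_p == 1.0:
        return 1.0  # infinite prior odds: no finite evidence can move it
    prior_odds = prior_p / (1.0 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# An agent at p = 0 or p = 1 is stuck there no matter the evidence:
print(update(0.0, 1000.0))   # stays 0.0
print(update(1.0, 0.001))    # stays 1.0
# Any probability strictly between 0 and 1 responds to evidence:
print(update(0.5, 1000.0))   # ~0.999
```

This is exactly the "special case" the quoted argument is pointing at: every probability strictly between 0 and 1 updates through the same multiplication, while the endpoints need their own rule and never change.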
I... can't really recommend reading the entire thread at the link, it's kind of flame-war-y and not very illuminating.
What do the following have in common?
- "Trying to lose weight? Just eat less and exercise more!"
- "Trying to get more done? Just stop wasting time!"
- "Feeling depressed? Just cheer up!"
- "Want not to get pregnant? Just don't have sex!"
Answer: they would all be quite successful if followed, but they are all difficult enough to follow that people who actually care about results will do better to set different goals that take more account of how human decision-making actually works.
If you eat less and exercise more then, indeed, you will lose weight. (I do not know how reliably you will lose weight by losing fat, which of course is usually the actual goal.) But you don't exactly get to choose to eat less and exercise more; you get to choose to aim to do those things, but willpower is limited and akrasia is real and aiming to eat less and exercise more may be markedly less effective than (e.g.) aiming to reduce consumption of carbohydrates, or aiming to keep a careful tally of everything you eat, or aiming to stop eating things with sugar in, or whatever.
People with plenty of willpower, or unusually fast metabolism, or brains less-than-averagely inclined to make them eat everything tasty they see around them, may have excellent success by just aiming to eat less and exercise more. In the same way, for many people "just cheer up!" may be sufficient to avoid depression; for many people "just don't have sex if you don't want to have babies!" may be sufficient to avoid unwanted pregnancy; etc.
But there are plenty of people for whom that doesn't work so well, and this is true even among very smart people, very successful people, or almost any category of people not gerrymandered to force it to be false. And for those people, the "just do X" advice simply will not work, and sneering at them because they are casting around for methods other than "just do X" is simply a sign of callousness or incomprehension.
What do the following have in common?
You focused on akrasia, and obviously this is a component.
My guess was: they're all wildly underdetermined. "Cheer up" isn't a primitive op. "Don't have sex" or "eat less and exercise more" sound like they might be primitive ops, but can be cashed out in many different ways. "Eat less and exercise more, without excessively disrupting your career/social life/general health/etc" is not a primitive op at all, and may require many non-obvious steps.
My first comment ever on this site promptly gets downvoted without explanation. If you disagree with something I said, at least speak up and say why.
I am the downvoter, although another one seems to have found you since. I found your comment to be a mixture of "true, but irrelevant in the context of the quote", and a restatement of non-novel ideas. This is admittedly a harsh standard to apply to a first comment (particularly since you may not have yet even read the other stuff that duplicates your point about human designers being able to avoid local optima!), so I have retracted my downvote.
Welcome to the site, I hope I haven't turned you off.
why?
Because we can't actually get infinite information, but we still want to calculate things.
And in practice, we can in fact calculate things to some level of precision, using a less-than-infinite amount of information.
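One way to see the finite-information point: each independent piece of evidence shifts your log-odds by a finite amount, so any finite amount of evidence leaves you at a probability strictly between 0 and 1, just pinned down to some precision. A toy illustration (starting from even odds, and counting evidence in bits, which is an assumption made for the example):

```python
def prob_after_bits(bits):
    """Probability after accumulating `bits` bits of log-odds evidence,
    starting from even odds (p = 0.5)."""
    odds = 2.0 ** bits
    return odds / (1.0 + odds)

for bits in (1, 10, 40):
    p = prob_after_bits(bits)
    print(f"{bits:3d} bits -> p = {p}, gap from certainty = {1.0 - p}")
```

The probability approaches 1 as evidence accumulates, but the gap 1/(1 + 2**bits) is never zero: certainty would require infinitely many bits.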