I recommend trying to take the harmonic mean of a physical and an economic estimate when appropriate.
So, what you're saying is that the larger number is less likely to be accurate the further it is from the smaller number? Why is that?
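One way to see it: the harmonic mean of two positive numbers is always less than twice the smaller one, so the larger estimate can only pull the result up so far. A minimal sketch (the numbers are made up for illustration):

```python
def harmonic_mean(a, b):
    # Harmonic mean of two positive estimates: 2ab / (a + b).
    # Bounded above by 2 * min(a, b), so a wildly larger second
    # estimate barely moves the result.
    return 2 * a * b / (a + b)

print(harmonic_mean(10, 10))    # 10.0
print(harmonic_mean(10, 1000))  # ~19.8, still close to the smaller estimate
```

So if the physical and economic estimates disagree by orders of magnitude, the combined figure stays near the smaller one, which is the behavior described above.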
Amusingly, the web page accusing Paul Graham of being a cult leader is down for the count, and despite my best efforts I can't find a Google or other cache of the actual text of the original post.
It's almost as if... someone... deliberately removed it.
+ evidence for paul-graham-being-an-apparently-damn-successful cult leader.
In fact, the best indicator of being a masterful cult leader is that no one suspects you! wait...
mwengler is offering the 2005 Mercedes CLK320a as an example of a luxury car that doesn't have "good, if sometimes absurd" cupholders, as a counterpoint to your own reported experience of always finding good cupholders in luxury cars.
You can't have a counterpoint to someone's experience. He always found luxury cars to have good cupholders. You can't say he's wrong about that...
In addition to making lists for "work," make one for things you want to watch, read, and/or play. You'll feel more productive and motivated even when taking a break from work.
However, make sure that the things you put on your list are things you actually want to do. Otherwise it may take away from the effect.
Or it signals that you are comfortable asserting your own values in contradiction to a group. That's a very positive signal to me, but probably generally negative.
Or maybe they think that your non-drinking is not a value of yours, but a value of another group that you are choosing over theirs.
I feel like this is more of a problem with your optimism than with induction. Your hypothesis set should really include "humans want me to be fed for some period of time", and the evidence increases your confidence in that, not just in some subset of it. Beyond that, you can have additional hypotheses, for example about the humans' possible motivations, which you can update on from whatever other data you have (e.g. you're super-induction-turkey, so you figured out evolution). More trivially, you might notice that sometimes your fellow turkeys disappear and don't come back (if that happens). You would then predict the future from all of these hypotheses, not just from the one linear trend you detected.
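A rough sketch of the point, with made-up numbers: suppose the turkey's hypothesis set is "humans feed me until day d" for several values of d, including "forever". Each observed feeding is equally likely under every hypothesis that hasn't yet been falsified, so feedings only rule hypotheses out; they never specifically favor "forever".

```python
# Hypothetical hypothesis set: "fed until day d", with equal priors.
priors = {30: 0.25, 100: 0.25, 365: 0.25, float("inf"): 0.25}

def update(priors, days_fed):
    # A feeding on day t has likelihood 1 under any hypothesis with
    # d > t, and 0 otherwise, so surviving hypotheses keep their
    # relative weights and we just renormalize.
    post = {d: p for d, p in priors.items() if d > days_fed}
    total = sum(post.values())
    return {d: p / total for d, p in post.items()}

print(update(priors, 90))  # 30-day hypothesis eliminated; the rest stay equal
```

Notice that after 90 days of feeding, "fed until day 100" and "fed forever" are still equally probable: the daily evidence alone never licenses the optimistic extrapolation.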
I'm not sure why, but now I want Super-induction-turkey to be the LW mascot.
There seems to be a mismatch of terms here: ontological probability (propensity) and epistemic probability (uncertainty) are being conflated. Reading over this discussion, I have seen claims that something called "chaotic randomness" is at work, where uncertainty results from chaotic systems because the results are so sensitive to initial conditions; but that is not ontological probability at all.
The claim of the paper is that all actual randomness, and thus all ontological probability, results from quantum decoherence and recoherence, in both chaotic and simple systems. Epistemic uncertainty is not involved, even though uncertainty in chaotic systems can appear random.
That said, I believe the hypothesis is correct simply because it is the simplest explanation for randomness I've seen.
"We argue using simple models that all successful practical uses of probabilities originate in quantum fluctuations in the microscopic physical world around us, often propagated to macroscopic scales."
Their argument is that not only is quantum mechanics ontologically probabilistic, but that only ontologically probabilistic things can be successfully described by probabilities. This is obviously false (not to mention that nothing has actually been shown to be ontologically probabilistic in the first place).
"Thus we claim there is no physically verified fully classical theory of probability."
They think they can get away with this claim because it can't even be tested in a quantum world. But you can still make classical simulations and see if probability works as it should, and it's obvious that it does. Their only argument is that it's simpler for probability to be entirely quantum, but they fail to consider situations where quantum effects do not actually affect the system (which we can simulate and test).
Chaotic classical simulations? Could you elaborate?
Well, you can run things like physics engines on a computer, and their output is not quantum in any meaningful way (following deterministic rules fairly reliably). It's not very hard to simulate systems where a small uncertainty in initial conditions is magnified very quickly, and this increase in randomness can't really be attributed to quantum effects but can be described very well by probability. This seems to contradict their thesis that all use of probability to describe randomness is justified only by quantum mechanics.
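A concrete toy version of this, assuming nothing but classical deterministic arithmetic: the logistic map at r = 4 is a standard chaotic system, and two trajectories started a tiny distance apart diverge until they are effectively uncorrelated, so the long-run behavior is well described statistically with no quantum input anywhere.

```python
# Logistic map x -> r * x * (1 - x): a fully classical, deterministic
# system that is chaotic at r = 4.
def trajectory(x, r=4.0, steps=50):
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-10)  # tiny perturbation of the initial condition

# The gap between the two runs typically grows to order 1,
# despite the 1e-10 difference in starting points.
print(max(abs(u - v) for u, v in zip(a, b)))
```

The divergence here comes entirely from the deterministic update rule amplifying the initial gap, which is exactly the kind of case where probability describes a system that is not ontologically random.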
Well, I dunno. If you describe physics as a Turing-machine program, à la Solomonoff induction, special relativity may well be more incredible than god(s), chiefly because Turing machines may well be unable to implement exact Lorentz invariance but can implement some kinds of god(s), i.e. superintelligences. (Approximate relativity is doable, though.)
It can't do exact relativity but it can do exact general AI? Not to mention that simulating a God that doesn't include relativity will produce the wrong answer.