Long run? What? Which exactly equivalent random events are you going to experience more than once? And if the events are only really close to equivalent, how do you justify saying that 30 one-time shots at completely different ways of gaining 1 utility unit are a fundamentally different thing from a nearly-exactly-repeated game where you have 30 chances to gain 1 utility unit each time?
I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.
Tom: What actually happens under your scenario is that the naive human rationalists frantically try to undo their work when they realize that the optimization processes keep reprogramming themselves to adopt the mistaken beliefs that are easiest to correct. :D
Caledonian: please define meta-evidence, then, since I think Eliezer has adequately defined evidence. Clear up our confusion!
Selfreferencing: unfortunately there is an enormous gulf between "most theists" and "theistic philosophers". If you don't believe this then you need to get out more. Try the U.S. South, for instance. It might be irritating that most theists are not as enlightened as you are, but it is a fact, not a caricature.
I'm pretty sure, for example, that almost everyone I grew up with believes what a divine command theorist believes. And now that I look back at the OP and your comment, I notice that in the former Eliezer continually says "religious fundamentalists" and in the latter you continually say "theistic philosophers", so maybe you already recognize this.
To stay unbiased about all of the commenters here, do not visit this link and search the page for names. (sorry, but - wait no, not sorry)
So it seems to me that the smaller you can make a quine in some system with the property that small changes in it mean it produces nearly itself as output, the more likely that system is going to produce replicating evolution-capable things. Or something, I'm making this up as I go along. Is this concept sensical? Is there a computationally feasible way to test anything about it? Has it been discussed over and over?
Maybe we can do far better than evolution, but if we could design a good parallelizable "evolution-friendly" environment and see whether organisms develop that'd still be phenomenal.
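To make the quine part concrete, here's roughly the smallest Python quine I know of (the snippet is just my own illustration, not from anything I've read). The "nearly itself" idea above would then be: how much can you mutate that string and still get something whose output is close to its own source?

```python
# The two lines below, run on their own, print exactly themselves (a quine).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

My hand-wavy conjecture is that the shorter and more mutation-tolerant such a self-printer is in a given system, the easier it is for that system to stumble into replicators; the sketch above doesn't test that, it just shows the object I'm talking about.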
Something feels off about this to me. Now I have to figure out if it's because fiction feels stranger than reality or because I am not confronting a weak point in my existing beliefs. How do we tell the difference between the two before figuring out which is happening? Obviously afterward it will be clear, but post-hoc isn't actually helpful. It may be enough that I get to the point where I consider the question.
On further reflection I think it may be that I identify a priori truths with propositions that any conceivable entity would assign a high plausibility value given enough thought. I think I'm saying "in the limit, experience-invariant" rather than "non-experiential". I believe that some things, like 2+2=4, are experience-invariant: in every universe I can imagine, an entity who knows enough about it should conclude that 2+2=4. Perhaps my imagination is deficient, though. :)
Ha, this just happened to me. Luckily it wasn't too painful because I knew the weakness existed, I avoided it, and then reading E. T. Jaynes' "Probability Theory: The Logic of Science" gave me a different and much better belief to patch up my old one. Also, thanks for that recommendation. A lot.
For a while I had been what I called a Bayesian because I thought the frequentist position was incoherent and the Bayesian position elegant. But I couldn't resolve to my satisfaction the problem of scale parameters. I read that there was a prior that was invariant with respect to them but something kept bothering me.
It turns out that my intuition of probability was still "there is a magic number I call probability inherent in objects and what they might do". So when I saw the question "What is the probability that a glass has water:wine in a ratio of 1.5:1 or less, given that it has water:wine in a ratio between 1:1 and 2:1?" I was still thinking something along the lines of "Well, consider all possible glasses of watered wine, and maybe weight them in some way, and I'll get a probability..."
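Here's a quick numerical sketch of why that way of thinking falls apart (my own illustration): two equally "natural" uniform priors for the watered-wine question give different answers, so the question is underdetermined until you say what your prior actually is.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Prior 1: treat the water:wine ratio x as uniform on [1, 2].
x = rng.uniform(1.0, 2.0, n)
p1 = (x <= 1.5).mean()        # -> approximately 1/2

# Prior 2: treat the wine:water ratio y = 1/x as uniform on [1/2, 1].
y = rng.uniform(0.5, 1.0, n)
p2 = (1.0 / y <= 1.5).mean()  # i.e. y >= 2/3 -> approximately 2/3

print(p1, p2)  # two "obvious" priors, two different probabilities
```

Same glass, same question, different answers, because "all possible glasses, weighted in some way" isn't a prior until you actually pick the weighting.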
Jaynes has convinced me that the right way to think about probability is plausibility of situations given states of knowledge. There's nothing wrong with insisting that a prior be set up for any given problem; it's incoherent to set up a problem _without_ looking at the priors. They aren't just useful, they're necessary, and anyone who says it's cheating to push the difficulty of an inductive reasoning problem onto the difficulty of determining real-world priors can be dismissed.
If only I'd asked around about this problem before, maybe I would have discovered meta-Jaynes earlier! Speaking of that, why haven't I seen his stuff, or things building on it, before? I feel like saying that 99% of people miss its importance says more about my importance assignments than about their seeming apathy.
Can someone please post a link to a paper on mathematics, philosophy, anything, that explains why there's this huge disconnect between "one-off choices" and "choices over repeated trials"? Lee?
Here's the way across the philosophical "chasm": write down the utility of the possible outcomes of your action. Use probability to find the expected utility. Do it for all your actions. Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.
You might have a point if there existed a preference effector with incoherent preferences that could only ever effect one preference. I haven't thought a lot about that one. But since your incoherent preferences will show up in lots of decisions, I don't care if this specific decision will be "repeated" (note: none are ever really repeated exactly) or not. The point is that you'll just keep losing those pennies every time you make a decision.
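Here's a toy sketch of the penny-losing point (the goods, the cycle, and the penny fee are all made up for illustration): an agent with cyclic preferences happily pays a small fee for every "upgrade" and just goes around in circles, bleeding money.

```python
# Toy money pump: cyclic preferences (B over A, C over B, A over C).
# Each trade to a "preferred" item costs a penny, so cycling never stops losing.
upgrade = {"A": "B", "B": "C", "C": "A"}   # what the agent will pay to swap into

holding, wealth = "A", 0.0
for _ in range(30):          # 30 offered trades, each one "preferred" to the last
    holding = upgrade[holding]
    wealth -= 0.01           # a penny per swap

print(holding, round(wealth, 2))   # 'A' again, but 30 cents poorer
```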
1. Save 400 lives, with certainty.
2. Save 500 lives, with 90% probability; save no lives, with 10% probability.

What are the outcomes? U(400 alive, 100 dead, I chose choice 1) = A; U(500 alive, 0 dead, I chose choice 2) = B; U(0 alive, 500 dead, I chose choice 2) = C.
Remember that probability is a measure of what we don't know: the plausibility that a given situation is (or will be) the case. If 1.0*A > 0.9*B + 0.1*C, then I prefer choice 1; otherwise, choice 2. Can you tell me what's left out here, or thrown in that shouldn't be? Which part of this do you disagree with?
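Spelled out as arithmetic, with made-up utility numbers standing in for A, B, and C (the actual values are whatever your utility function says):

```python
# Expected utility for the two choices, with placeholder utility values.
A = 400.0   # U(400 alive, 100 dead, I chose choice 1)  -- assumed value
B = 500.0   # U(500 alive, 0 dead,  I chose choice 2)   -- assumed value
C = 0.0     # U(0 alive, 500 dead,  I chose choice 2)   -- assumed value

eu_choice_1 = 1.0 * A                # 400
eu_choice_2 = 0.9 * B + 0.1 * C      # 450

print("choice 1" if eu_choice_1 > eu_choice_2 else "choice 2")
```

With these particular numbers choice 2 wins; plug in your own A, B, and C, and the same comparison decides it.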