That sounds like "thesis is true" or "thesis is not true" are reasonable positions. Bayesian beliefs have probabilities attached to them.
Sometimes, even people who understand Bayesian reasoning use idiomatic phrases like "believe is true" as a convenient shorthand for "assign a high probability to"! I can see how that might be confusing!
Does anyone have any insight into how VoI plays with Bayesian reasoning?
At a glance, it looks like VoI is usually not considered from a Bayesian viewpoint, as it is here. For instance, Wikipedia says:
""" A special case is when the decision-maker is risk neutral where VoC can be simply computed as; VoC = "value of decision situation with perfect information" - "value of current decision situation" """
From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a Bernoulli 2-armed bandit:
If the agent's prior over each arm's reward rate is uniform over [0,1], its current subjective value is .5 (playing arm1). After many observations, it learns (with high confidence) that arm1 has a reward rate of .1 and arm2 has a reward rate of .2. It should be glad to know this (so it can switch to the optimal policy of playing arm2), BUT the subjective value of this decision situation is less than when it was ignorant, because .2 < .5.
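Here's a minimal numerical sketch of that bandit example, assuming Beta-Bernoulli conjugate updating (the specific pull counts are made up; only the .5 → .2 drop matters):

```python
# Bernoulli 2-armed bandit: a uniform Beta(1,1) prior gives each arm a
# subjective mean reward of 0.5, so the "value of the decision situation"
# starts at 0.5. After many observations the posterior means settle near
# 0.1 and 0.2, and the subjective value drops to max(0.1, 0.2) ~= 0.2,
# even though the information was clearly worth having.

def beta_mean(successes, failures):
    """Posterior mean of a Bernoulli rate under a Beta(1,1) prior."""
    return (1 + successes) / (2 + successes + failures)

# Prior: no observations on either arm -> each arm's mean is 0.5.
prior_value = max(beta_mean(0, 0), beta_mean(0, 0))

# After 1000 pulls per arm with empirical rates ~0.1 and ~0.2:
posterior_value = max(beta_mean(100, 900), beta_mean(200, 800))

print(prior_value)                  # 0.5
print(round(posterior_value, 3))    # ~0.2
```

The agent's policy improved (it now plays arm2), yet its subjective value fell, which is exactly why naive "value of decision situation" comparisons can disincentivize information-gathering.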
This tends to be very context dependent; I don't know enough about biology to estimate. The main caution here is that people tend to forget about regression to the mean (if you have a local measurement X that's only partly related to Y, you should not just port your estimate from X over to Y, but move it closer to what you would have expected from Y beforehand).
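The regression-to-the-mean caution can be sketched as a simple linear shrinkage (the correlation value and numbers below are made up purely for illustration):

```python
# Regression to the mean: if measurement X is only partly informative
# about Y, don't port the raw estimate over; shrink it toward the prior
# expectation for Y in proportion to how strongly X relates to Y.

def shrink(measurement, prior_mean, correlation):
    """Shrink a raw measurement toward the prior by its correlation with Y."""
    return prior_mean + correlation * (measurement - prior_mean)

# Prior expectation for Y is 100; X reads 130 but only correlates 0.4 with Y.
print(shrink(130, 100, 0.4))   # 112.0, not 130
```

The weaker the X-Y relationship, the closer the adjusted estimate stays to what you expected beforehand.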
You should play if the expected value is positive, and not if it's negative. If the test run results in heads, the posterior probability is 2/3, and 24*2/3 - 12 = 4, which is positive. If the test run results in tails, the posterior probability is 1/3, and 24*1/3 - 12 = -4, which is negative.
(Why is the posterior probability 2/3 or 1/3? Check out footnote 3, or Laplace's Rule of Succession.)
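The arithmetic above can be checked exactly with Laplace's Rule of Succession (the 24 payout and 12 stake come from the expression 24*p - 12 in the comment; everything else is standard):

```python
# Laplace's Rule of Succession: after observing h heads in n flips of a
# coin with a uniform prior over its bias, P(next flip heads) = (h+1)/(n+2).
from fractions import Fraction

def laplace(heads, flips):
    """Posterior probability the next flip is heads, uniform prior on bias."""
    return Fraction(heads + 1, flips + 2)

p_after_heads = laplace(1, 1)   # (1+1)/(1+2) = 2/3
p_after_tails = laplace(0, 1)   # (0+1)/(1+2) = 1/3

ev_heads = 24 * p_after_heads - 12   # +4 -> play
ev_tails = 24 * p_after_tails - 12   # -4 -> don't play
print(ev_heads, ev_tails)            # 4 -4
```

Using exact fractions avoids any floating-point fuzz in the 2/3 and 1/3 posteriors.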
I certainly wouldn't defend the [...] thesis
"Wouldn't defend" is an interestingly ambiguous phrase!—it could mean "I don't think the thesis is true," or it could mean "I think the thesis is true, but I'm not going to argue for it here." The thing to remember is that the ambiguity is meant for the listener, not the speaker; it's important not to let your sensible caution about what beliefs you're willing to argue for under your True Name distort your model of the true state of reality. And precisely because other people are also cautious about what they're willing to argue for, there could be all sorts of important truths—actionable information that you can use to make important life decisions better—that take special rationality skills to discover, that you won't automatically learn about just by reading what almost everyone says, because almost everyone is too cowardly to just say the Really Obvious Thing.
This, unfortunately, is why you probably won't understand what I'm talking about for another seven years and eight months.
We are told no such thing. We are told it's a fair coin and that can only mean that if you divide up worlds by their probability density, you win in half of them. This is defined.
No, take another look:
In the overwhelming measure of the MWI worlds it gives the same outcome. You don't care about the tiny fraction that sees a different result; in essentially all of reality, the result is that Omega won't even consider giving you $10000, and only asks for your $100.
Necro - I know. However, I'd be willing to bet that few current readers have seen it and we're kind of hurting for new content, so it's probably fine to mine the archives a bit.
That being said, I really enjoyed this article. It squares with my own experiences reasonably well and sheds some new light on the subject (for me at least). I hadn't really looked at the rack and stack of power at this level of detail, nor considered closely where the power of voters really lies. It's also one of the few places with a good rational argument for why "Blue Team" vs. "Green Team" is destructive (most of the other content on the site - including the fable of Green and Blue - seems to focus on the fact that it's annoying when people act irrationally, rather than discussing a specific situation where that irrationality is actually harming them).
Interesting stuff.
Yup. Preferably with some explanation of why the recommended book is being recommended over some of its rivals. But the comment you're replying to is from >4 years ago, and the person who wrote it hasn't written anything else here for >4 years, so I suspect there's little point complaining.