cousin_it comments on Contrived infinite-torture scenarios: July 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This falls in the same confused cluster as anticipated experience. You only anticipate certain things happening because they describe the fraction of the game you value playing and are able to play (plan for), over other possibilities where things go crazy. Observations don't provide evidence; how you react to observations is the way you follow a plan, a conditional strategy of doing certain things in response to certain inputs, a plan that you must decide on from other considerations. Laws of physics seem to be merely a projection of our preference, something we came to value because we evolved to play the game within them (and are not able to easily influence things outside of them).
So "credence" is a very imprecise idea, and certainly not something you can use to draw conclusions about what is actually possible (well, apart from however it reveals your prior, which might be a lot). What is actually possible is all there in the prior, not in what you observe. This suggests a kind of "anti-Bayesian" principle, where the only epistemic function of observations is to "update" your knowledge about what your prior actually is, but this "updating" is not at all straightforward. (This view also allows one to get rid of the madness in anthropic thought experiments.)
(This is a serious response. Honest.)
Edit: See also this clarification.
Whoa. That's gotta be the most interesting comment I've ever read on LW. Did you just give an evolutionary explanation for the concept of probability? If Eliezer's ideas are madness, yours are ultimate madness. It does sound like it could be correct, though.
But I don't see how it answers my question. Are you claiming I have no chance of ending up in a rescue sim because I don't care about it? Then can I start caring about it somehow? Because it sounds like a good idea.
It is much worse, this seems to be an evolutionary "explanation" for, say, particle physics, and I can't yet get through the resulting cognitive dissonance. This can't be right.
Yep, I saw the particle physics angle immediately too, but I saw it as less catastrophic than the probability angle, not more :-) Let's work it out here. I'll try to think of more stupid-sounding questions, because they seemed to be useful to you in the past.
As applied to your comment, it means that you can only use observations epistemically where you expect to be existing according to the concept of anticipated experience as coded by evolution. Where you are instantiated by artificial devices like rescue simulations, these situations don't map onto anticipated experience, so observations remembered in those states don't reveal your prior, and can't be used to learn how things actually are (how your prior actually is).
You can't change what you anticipate, because you can't change your mind that precisely, but changing what you anticipate isn't fundamental and doesn't change what will actually happen: everything "actually happens" in some sense, you just care about different things to different degrees. And you certainly don't want to change what you care about (and in a sense, can't: the changed thing won't be what you care about, it will be something else). (Here, "caring" refers to preference, not anticipation.)
Before I dig into it formally, let's skim the surface some more. Do you also think Rolf Nelson's AI deterrence won't work? Or are sims only unusable on humans?
I think this might get dangerously close to the banned territory, and our Friendly dictator will close the whole thread. Still, since it wasn't clarified what exactly is banned, I'll go ahead and discuss acausal trade in general until it's explicitly ruled banned as well.
As discussed before, "AI deterrence" is much better thought of as participation in acausal multiverse economy, but it probably takes a much more detailed knowledge of your preference than humans possess to make the necessary bead jar guesses to make your moves in the global game. This makes it doubtful that it's possible on human level, since the decision problem deteriorates into a form of Pascal's Wager (without infinities, but with quantities outside the usual ranges and too difficult to estimate, while precision is still important).
ETA: And sims are certainly "usable" for humans; they produce some goodness, but maybe less so than something else. That they aren't subjectively anticipated doesn't make them improbable, in case you actually build them. Subjective anticipation is not a very good match for the prior; it only tells you a general outline, sometimes with systematic error.
If you haven't already, read BLIT. I'm feeling rather like the protagonist.
Every additional angle, no matter how indirect, gets me closer to seeing that which I Must Not Understand. Though I'm taking it on faith that this is the case, I have reason to think the faith isn't misplaced. It's a very disturbing experience.
I think I'll go read another thread now. Or wait, better yet, watch anime. There's no alcohol in the house.
I don't believe that anticipated experience in natural situations, as an accidental (specific to human psychology) way of eliciting the prior, was previously discussed, though the general epistemic uselessness of observations for artificial agents is certainly an old idea.