Researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.
I'd think this isn't a very good forecast, since the forecaster should either have combined all their analysis into a single probability (say 30%), or else given the conditions under which they give their low end (say 10%) or high end (say 40%); and then, if I didn't have any opinions on the probability of those conditions, I would weigh the low and high equally (and get 25%).
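Spelling out the arithmetic in the quoted critique (with $C_{\text{low}}$ and $C_{\text{high}}$ as my placeholder labels for the conditions under which the forecaster endorses the low and high ends):

$$P(\text{event}) = P(C_{\text{low}}) \cdot 0.10 + P(C_{\text{high}}) \cdot 0.40 = 0.5 \cdot 0.10 + 0.5 \cdot 0.40 = 0.25$$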
This sounds like a critique of imprecise credences themselves, not maximality as a decision rule. Do you think that, even if the credences you actually endorse are imprecise, maximality is objectionable?
Anyway, to respond to the critique itself:
In response to the two reactions:
- Why do you say, "Besides, most people actually take the opposite approach: computation is the most "real" thing out there, and the universe—and any consciousnesses therein—arise from it."?
Euan McLean said at the top of his post he was assuming a materialist perspective. If you believe there exists "a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness", then you believe you can define consciousness with a computation. In fact, anytime you believe something can be explicitly defined and manipulated, you've invented a logic and a computer. So, most people who take the materialist perspective believe the material world comes from a sort of "computational universe", e.g. Tegmark IV.
I'm happy to grant that last sentence for the sake of argument, but note that you originally just said "most people," full stop, without the massively important qualifier "who take the materialist perspective."
The non-functionalist audience is also not obliged to trust the introspective reports at intermediate stages.
This introduces a bizarre disconnect between your beliefs about your qualia and the qualia themselves. Imagine: it would be possible, for example, that you believe you're in pain, and act in all ways as if you're in pain, but actually you're not in pain.
I think "belief" is overloaded here. We could distinguish two kinds of "believing you're in pain" in this context:
I'd agree it's totally bizarre (if not incoherent) for someone to (2)-believe they're in pain yet be mistaken about that. But in order to resist the fading qualia argument along the quoted lines, I think we only need it to be possible for someone to (1)-believe they're in pain yet be mistaken. Which doesn't seem bizarre to me.
(And no, you don't need to be an epiphenomenalist to buy this, I think. Quoting Block: “Consider two computationally identical computers, one that works via electronic mechanisms, the other that works via hydraulic mechanisms. (Suppose that the fluid in one does the same job that the electricity does in the other.) We are not entitled to infer from the causal efficacy of the fluid in the hydraulic machine that the electrical machine also has fluid. One could not conclude that the presence or absence of the fluid makes no difference, just because there is a functional equivalent that has no fluid.”)
the copies would not only have the same algorithm, but also the same physical structure arbitrarily finely
I understand; I'm just rejecting the premise that "same physical structure" implies that the copy is me. (Perhaps confusingly, despite the fact that I'm defending the "physicalist ontology" in the context of this thread (in contrast to the algorithmic ontology), I reject physicalism in the metaphysical sense.)
This also seems tangential, though, because the substantive appeals to the algorithmic ontology that get made in the decision theory context aren't about physically instantiated copies. They're about non-physically-instantiated copies of your algorithm. I unfortunately don't know of a reference for this off the top of my head, but it has come up in some personal communications FWIW.
you'd eventually meet copies of yourself
But a copy of me =/= me. I don't see how you establish this equivalence without assuming the algorithmic ontology in the first place.
it's not an independent or random sample
What kind of sample do you think it is?
Sure, but isn't the whole source of weirdness the fact that it's metaphysically unclear (or indeterminate) what the real "sampling procedure" is?
I don't understand. It seems that when people appeal to the algorithmic ontology to motivate interesting decision-theoretic claims — like, say, "you should choose to one-box in Transparent Newcomb" — they're not just taking a more general perspective. They're making a substantive claim that it's sensible to regard yourself as an algorithm, over and above your particular instantiation in concrete reality.
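To illustrate the kind of claim I have in mind (a minimal sketch of Transparent Newcomb under the simplifying assumption of a perfect predictor, with the standard $1,000 / $1,000,000 payoffs; the policy framing here is my own gloss, not anyone's exact argument):

```python
# Minimal sketch of Transparent Newcomb, assuming a perfect predictor:
# Box A always contains $1,000; Box B contains $1,000,000 iff the predictor
# predicts the agent will take only Box B upon seeing it full.

def payoff(one_box_policy: bool) -> int:
    small, big = 1_000, 1_000_000
    box_b_full = one_box_policy  # perfect predictor mirrors the policy
    if one_box_policy:
        return big if box_b_full else 0          # take only Box B
    return small + (big if box_b_full else 0)    # take both boxes

print(payoff(True))   # 1000000 -- evaluated as a policy/algorithm, one-boxing wins
print(payoff(False))  # 1000
```

The one-boxing recommendation falls out of evaluating yourself at the level of the policy (algorithm) the predictor is modeling, rather than at the level of the particular physical act in front of you.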
This post was a blog post day project. For its purpose of general sanity waterline-raising, I'm happy with how it turned out. If I still prioritized the kinds of topics this post is about, I'd say more about things like:
But I've come to think there are far deeper and higher-priority mistakes in the "orthodox rationalist worldview" (scare quotes because I know individuals' views are less monolithic than that, of course). Mostly concerning pragmatism about epistemology and uncritical acceptance of precise Bayesianism. I wrote a bit about the problems with pragmatism here, and critiques of precise Bayesianism are forthcoming, though previewed a bit here.
I think I'm happy to say that in this example, you're warranted in reasoning like: "I have no information about the biases of the three coins except that they're in the range [0.2, 0.7]. The space 'possible biases of the coin' seems like a privileged space with respect to which I can apply the principle of indifference, so there's a positive motivation for having a determinate probability distribution about each of the three coins centered on 0.45."
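As a quick numerical check of that reasoning (a minimal sketch; the uniform prior over [0.2, 0.7] is just the indifference assumption described above):

```python
# Minimal sketch: if all you know about a coin's bias is that it lies in
# [0.2, 0.7], the principle of indifference suggests a uniform prior over
# that interval, whose mean (0.45) becomes the single determinate
# probability of heads for each of the three coins.
import numpy as np

low, high = 0.2, 0.7
biases = np.random.default_rng(0).uniform(low, high, size=1_000_000)
print(biases.mean())      # ~0.45 by Monte Carlo
print((low + high) / 2)   # 0.45 exactly
```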
But many epistemic situations we face in the real world, especially when reasoning about the far future, are not like that. We don't have a clear, privileged range of numbers to which we can apply the principle of indifference. Rather we have lots of vague guesses about a complicated web of things, and our reasons for thinking a given action could be good for the far future are qualitatively different from (hence not symmetric with) our reasons for thinking it could be bad. (Getting into the details of the case for this is better left for top-level posts I'm working on, but that's the prima facie idea.)