fin

Research scholar @ FHI and assistant to Toby Ord. Philosophy student before that. I do a podcast about EA called Hear This Idea. finmoorhouse.com

Answer by fin50

As Buck points out, Toby's estimate of P(AI doom) is closer to the 'mainstream' than MIRI's, and close enough that "so low" doesn't seem like a good description.

I can't really speak on behalf of others at FHI, of course, but I don't think there is some 'FHI consensus' that is markedly higher or lower than Toby's estimate.

Also, I just want to point out that Toby's 1/10 figure is not for human extinction but for existential catastrophe caused by AI, which includes scenarios that don't involve extinction (forms of 'lock-in'). His estimate for extinction caused by AI is therefore lower than 1/10.
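
Spelling out that last step (my own gloss on the arithmetic, not a breakdown Toby gives):

```latex
% My gloss, treating extinction and non-extinction lock-in as exclusive ways
% an existential catastrophe from AI could come about.
\[
  \underbrace{P(\text{existential catastrophe from AI})}_{1/10}
  \;=\; P(\text{extinction from AI}) \;+\; P(\text{non-extinction lock-in from AI}),
\]
% so P(extinction from AI) < 1/10 whenever the lock-in term is positive.
```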

fin20

Yes, I'm almost certain it's too 'galaxy brained'! But does the case rely on entities outside our light cone? Aren't there many 'worlds' within our light cone? (I literally have no idea; you may be right, and someone who knows should intervene.)

I'm more confident that this needn't relate to the literature on infinite ethics, since I don't think any of this relies on infinities.

fin20

Thanks, this is useful.

fin30

There are some interesting and tangentially related comments in the discussion of this post (incidentally, the first time I've been 'ratioed' on LW).

fin10

Thanks, really appreciate it!

fin50

Was wondering the same thing — would it be possible to set others' answers as hidden by default on a post until the reader makes a prediction?

fin40

I interviewed Kent Berridge a while ago about this experiment and others. If folks are interested, I wrote something about it here, mostly trying to explain his work on addiction. You can listen to the audio on the same page.

fin10

Got it, thanks very much for explaining.

fin10

Thanks, that's a nice framing.

fin30

Thanks for the response. I'm bumping up against my lack of technical knowledge here, but a few thoughts about the idea of a 'measure of existence'.

I like how UDASSA tries to explain how the Born probabilities drop out of a kind of sampling rule, and why, intuitively, I should give more 'weight' to minds instantiated by brains than by a mug of coffee. But this idea of 'weight' is ambiguous to me. Why should sampling weight (you're more likely to find yourself as a real brain than a Boltzmann brain, or as a 'thick' rather than an 'arbitrary' computation) imply ethical weight (the experiences of Boltzmann brains matter far less than those of real brains)? Here's Lev Vaidman, suggesting it shouldn't: "there is a sense in which some worlds are larger than others", but "note that I do not directly experience the measure of my existence. I feel the same weight, see the same brightness, etc. irrespectively of how tiny my measure of existence might be."

So in order to think that minds matter in proportion to the measure of the world they're in, while recognising that they 'feel' precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally. There's no contradiction, but that seems strange to me: I would have thought that all there is to how much a conscious experience matters is just what it feels like, because that's all I mean by 'conscious experience'. After all, if I'm understanding this right, you're in a 'branch' right now that is many orders of magnitude less real than the larger, 'parent' branch you were in yesterday. Does that mean your present welfare matters orders of magnitude less than it did yesterday?

Another approach might be to deny that arbitrary computations are conscious on independent grounds, and to explain the observed Born probabilities without 'diluting' the weight of future experiences over time.
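
To put the 'dilution' worry a bit more concretely (a sketch on my part, assuming the measure of a branch is its squared amplitude, which is how I read Vaidman's 'measure of existence'):

```latex
% Sketch only, not something the comment above asserts.
% If yesterday's branch had measure \mu_0 and has since decohered into branches
% with amplitudes c_i, \sum_i |c_i|^2 = 1, then the branch you now occupy has
\[
  \mu_{\text{now}} \;=\; \mu_0 \, |c_i|^{2} \;\le\; \mu_0 ,
\]
% and iterating over many branching events multiplies in a factor
\[
  \prod_{k} |c_{i_k}|^{2} \;\ll\; 1 .
\]
% The question above is whether moral weight should track \mu at all, given
% that the experience 'from the inside' is unchanged.
```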

Also, presumably there's some technical way of actually cashing out the idea of something being 'less real'? Literally speaking, I'm guessing it's best not to treat reality as a predicate at all (let alone one that comes in degrees). But that seems like a surmountable issue.

I'm afraid I'm confused about what you mean by including the Hilbert measure as part of the definition of MWI. My understanding was that MWI is something like what you get when you don't add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.

Still don't know what to think about all this!
