Comment author: TabAtkins 09 May 2013 02:31:09AM 6 points [-]

Am I crazy, or does Bostrom's argument in that paper fall flat almost immediately, based on a bad moral argument?

His first, and seemingly most compelling, argument for Duplication over Unification is that, assuming an infinite universe, it's certain (with probability 1) that there is already an identical portion of the universe where you're torturing the person in front of you. Given Unification, it's meaningless to distinguish between that portion and this portion, given their physical identicalness, so torturing the person is morally blameless, as you're not increasing the number of unique observers being tortured. Duplication makes the two instances of the person distinct due to their differing spatial locations, even if every other physical and mental aspect is identical, so torturing is still adding to the suffering in the universe.

However, you can flip this over trivially and come to a terrible conclusion. If Duplication is true, you merely have to simulate a person until they experience a moment of pure hedonic bliss, in some ethically correct manner that everyone agrees is morally good to experience and enjoy. Then, copy the fragment of the simulation covering the experiencing of that emotion, and duplicate it endlessly. Each duplicate is distinct, and so you're increasing the amount of joy in the universe every time you make a copy. It would be a net win, in fact, if you killed every human and replaced the earth with a computer doing nothing but running copies of that one person experiencing a moment of bliss. Unification takes care of this, by noting that duplicating someone adds, at most, a single bit of information to the universe, so spamming the universe with copies of the happy moment counts either the same as the single experience, or at most a trivial amount more.

Am I thinking wrong here?

Comment author: Tehom 20 May 2013 06:25:44PM 4 points [-]

However, you can flip this over trivially and come to a terrible conclusion. If Duplication is true, you merely have to simulate a person until they experience a moment of pure hedonic bliss, in some ethically correct manner that everyone agrees is morally good to experience and enjoy. Then, copy the fragment of the simulation covering the experiencing of that emotion, and duplicate it endlessly.

True just if your summum bonum is exactly an aggregate of moments of happiness experienced.

I take the position that it is not.

I don't think one even has to resort to a position like "only one copy counts".

Comment author: Sniffnoy 06 May 2013 10:56:10PM 5 points [-]

I think the simpler solution is just to use a bounded utility function. There are several things suggesting we do this, and I really don't see any reason to not do so, instead of going through contortions to make unbounded utility work.

Consider the paper of Peter de Blanc that you link -- it doesn't say a computable utility function won't have convergent utilities, but rather that it will iff said function is bounded. (At least, in the restricted context defined there, though it seems fairly general.) You could try to escape the conditions of the theorem, or you could just conclude that utility functions should be bounded.
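The convergence issue can be illustrated with a toy St. Petersburg-style gamble (my own illustration, not de Blanc's construction): outcome n occurs with probability 2^-n, and we compare an unbounded utility function against a bounded one.

```python
# Illustrative sketch (not de Blanc's construction): a St. Petersburg-style
# gamble where outcome n occurs with probability 2^-n.
def partial_expected_utility(utility, n_terms):
    """Sum of p(n) * utility(n) over the first n_terms outcomes."""
    return sum(2.0 ** -n * utility(n) for n in range(1, n_terms + 1))

unbounded = lambda n: 2.0 ** n          # u(n) = 2^n, unbounded above
bounded   = lambda n: 1.0 - 2.0 ** -n   # u(n) in [0, 1), bounded

# Unbounded utility: every term contributes 1, so partial sums grow without limit.
print(partial_expected_utility(unbounded, 50))   # 50.0
# Bounded utility: the series converges (here to 2/3).
print(partial_expected_utility(bounded, 50))
```

With the unbounded utility the "expected utility" of the gamble is a divergent series; with the bounded one it converges, which is the dichotomy the theorem formalizes.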

Let's go back and ask the question of why we're using probabilities and utilities in the first place. Is it because of Savage's Theorem? But the utility function output by Savage's Theorem is always bounded.

OK, maybe we don't accept Savage's axiom 7, which is what forces utility functions to be bounded. But then we can only be sure that comparing expected utilities is the right thing to do for finite gambles, not for infinite ones, so talking about sums converging or not -- well, it's something that shouldn't even come up. Or alternatively, if we do encounter a situation with infinitely many choices, each of differing utility, we simply don't know what to do.

Maybe we're not basing this on Savage's theorem at all -- maybe we simply take probability for granted (or just take for granted that it should be a real number and ground it in something like Cox's theorem -- after all, like Savage's theorem, Cox's theorem only requires that probability be finitely additive) and are then deriving utility from the VNM theorem. The VNM theorem doesn't prohibit unbounded utilities. But the VNM theorem once again only tells us how to handle finite gambles -- it doesn't tell us that infinite gambles should also be handled via expected utility.

OK, well, maybe we don't care about the particular grounding -- we're just going to use probability and utility because it's the best framework we know, and we'll make the probability countably additive and use expected utility in all cases -- hey, why not, seems natural, right? (In that case, the AI may want to eventually reconsider whether probability and utility really is the best framework to use, if it is capable of doing so.) But even if we throw all that out, we still have the problem de Blanc raises. And, um, all the other problems that have been raised with unbounded utility. (And if we're just using probability and utility to make things nice, well, we should probably use bounded utility to make things nicer.)

I really don't see any particular reason utility has to be unbounded either. Eliezer Yudkowsky seems to keep using this assumption that utility should be unbounded, or just not necessarily bounded, but I've yet to see any justification for this. I can find one discussion where, when the question of bounded utility functions came up, Eliezer responded, "[To avert a certain problem] the bound would also have to be substantially less than 3^^^^3." -- but this indicates a misunderstanding of the idea of utility, because utility functions can be arbitrarily (positively) rescaled or recentered. Individual utility "numbers" are not meaningful; only ratios of utility differences. If a utility function is bounded, you can assume the bounds are 0 and 1. Talk about the value of the bound is as meaningless as anything else using absolute utility numbers; they're not amounts of fun or something.
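The point that utility "numbers" are only meaningful up to positive affine rescaling can be checked directly (a minimal sketch with made-up gambles and a made-up utility function):

```python
# Sketch: a positive affine rescaling of a utility function preserves which of
# two gambles has the higher expected utility, so the value of a bound carries
# no information by itself.
def expected_utility(gamble, u):
    """gamble: list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in gamble)

u = lambda x: x                           # some utility function
rescaled = lambda x: 0.25 * u(x) + 7.0    # arbitrary positive scale and shift

g1 = [(0.5, 0.0), (0.5, 10.0)]   # expected utility 5 under u
g2 = [(1.0, 4.0)]                # expected utility 4 under u

# The preference between g1 and g2 is unchanged by the rescaling.
assert (expected_utility(g1, u) > expected_utility(g2, u)) == \
       (expected_utility(g1, rescaled) > expected_utility(g2, rescaled))
```

Since any bounded utility function can be rescaled this way to have bounds 0 and 1, asking whether "the bound is less than 3^^^^3" is not a well-posed question.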

Sure, if you're taking a total-utilitarian viewpoint, then your (decision-theoretic) utility function has to be unbounded, because you're summing a quantity over an arbitrarily large set. (I mean, I guess physical limitations impose a bound, but they're not logical limitations, so we want to be able to assign values to situations where they don't hold.) (As opposed to the individual "utility" functions that you're summing, which is a different sort of "utility" that isn't actually well-defined at present.) But total utilitarianism -- or utilitarianism in general -- is on much shakier ground than decision-theoretic utility functions and what we can do with them or prove about them. To insist that utility be unbounded based on total utilitarianism (or any form of utilitarianism) while ignoring the solid things we can say seems backwards.

Not everything has to scale linearly, after all. There seems to be this idea out there that utility must be unbounded because there are constants C_1 and C_2 such that adding to the world a person of "utility" (in the utilitarian sense) C_1 must increase your utility (in the decision-theoretic sense) by C_2, but this doesn't need to be so. This to me seems a lot like insisting "Well, no matter how fast I'm going, I can always toss a baseball forward in my direction at 1 foot per second relative to me; so it will be going 1 foot per second faster than me, so the set of possible speeds is unbounded." As it turns out, the set of possible speeds is bounded, velocities don't add linearly, and if you toss a baseball forward in your direction at 1 foot per second relative to you, it will not be going 1 foot per second faster.
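The velocity analogy is literal special relativity, and easy to check numerically (the ship speed and baseball speed below are arbitrary illustrative numbers):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def add_velocities(v, w):
    """Relativistic velocity addition: (v + w) / (1 + v*w/c^2)."""
    return (v + w) / (1.0 + v * w / C**2)

ship = 0.9 * C     # a fast ship
ball = 0.3048      # 1 foot per second, in m/s

total = add_velocities(ship, ball)
# The toss gains strictly less than 1 ft/s, and the total stays below c:
print(total - ship < ball)  # True
print(total < C)            # True
```

Just as the "obvious" linear addition rule fails for velocities, the "obvious" linear relation between people added and decision-theoretic utility gained can fail without contradiction.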

My own intuition is more in line with earthwormchuck163's comment -- I doubt I would be that joyous about making that many more people when so many are going to be duplicates or near-duplicates of one another. But even if you don't agree with this, things don't have to add linearly, and utilities don't have to be unbounded.

Comment author: Tehom 20 May 2013 06:11:13PM -1 points [-]

I think the simpler solution is just to use a bounded utility function. There are several things suggesting we do this, and I really don't see any reason to not do so, instead of going through contortions to make unbounded utility work.

But that's essentially already the case. Just consider the bound to be 3^^^^3 utilons, or even an unlimited number of them. Those are not infinite, but still allow all the situations and arguments made above.

Paradoxes of infinity weren't the issue in this case.

Comment author: Tehom 20 May 2013 03:57:26AM 0 points [-]

Then you present me with a brilliant lemma Y, which clearly seems like a likely consequence of my mathematical axioms, and which also seems to imply X - once I see Y, the connection from my axioms to X, via Y, becomes obvious.

Seems a lot like learning a proof of X. It shouldn't surprise us that learning a proof of X increases your confidence in X. The mugger genie has little ground to accuse you of inconsistency for believing X more after learning a proof of it.

Granted the analogy isn't exact; what is learned may fall well short of rigorous proof. You may have only learned a good argument for X. Since you assign only 90% posterior likelihood I presume that's intended in your narrative.

Nevertheless, analogous reasoning seems to apply. The mugger genie has little ground to accuse you of inconsistency for believing X more after learning a good argument for it.

Comment author: Eliezer_Yudkowsky 12 July 2012 08:45:20PM 1 point [-]

EEG contains a trivial amount of information, probably not worth storing.

Comment author: Tehom 12 July 2012 09:01:11PM 0 points [-]

The amount of information it adds relative to the total available information is not the point. It's a separate modality, subject to different sources of noise and systematic distortion.

Comment author: Douglas_Knight 18 June 2012 08:50:03PM 14 points [-]

Wins for what? I don't think plastination is an option for human preservation today. When it becomes an option, it probably wins.

The problem with plastination is scaling up the volume that can be done at once. This is a matter of pumping fluids around. Tiny chunks of mouse brain that were plastinated 50 years ago have readable synapses today. The experiment is whether new methods of applying chemicals to whole mouse brains work as well as first cutting up the brain; and whether cutting after plastination preserves enough information.

After scaling up plastination, it has the remaining downside that it displaces lots of chemicals. RH asks to preserve "two dozen chemical densities," which it probably fails at. Also, lots of sub-synapse detail (e.g., type and placement of ion channels) is probably lost.

Comment author: Tehom 12 July 2012 08:34:32PM -1 points [-]

(This comment is largely repeating something from my blog)

I would suggest storing, along with the brain, a representative "snapshot" of the working brain, possibly an EEG under standardized conditions.

In the cryonics model, storing your EEGs didn't make much sense. When (if) resuscitation "restarted your motor", your brainwaves would come back on their own. Why keep a reference for them?

But plastination assumes from the start that revival consists of scanning your brain in and emulating it. Reconstructing you would surely be done computationally, so any source of information could be fed into the reconstruction logic.

Ideally the plastinated brain would preserve all the information that is you, and preserve it undistorted. But what if it preserved enough information but garbled it? The information got through, but ambiguously. There would be no way to tell the difference between the one right answer that reconstructs your mind correctly and many other answers that construct someone or something else.

Having a reference point in a different modality could help a lot. I won't presume to guess how it would best be used in the future, but from an info-theory stance, there's a real chance that it might provide crucial information to reconstruct your mind correctly.

And having an external reference point could provide something less crucial but very nice: verification that the process worked.

Comment author: Tehom 07 April 2011 04:04:59AM 0 points [-]

It's the file-drawer problem in comic form.

Comment author: Tehom 05 May 2010 01:50:03AM 27 points [-]

"Somebody would have noticed" is shorthand for a certain argument. Like most shorthand arguments, it can be used well or badly. Using a shorthand argument badly is what we mean by a "fallacy".

A shorthand argument is used well, in my opinion, just if you could expand it to the longhand form and it would still work. That's not a requirement to always do the full expansion. You don't have to expand it each time, nor have 100% confidence of success, nor expand the whole thing if it's long or boring. But expanding it has to be a real option.

Critical questions that arise in expanding this particular argument:

  • What constitutes noticing?

    • Would other people who noticed understand what they saw?
    • Further, would they understand it the same way that we do?
      • How much potential is there for their understanding of the same phenomenon to be quite different from ours?
    • Further, if their understanding is similar to ours, would they express it in terms that we would recognize?
      • This could include actions that we recognize as relating to the phenomenon.
  • Would we know that they noticed?

    • Motivations: Would people who noticed have strong motivations for letting us know or for not letting others know?
      • Would they want others to see that they noticed?
      • Would they want others to see the phenomenon they noticed?
      • Would they want to do something about it that someone could easily see?
    • Ability:
      • If they did want others to know, could they easily show it?
      • Conversely, if they didn't, could they easily hide it?
    • Who witnesses it:
      • Would they want us in particular to see it (or not see it), as opposed to a select group? For instance, they might write a report about it that you and I probably wouldn't see.
      • If they revealed it to others but not directly to us, what's the likelihood that the information would make its way to us?
  • The suppressed premise in that enthymeme is that "Nobody noticed". Since we didn't ask everyone in the world, how did we determine that?

    • What is the population that would have noticed?
    • What sample size did we take?
    • How representative was our sampling?
    • Assuming we have reasonable answers to the above, what level of confidence can we place on our sampling?
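The sampling questions above have a simple quantitative core. As a back-of-the-envelope sketch (the fraction and sample size are made-up numbers for illustration): if a fraction f of the population noticed, a random sample of n people contains no noticer with probability (1-f)^n.

```python
# Sketch with hypothetical numbers: probability that a random sample of n
# people contains nobody who noticed, if a fraction f of the population did.
def p_miss_all(f, n):
    return (1.0 - f) ** n

# Even if a full 1% of people noticed, asking 50 acquaintances misses
# all of them roughly 60% of the time.
print(p_miss_all(0.01, 50))
```

So "nobody I know noticed" is weak evidence for "nobody noticed" unless the sample is large or the noticing fraction would have been high.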
Comment author: Tehom 16 April 2010 10:15:38PM 3 points [-]

One suggestion, instead of putting "proof" in suspicion-quotes, you could say "argument" instead. A proof is just an air-tight argument.

Comment author: Tehom 14 April 2010 09:41:07PM 2 points [-]

This was proposed as an alternative to GDP, but it's not clear that it actually measures something similar. Even broadly understanding both as attempts to measure human happiness, it doesn't seem similar.

Since we have no access to time-machines, we cannot give anyone a real choice between travelling back to 1700 and staying in 2010. There are no actual consequences to what they choose. So we are not even measuring people's naive preferences, we are just measuring what they like to say or believe about 1700 vs 2010.

Comment author: Tehom 10 April 2010 10:35:17PM 3 points [-]

Reality is that which, when you stop believing in it, doesn't go away.

Phillip K. Dick
