TheOtherDave comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong

27 points. Post author: orthonormal, 01 April 2013 04:19PM


Comment author: TheOtherDave 17 September 2013 08:41:47PM 1 point

It's easy to conflate uploads and augments, here, so let me try to be specific (though I am not Wei Dai and do not in any way speak for them).

I experience myself as preferring that people not suffer, for example, even if they are really boring people or otherwise not my cup of tea to socialize with. I can't see why that experience would change upon a substrate change, such as uploading. Basically the same thing goes for the other values/preferences I experience.

OTOH, I don't expect the values/preferences I experience to remain constant under intelligence augmentation, whatever the mechanism. But that's kind of true across the board. If you did some coherently specifiable thing that approximates the colloquial meaning of "doubled my intelligence" overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences.

If instead of "doubling" you "multiplied by 10" I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.

Comment author: Bugmaster 17 September 2013 09:05:37PM -1 points

It's easy to conflate uploads and augments, here...

Wait, why shouldn't they be conflated? Granted, an upload does not necessarily have to possess augmented intelligence, but IMO most if not all of them would obtain it in practice.

I can't see why that experience would change upon a substrate change, such as uploading.

Agreed, though see above.

If you did some coherently specifiable thing that approximates the colloquial meaning of "doubled my intelligence" overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences. If instead of "doubling" you "multiplied by 10" I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.

I agree completely; that was my point as well.

Edited to add:

However incomprehensible one's new values might be after augmentation, I am reasonably certain that they would not include "an altruistic attitude toward humanity" (as per our current understanding of the term). By analogy, I personally neither love nor hate individual insects; they are too far beneath me.

Comment author: TheOtherDave 18 September 2013 12:24:54AM 1 point

Mostly, I prefer not to conflate them because our shared understanding of upload is likely much better-specified than our shared understanding of augment.

I agree completely; that was my point as well.

Except that, as you say later, you have confidence about what those supposedly incomprehensible values would or wouldn't contain.

By analogy, I personally neither love nor hate individual insects; they are too far beneath me.

Turning that analogy around.... I suspect that if I remembered having been an insect and then later becoming a human being, and I believed that was a reliably repeatable process, both my emotional stance with respect to the intrinsic value of insect lives and my pragmatic stance with respect to their instrumental value would be radically different than they are now and far more strongly weighted in the insects' favor.

With respect to altruism and vast intelligence gulfs more generally... I dunno. Five-day-old infants are much stupider than I am, but I generally prefer that they not suffer. OTOH, it's only a mild preference; I don't really seem to care all that much about them in the abstract. OTGH, when made to think about them as specific individuals I end up caring a lot more than I can readily justify over a collection. OT4H, I see no reason to expect any of that to survive what we're calling "intelligence augmentation", as I don't actually think my cognitive design allows my values and my intelligence (i.e., my ability to optimize my environment for my values) to be separated cleanly. OT5H, there are things we might call "intelligence augmentation", like short-term-memory buffer-size increases, that might well be modular in this way.

Comment author: Bugmaster 18 September 2013 12:53:57AM 0 points

Except that, as you say later, you have confidence about what those supposedly incomprehensible values would or wouldn't contain.

More specifically, I have confidence only about one specific thing that these values would not contain. I have no idea what the values would contain; this still renders them incomprehensible, as far as I'm concerned, since the potential search space is vast (if not infinite).

I suspect that if I remembered having been an insect and then later becoming a human being...

I am not entirely convinced that a vastly augmented mind would remember being a regular human in the same way that we humans remember what we had for lunch yesterday. The situation may be more analogous to remembering what it was like being a newborn.

Most people don't remember what being a newborn baby was like; but even if you could recall it with perfect clarity, how much of that information would you find really useful? A newborn's senses are dull; his mind is mostly empty of anything but basic desires; his ability to affect the world is negligible. There's not much there that is even worth remembering... and, IMO, there's a good chance that a transhuman intelligence would feel the same way about its past humanity.

... and I believed that was a reliably repeatable process, both my emotional stance with respect to the intrinsic value of insect lives and my pragmatic stance with respect to their instrumental value would be radically different than they are now and far more strongly weighted in the insects' favor.

I agree with your later statement:

OT4H, I see no reason to expect any of that to survive what we're calling "intelligence augmentation", as I don't actually think my cognitive design allows my values and my intelligence (i.e., my ability to optimize my environment for my values) to be separated cleanly.

To expand upon it a bit:

I agree with you regarding the pragmatic stance, but disagree about the "intrinsic value" part. As an adult human, you care about babies primarily because you have a strong built-in evolutionary drive to do so. And yet even that powerful drive fails to take hold in many people's minds; they choose to distance themselves from babies in general, and refuse to have any of their own in particular. I am not convinced that an augmented human would retain such a built-in drive at all (targeted at unaugmented humans instead of, or in addition to, infants), and even if it did, I see no reason to believe that it would have a stronger hold over transhumans than over ordinary humans.

Comment author: TheOtherDave 18 September 2013 01:15:57AM 0 points

Like you, I am unconvinced that a "sufficiently augmented" human would continue to value unaugmented humans, or infants.

Unlike you, I am also unconvinced it would cease to value unaugmented humans, or infants.

Similarly, I am unconvinced that it would continue to value its own existence, or, well, anything at all. It might turn out that all "sufficiently augmented" human minds promptly turn themselves off. It might turn out that they value unaugmented humans more than anything else in the universe. Or insects. Or protozoa. Or crystal lattices. Or the empty void of space. Or paperclips.

More generally, when I say I expect my augmented self's values to be incomprehensible to me, I actually mean it.

I am not entirely convinced that a vastly augmented mind would remember being a regular human in the same way that we humans remember what we had for lunch yesterday.

Mostly, I think that will depend on what kinds of augmentations we're talking about. But I don't think we can actually sustain this discussion with an answer to that question at any level more detailed than a handwavy notion of "vastly augmented" and analogies to insects and protozoa, so I'm content to posit either that it does, or that it doesn't, whichever suits you.

My own intuition, FWIW, is that some such minds will remember their true origins, and others won't, and others will remember entirely fictionalized accounts of their origins, and still others will combine those states in various ways.

There's not much there that is even worth remembering.

You keep talking like this, as though these kinds of value judgments were objective, or at least reliably intersubjective. It's not at all clear to me why. I am perfectly happy to take your word for it that you don't value anything about your hypothetical memories of infancy, but generalizing that to other minds seems unjustified.

For my own part... well, my mom is not a particularly valuable person, as people go. There's no reason you should choose to keep her alive, rather than someone else; she provides no pragmatic benefit relative to a randomly selected other person. Nevertheless, I would prefer that she continue to live, because she's my mom, and I value that about her.

My memories of my infancy might similarly not be particularly valuable as memories go; I agree. Nevertheless, I might prefer that I continue to remember them, because they're my memories of my infancy.

And then again, I might not. (Cf incomprehensible values of augments, above.)

Comment author: Bugmaster 18 September 2013 03:37:58AM 0 points

Unlike you, I am also unconvinced it would cease to value unaugmented humans, or infants. Similarly, I am unconvinced that it would continue to value its own existence, or, well, anything at all.

Even if you don't buy my arguments, given the nearly infinite search space of things that it could end up valuing, what would its probability of valuing any one specific thing like "unaugmented humans" end up being?

But I don't think we can actually sustain this discussion with an answer to that question at any level more detailed than a handwavy notion of "vastly augmented" and analogies to insects and protozoa, so I'm content to posit either that it does, or that it doesn't, whichever suits you.

Fair enough, though we could probably obtain some clues by surveying the incredibly smart -- though merely human -- geniuses that do exist in our current world, and extrapolating from there.

My own intuition, FWIW, is that some such minds will remember their true origins...

It depends on what you mean by "remember", I suppose. Technically, it is reasonably likely that such minds would be able to access at least some of their previously accumulated experiences in some form (they could read the blog posts of their past selves, if push comes to shove), but it's unclear what value they would put on such data, if any.

You keep talking like this, as though these kinds of value judgments were objective, or at least reliably intersubjective. It's not at all clear to me why.

Maybe it's just me, but I don't think that my own, personal memories of my own, personal infancy would differ greatly from anyone else's -- though, not being a biologist, I could be wrong about that. I'm sure that some infants experienced environments with different levels of illumination and temperature; some experienced different levels of hunger or tactile stimuli, etc. However, the amount of information that an infant can receive and process is small enough that the sum total of his experiences would be far from unique. Once you've seen one poorly-resolved bright blob, you've seen them all.

By analogy, I ate a banana for breakfast yesterday, but I don't feel anything special about it. It was a regular banana from the store; once you've seen one, you've seen them all, plus or minus some minor, easily comprehensible details like degree of ripeness (though, of course, I might think differently if I were a botanist).

IMO it is likely that an augmented mind might think the same way about ordinary humans. Once you've seen one human, you've seen them all, plus or minus some minor details...

Comment author: TheOtherDave 18 September 2013 03:57:20AM 1 point

what would its probability of valuing any one specific thing like "unaugmented humans" end up being ?

Vanishingly small, obviously, if we posit that its pre-existing value system is effectively uncorrelated with its post-augment value system, which it might well be. Hence my earlier claim that I am unconvinced that a "sufficiently augmented" human would continue to value unaugmented humans. (You seem to expect me to disagree with this, which puzzles me greatly, since I just said the same thing myself; I suspect we're simply not understanding one another.)

we could probably obtain some clues by surveying the incredibly smart -- though merely human -- geniuses that do exist in our current world, and extrapolating from there.

Sure, we could do that, which would give us an implicit notion of "vastly augmented intelligence" as something like naturally occurring geniuses (except on a much larger scale). I don't think that's terribly likely, but as I say, I'm happy to posit it for discussion if you like.

it's unclear what value they would put on such data, if any. [...] I don't think that my own, personal memories of my own, personal infancy would differ greatly from anyone else's [...] IMO it is likely that an augmented mind might think the same way about ordinary humans. Once you've seen one human, you've seen them all, plus or minus some minor details...

I agree that it's unclear.

To say that more precisely, an augmented mind would likely not value its own memories (relative to some roughly identical other memories), or any particular ordinary human, any more than an adult human values its own childhood blanket rather than some identical blanket, or values one particular and easily replaceable goldfish.

The thing is, some adult humans do value their childhood blankets, or one particular goldfish.

And others don't.

Comment author: Bugmaster 19 September 2013 10:36:37PM 0 points

You seem to expect me to disagree with this, which puzzles me greatly, since I just said the same thing myself; I suspect we're simply not understanding one another.

That's correct; for some reason, I was thinking that you believed that a human's preference for the well-being of his (formerly) fellow humans is likely to persist after augmentation. Thus, I did misunderstand your position; my apologies.

The thing is, some adult humans do value their childhood blankets, or one particular goldfish.

I think that childhood blankets and goldfish are different from an infant's memories, but perhaps this is a topic for another time...

Comment author: TheOtherDave 20 September 2013 12:35:02AM 0 points

I'm not quite sure what other time you have in mind, but I'm happy to drop the subject. If you want to pick it up some other time feel free.