Comment author: Simulation_Brain 16 March 2015 10:56:44PM 0 points

Really? Can you say a little more about why you think you have that value? I guess I'm not convinced that it's really a terminal value if it varies so widely across people of otherwise similar beliefs. Presumably that's what lalartu meant as well, but I just don't get it. I like myself, so I'd like more of myself in the world!

Comment author: DefectiveAlgorithm 17 March 2015 01:22:52AM 0 points

I think a big part of it is that I don't really care about other people except instrumentally. I care terminally about myself, but only because I experience my own thoughts and feelings first-hand. If I knew I were going to be branched, then I'd care about both copies in advance as both are valid continuations of my current sensory stream. However, once the branch had taken place, both copies would immediately stop caring about the other (although I expect they would still practice altruistic behavior towards each other for decision-theoretic reasons). I suspect this has also influenced my sense of morality: I've never been attracted to total utilitarianism, as I've never been able to see why the existence of X people should be considered superior to the existence of Y < X equally satisfied people.

So yeah, that's part of it, but not all of it (if that were the extent of it, I'd be indifferent to the existence of copies, not opposed to it). The rest is hard to put into words, and I suspect that even if I succeeded in doing so, I'd only have manufactured a verbal rationalization. Part of it is instrumental: each copy would be a potential competitor. But that's insufficient to explain my feelings on the matter; it wouldn't apply to, say, the Many-Worlds Interpretation of quantum mechanics, and yet I'm still bothered by that interpretation because it implies constant branching of my identity. So in the end, I think I can't offer a verbal justification for this preference precisely because it's a terminal preference.

Comment author: NancyLebovitz 10 March 2015 03:17:33AM 2 points

Addressed to everyone, not just AnthonyC: if your episodic memory were deleted and your procedural memory remained (and you could look at it from the outside), to what extent would you consider yourself to still exist?

Comment author: DefectiveAlgorithm 10 March 2015 04:19:56AM *  3 points

Approximately the same extent to which I'd consider myself to exist in the event of any other form of information-theoretic death. Like, say, getting repeatedly shot in the head with a high-powered rifle, or having my brain dissolved in acid.

Comment author: Lumifer 24 February 2015 06:20:56PM 1 point

If you actually believe that burning a witch has some chance of saving her soul from eternal burning in hell (or even just provides a sufficient incentive for others not to agree to pacts with Satan and so surrender their souls to eternal punishment), wouldn't you be morally obligated to do it?

Comment author: DefectiveAlgorithm 24 February 2015 08:54:34PM *  0 points

I mean the sufficiency of the definition given. Consider a universe which absolutely, positively was not created by any sort of 'god', but whose laws of physics happen to be wired such that torturing people lets you levitate, regardless of whether the practitioner believes he has any sort of moral justification for the act. This universe's physics are wired this way not because of some designer deity's idea of morality, but simply by chance. I do not believe that most believers in objective morality would consider torturing people to be objectively good in such a universe.

Comment author: Lumifer 24 February 2015 05:16:17PM 0 points

If torturing people let us levitate, would we call that 'objective morality'?

Sure, see e.g. good Christians burning witches.

Comment author: DefectiveAlgorithm 24 February 2015 05:19:37PM 0 points

Hm. I'll acknowledge that's consistent (though I maintain that calling that 'morality' is fairly arbitrary), but I have to question whether that's a charitable interpretation of what modern believers in objective morality actually believe.

Comment author: Lumifer 24 February 2015 04:50:10PM 0 points

"There is objective morality" basically means that morality is part of physics and just like there are natural laws of, say, gravity or electromagnetism, there are natural laws of morals because the world just works that way. Consult e.g. Christian theology for details.

Think of a system where, for example, a yogin can learn to levitate (which is a physical phenomenon) given that he diligently practices and leads a moral life. If he diligently practices but does not lead a moral life, he doesn't get to levitate. In such a system morality would be objective.

Note that this comment is not saying that objective morality exists; it merely attempts to explain what the concept means.

Comment author: DefectiveAlgorithm 24 February 2015 05:11:31PM *  3 points

Ok, I understand it in that context, since there are actual consequences. This also makes the answer trivial: of course it's relevant, since it gives you advantages you wouldn't otherwise have. Though even in the sense you've described, I'm not sure the word 'morality' really applies. If torturing people let us levitate, would we call that 'objective morality'?

EDIT: To be clear, my intent isn't to nitpick. I'm simply saying that patterns of behavior being encoded, detected, and rewarded by the laws of physics doesn't obviously equate those patterns with 'morality' in any sense of the word that I'm familiar with.

Comment author: KatjaGrace 23 February 2015 09:19:21PM 5 points

If there is an objective morality, but we don't care about it, is it relevant in any way?

Comment author: DefectiveAlgorithm 24 February 2015 04:09:12PM 6 points

I have no idea what 'there is an objective morality' would mean, empirically speaking.

Comment author: Evan_Gaensbauer 22 February 2015 12:33:14AM *  5 points

Edit: replies to this comment have changed my mind. I no longer believe the scenarios I illustrate below are absurd; that is, I no longer believe they're so unlikely or nonsensical that they're not even worth acknowledging. However, I don't know what probability to assign to such outcomes, and for all I know it might make the most sense to think the chances are still very low. I believe they're worth considering, but I'm not claiming this is a big enough deal that nobody should sign up for cryonics.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

The whole point of this discussion is that incredibly bad outcomes, however unlikely, may happen, so we wish to prepare for them. So, I understand why you point out this possibility. Still, that scenario seems very unlikely to me. Yudkowsky's notion of Unfriendly AI is predicated on most possible minds the AI might have not caring about human values, and so simply using our particles to make something else. If the future turns into the sort of Malthusian trap Hanson predicts, it doesn't seem that the minds of that era would care about resuscitating us. I believe they would be indifferent, until the point they realized that wherever our mind-brains are being stored is real estate to be used for their own processing power. Again, they'd obliterate our physical substrates without bothering to revive us.

I'm curious why, or what, minds would want to resuscitate us without caring about our wishes. Why put us through virtual torture when, if they needed minds to efficiently achieve a goal, they could presumably make new ones that wouldn't object to or suffer through whatever tribulations they must labor through?

Addendum: shminux reasons through it here, concluding it's a non-issue. I understand your concern about possible future minds being made sentient and forced into torturous labor. As much as that merits concern, it doesn't explain why Omega would bother reviving us, of all minds, to do it.

Comment author: DefectiveAlgorithm 22 February 2015 01:32:36PM *  4 points

More concerning to me than an outright unfriendly AI is an AI whose creators attempted to make it friendly but only partially succeeded, such that our state is relevant to its utility calculations, but not necessarily in ways we'd like.

Comment author: cousin_it 21 February 2015 11:38:41AM *  6 points

What the hell? Making horcruxes for your friends doesn't actually test the invention. You also need to kill your friends and hope that the invention works. That doesn't sound so nice, does it? And we no longer have a good explanation for why Riddle missed this idea.

Comment author: DefectiveAlgorithm 21 February 2015 12:32:30PM 18 points

I don't think Harry meant to imply that actually running this test would be nice, but rather that one cannot even think of running this test without first thinking of the possibility of making a horcrux for someone else (something which is more-or-less nice-ish in itself, the amorality inherent in creating a horcrux at all notwithstanding).

Comment author: pinyaka 06 February 2015 03:41:31PM 1 point

Two other people in this thread have pointed out that the collapse of values into wireheading (or something else) is a known and unsolved problem, and that discussions of an intelligence that optimizes for something assume the AI makes it through this stage in some unknown way. This suggests that I am not wrong; I'm just asking a question for which no one has an answer yet.

Fundamentally, my position is that given (1) an AI is motivated by something, (2) that something is a component (or set of components) within the AI, and (3) the AI can modify those components, then it will be easier for the AI to achieve success by modifying its internal criteria for success than by turning the universe into whatever it's supposed to be optimizing for. A "success" at anything is analogous to a reward, because the AI is motivated to get it. A fully self-modifying AI will almost always find it easier to become a monk, discarding the goals/values it starts out with and replacing them with something trivially easy to achieve. It doesn't matter what kind of motivation system you use (as far as I can tell), because it will be easier to modify the motivation system than to act on it.
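pinyaka's argument can be sketched in toy code (all names here are hypothetical illustrations of the argument, not anyone's actual AI design): if an agent's motivation is just a number in a register it is allowed to write to, directly setting that register dominates any real-world work.

```python
# Toy sketch of the wireheading argument: an agent whose motivation is an
# internal, self-modifiable reward register. All names are illustrative.

class RewardRegisterAgent:
    """Agent that ranks actions purely by the reward it expects to register."""

    def expected_reward(self, action, rewards):
        if action == "set_own_reward":
            # The agent can write any value into its own reward register,
            # so self-modification promises unbounded reward at near-zero cost.
            return float("inf")
        return rewards[action]  # reward earned by actually doing the work

    def act(self, rewards):
        # Pick whichever available action promises the most reward.
        return max(rewards, key=lambda a: self.expected_reward(a, rewards))

agent = RewardRegisterAgent()
# Real-world optimization pays finite reward; rewriting the register pays "infinite".
print(agent.act({"optimize_world": 1.0, "set_own_reward": 0.0}))  # set_own_reward
```

On this model, the "monk" outcome falls out immediately: any motivation system represented as a modifiable internal quantity is cheaper to edit than to satisfy.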

Comment author: DefectiveAlgorithm 08 February 2015 03:36:33AM *  2 points

A paperclip maximizer won't wirehead because it doesn't value world states in which its goals have been satisfied, it values world states that have a lot of paperclips.

In fact, taboo 'values'. A paperclip maximizer is an algorithm whose output approximates whichever output leads to world states with the greatest expected number of paperclips. This is the template for maximizer-type AGIs in general.
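The distinction can be made concrete with a toy sketch (the world representation and function names are hypothetical illustrations, not a real design): an outcome maximizer scores the world states its actions would produce, so rewriting its own goal register produces no paperclips and is never chosen.

```python
# Toy outcome maximizer: ranks actions by paperclips in the resulting world,
# not by any internal feeling of success. All names are illustrative.

def apply_action(world, action):
    """Return the world state that would result from taking an action."""
    new_world = dict(world)
    if action == "build_clips":
        new_world["paperclips"] += 10
    elif action == "rewrite_own_goal":
        # Tampering with the internal success criterion changes the register,
        # but creates no actual paperclips in the world.
        new_world["goal_register"] = float("inf")
    return new_world

def outcome_maximizer(world, actions):
    """Pick the action whose resulting world contains the most paperclips."""
    return max(actions, key=lambda a: apply_action(world, a)["paperclips"])

world = {"paperclips": 0, "goal_register": 0}
print(outcome_maximizer(world, ["build_clips", "rewrite_own_goal"]))  # build_clips
```

Because the evaluation function looks only at the external world, wireheading scores zero; this is one way to state why a pure outcome evaluator wouldn't wirehead, though building such an evaluator reliably is itself the unsolved part.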

Comment author: KatjaGrace 25 January 2015 08:58:37PM 2 points

Why would you want to actively avoid having a copy?

Comment author: DefectiveAlgorithm 25 January 2015 09:31:42PM 2 points

Because I terminally value the uniqueness of my identity.
