DataPacRat comments on Open thread, Sep. 19 - Sep. 25, 2016 - Less Wrong

Post author: DataPacRat 19 September 2016 06:34PM


Comment author: Lumifer 28 September 2016 02:22:46PM

is that future-me might be even less trustable to work towards my values

If whoever revives you deliberately modifies you, you're powerless to stop it. And if you're worried that future-you will be different from past-you, well, that's how life works. A future-you in five years will be different from current-you who is different from the past-you of five years ago.

As to precommitment, I don't think you have any power to precommit, and I don't think it's a good idea either. Imagine if a seven-year-old past-you somehow found a way to precommit the current-you to eating a pound of candy a day, every day...

Comment author: DataPacRat 28 September 2016 09:57:05PM

If whoever revives you deliberately modifies you, you're powerless to stop it.

True, which is why I'm assuming a certain minimal amount of goodwill on the part of whoever revives me. However, just because the reviver controls the technology allowing my revival doesn't mean they're actually technically competent in matters of computer security - I've seen too many stories on /r/talesfromtechsupport of computer-company executives being utterly stupid in fundamental ways for that. The main threat I'm trying to hold off is, roughly, "good-natured reviver leaves the default password on my uploaded self's router unchanged; a script-kiddie running automated attacks on the whole internet gains access; the script turns me into a sapient bitcoin-miner-equivalent for that hacker's benefit". That's just one example of a large class of threats: no hostile intent by the reviver is required, merely a manager-level understanding of computer security.

A future-you in five years will be different from current-you who is different from the past-you of five years ago.

Yes, I know. This is one reason I am trying not to specify /what/ it is I value in the request-doc, other than 1) instrumental goals that are useful for achieving many terminal goals, and 2) valuing my own life as both an instrumental and a terminal goal, which I confidently expect to remain one of my fundamental values for quite some time to come.

I don't think you have any power to precommit

I'll admit that I'm still thinking on this one. Socially, precommitting is mainly useful as a deterrent, and I'm working out whether precommitting to work against anyone who modifies my mind without my consent, or any other variation of the tactic, would be worthwhile even assuming I /can/ follow through.

Comment author: DataPacRat 29 September 2016 01:36:40AM

I'm still working out various aspects, details, and suchlike, but so you can at least see what direction my thoughts are going (before I've hammered these into good enough shape to include in the revival-request doc), here are a few paragraphs I've been working on:

Sometimes, people will, with the best of intentions, perform acts that turn out to be morally reprehensible. As one historical example in my home country, under the stated justification of improving their lives, a number of First Nations children were sent to residential schools where the efforts to eliminate their culture ranged from corporal punishment for speaking the wrong language to instilling lessons that led the children to believe that Indians were worthless. While there is little I, as an individual, can do to make up for those actions, I can at least try to learn from them, and to reduce the odds of more tragedies being committed under the claim that "it was for their own good". To that end, I am going to attempt a strategy called "precommitment". Specifically, I am going to do two things: I am going to precommit to work against the interests of anyone who alters my mind without my consent, even if, after the alteration, I agree with it; and I am going to give my consent in advance to certain sharply-limited alterations, in much the way that a doctor can be given permission to do things to a body that would be criminal without that permission.

I value future states of the universe in which I am pursuing the things I value more than I value futures in which I pursue other things. I do not want my mind to be altered in ways that would change what I value, and the least hypocritical way to prevent that is to discourage all forms of non-consensual mind-alteration - and to agree that I, myself, should be subject to such discouragement if I were ever to attempt such an act. I have been able to think of only one moral justification for such acts: clear evidence that doing so will reduce the odds of all sapience going permanently extinct. But given how easily people are able to fool themselves, if non-consensually altering someone's mind truly is what is required for that, then accepting responsibility for doing so, including whatever punishments result, would be a small price to pay; so I am willing to accept such punishments even in this extreme case, in order to discourage the frivolous use of this justification.

While a rigid stance against non-consensual mind-alteration may be morally required in order to allow a peaceful society, there are also certain benefits to allowing consensual mind-alteration in certain cases. Most relevantly, it could be argued that scanning a brain and creating a software emulation of it counts as altering it, and it is obviously in my own self-interest to allow that as an option to help me be revived to resume pursuing my values. Thus, I am going to give my consent in advance to alterations of my mind that allow me to continue to exist, with the minimal amount of alteration possible, in two specific circumstances: 1) if such alterations are absolutely required to allow my mind to continue to exist at all, and 2) as part of my volunteering to be a subject for experimental mind-uploading procedures.

Comment author: Lumifer 29 September 2016 02:29:03PM

I am going to precommit to work against the interests of anyone who alters my mind without my consent

And how are you going to do this? Precommitment is not a promise; it's making it so that you are unable to choose in the future.

Comment author: DataPacRat 29 September 2016 03:27:37PM

Well, if you don't mind my tweaking your simple and absolute "unable" into something more like "unable, at least without suffering significant negative effects, such as a loss of wealth", then I am aware of this, yes. Precommitment on this scale is a big step, and I'm taking a bit of time to think the idea over, so that I can become reasonably confident that I want to precommit in the first place. If I do decide to, one of the simpler options would be to pre-authorize whatever third-party agents have been nominated to act in my interests and/or on my behalf to use some portion of edited-me's resources to fund the development of a version of me without the editing.

Comment author: Lumifer 29 September 2016 03:33:37PM

If you're unable to protect yourself from being edited, what makes you think your authorizations will have any force or that you will have any resources? And if you actually can "fund the development of a version of me without the editing", don't you just want to do it unconditionally?

Comment author: DataPacRat 29 September 2016 04:01:01PM

I think we're bumping up against some conflicting assumptions. At least at this stage of the drafting process, I'm focusing on scenarios where at least some of the population of the future has at least some reason to pay at least minimal attention to whatever requests I make in the letter. If things are so bad that someone will take my frozen brain and use it to create an edited version of my mind without my consent, and there isn't a neutral third party around with a duty to try to act in my best interests, then I'm reasonably confident that nothing I put in this request-doc will matter. So I might as well focus my writing on other futures, such as ones in which a neutral third-party advocate might be persuaded to set up a legal instrument funneling some portion of my edited-self's basic-guaranteed-income towards keeping a copy of the original brain-scan safely archived, until a non-edited version of myself can be created from it.