
DataPacRat comments on Open thread, Sep. 19 - Sep. 25, 2016 - Less Wrong Discussion

2 Post author: DataPacRat 19 September 2016 06:34PM


Comment author: DataPacRat 19 September 2016 06:35:24PM 10 points [-]

As a cryonicist, I'm drafting out a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

Comment author: turchin 19 September 2016 11:23:14PM 5 points [-]

These two lines seem contradictory to me. It is not clear whether I should upload you or preserve your brain.

  • I don't understand how the cells of the brain produce qualia and consciousness, and have a certain concern that an attempt at uploading my mind into digital form may lose important parts of my self. If you haven't solved those fundamental problems of how brains produce minds, I would prefer to be revived as a biological, living being, rather than have my mind uploaded into software form.

  • I understand that all choices contain risk. However, I believe that the "information" theory of identity is a more useful guide than theories of identity which tie selfhood to a physical brain. I also suspect that there will be certain advantages to be one of the first minds turned into software, and certain disadvantages. In order to try to gain those advantages, and minimize those disadvantages, I am willing to volunteer to let my cryonically-preserved brain be used for experimental mind-uploading procedures, provided that certain preconditions are met, including:

Comment author: DataPacRat 19 September 2016 11:55:42PM 1 point [-]

The intended meaning, which it seems I will need to rephrase to clarify: "If you are experimenting with uploading, and can meet these minimal common-sense standards, then I'm willing to volunteer ahead of time to be your guinea pig. If you can't meet them, then I'd rather stay frozen a little longer. Just FYI."

Comment author: WhySpace 21 September 2016 04:32:05AM *  1 point [-]

This is potentially quite important.

MIRI, OpenAI, FHI, etc. are focusing largely on artificial paths to superintelligence, since those lead to the value loading problem. While this is likely the biggest concern, in terms of expected utility, neuron-level simulations of minds may provide another route. This might actually be where the bulk of the probability of superintelligence resides, even if the bulk of the expected utility lies in preventing things like paperclip maximizers.

Robin Hanson has some persuasive arguments that uploading may actually occur years before artificial intelligence becomes possible. (See Age of Em.) If this is the case, then it may be highly valuable to have the first uploads be very familiar with the risks of the alignment problem. This could prevent two paths to misaligned AI:

  1. Uploads running at faster subjective speeds greatly accelerating the advent of true AI, by developing it themselves. Imagine a thousand copies of the smartest AI researcher running at 1000x human speed, collaborating with him or herself on the first AI.

  2. The uploads themselves are likely to be significantly modifiable. Since it would always be possible to be reset to backup, it becomes much easier to experiment with someone's mind. Even if we start out only knowing how neurons are connected, but not much about how they function, we may quickly develop the ability to massively modify our own minds. If we mess with our utility functions, whether intentionally or unintentionally, this starts to raise concerns like AI alignment and value drift.

The obvious solution is to hand Bostrom's Superintelligence out like candy to cryonicists. Maybe even get Alcor to try and revive FAI researchers first. However, given a first-in-last-out policy, this may not be as important for us as for future generations. We obviously have a lot of time to sort this out, so this is likely a low priority this decade/century.

Comment author: DataPacRat 20 September 2016 05:28:36PM 3 points [-]
Comment author: DataPacRat 21 September 2016 04:10:03PM 1 point [-]

Today's version: https://www.datapacrat.com/temp/Cryo Revival Preferences - draft 0.1.3.txt

The change: Added new paragraph:

There is no such thing as being able to have 100% certainty that a piece of software is without flaws or errors. One of the few methods for detecting a large proportion of any program's bugs is to allow many people, with all their varied perspectives and skills, to examine it, by proclaiming that the program is free and open source and releasing both the source code and binaries for inspection. Without that strategy, not only are bugs much more likely to remain, but when someone does manage to find a bug, it is likely to remain secret and uncorrected. Such uncorrected bugs can be used by unscrupulous people to do just about anything to any data stored on a computer. This is bad enough when that data is merely personal email, or even a bank's financial records; when the data is a sapient mind, the possibilities are horrifying. Given the possible downsides, I find it difficult to trust the motives of anyone who wishes to run an uploaded mind on a computer that uses closed-source software. Therefore, if there is a choice between uploading my mind using uninspectable, closed-source software, and not being revived, I would choose not to be uploaded in that fashion, even if doing so increases the risk of never being revived at all. If the only option is uploading my mind using closed-source software that the uploaded mind can nonetheless inspect, and that inspection includes all the documentation necessary for the uploaded mind to learn how to understand the software, then I would reluctantly agree to the uploading procedure as being preferable to risking never being revived at all.

Comment author: Lumifer 21 September 2016 04:38:04PM 1 point [-]

One of the few methods for detecting a large proportion of any program's bugs is to allow many people, with all their varied perspectives and skills, to examine it, by proclaiming that the program is free and open source and releasing both the source code and binaries for inspection.

That's a claim often made ("With enough eyes all bugs are shallow") but it's not so clear-cut in practice. In real life a lot of open-source projects are very buggy and remain very buggy (and open to 'sploits) for a very long time. At the same time there is closed-source software which is considerably more bug-free (but very expensive) -- e.g. the code in fly-by-wire airplanes.

Besides, physical control, generally speaking, trumps all. If your mind is running on top of, say, open-source Ubuntu 179.5 Zooming Zazzle but I have access to your computing substrate, that is, the physical machine which runs the code, the fact that the machine runs an open-source OS is quite irrelevant. You're looking for impossible guarantees.

And remember that you are not making choices, but requests. You can't "trust the motives" or not -- if someone revives you with malicious intent, he can ignore your requests easily enough.

Comment author: DataPacRat 21 September 2016 04:46:44PM 0 points [-]

a lot of open-source projects are very buggy and remain very buggy

Yep.

there is closed-source software which is considerably more bug-free

Yep.

You're looking for impossible guarantees.

I'm not looking for guarantees at all. (Put another way, I'm well aware that 0 and 1 are not probabilities.) What I am doing is trying to gauge the odds; and given my own real-world experience, open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software, to the extent that I'm willing to make an important choice based on whether or not a piece of software is open-source.

And remember that you are not making choices, but requests.

True, as far as it goes. However, this document I'm writing is also something of a letter to anyone who is considering reviving me, and given how history goes, they are very likely going to have to take into account factors that I currently can't even conceive of. Thus, I'm writing this doc in a fashion that not only lists my specific requests in regards to particular items, but also describes the reasoning behind the requests, so that the prospective reviver has a better chance of being able to extrapolate what my preferences about the unknown factors would likely be.

if someone revives you with malicious intent

If someone revives me with malicious intent, then all bets are off, and this document will nigh-certainly do me no good at all. So I'm focusing my attention on scenarios involving at least some measure of non-malicious intent.

Comment author: Lumifer 21 September 2016 08:47:26PM 0 points [-]

open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software

On the basis of this "tends" you make a rather drastic request to NOT revive you if you'll be running on top of some closed-source layer.

Not to mention that you're assuming that "open-source" and "closed-source" concepts will still make sense in that high-tech future. As an example, let's say I give you a trained neural net. It's entirely open source, you can examine all the nodes, all the weights, all the code, everything. But I won't tell you how I trained that NN. Are you going to trust it?

Comment author: DataPacRat 21 September 2016 11:20:51PM 0 points [-]

On the basis of this "tends" you make a rather drastic request to NOT revive you if you'll be running on top of some closed-source layer.

That's true. But given the various reasonably-possible scenarios I can think of, making this extreme of a request seems to be the only way to express the strength of my concern. I'll admit it's not a common worry; of course, this isn't a common sort of document.

(If you want to know more about what leads me to this conclusion, you could do worse than to Google one of Cory Doctorow's talks or essays on 'the war on general-purpose computation'.)

As an example

You provide insufficient data about your scenario for me to make a decent reply. Which is why I included the general reasoning process leading to my requests about open- and closed-source -- and in the latest version of the doc, have mentioned that part of the reason for going into that detail is to let revivalists have some data from which to extrapolate what my choices would be in unknown scenarios. (In this particular case, the whole point of differentiating between open- and closed-source software is the factor of /trust/ -- and in your scenario, you don't give any information on how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted.)

Comment author: Lumifer 22 September 2016 02:44:40PM 0 points [-]

I am well aware of the war on general computation, but I fail to see how it's relevant here. If you are saying you don't want to be alive in a world where this war has been lost, that's... a rather strong statement.

To make an analogy, we're slowly losing the ability to fix, modify, and, ultimately, control our own cars. I think that is highly unfortunate, but I'm unlikely to declare a full boycott of cars and go back to horses and buggy whips.

Since you're basically talking about security, you might find it useful to start by specifying a threat model.

how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted

What do you mean by "such NNs"? Neural nets are basically general-purpose models and your question is similar to asking how trustworthy computers have been at performing their intended functions properly -- it's too general for a meaningful answer.

In any case, the point is that the preference for open-source relies on it being useful, that is, the ability to gain helpful information from examining the code, and the ability to modify it to change its behaviour. You can examine a sufficiently complex trained NN all you want, but the information you'll gain from this examination is very limited and your ability to modify it is practically non-existent. It is effectively a black box even if you can peer at all the individual components and their interconnects.
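The black-box point above can be illustrated with a toy network. In the sketch below, every weight is fully open to inspection, yet what the function computes is not evident from reading the numbers; you only learn it by probing inputs. (The weights are a hand-constructed illustration, not a trained model from the thread.)

```python
# A tiny hand-built "neural net": all weights are visible, but its
# behaviour is opaque until you probe it with inputs.
# (Weights are a hand-constructed example, not a real trained NN.)

def step(x):
    return 1 if x > 0 else 0

# Hidden layer: two units; output layer: one unit.
W_hidden = [[1, 1], [-1, -1]]   # weights into each hidden unit
b_hidden = [-0.5, 1.5]          # hidden-unit biases
W_out = [1, 1]                  # weights into the output unit
b_out = -1.5

def net(x1, x2):
    h = [step(W_hidden[i][0] * x1 + W_hidden[i][1] * x2 + b_hidden[i])
         for i in range(2)]
    return step(W_out[0] * h[0] + W_out[1] * h[1] + b_out)

# Only by exhaustively probing inputs do we discover it computes XOR:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", net(a, b))
```

Even for this four-parameter toy, the function (XOR) is not obvious from the weight table alone; a net with billions of weights only makes the gap between "inspectable" and "understandable" wider.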

Comment author: DataPacRat 22 September 2016 06:37:52PM 0 points [-]

Since you're basically talking about security, you might find it useful to start by specifying a threat model.

I thought I had; it's the part around the word 'horrifying'.

What do you mean by "such NNs"? Neural nets are basically general-purpose models and your question is similar to asking how trustworthy computers have been at performing their intended functions properly -- it's too general for a meaningful answer.

We actually already have a lot of the fundamental software required to run an "emulate brain X" program - stuff that accesses hardware, shuffles swap space around, arranges memory addresses, connects to networking, models a virtual landscape and avatars within, and so on. Some scientists have done extremely primitive emulations of neurons or neural clusters, so we've got at least an idea of what software is likely to need to be scaled up to run a full-blown human mind. None of this software has any particular need for neural-nets. I don't know how such NNs as you propose would be necessary to emulate a brain; I don't know what service they would add, how fundamental they would be, what sort of training data would be used, and so on.

Put another way, as best as I can interpret your question, it's like saying "And what if future cars required an algae system?", without even saying whether the algae tubing is connected to the fuel, or the exhaust, or the radiator, or the air conditioner. You're right that NNs are general-purpose; that is, in fact, the issue I was trying to raise.

You can examine a sufficiently complex trained NN all you want, but the information you'll gain from this examination is very limited and your ability to modify it is practically non-existent. It is effectively a black box even if you can peer at all the individual components and their interconnects.

Alright. In this model, the training data is unavailable, the existing NN can't be retrained or otherwise modified, and there is no mention of being able to train up a replacement NN with different behaviours. That matches the relevant aspects of "closed-source" software much more closely than "open-source": if a hostile exploiter finds a way to, say, leverage increased access and control of the computer through the NN, there is little-to-no chance of detecting or correcting the aspects of the NN's behaviour which allow that. I'll spend some time today seeing if I can rework the relevant paragraphs so that this conclusion can be more easily derived.

Comment author: Lumifer 23 September 2016 03:00:45PM 0 points [-]

the part around the word 'horrifying'

That's not a threat model. A threat model is basically a list of adversaries and their capabilities. Typically, defensive measures help against some of them, but not all of them -- a threat model helps you figure out the right trade-offs and estimate who you are (more or less) protected from, and who you are vulnerable to.
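A threat model in this sense can be sketched as a plain data structure. The adversaries, capabilities, and mitigations below are invented illustrations for the uploaded-mind case, not anything specified in the thread; the point is only the shape of the exercise: list who can attack, what they can do, and which defences (if any) apply.

```python
# A minimal threat-model sketch: a list of adversaries with their
# capabilities and the defences that apply to each.
# All entries are illustrative assumptions, not claims from the thread.
threat_model = [
    {
        "adversary": "remote attacker",
        "capabilities": ["network access", "exploits unpatched bugs"],
        "mitigated_by": ["open-source audit", "sandboxing"],
    },
    {
        "adversary": "hardware owner/operator",
        "capabilities": ["full physical access", "can read/modify substrate"],
        "mitigated_by": [],  # physical control trumps software measures
    },
]

def unmitigated(model):
    """Return adversaries against whom no listed defence applies."""
    return [t["adversary"] for t in model if not t["mitigated_by"]]

print(unmitigated(threat_model))  # -> ['hardware owner/operator']
```

Worked through this way, the exercise makes the trade-off explicit: an open-source requirement addresses the first adversary but leaves the second untouched, which is Lumifer's point about physical control.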

stuff that accesses hardware, shuffles swap space around, arranges memory addresses

That stuff usually goes by the name of "operating system". Why do you think that brain emulations will run on top of something that's closely related to contemporary operating systems?

a hostile exploiter

You seem to worry a lot about your brain emulation being hacked from the outside, but you don't worry as much about what the rightful owner of the hardware and the software on top of which your em lives might do?

Comment author: moridinamael 20 September 2016 02:15:02PM *  3 points [-]

If I were going to make such a document, I would make it minimally restrictive. I would rather be brought back even in less-than-ideal circumstances, so that I could observe how the world had developed, and then decide whether I wanted to stay. At least then I would have a me-like agent operating on my own behalf.

If they bring me back as a qualia-less em, then at least there's a chance that the em will be able to say, "Hey, this is cool and everything, but this isn't actually what my predecessor wanted. So even though I don't have qualia, I'll make it my personal mission to try to bring myself back with qualia." Precommitting to such an attitude now while you're alive boosts the odds of this. At worst, if it turns out to be impossible to revive the "observer", there's a thing-like-you running around in the future spreading your values, even if it doesn't have your consciousness, and I can't see that as a bad thing.

Comment author: Houshalter 21 September 2016 08:05:36PM 0 points [-]

Well, what if suicide is illegal in the future? And even if it isn't, suicide is really hard to go through with. Many people state a preference not to be revived with brain damage, yet people with brain damage do not commonly kill themselves.

Comment author: Dagon 22 September 2016 04:23:47PM 2 points [-]

I see this combination of expressed preference and actions (would prefer not to live with brain damage, but then actually choosing to live with brain damage) as a failure of imagination and incorrect far-mode statements, NOT as an indication that the prior statement was true but was thwarted by some outside force.

Future-me instances have massively more information about what they're experiencing in the future than present-me has now. It's ludicrous for present-me to try to constrain future-me's decisions, and even more so to try to identify situations where present-me's wishes will be honored but future-me's decisions won't.

You can prevent adverse revival by cremation or burial (in which case you also prevent felicitous revival). If an evil regime wants you, any contract language is useless. If an individual-respecting regime considers your revival, future you would prefer to be revived and asked rather than being held to a past-you document that cannot predict the details of the current situation very well.

Comment author: Lumifer 21 September 2016 08:50:33PM 1 point [-]

Well what if suicide is illegal in the future?

More to the point, what if suicide is impossible? It's not hard at all to prevent an em from committing suicide and, of course, if you have copies and backups, he can suicide all he wants...

Comment author: ChristianKl 20 September 2016 10:25:35AM 2 points [-]

You don't seem to describe what you would consider as a revived copy of you. How much of your personality has to stay intact?

Comment author: turchin 19 September 2016 11:28:26PM 2 points [-]

I would add lines about whether you would prefer to be revived together with your friends and family members, before them, or after.

Maybe I would add a secret question to check whether you were restored properly.

I would also add all my digital immortality back-up information, which could be used to fill gaps in case some information is lost.

I also expect that revival may happen 20-30 years after my death, so I should add some kind of will about how to manage my property during my absence.

Comment author: DataPacRat 20 September 2016 12:02:45AM 0 points [-]

I'm afraid that none of my friends or family are interested in cryo.

I already created one recognition protocol, but it's more for multiple copies of myself meeting. I suppose it would be easy enough to include an MD5 hash of a keyphrase in this doc.

I already have provisions in place for my other data, which will end up in that "perpetual storage drawer" I mentioned.

Preserving assets while I'm dead is an entirely different kettle of fish, and assumes that I will have any worth preserving, which, given my financial situation, I don't expect to be the case.

Comment author: ChristianKl 20 September 2016 10:26:30AM 1 point [-]

I suppose it would be easy enough to include an MD5 hash of a keyphrase in this doc.

I think MD5 hashes are likely broken by the time of any resurrection. MD5 already has collision problems today.
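The recognition-protocol idea survives the MD5 objection if the document commits to a stronger hash. The sketch below uses SHA-256 (which, unlike MD5, has no known collision attacks today, though any specific hash could of course fall by revival time); the keyphrase shown is a placeholder, not anyone's actual phrase.

```python
import hashlib

# A keyphrase known only to the original person; only the digest is
# published in the revival document, so the phrase stays secret.
keyphrase = "correct horse battery staple"  # hypothetical placeholder

# Commit to the phrase with SHA-256 rather than collision-prone MD5.
digest = hashlib.sha256(keyphrase.encode("utf-8")).hexdigest()
print(digest)  # 64 hex characters to store alongside the paperwork

def verify(candidate: str, published_digest: str) -> bool:
    """Check a claimed copy's keyphrase against the published digest."""
    return hashlib.sha256(candidate.encode("utf-8")).hexdigest() == published_digest

print(verify(keyphrase, digest))       # True
print(verify("wrong phrase", digest))  # False
```

At revival time, the claimed copy reproduces the phrase and anyone holding the document can verify it, without the phrase itself ever appearing in storage.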

Comment author: scarcegreengrass 19 September 2016 09:22:51PM 2 points [-]

Interesting idea! I guess you could add a 'when in doubt' for whether you'd rather be revived in an early period (e.g., if resurrection is possible with an 80% success rate) or be downprioritized until resurrection is very mature and safe.

Comment author: DataPacRat 20 September 2016 12:04:17AM 1 point [-]

It shouldn't be too hard to add some quantitative numbers, or at least which numbers I'd like potential revivers to consider.

Comment author: pcm 22 September 2016 07:23:33PM 1 point [-]

My equivalent of this document focused more on the risks of unreasonable delays in uploading me. Cryonics organizations have been designed to focus on preservation, which seems likely to bias them toward indefinite delays. This might be especially undesirable in an "Age of Em" scenario.

Instead of your request for a "neutral third-party", I listed several specific people, who I know are comfortable with the idea of uploading, as people whose approval would be evidence that the technology is adequate to upload me. I'm unclear on how hard it would be to find a genuinely neutral third party.

My document is 20 years old now, and I don't have a copy handy. I suppose I should update it soon.

Comment author: siIver 20 September 2016 03:49:27PM 1 point [-]

Great idea. I will probably do a similar thing myself at some point, and it will probably look similar to yours.

The only thing I see that might be missing is advice for a scenario in which the odds of revival go down with time, creating pressure to revive you sooner rather than later. In that case your wishes may contradict each other (since later revival could still increase the odds of living indefinitely). That seems far-fetched but not entirely impossible.

Other than that, I'd say be more specific to avoid any possible misinterpretation. You never know how much bureaucracy will be involved in the process when it finally happens.