NancyLebovitz comments on Uploading: what about the carbon-based version? - Less Wrong

Post author: NancyLebovitz 23 July 2012 08:49AM



You are viewing a single comment's thread.

Comment author: JaneQ 24 July 2012 12:25:16PM * 3 points

Or what if the 'mountain people' are utterly microscopic mites on a tiny ball hurtling through space? Oh, wait, that's the reality.

Sidenote: I doubt mind uploads scale all the way up, and it seems quite likely that amoral mind uploads would be unable to get along with their own copies, so I am not very worried about the first upload having any sort of edge.

The first upload will probably be crippled and on the brink of insanity, suffering from hallucinations and otherwise broken thought (after massively difficult work just to get it to be conscious rather than go into a simulated seizure). From that you might progress to sane but stupefied uploads with a very significant IQ drop; get a whiff of xenon to see what a small alteration to the electrical properties of neurons amounts to. It will take a lot of gradual improvement before there are well-working uploads, and even then I am pretty sure that nearly anyone who tried to massively self-improve on their own would just screw themselves into insanity rather than achieve anything meaningful. A sane person shouldn't even attempt it without supervision: if an improvement makes things worse, the next improvement will be chosen by the worsened mind and make things worse still, so one needs external verification.

Comment author: NancyLebovitz 24 July 2012 05:46:45PM 2 points

Some of the basic problems will presumably be (partially?) solved with animal research before uploading is tried with humans.

One of the challenges of uploading would be capturing not just a static record of the mind, but also its ability to learn and heal.

Comment author: JaneQ 26 July 2012 01:59:32PM * 0 points

With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan of (most likely) a cadaver brain, legally no different from displaying that brain on a computer, and it doesn't require any FDA-style stringent functionality testing on animals. Not that such testing would help much for a brain that is much bigger, has different neuron sizes, and has failure modes that are highly non-obvious in animals. Nor is such regulation even necessary: a scanned upload of a dead person, functional enough to recognize his family, is a definite improvement over being completely dead, and preventing it would be like mercy-killing accident victims who have a good prospect of full recovery, to spare them the mere discomfort of being sick.

Gradual progress on humans is pretty much a certainty, once one drops the wide-eyed optimism bias. There are enough people who would bite the bullet, and legally it is not human experimentation, it is mere data processing; it might only become human experimentation decades after the first functional uploads.

Comment author: NancyLebovitz 26 July 2012 02:47:23PM 0 points

There's a consensus here that conscious computer programs have the same moral weight as people, so getting uploading moderately wrong in some directions is worse than getting it completely wrong.

Comment author: wedrifid 26 July 2012 03:26:38PM * 0 points

There's a consensus here that conscious computer programs have the same moral weight as people

No there isn't. I would have remembered something like that happening, what with all the disagreeing I would have been doing.

The mindspace of 'conscious computer programs' is unfathomably large, and most of those programs are morally worthless. A "some" and/or "could" inserted in there could make the 'consensus' correct.

Comment author: NancyLebovitz 26 July 2012 04:21:24PM 0 points

I may well have overgeneralized. I was basing the idea on remembering an essay saying that FAI should be designed to be non-sentient, and seeing concerns about how uploads and simulated people would be treated.

I suppose that moral concern would apply to any of the sentient programs that humans would be likely to create.

Comment author: wedrifid 27 July 2012 12:10:21AM 0 points

I may well have overgeneralized. I was basing the idea on remembering an essay saying that FAI should be designed to be non-sentient, and seeing concerns about how uploads and simulated people would be treated.

Yes, that 'have' vs 'can have' distinction changes everything, but most people are less picky with general claims than I am.

I suppose that moral concern would apply to any of the sentient programs that humans would be likely to create.

Food for thought. I wouldn't rule this out as a possibility, and certainly the proportion of 'morally relevant' programs in this group skyrockets compared to the broader class. I'm not too sure what we are likely to create. How probable is it that we succeed at creating sentience but fail at FAI?

Comment author: NancyLebovitz 27 July 2012 12:55:52AM 0 points

I think creating sentience is a much easier project than FAI, especially proven FAI. We've got plenty of examples of sentience.

Creating sentience which isn't much like the human model seems very difficult; I'm not even sure what that would mean, with the possible exception of basing something on cephalopods. OK, maybe beehives are sentient, too. How about cities?

Comment author: wedrifid 27 July 2012 01:48:34AM 0 points

I think creating sentience is a much easier project than FAI, especially proven FAI. We've got plenty of examples of sentience.

This is why I was hesitant to fully agree with your prediction that any sentient programs created by humans would have intrinsic moral weight. A sentient uFAI can have neutral or even negative moral weight (although this is a subjective value).

The main reason this outcome could be unlikely is that most of the ways a created GAI could fail to be an FAI would also obliterate the potential for sentience.

Comment author: NancyLebovitz 27 July 2012 02:48:19AM 0 points

I didn't think about the case of a sentient UFAI; I should think that self-defense would apply, though I suppose that self-defense becomes a complicated issue if you're a hard-core utilitarian.

Comment author: JaneQ 27 July 2012 11:44:07AM * -1 points

Legally, a mind upload is different from any other medical scan in mere quantity, and a simulation of a brain is only quantitatively different from any other processing, just as cryopreservation is only a form of burial.

Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren't writing a science fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.

edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or of any other not-completely-stupid mammal. Consent matters.

Comment author: NancyLebovitz 27 July 2012 02:52:58PM 0 points

This is an area that hasn't been addressed by the law, for the very good reason that it isn't close to being a problem yet. I don't know whether people outside LW have been looking at the ethical status of uploads.

I agree with you that there's no way to have uploading without making mistakes first, and possibly no way to have FAI without its having excellent simulations of people, so that it can estimate what to do.

That's a good point about consent.