AlephNeil comments on "Ray Kurzweil and Uploading: Just Say No!", Nick Agar - Less Wrong

Post author: gwern 02 December 2011 09:42PM


Comment author: AlephNeil 02 December 2011 10:59:03PM 2 points

For any particular proposal for mind-uploading, there's probably a significant risk that it doesn't work, but I understand that to mean: there's a risk that what it produces isn't functionally equivalent to the person uploaded. Not "there's a risk that when God/Ripley is watching everyone's viewscreens from the control room, she sees that the uploaded person's thoughts are on a different screen from the original."

Comment author: gwern 02 December 2011 11:10:26PM 4 points

Of course there is such a risk. We can't even do formal mathematics without significant and ineradicable risk in the final proof; what on earth makes you think any anti-zombie or anti-Ripley proof is going to do any better? And in formal math, you don't usually have tons of experts disagreeing with the proof and final conclusion, either. If you think uploading is so certain that the risk of it being fundamentally incorrect is zero or epsilon, you have drunk the Kool-Aid.

Comment author: Nornagest 03 December 2011 12:50:00AM 3 points

I'd rate the chance that early upload techniques miss some necessary components of sapience as reasonably high, but that's a technical problem rather than a philosophical one. My confidence in uploading in principle, on the other hand, is roughly equivalent to my confidence in reductionism, which is to say pretty damn high, although not quite one or one minus epsilon. Specifically: for all possible upload techniques to generate a discontinuity in a way that, say, sleep doesn't, it seems to me that not only would minds need to involve some kind of irreducible secret sauce, but also that the sauce would need to be bound to its substrate in a non-transferable way, which would be rather surprising. Some kind of delicate QM nonsense might fit the bill, but that veers dangerously close to woo.

The most parsimonious explanation seems to be that, yes, it involves a discontinuity in consciousness, but so do all sorts of phenomena that we don't bother to note or even notice. Which is a somewhat disquieting thought, but one I'll have to live with.

Comment author: vi21maobk9vp 03 December 2011 09:07:23AM 0 points

Actually, http://lesswrong.com/lw/7ve/paper_draft_coalescing_minds_brain/ seems to discuss a way for uploading to be a non-destructive transition. We now know that the brain can learn to use implanted neurons under some very special conditions; so maybe you could first learn to use an artificial mind-holder (without a mind yet) as a minor supplement, and then learn to rely on it more and more, until the death of your original brain is just a flesh wound. Maybe not - but it does seem to be a technological problem.

Comment author: Nornagest 03 December 2011 06:09:19PM 1 point

Yeah, I was assuming a destructive upload for simplicity's sake. Processes similar to the one you outline don't generate an obvious discontinuity, so I imagine they'd seem less intuitively scary; still, a strong Searlean viewpoint probably wouldn't accept them.

Comment author: TheOtherDave 03 December 2011 01:37:35AM 1 point

This double-negative "if you really believe not-X then you're wrong" framing is a bit confusing, so I'll just ask.

Consider the set P of all processes that take a person X1 as input and produce X2 as output, where there's no known test that can distinguish X1 from X2. Consider three such processes:
P1 - A digital upload of X1 is created.
P2 - X1 is cryogenically suspended and subsequently restored.
P3 - X1 lives for a decade of normal life.

Call F(P) the probability that X2 is, in any sense that matters, not the same person as X1, or perhaps not a person at all.

Do you think F(P1) is more than epsilon different from F(P2)? Than F(P3)?
Do you think F(P2) is more than epsilon different from F(P3)?

For my part, I consider all three within epsilon of one another, given the premises.
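In symbols, this position amounts to the following claim, where epsilon stands for a negligible probability (a restatement for clarity; the pairwise notation is not from the thread itself):

$$\lvert F(P_i) - F(P_j) \rvert < \varepsilon \quad \text{for all } i, j \in \{1, 2, 3\}$$

That is, no pairwise comparison among the three processes differs by more than epsilon.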

Comment author: gwern 04 December 2011 09:21:37AM 0 points

Do you think F(P1) is more than epsilon different from F(P2)? Than F(P3)? Do you think F(P2) is more than epsilon different from F(P3)?

Erm, yes, to all three. The two transitions both involve failure modes which are initially plausible and which have not been driven down to epsilon (which is a very small quantity) by subsequent research.

For example, we still don't have great evidence that brain activity isn't dynamically dependent on electrical activity (among other things!) which is destroyed by death/cryonics. All we have are a few scatter-shot examples about hypothermia and the like, which is a level of proof I would barely deign to look at for supplements, much less claim is such great evidence that it drives the probability of error down to epsilon!

Comment author: TheOtherDave 04 December 2011 03:01:16PM 0 points

OK, thanks for clarifying.

Comment author: billswift 03 December 2011 06:19:56PM 0 points

Indeed, the line in the quote:

argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.

could apply equally well to crossing a street. There is very, very little we can do without some "ineliminable risk" attached to it.

We have to balance the risks and expected benefits of our actions, which requires knowledge, not philosophical "might-be"s.
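In expected-utility terms, the balancing rule gestured at here could be sketched as follows (a standard decision-theoretic formulation, borrowing TheOtherDave's F(P) from above rather than anything stated in this comment):

$$\text{undergo } P \iff \bigl(1 - F(P)\bigr)\, U(\text{success}) + F(P)\, U(\text{failure}) > U(\text{abstain})$$

On this reading, a nonzero F(P) by itself settles nothing; the decision turns on the actual magnitudes of the probability and the utilities.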

Comment author: gwern 03 December 2011 06:31:36PM 2 points

Yes, I agree, as do the quotes, and even Agar: because this is not Pascal's wager, where the infinities render the probabilities irrelevant, we ultimately need to fill in specific probabilities before we can decide that destructive uploading is a bad idea. This is where Agar goes terribly wrong - he presents poor arguments that the probabilities will be low enough to make it an obviously bad idea. But I don't think this point is relevant to this conversation thread.

Comment author: billswift 04 December 2011 12:53:08AM 0 points

It occurred to me when I was reading the original post, but I was inspired to post it here mostly as a me-too to your line:

We can't even do formal mathematics without significant and ineradicable risk in the final proof

That is, reinforcing that everything has some "ineradicable risk".