Comment author: woodchopper 25 April 2016 10:48:07AM 0 points [-]

I think consciousness arises from physical processes (as Dennett says), but that doesn't really solve the problem or prove it doesn't exist.

Anyway, I think you are right that if you think being mind-uploaded does or does not constitute continuing your personal identity, it's hard to say you are wrong. However, what if I don't actually know whether it does, yet I want to be immortal? Then we have to study that to figure out which of the things we can do keep the real 'us' existing and which don't.

What if the persistence of personal identity is a meaningless pursuit?

Comment author: qmotus 25 April 2016 11:27:36AM *  0 points [-]

Let's suppose that the contents of a brain are uploaded to a computer, or that a person is anesthetized and a single atom in their brain is replaced. What exactly would it mean to say that personal identity doesn't persist in such situations?

Comment author: woodchopper 25 April 2016 07:35:49AM 0 points [-]

If there's no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of 'you' is not actually 'you', would seeking immortality mean we can't upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around?

If we found out that there's a new 'you' every time you go to sleep and wake up, wouldn't it make sense to abandon the quest for immortality as we already die every night?

(Note, I don't actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)

Comment author: qmotus 25 April 2016 10:18:41AM 0 points [-]

If there's no objective right answer, you can just decide for yourself. If you want immortality and decide that a simulation of 'you' is not actually 'you', I guess you ('you'?) will indeed need to find a way to extend your biological life. If you're happy with just the simulation existing, then maybe brain uploading or FAI is the way to go. But we're not going to "find out" the right answer to those questions if there is no right answer.

But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.

Are you talking about the hard problem of consciousness? I'm mostly with Daniel Dennett here and think that the hard problem probably doesn't actually exist (but I wouldn't say that I'm absolutely certain about this), but if you think that the hard problem needs to be solved, then I guess this identity business also becomes somewhat more problematic.

Comment author: woodchopper 24 April 2016 05:31:58PM 0 points [-]

The thing is, I'm just not sure it's even reasonable to talk about 'immortality', because I don't know what it means for one personal identity ('soul') to persist. I couldn't be sure that if a computer simulated my mind, it would be 'me', for example. Immortality will likely involve serious changes to the physical form our mind takes, and once you start talking about that you get into the realm of thought experiments like the idea that if you put someone under a general anaesthetic, take out one atom from their brain, then wake them up, you have a similar person but not the one who originally went under the anaesthetic. So from the perspective of the original person, undergoing the operation was pointless, because they are dead anyway. The person who wakes from the operation is someone else entirely.

I guess I'm just trying to say that immortality makes heaps of sense if we can somehow solve the question of personal identity. But if we cannot say what it takes for a single 'soul' to persist over time, then the very concept of 'immortality' may be ill-defined and pretty nonsensical to talk about.

I liked your post about the heat death of the universe; if you ever figure anything out regarding the persistence of personal identity, I'd like you to message me or something.

Comment author: qmotus 24 April 2016 06:15:42PM 0 points [-]

Isn't it purely a matter of definition? You can say that a version of you that differs by one atom is you or that it isn't; or that a simulation of you either is or isn't you; but there's no objective right answer. It is worth noting, though, that if you don't tell the different-by-one-atom version, or the simulated version, about the fact, they would probably never question being you.

Comment author: James_Miller 15 April 2016 03:42:44AM 0 points [-]

Doesn't anthropics strongly push us to figure that the universe is infinite?

Comment author: qmotus 15 April 2016 06:55:37AM *  1 point [-]

I suppose so, and that's where the problems for consequentialism arise.

Comment author: RowanE 11 April 2016 01:22:53PM 2 points [-]

I'll come in to say yes, I agree these problems are confusing, although my ethics are weird and I'm only kind of a consequentialist.

(I identify as amoral, in practice what it means is I act like an egoist but give consequentialist answers to ethical questions)

Comment author: qmotus 13 April 2016 06:04:42PM 0 points [-]

What I've noticed is that this has caused me to slide towards prioritizing issues that affect me personally (meaning that I care somewhat more about climate change and less about animal rights than I have previously done).

Comment author: qmotus 11 April 2016 12:11:33PM *  2 points [-]

Past surveys show that most LessWrongers are consequentialists, and many are also effective altruism advocates. What do they think of infinities in ethics?

As I've intuitively always favoured some kind of negative utilitarianism, this has caused me some confusion.

Comment author: Lumifer 05 April 2016 03:50:49PM 1 point [-]

Unfortunately, we can't.

I am sure we can. Peak oil said we'd run out of oil Real Soon Now, full stop. The cost of oil has been rising since the early 20th century; that, as you point out, is not what peak oil was all about.

those rebuilding the civilization from scratch today

Again, we have a confusion of technology and scale. The average cost of oil extraction is higher than it used to be. But that cost varies considerably. If you are trying to rebuild, you don't need much oil, so you only use the cheapest oilfields (e.g. the Saudi ones) and don't try to pave over the North Sea with oil rigs or set them up all over the Arctic.

Comment author: qmotus 05 April 2016 04:13:02PM 1 point [-]

Peak oil said we'd run out of oil Real Soon Now, full stop

Peak oil refers to the moment when the production of oil has reached a maximum, after which it declines. It doesn't say that we'll run out of oil soon, just that production will decline. If consumption increases at the same time, that will lead to scarcity.

If you are trying to rebuild you don't need much oil

Well, that probably depends on how much damage has been done. If civilization literally had to be rebuilt from scratch, I'd wager that a very significant portion of that cheap oil would have to be used.

In response to comment by gjm on Lesswrong 2016 Survey
Comment author: Lumifer 05 April 2016 03:03:30PM -1 points [-]

Indefinitely, in the scenario I described -- we'd have lost the technology necessary to rebuild the technology.

We built it from scratch to start with.

abundant energy

I think you're confusing technology and scale. Besides, can we now finally admit peak oil was wrong?

Comment author: qmotus 05 April 2016 03:31:36PM *  1 point [-]

Besides, can we now finally admit peak oil was wrong?

Unfortunately, we can't. While we're not going to run out of oil soon (in fact, we should stop burning it for climate reasons long before we do; also, peak oil is not about oil depletion), we are running out of cheap oil. The EROEI of oil has fallen significantly since we started extracting it on a large scale.

This is highly relevant for what is discussed here. In the early 20th century, we could produce around 100 units of energy from oil for every unit of energy we used to extract it; those rebuilding the civilization from scratch today or in the future would have to make do with far less.
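The EROEI arithmetic here can be made concrete with a small sketch (the specific numbers below are illustrative assumptions, not sourced data):

```python
# Illustrative EROEI (energy returned on energy invested) arithmetic.
def net_energy(gross, eroei):
    """Energy delivered to society after subtracting the extraction cost."""
    return gross - gross / eroei

# At an EROEI around 100 (early 20th century), 100 units extracted
# cost only 1 unit to get, leaving 99 units of surplus.
print(net_energy(100, 100))  # 99.0

# At a hypothetical EROEI of 5, the same 100 gross units leave far less.
print(net_energy(100, 5))  # 80.0

# Gross extraction needed to net the same 99 units at EROEI 5:
gross_needed = 99 / (1 - 1 / 5)
print(round(gross_needed, 2))  # 123.75
```

The point of the sketch: as EROEI falls, an ever larger share of extraction goes back into extraction itself, which is why rebuilders starting with low-EROEI resources would have much less usable surplus energy.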

Comment author: gjm 03 April 2016 10:37:57PM 1 point [-]

I think your last paragraph is the key point here. Forget about QI; MWI says some small fraction of your future measure will be alive very far into the future (for ever? depends on difficult cosmological questions); even objective-collapse theories say that this holds with nonzero but very small probability (which I suggest you should feel exactly the same way about); every theory, quantum or otherwise, says that at no point will you experience being dead-and-unable-to-experience things; all QI seems to me to add to this is a certain attitude.

Comment author: qmotus 05 April 2016 10:11:49AM 0 points [-]

Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example).

Objective-collapse theories in a spatially or temporally infinite universe or with eternal inflation etc. actually say that it holds with nonzero but very small probability, but essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I think what you are saying can be taken to mean something like "if the world was like the normal intuitions of most people say it is like", in which case I still think there's a world of difference between very small probability and very small measure.

I'm not entirely convinced by the usual EY/LW argument that utilitarianism can be salvaged in an MWI setting by caring about measure, but I can understand it and find it reasonable. But when this is translated to a first-person view, I find it difficult. The reason I believe that the Sun will rise tomorrow morning is not because my past observations indicate that it will happen in a majority of "branches" ("branches" or "worlds" of course not being a real thing, but a convenient shorthand), but because it seems like the most likely thing for me to experience, given past experiences. But if I'm in a submarine with turchin and x-risk is about to be realized, I don't get how I could "expect" that I will most likely blow up or be turned into a pile of paperclips like everyone else, while I will certainly (and only) experience it not happening. If QI is an attitude, and a bad one too, I don't understand how to adopt any other attitude.

Actually, I think there are at least a couple of variations of this attitude: the first one that people take upon first hearing of the idea and giving it some credibility is basically "so I'm immortal, yay; now I could play quantum russian roulette and make myself rich"; the second one, after thinking about it a bit more, is much more pessimistic; there are probably others, but I suppose you could say that underneath there is this core idea that somehow it makes sense to say "I'm alive" if even a very small fraction of my original measure still exists.

Comment author: gjm 29 March 2016 03:00:01PM 0 points [-]

So it is mostly used as a universal objection to any strange things.

Well, for the avoidance of doubt, I do not endorse any such use and I hope I haven't fallen into such sloppiness myself.

Your interpretation of Egan's law is that everything useful should already be used by evolution.

No, I didn't intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution's goals, which of course need not be ours) then that isn't generally invalidated by new discoveries about why the world is the way that's made those things evolutionarily fruitful.

(Of course it could be, given the "right" discoveries. Suppose it turns out that something about humans having sex accelerates some currently unknown process that will in a few hundred years make the earth explode. Then the urge to have sex that evolution has implanted in most people would be evolutionarily suboptimal in the long run and we might do better to use artificial insemination until we figure out how to stop the earth-exploding process.)

In case of QI it has some similarities to anthropic principle, by the way

You could have deduced that I'd noticed that, from the fact that I wrote

what I'm claiming is that those things aren't invalidated by saying words like "anthropic" or "quantum".

but no matter.

You also suggest to use Egan's law as normative: don't do strange risky things.

I didn't intend to say or imply that, either, and this one I don't see how you got out of what I wrote. I apologize if I was very unclear. But I might endorse as a version of Egan's law something like "If something is a terrible risk, discovering new scientific underpinnings for things doesn't stop it being a terrible risk unless the new discoveries actually change either the probabilities or the consequences". Whether that applies in the present case is, I take it, one of the points under dispute.

so my best strategy should not be normal

I take it you mean might not be; it could turn out that even in this rather unusual situation "normal" is the best you can do.

even if QI doesn't work

I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics. (QI seems to me to mean: standard physics, plus a decision to condition probabilities on future rather than present epistemic state. The first bit is unproblematic; the second bit -- which is what you need to say e.g. "I will survive" -- seems to me like a decision rather than a proposition, and I don't know what it would mean to say that it does or doesn't work.)

cryonics

I'm not really seeing any connection to speak of between cryonics and QI. (Except for this. Suppose you reckon that cryonics has a 5% chance of working on other people, but QI considerations lead you to say that for you it will almost certainly work. No, sorry, I see you give QI a 10% chance of working; so I mean that for you it will work with probability more like 10%. Does that mean that you'd be prepared to pay about twice as much for cryonics as you would without bringing QI into it? Given the presumably regrettable costs for whatever influence you might have hoped to have post mortem using the money: children, charities, etc.)

Comment author: qmotus 03 April 2016 09:13:20PM 0 points [-]

I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics.

Turchin may have something else in mind, but personally (since I've also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to death will be their final ones (and death will inevitably come). If "QI works", this will not be true: there will never be final experiences, but instead there will be an eternal (or perhaps almost eternal) chain of experiences and thus no final death, from a first-person point of view.

Of course, it could be that if you've accepted MWI and the basic idea of multiple future selves implied by it then this is not very radical, but it sounds like a pretty radical departure from our usual way of thinking to me.
