Comment author: NancyLebovitz 01 August 2010 02:13:33PM 12 points [-]

Letting Go by Atul Gawande is a description of typical end of life care in the US, and how it can and should be done better.

Typical care defaults to taking drastic measures to extend life, even if the odds of success are low and the process is painful.

Hospice care, which focuses on quality of life, not only results in more comfort, but also either no loss of lifespan or a somewhat longer life, depending on the disease. And it's a lot cheaper.

The article also describes the long, careful process needed to find out what people really want at the end of their lives -- in particular, what their bottom line is for wanting to go on living.

This is of interest for Less Wrong, not just because Gawande is a solidly rationalist writer, but because a lot of the utilitarian talk here goes in the direction of restraining empathic impulses.

Here we have a case where empathy leads to big utilitarian wins, and where treating people as having a unified consciousness, if you give it a chance to operate, works out well.

As good as hospices sound, I'm concerned that if they get a better reputation, less competent organizations calling themselves hospices will spring up.

From a utilitarian angle, I wonder whether those drastic treatments sometimes lead to effective methods, and if so, whether the same information could be gained more humanely.

Comment author: daedalus2u 01 August 2010 03:49:36PM 3 points [-]

The framing of the end of life issue as a gain or a loss as in the monkey token exchange probably makes a gigantic difference in the choices made.

http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2cnn?c=1

When you feel you are in a desperate situation, you will do desperate things and clutch at straws, even when you know those choices are irrational. I think this is the mindset that quacks exploit with CAM, as in the Gonzalez Protocol for pancreatic cancer.

http://www.sciencebasedmedicine.org/?p=1545

It is actually worse than doing nothing, and worse than doing what mainstream medicine recommends, but because there is the promise of complete recovery (even if it is a false promise), that is what people choose, based on their irrational aversion to risk.

Comment author: mattnewport 30 July 2010 12:43:34AM *  0 points [-]

I don't see why the TM issue is essential to your confusion. If you are not a dualist then the fact that two human brains differ only in the precise arrangement of the same types of atoms present in very similar numbers and proportions raises the same questions.

Comment author: daedalus2u 30 July 2010 02:49:21AM 0 points [-]

I am not a dualist. I used the TM to avoid issues of quantum mechanics. TM equivalence is not compatible with a dualist view either.

Only a part of what the brain does is conscious. The visual cortex isn't conscious. The processing of signals from the retina is not under conscious control. That is why optical illusions work: the signal processing happens a certain way, and that way cannot be changed even when you consciously know that what you are seeing is counterfactual.

There are many aspects of brain information processing that are like this. Sound processing is one: sounds are decoded and pattern-matched to communication symbols.

Since we know that the entity instantiating itself in our brain is not identical with the entity that was there a day ago, a week ago, a year ago, and will not be identical to the entity that will be there next year, why do we perceive there to be continuity of consciousness?

Is that illusion of continuity the same kind of thing as the way the visual cortex fills in the retina's blind spot? Is it the same as pareidolia?

I suspect that the question of consciousness isn't so much why we experience consciousness, but why we experience a continuity of consciousness when we know there is no continuity.

Comment author: mattnewport 29 July 2010 10:56:12PM 3 points [-]

Turing Machines are purely classical entities. They are all equivalent, except for the data fed into them. If humans can be represented by a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?

It's no more wrong than saying that all books are identical except for the differing number and arrangement of letters. It's also no more useful.

Comment author: daedalus2u 30 July 2010 12:33:38AM 0 points [-]

Except that a human entity is a dynamic object, unlike a static object such as a book. Books are not considered to be “alive” or “self-aware”.

If two humans can both be represented by a TM with different tapes, then one human can be turned into another human by feeding one tape in backwards and then feeding the other tape in forwards. If one human can be turned into another by a purely mechanical process, how does the “life”, “entity identity”, or “consciousness” change as that transformation occurs?
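As a toy illustration of that claim (not a model of real minds -- the interpreter and event names here are invented for the sketch), an "entity" built from a universal machine plus an input tape can be mechanically rewound and replayed into a different entity:

```python
def run(tape):
    """A stand-in for a universal machine: fold a tape of events into a state."""
    state = []
    for event in tape:
        state.append(event)
    return state

def transform(state, old_tape, new_tape):
    """Rewind one tape, then replay another -- a purely mechanical edit."""
    for event in reversed(old_tape):
        assert state.pop() == event  # feed the old tape in backwards
    for event in new_tape:
        state.append(event)          # feed the new tape in forwards
    return state

alice = run(["a1", "a2", "a3"])
bob = run(["b1", "b2"])

# "alice" is turned into "bob" by a mechanical process; at no step is there
# an obvious point where one identity ends and the other begins.
assert transform(alice, ["a1", "a2", "a3"], ["b1", "b2"]) == bob
```

The sketch just dramatizes the question: every intermediate state is reached by the same kind of step, so nothing in the mechanism marks where "alice" stops being alice.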

I don't have an answer; I suspect the problem is tied up in our conceptualization of what consciousness and identity actually are.

My own feeling is that consciousness is an illusion, and that illusion is what produces the illusion of identity continuity over a person's lifetime. Presumably there is an “identity module”, and that module is what self-identifies an individual as “the same” individual over time (not a complete one-to-one correspondence between entities, which we know does not hold), even as the individual changes. If that is correct, then change the “identity module” and you change the self-perception of identity.

Comment author: SilasBarta 29 July 2010 07:06:44PM 1 point [-]

Hold on -- those are important articles to read, and they do move you toward a resolution of that problem. But I don't think they fully dissolve/answer the exact question daedalus2u is asking.

For example, EY has written this article, grappling with but ultimately not resolving the question of whether you should care about "other copies" of you, why you are not indifferent between yourself vs. someone else jumping off a cliff, etc.

I don't deny that the existing articles do resolve some of the problems daedulus2u is posing, but they don't cover everything he asked.

Unless I've missed something?

Comment author: daedalus2u 29 July 2010 10:50:01PM 0 points [-]

SilasBarta, yes, I was thinking about purely classical entities, the kind of computers that we would make now out of classical components. You can make an identical copy of a classical object. If you accept substrate independence for entities, then you can't “dissolve” the question.

If Ebborians are classical entities, then exact copies are possible. An Ebborian can split and become two entities and accumulate two different sets of experiences. What if those two Ebborians then transfer memory files such that they now have identical experiences? (I appreciate this is not possible with biological entities because memories are not stored as discrete files).

Turing Machines are purely classical entities. They are all equivalent, except for the data fed into them. If humans can be represented by a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?

Comment author: daedalus2u 29 July 2010 06:18:15PM 0 points [-]

I am pretty new to LW, and have been looking for something and have been unable to find it.

What I am looking for is a discussion on when two entities are identical, and if they are identical, are they one entity or two?

The context for this is continuity of identity over time. Obviously an entity that has extra memories added is not identical to an entity without those memories, but if there is a transform that can be applied to the first entity (the transform of experience over time), then in one sense the second entity can be considered to be an older (and wiser) version of the earlier entity.

But the selection of the transform that changes the first entity into the second one is arbitrary. In principle there is a transform that will change any Turing equivalent into any other Turing equivalent. Is every entity that can be instantiated as a TM equivalent to every other TM entity?

I appreciate this does not apply to entities instantiated in a biological format because such substrates are not stable over time (even a few seconds). However that does raise another problem, how can a human be “the same” entity over their lifetime?

Comment author: JoshuaZ 28 July 2010 11:59:54PM *  13 points [-]

I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.

ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.

Comment author: daedalus2u 29 July 2010 12:07:22AM 6 points [-]

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting and consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

Comment author: Roko 27 July 2010 07:51:26PM *  1 point [-]

And my own answer to this question is that we're a fragmented philosopher, with many different humans, each of whom has many different intuitions. In the example given of Bayesian Updating versus UDT, we have both timeless intuitions and Bayesian ones. The timeless/updateless/acausal intuitions come from the human intuitions about pride, dignity, honor, etc, which were developed because humans interacted with other humans.

Comment author: daedalus2u 28 July 2010 02:06:26AM -2 points [-]

I think this is correct. Using my formulation, the Bayesian system is what I call a “theory of reality”, and the timeless one is the “theory of mind”, which I see as the trade-off along the autism spectrum.

Comment author: arundelo 26 July 2010 11:42:38PM 0 points [-]

There's also a help link under the comment box.

* Bullet lists look like this.
1. Ordered lists look like this.

Comment author: daedalus2u 26 July 2010 11:48:42PM *  -1 points [-]

Yes, thank you. Just one problem:

  • too obvious

and

  • too easy

Comment author: WrongBot 13 July 2010 03:41:42PM 6 points [-]

My motivation in writing this article was to attempt to dissuade others from courses of action that might lead them to become bigots, among other things.

But I am also personally terrified of exactly the sort of thing I describe, because I can't see a way to protect against it. If I had enough strong evidence to assign a probability of .99 to the belief that gay men have an average IQ 10 points lower than straight men (I use this example because I have no reason at all to believe it is true, and so there is less risk that someone will try to convince me of it), I don't think I could prevent that from affecting my behavior in some way. I don't think it's possible. And I disvalue such a result very strongly, so I avoid it.

I bring up dangerous thoughts because I am genuinely scared of them.

Comment author: daedalus2u 26 July 2010 11:43:53PM 2 points [-]

I see the problem of bigotry in terms of information and knowledge: bigotry occurs when there is too little knowledge. I have quite an extensive blog post on this subject.

http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html

My conceptualization of this may seem contrived, but I give a much more detailed explanation on my blog along with multiple examples.

I see it as essentially the lack of an ability to communicate with someone that triggers xenophobia. As I see it, when two people meet and try to communicate, they do a “Turing test”, where they exchange information and try to see if the person they are communicating with is “human enough”, that is human enough to communicate with, be friends with, trade with, or simply human enough to not kill.

What happens when you try to communicate, is that you both use your “theory of mind”, what I call the communication protocols that translate the mental concepts you have in your brain into the data stream of language that you transmit; sounds, gestures, facial expressions, tone of voice, accents, etc. If the two “theories of mind” are compatible, then communication can proceed at a very high data rate because the two theories of mind do so much data compression to fit the mental concepts into the puny data stream of language and to then extract them from the data stream.

However, if the two theories of mind are not compatible, then the error rate goes up, and xenophobia is triggered via the uncanny-valley effect. This initial xenophobia is a feeling, and so is morally neutral. How one then acts is not morally neutral. If you seek to understand the person who has triggered the xenophobia, then your theory of mind will self-modify, eventually you will be able to understand the person, and the xenophobia will go away. If you seek not to understand the individual, or block that understanding, then the xenophobia will remain.
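The compression-protocol idea above can be sketched with a toy codebook model (the codebooks and messages are invented for illustration, not drawn from any real protocol): communication succeeds when sender and receiver share a codebook, and silently produces errors when their codebooks differ.

```python
def encode(message, codebook):
    """Compress a message: replace each word with its code."""
    return [codebook[word] for word in message]

def decode(codes, codebook):
    """Decompress using the receiver's own codebook."""
    reverse = {code: word for word, code in codebook.items()}
    return [reverse.get(code, "<?>") for code in codes]

# Two "theories of mind": same words, but mapped differently.
shared = {"hello": 0, "friend": 1, "danger": 2}
mismatch = {"hello": 0, "danger": 1, "friend": 2}

sent = encode(["hello", "friend"], shared)
assert decode(sent, shared) == ["hello", "friend"]    # compatible protocols
assert decode(sent, mismatch) == ["hello", "danger"]  # mismatch: errors creep in
```

The point of the sketch is that the failure is not noise in the channel; both sides decode confidently, and the errors come purely from the mismatch between their internal codebooks.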

It is exactly analogous to Nietzsche's quote “if you look into the abyss, the abyss looks back into you”. We can only perceive something if we have pattern recognition for that something instantiated in our neural networks. If we don't have the neuroanatomy to instantiate an idea, we can't perceive the idea, we can't even think the idea. To see into the abyss, you have to have a map of the abyss in your visual cortex to decode the image of the abyss that is being received on your retina.

Bigots as a rule are incapable of understanding the objects of their bigotry (I am not including self-loathing here because that is a special case), and it shows: they attribute all kinds of crazy, wild, and completely unrealistic thinking processes to the objects of their bigotry. I think this is the reason why many invader cultures committed genocide on native cultures by taking children away from natives and fostering them with the invader culture (examples: the US, Canada, Australia) (I go into more detail on that).

What bigots often do is make up reasons out of pure fantasy to justify the hatred they feel toward the objects of their bigotry. The Blood Libel against the Jews is a good example. This was the lie that Jews used the blood of Christians in Passover rituals. It could not be correct: Passover long predates Christianity, blood is never kosher, human blood is never kosher, and no observant Jew could ever use human blood in any religious ceremony. It never happened; it was a total lie, used to justify the hatred that some Christians felt toward Jews. The hate came first; the lie was used to justify the feelings of hatred.

Bigots as a rule are afraid of associating with the objects of their bigotry because they will then come to understand them. The term “xenophobia” is quite correct. There is a fear of associating with the other because then some of “the other” will rub off on you and you will necessarily become more “other-like”. You will have a map that understands “the other” in your neuroanatomy.

In one sense, to the bigot, understanding “the other” is a “dangerous thought” because it changes the bigot's utility function such that certain individuals are no longer so low on the social hierarchy as to be treated as non-humans.

There are some thoughts that are dangerous to humans. These activate the “fight or flight” state in an uncontrolled manner, and that can be lethal. This usually requires a lot of priming (years); there are too many safeties that kick in for it to happen by accident. I think this is what Kundalini kindling is. For the most part there isn't enough direct coupling between the part of the brain that thinks thoughts and the part that controls the stuff that keeps you alive. There is some, and it can be triggered in a heartbeat when you are being chased by a bear, but there is lots of feedback via feelings before you get to dangerous levels. I don't recommend trying to work yourself into that state, because it is quite dangerous: the safeties do get turned off (unless a bear is actually chasing you, that is).

Drugs of abuse can trigger the same things, which is one of the reasons they are so dangerous.

Comment author: arundelo 26 July 2010 10:28:37PM 0 points [-]

http://daringfireball.net/projects/markdown/syntax

    I'm not sure what effect you're !
    going for, but indenting by four !
    spaces allows you to do things like !
    this. !

Comment author: daedalus2u 26 July 2010 11:15:44PM 0 points [-]

Thanks, I was trying to make a list; maybe I will figure it out. I just joined and am trying to focus on getting up to speed on the ideas; the syntax of formatting things is more difficult for me and less rewarding.
