All of daedalus2u's Comments + Replies

Inklesspen's argument (which you said you agreed with) was that my belief in a lack of personal identity continuity was incompatible with being unwilling to accept a painless death, and that this constitutes a fatal flaw in my argument.

If there are things you want to accomplish, and you believe the most effective way to accomplish them is by uploading what you believe will be a version of your identity into an electronic gizmo, all I can say is good luck with that. You are welcome to your beliefs.

In no way does that address Ink... (read more)

Yes. I would consider those states to be “unconscious”. I am not using “conscious” or “unconscious” as pejorative terms or as terms with any type of value, but purely as descriptive terms that describe the state of an entity. If an entity is not self-aware in the moment, then it is not conscious.

People are not self-aware of the data processing their visual cortex is doing (at least I am not). When you are not aware of the data processing you are doing, the outcome of that data processing is "transparent" to you; that is, the output is achieved without... (read more)

I see this as analogous to what some religious people say when they are unable to conceive of a sense of morality or any code of behavior that does not come from their God.

If you are unable to conceive of a sense of purpose that is not attached to a personal sense of continued personal identity, I am not sure I can convince you otherwise.

But why you consider that my ability to conceive of a sense of purpose without a personal belief in a continued sense of personal identity is somehow a "flaw" in my reasoning is not something I quite understa... (read more)

0jmmcd
Your entire reply deals with arguments you wish I had made. Without coming down anywhere on the issue of continued personal identity being an illusion, OR the issue of a sense of purpose in this scenario, I'm trying to point out a purely logical inconsistency: If uploading for personal immortality is "pursuing an illusion", then so is living: so you should allow inklesspen to murder you. The other way around: if you want to accomplish things in the future with your current body, then you should be able to conceive of people wanting to accomplish things in their post-upload future. The continuity with the current self is equally illusory in each case, according to you.

Yes, if you are not aware of being conscious then you are unconscious. You may have the capacity to be conscious, but if you are not using that capacity, because you are asleep, are under anesthesia, or because you have sufficiently dissociated from being conscious, then you are not conscious at that moment.

There are states where people do "black out", that is, where they seemingly function appropriately but have no memory later of those periods. Those states can occur due to drug use; they can also arise from psychogenic processes, as in a fugue state. ... (read more)

0NancyLebovitz
Do you consider flow states (being so fascinated by something that you forget yourself and the passage of time) as not being conscious?

If a being is not aware of being conscious, then it is not conscious no matter what else it is aware of.

I am not saying that all consciousness entails is being aware of being conscious, but it does at a minimum entail that. If an entity does not have self-awareness, then it is not conscious, no matter what other properties that entity has.

You are free to make up any hypothetical entities and states that you want, but the term “consciousness” has a generally recognized meaning. If you want to deviate from that meaning you have to tell me what you mea... (read more)

3cousin_it
10 seconds ago I was unaware of being conscious: my attention was directed elsewhere. Does that mean I was unconscious? How about a creature who spends all its life like that? - will you claim that it's only conscious because it has a potential possibility of noticing its own consciousness, or something?

It is your contention that an entity can be conscious without being aware that it is conscious?

There are entities that are not aware of being conscious. To me, if an entity is not aware of being conscious (i.e. is unconscious of being conscious), then it is unconscious.

By my understanding of the term, the one thing an entity must be aware of to be conscious is its own consciousness. I see that as an inherent part of the definition. I cannot conceive of a definition of “consciousness” that allows for a conscious entity to be unaware that it is conscious.

Could you give me a definition of "consciousness" that allows for being unaware of being conscious?

3thomblake
If all that consciousness entails is being aware of being conscious, it doesn't mean anything at all, does it? We could just as well say: "My machine is fepton! I know this because it's aware of being fepton; just ask, and it will tell you that it's fepton! What's fepton, you ask? Well, it's the property of being aware of being fepton!" I'm not allowed, under your definition, to posit a conscious being that is aware of every fact about the universe except the fact of its own consciousness, only because a being with such a description would be unconscious, by definition. It seems to be a pretty useless thing to be aware of.

perplexed, how do you know you do not have a consciousness detector?

Do you see because you use a light detector? Or because you use your eyes? Or because you learned what the word “see” means?

When you understand spoken language do you use a sound detector? A word detector? Do the parts of your brain that you use to decode sounds into words into language into meaning not do computations on the signals those parts receive from your ears?

The only reason you can think a thought is because there are neural structures that are instantiating that thoug... (read more)

9Perplexed
I'm not sure there is a disagreement. As I said, I don't spend much time thinking about consciousness, and even less time reading about it, so please bear with me as I struggle to communicate.

Suppose I have a genetic defect such that my consciousness detector is broken. How would I know that? As I say, I didn't discover that I am conscious by introspection. I was told that I am conscious when I was young enough to believe what I was told. I was told that all the other people I know are conscious - except maybe when they are asleep or knocked out after a fall. I was told that no one really knows for sure whether my dog Cookie was conscious. But that the ants in my ant farm almost certainly were not. Based on this information, I constructed a kind of operational definition for the term. But I really had (and still have) no idea whether my definition matched anyone else's.

But here is the thing. I have a friend whose color-red qualia detector has a genetic defect. He learned the meaning of the word "red" and the word "green" as a child just like me. But he didn't know that he had learned the wrong meanings until a teacher became suspicious and sent him to have his vision checked. See, they can detect defects in color-red qualia detectors. So, he knows his is defective. He now knows that when he was told the meaning of the word red by example, he got the wrong idea.

So how do I know that my consciousness detector is working? I do notice that even though most people were told the meaning of the word back in grade school just like me, they don't all seem to have the same idea. Are some of their consciousness detectors broken? Is mine broken? Are you and I in disagreement? If you think that we are in disagreement, do you now understand where the disagreement is coming from?

One thing I am pretty sure of: if I do have a broken consciousness detector due to a genetic defect, this defect hasn't really hurt me too much. It doesn't seem to be something crucial. I do fine ju
6Pavitra
Assume you do, in fact, have a consciousness detector. Do you trust it to work correctly in weird edge cases? Humans have fairly advanced hardwired circuitry for detecting other humans, but our human detectors fail completely when presented with a photograph or a movie screen. We see a picture of a human, and it looks like a human.

GuySrinivasan, I really can't figure out what is being meant.

In my next sentence I say I am not trying to describe all computations that are necessary, and in the sentence after that I start talking about entity detection computation structures being necessary.

First, an entity must have a “self detector”: a pattern recognition computation structure which it uses to recognize its own state of being an entity and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity.

... (read more)
5cousin_it
That consciousness requires a self detector thingy. This may or may not be true - you haven't given enough evidence either way. Sure, humans are conscious and they can also self-detect; so what? At this stage it's like claiming that flight requires flapping your wings.

Yes, and 1, 2, 3, 4, 5, 6, and 7 all require data and computation resources.

And to compare a map with a territory one needs a map (i.e. data) and a comparator (i.e. a pattern recognition device) and needs computational resources to compare the data with the territory using the comparator.

When one is thinking about internal states, the map, the territory and the comparator are all internal. That they are internal does not obviate the need for them.
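The map/comparator/resources point can be made concrete with a toy sketch (my own illustration, not part of the original comment; the names `territory`, `map_data`, and `comparator` are invented for the example). Even this trivial comparison consumes all three ingredients: stored data for the map, a pattern-recognition procedure as the comparator, and computation to run it.

```python
# Toy illustration of "comparing a map with a territory":
# it needs a map (data), a comparator (pattern recognition),
# and computational resources to run the comparison.

territory = [0, 1, 1, 0, 1, 0, 1, 1]   # the thing being modeled
map_data  = [0, 1, 1, 0, 0, 0, 1, 1]   # the internal model (data)

def comparator(map_, terr):
    """Pattern-recognition step: report where the map disagrees
    with the territory."""
    return [i for i, (m, t) in enumerate(zip(map_, terr)) if m != t]

mismatches = comparator(map_data, territory)   # the computation itself
print(mismatches)  # -> [4]
```

Whether map and territory are both internal (as when thinking about one's own states) changes nothing about the resource requirements; the same data, comparator, and computation are still needed.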

perplexed, If detecting consciousness in someone else requires data and computation, why is our own consciousness special such that it doesn't require data and computation to be detected? No one has presented any evidence or any arguments that our own consciousness is special. Until I see a reasonable argument otherwise, my default will be that my own consciousness is not special and that everyone else's consciousness is not special either.

I appreciate that some people do privilege their own consciousness. My interpretation of that self-privileging is... (read more)

-2thomblake
It's perplexing to me that you would be perplexed by this. Is it not your opinion? I would assume it is your opinion, since you have asserted it. It is clearly not your opinion that its negation is true.
5Perplexed
It strikes me as bizarre too, particularly here. So, you have to ask yourself whether you are misinterpreting. Maybe they are asking for evidence of something else. You are asking me to think about topics I usually try to avoid. I believe that most talk about cognition is confused, and doubt that I can do any better. But here goes. During the evolutionary development of human cognition, we passed through these stages:

* (1) recognition of others (i.e. animate objects) as volitional agents who act so as to maximize the achievement of their own preferences. The ability to make this discrimination between animate and inanimate is a survival skill, as is the ability to infer the preferences of others.
* (2) recognition of others as epistemic agents who have beliefs about the world. The ability to infer others' beliefs is also a survival skill.
* (3) recognition that among the beliefs of others is the belief that we ourselves are volitional and epistemic agents. It is a very important survival skill to infer the beliefs of others about ourselves.
* (4) roughly at the same time, we come to understand that the beliefs of others that we are volitional and epistemic agents appear to be true. This realization is certainly interesting, but has little survival value. However, some folks call this realization "consciousness" and believe it is a big deal.
* (5) finally, we develop language so that we can both (a) discuss, and (b) introspect on all of the above. This turns out, by accident as it were, to have enormous survival value and is the thing that makes us human. And some other folk call this linguistic ability "consciousness", rather than applying that label to the mere awareness of an equivalence in cognitive function between self and other.

So that is my off-the-cuff theory of consciousness. It certainly requires social cognition and it probably requires language. It obviously requires computation. It is relatively useless, but it is the inevitable byproduct of
8SarahNibs
If smart people disagree so bizarrely, smart money's on a misunderstanding, not a disagreement. e.g. here, cousin_it said: What might he have meant that's not insane? Perhaps that he wants *evidence* that there must be certain computational functions, rather than that he wants evidence that there *must* be certain computational functions.

To be a car, a machine must at a minimum have wheels. Wheels are not sufficient to make a machine into a car.

To be conscious, an entity must be self-aware of self-consciousness. To be self-aware of self-consciousness, an entity must have a "self-consciousness-detector". A self-consciousness-detector requires data and computation resources to do the pattern recognition necessary to detect self-consciousness.

What else consciousness requires I don't know, but I know it must require detection of self-consciousness.

0wedrifid
"Necessary" but not sufficient.
0Zetetic
That seems like a very confusing way of saying this. You aren't 'self-aware of self-consciousness'; self-consciousness is, as far as I can tell in this context, equivalent to self-awareness. The phrase is totally redundant. The only meaningful reduction I can make out here is that you think that to be conscious a person has to be self-aware.

I think it's probably a mistake to propose a "self-consciousness detector". What is really going on? You can focus on previously made patterns of thought and actions and ask questions about them for future reference. Why did I do this? Why did I think that? You are noticing a very complex internal process, and in doing so applying another complex internal process to the memory of that process, in order to gather useful or attractive information (I am ignorant of the physical processes that dictate when and about what we think during metacognition).

My purpose in pointing this out was to say that yes, people today are making the same types of category errors as Kelvin was: the mistaken belief that some types of objects are fundamentally not comparable (in Kelvin's case, living things and machines; in my example, computations by a sensory neural network and computations by a machine pattern-recognition system).

They are both doing computations, they can both be compared as computing devices; they both need computation resources to accomplish the computations and data to do the computations on. ... (read more)

[anonymous]130

then perhaps LW is not ready to discuss such things

Uh, what? The post is poorly written along a number of dimensions, and was downvoted because people don't want to see poorly written posts on the front page. The comments are pointing out specific problems with it. To interpret that as a problem with the community is a fairly egregious example of cognitive dissonance.

4Perplexed
So, if I understand you, detecting consciousness in someone else is something like detecting anger in someone else - of course we can't do it perfectly, but we can still do it. Makes sense to me. Happy to have fed you the straight-line.

I understand your frustration. FWIW, I upvoted you some time ago, not because I liked your post, but rather because it wasn't nearly bad enough to be downvoted that far. Maybe I felt a bit guilty. I don't really think there is "the need/desire to keep “consciousness” as a special category of things/objects", at least not in this community. However, there is a kind of exhaustion regarding the topic, and an intuition that the topic can quickly become a quicksand.

As I said, I found your title attractive because I thought it would be something like "here are the computations which we know/suspect that a conscious entity must accomplish, and here is how big/difficult they are". Well, maybe the posting started with that, but then it shifted from computation to establish/maintain consciousness to computation to recognize consciousness, to who knows what else. My complaint was that your posting was disorganized. But down at the sentence/paragraph level, it struck me as competent and occasionally interesting. I hope you don't let this bad experience drive you from LW.

Is there something wrong with my interpretation of Stockholm Syndrome other than it is not the “natural interpretation"? Is it inconsistent with anything known about Stockholm Syndrome, how people interact, or how humans evolved?

Would we consider it surprising if humans did have a mechanism to try to emulate a “green beard” if having a green beard became essential for survival?

We know that some people find many green-beard-type reasons for attacking and even killing other humans. Race, ethnicity, religion, sexual orientation, gender, and so on ... (read more)

Yes, and some people today don't realize that the brain does computations on sensory input in order to accomplish pattern recognition, and without that computation there is no pattern recognition and no perception. Of anything.

Perplexed170

I confess, I am lost. It seems we are in an arguments as soldiers situation in which everyone is shooting at everyone else. To recap:

  • You said "we can never “know for sure” that an entity is actually experiencing consciousness". (Incidentally, I agree.)
  • Cousin_it criticised, comparing you to Kelvin.
  • You responded, pointing out that the Kelvin quote is odd, given what we suspect Kelvin knew (Why did you do this?)
  • I suggest the Kelvin quote was maybe not so odd, given his misconceptions (Why did I do this???)
  • You point out that people today (wh
... (read more)

I had read mysterious answers to mysterious questions. I think I do have an explanation that makes consciousness seem less mysterious and which does not introduce any additional mysteries. Unfortunately I seem to be the only one who appreciates that.

Maybe if I had started out to discuss the computational requirements of the perception of consciousness there would have been less objection. But I don't see any way to differentiate between perception of consciousness and consciousness. I don't think you can have one without the other.

nawitus, my post was too long as it is. If I had included multiple discussions of multiple definitions of consciousness and qualia, you would either still be reading it or would have stopped because it was too long.

0nawitus
And that's why we need an article somewhere which would define some common terms, so you don't have to define them all over again in every article about consciousness.

With all due respect to Lord Kelvin, he personally knew of heavier than air flying machines. We now call them birds. He called them birds too.

Perplexed100

I'm not sure he realized they were machines, though.

We can't “know for sure” because consciousness is a subjective experience. The only way you could “know for sure” would be if you simulated an entity and so knew from how you put the simulation together that the entity you were simulating did experience self-consciousness.

So how does this hypothetical biologist calibrate his consciousness scanner? Calibrate it so that he “knows for sure” that it is reading consciousness correctly? His degree of certainty in the output of his consciousness scanner is limited by his degree of certainty in his calibratio... (read more)

-6obx

I am talking about minimum requirements, not sufficient requirements.

I am not sure what you mean by "understand relevant features of its own source code".

I don't know any humans that I would consider conscious that don't fit the definition of consciousness that I am using. If you have a different definition I would be happy to consider it.

2wedrifid
Those two seem to be the same thing in this context. No, it's as good as any. Yet the 'any' I've seen are all incomplete. Just be very careful that when you are discussing one element of 'consciousness' you are careful to only come to conclusions that require that element of consciousness and not some part of consciousness that is not included in your definition. For example I don't consider the above definition to be at all relevant to the Fermi paradox.

Yvain, what I mean by illusion is:

perceptions not corresponding to objective reality due to defects in sensory information processing used as the basis for that perception.

Optical illusions are examples of perceptions that don't correspond to reality because of how our nervous system processes light signals. Errors in perception, whether false positives or false negatives, are illusions.

In some of the meditative traditions there is the goal of "losing the self". I have never studied those traditions and don't know much about them. I do know ... (read more)

[Consciousness] :The subjective state of being self-aware that one is an autonomous entity that can differentially regulate what one is thinking about.

3[anonymous]
This needs further unpacking - you seem to be referring to (at least) 3 things simultaneously: Qualia, Self-Awareness, and Executive Control. I can imagine having any one of those without the others, which may be why so many people are disputing some of your assertions, and why your post seems so disorganized.
0KrisC
Is memory necessary?
4JoshuaZ
"Self-aware", "differentially regulate", and "what one is thinking about" carry almost as much baggage as "consciousness". I'm not sure that this particular unpacking helps much.
0Jayson_Virissimo
I interpret your definition as being specifically about self-consciousness, not consciousness in general. Is this a good interpretation? Do you mean explicit (conceptual) self-awareness or implicit self-awareness, or both? If the former, young children probably wouldn't be conscious, but if the latter, then just about every animal would be.
5wedrifid
So, for example, any computer program that has the ability to parse and understand relevant features of its own source code and also happens to have a few 'if' statements in some of the relevant areas. It may actually exclude certain humans that I would consider conscious. (I believe Yvain mentioned this too.)

No, there are useful things I want to accomplish with the remaining lifespan of the body I have. That there is no continuity of personal identity is irrelevant to what I can accomplish.

That continuity of personal identity is an illusion simply means that the goal of indefinite extension of personal identity is a useless goal that can never be achieved.

I don't doubt that a machine could be programmed to think it was the continuation of a flesh-and-blood entity. People have posited paper clip maximizers too.

0jmmcd
There might be useful things I want to accomplish with my post-upload body and brain. I agree with inklesspen: this is a fatal inconsistency.

This is my first article on LW, so be gentle.

This is why it's strongly recommended to try out an article idea on the Open Thread first.

You owe it to your readers to have clearly organized and well-explained thoughts before writing a top-level post, and the best way to get there is to discuss your ideas with veterans first. If you say in advance that you want to write a top-level post, we'll respect that; I've never seen anyone here poach a post idea (though of course others may want to write their own ideas on the topic).

5Oscar_Cunningham
EDIT: I realise that you asked us to be gentle, and all I've done is point out flaws. Feel free to ignore me. You explore many interesting ideas, but none of them are backed up with enough evidence to be convincing. I doubt that anything you've said is correct. The first example of this is this statement: How do you know? What if tomorrow a biologist worked out what caused consciousness and created a simple scan for it? What evidence do you have that would make you surprised if this happened? Why? What is it that actually makes it impossible to have a conscious (has qualia) entity that is not self-aware (knows some stuff about itself)? Recommended reading: http://lesswrong.com/lw/jl/what_is_evidence/
3wedrifid
To be honest you lost me at 'consciousness'. The whole question of computational requirements here seems to be one that is just a function of an arbitrary and not included word definition.
8[anonymous]
There seems to be a problem with the paragraph formatting at the beginning. More line breaks maybe?

I think the misconception is this: what is generally considered “quality of life” is simply not correlated with things like affluence. People like to believe (pretend?) that it is, and by ever striving for more affluence feel that they are somehow improving their “quality of life”.

When someone is depressed, their “quality of life” is quite low. That “quality of life” can only be improved by resolving the depression, not by adding the bells and whistles of affluence.

How to resolve depression is not well understood. A large part of the problem is that people who have never experienced depression don't understand what it is and believe that things like more affluence will resolve it.

Suicide rates are a measure of depression, not of how good life is. Depression can hit people even when they otherwise have a very good life.

0gwern
Yes yes, this is an argument for suicide rates never going to zero - but again, the basic theory that suicide is inversely correlated, even partially, with quality of life would seem to be disproved by this point.

The framing of the end of life issue as a gain or a loss as in the monkey token exchange probably makes a gigantic difference in the choices made.

http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2cnn?c=1

When you feel you are in a desperate situation, you will do desperate things and clutch at straws, even when you know those choices are irrational. I think this is the mindset behind the clutching at straws that quacks exploit with CAM, as in the Gonzalez Protocol for pancreatic cancer.

http://www.sciencebasedmedicine.org/?p=1545

It is actually worse... (read more)

This is how people with Asperger's or autism experience interacting with people who are neurotypically developed (for the most part).

I am not a dualist. I used the TM to avoid issues of quantum mechanics. TM equivalence is not compatible with a dualist view either.

Only a part of what the brain does is conscious. The visual cortex isn't conscious. The processing of signals from the retina is not under conscious control. That is why optical illusions work: the signal processing happens a certain way, and that way cannot be changed even when one consciously knows that what is seen is counterfactual.

There are many aspects of brain information processing that are like this.... (read more)

Except that human entities are dynamic objects, unlike static objects such as books. Books are not considered to be “alive”, or “self-aware”.

If two humans can both be represented by TM with different tapes, then one human can be turned into another human by feeding one tape in backwards then feeding in the other tape frontwards. If one human can be turned into another by a purely mechanical process, how does the “life”, or “entity identity”, or “consciousness change” as that transformation is occurring?
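The tape-reversal thought experiment can be sketched in miniature (my own toy model, not the commenter's formalism; the names `tape_a`, `tape_b`, and the reversible-edit representation are invented for the illustration). Each "entity" is the result of replaying a tape of reversible edits onto a shared blank state, so turning one into the other is a purely mechanical undo-then-redo.

```python
# Toy model: each "entity" is a state produced by replaying a tape of
# reversible edits (position, old_value, new_value) onto a blank state.
# Turning entity A into entity B is then purely mechanical: feed A's
# tape in backwards (undo), then feed B's tape in forwards (redo).

def apply_ops(state, ops):
    """Replay a tape forwards."""
    for pos, old, new in ops:
        assert state[pos] == old
        state[pos] = new
    return state

def undo_ops(state, ops):
    """Feed a tape in backwards, reversing each edit."""
    for pos, old, new in reversed(ops):
        assert state[pos] == new
        state[pos] = old
    return state

blank = lambda: [0] * 8
tape_a = [(0, 0, 1), (3, 0, 7)]   # "experiences" that produced entity A
tape_b = [(1, 0, 5), (3, 0, 2)]   # "experiences" that produced entity B

entity_a = apply_ops(blank(), tape_a)
entity_b = apply_ops(blank(), tape_b)

# mechanically turn A into B
transformed = apply_ops(undo_ops(list(entity_a), tape_a), tape_b)
assert transformed == entity_b
```

Nothing in this mechanical transformation marks a point where one "identity" ends and the other begins, which is the puzzle the comment is raising.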

I don't have an answer, I suspect that the problem is... (read more)

0mattnewport
I don't see why the TM issue is essential to your confusion. If you are not a dualist then the fact that two human brains differ only in the precise arrangement of the same types of atoms present in very similar numbers and proportions raises the same questions.

SilasBarta, yes, I was thinking about purely classical entities, the kind of computers that we would make now out of classical components. You can make an identical copy of a classical object. If you accept substrate independence for entities, then you can't “dissolve” the question.

If Ebborians are classical entities, then exact copies are possible. An Ebborian can split and become two entities and accumulate two different sets of experiences. What if those two Ebborians then transfer memory files such that they now have identical experiences? (I ap... (read more)

1SilasBarta
You may be interested that I probed a similar question regarding how "qualia" come into play with this post about when two (classical) beings trade experiences.
5mattnewport
It's no more wrong than saying that all books are identical except for the differing number and arrangement of letters. It's also no more useful.

I am pretty new to LW, and have been looking for something and have been unable to find it.

What I am looking for is a discussion on when two entities are identical, and if they are identical, are they one entity or two?

The context for this is continuity of identity over time. Obviously an entity that has extra memories added is not identical to an entity without those memories, but if there is a transform that can be applied to the first entity (the transform of experience over time), then in one sense the second entity can be considered to be an olde... (read more)

1ata
That's the kind of question that a traditional philosopher would try to answer by coming up with the Ultimate Perfect True Definition of Identity, while an LWer would probably try to dissolve it. This is actually a fairly easy problem and should make good practice — "Dissolving the Question", "Righting a Wrong Question", and "How An Algorithm Feels From the Inside" should be good places to start. The "Quantum Mechanics and Personal Identity" subsequence may also be useful if you're considering any concept of identity that involves continuity of constituent matter.
2WrongBot
Eliezer's sequence on quantum mechanics and personal identity is almost exactly what you're looking for, I think.

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting, consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

I think this is correct. Using my formulation, the Bayesian system is what I call a "theory of reality", the timeless one is the "theory of mind", which I see as the trade-off along the autism spectrum.

Yes, thank you; just one problem

  • too obvious

and

  • too easy

I see the problem of bigotry in terms of information and knowledge: bigotry occurs when there is too little knowledge. I have quite an extensive blog post on this subject.

http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html

My conceptualization of this may seem contrived, but I give a much more detailed explanation on my blog along with multiple examples.

I see it as essentially the lack of an ability to communicate with someone that triggers xenophobia. As I see it, when two people meet and try to communicate, they d... (read more)

Thanks, I was trying to make a list; maybe I will figure it out. I just joined and am trying to focus on getting up to speed on the ideas; the syntax of formatting things is more difficult for me and less rewarding.

0arundelo
There's also a help link under the comment box.

* Bullet lists look like this.

1. Ordered lists look like this.

I disagree. I think there is the functional equivalent of a “social co-processor”, what I see as the fundamental trade-off along the autism spectrum: the trading of a "theory of mind" (necessary for good and nuanced communication with neurotypically developing individuals) against a "theory of reality" (necessary for good ability at tool making and tool using).

http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

Because the maternal pelvis is limited in size, the infant brain is limited at birth (still ~1% of women die per ... (read more)

For me, essentially zero, that is I would act (or attempt to act) as if I had zero credence that I was in a rescue sim.

Test for data, factual knowledge, and counterfactual knowledge. True rationalists will have less counterfactual knowledge than non-rationalists because they will have filtered it out. Non-rationalists will have more false data because their counterfactual knowledge will feed back and cause them to believe that things that are false are actually true; for example, that Iraq or Iran was involved in 9/11.

What you really want to measure is the relative proportion of factual and counterfactual knowledge someone has, and in what particular areas. Then including are... (read more)
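The measure described above can be sketched as a toy program (all names and data are hypothetical, purely for illustration): score a respondent by what fraction of the claims they accept are actually false, i.e. their "counterfactual knowledge" load.

```python
# Toy sketch (hypothetical names/data): the fraction of a respondent's
# accepted claims that are actually false.

def counterfactual_fraction(answers, truth):
    # Claims the respondent believes to be true.
    accepted = [claim for claim, believed in answers.items() if believed]
    # Of those, the ones that are actually false.
    false_accepted = [claim for claim in accepted if not truth[claim]]
    return len(false_accepted) / len(accepted) if accepted else 0.0

truth = {"iraq_involved_in_9_11": False, "water_boils_at_100C": True}
answers = {"iraq_involved_in_9_11": True, "water_boils_at_100C": True}
print(counterfactual_fraction(answers, truth))  # 0.5
```

A real test would of course also need area-by-area breakdowns, as the comment suggests; this only shows the overall proportion.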

0arundelo
http://daringfireball.net/projects/markdown/syntax

    I'm not sure what effect you're !
    going for, but indenting by four !
    spaces allows you to do things like !
    this. !

The issues that are dealt with in psychotherapy are fundamentally non-rational. Rational issues are trivial to deal with (for people who are rationalists). The substrate of the issues dealt with in psychotherapy is feelings, not thoughts.

I see feelings as an analog component of the human utility function. That analog component affects the gain and feedback in the non-analog components. The feedback by which thoughts affect feelings is slow and tenuous, requiring a long time and considerable neuronal remodeling. That is why psychotherapy tak... (read more)

I have exactly the same problem. I think I understand where mine comes from, from being abused by my older siblings. I have Asperger's, so I was an easy target. I think they would sucker me in by being nice to me, then when I was more vulnerable whack me psychologically (or otherwise). It is very difficult for me to accept praise of any sort because it reflexively puts me on guard and I become hypersensitive.

You can't get psychotherapy from a friend; it doesn't work and can't work, because the friendship dynamic gets in the way (from both directions). A good therapist can help a great deal, but that therapist needs to be unconnected to your social network.

2daedalus2u
The issues that are dealt with in psychotherapy are fundamentally non-rational. Rational issues are trivial to deal with (for people who are rationalists). The substrate of the issues dealt with in psychotherapy is feelings, not thoughts.

I see feelings as an analog component of the human utility function. That analog component affects the gain and feedback in the non-analog components. The feedback by which thoughts affect feelings is slow and tenuous, requiring a long time and considerable neuronal remodeling. That is why psychotherapy takes a long time: the neuronal remodeling necessary to affect feelings is much slower than the neuronal remodeling that affects thoughts.

A common response to trauma is to dissociate and suppress the coupling between feelings and thoughts. The easiest and most reliable way to do this is to not have feelings, because feelings that are not felt cannot be expressed, and so cannot be observed, and so cannot be used by opponents as a basis of attack. I think this is the basis of the constricted affect of PTSD.

Human utility functions change all the time. They are usually not easily changed through conscious effort, but drugs can change them quite readily; for example, exposure to nicotine changes the human utility function to place a high value on consuming the right amount of nicotine. I think humans place a high utility on the illusion that their utility function is difficult to change, and an even higher utility on rationalizing false logical-seeming motivations for how they feel. There are whole industries (tobacco, advertising, marketing, laws, religions, ... (read more)

I happen to work with someone who was working on his PhD thesis at MIT and found this gigantic peak in his mass spec where C-60 was, but didn't pursue it because he didn't have time.

I would really like an answer to this question because it is the predicament that I am quite sure I find myself in. I can't get people to pay enough attention to even tell me where I am wrong. :(

When the ToMs don't match, I think it triggers xenophobia.

http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html

Effectively when people meet and try to communicate, they do a Turing Test, and if the error rate is too high, it triggers feelings of xenophobia via the uncanny valley effect. If you allow your ToM to change to accommodate and understand the person you feel xenophobia for, then the xenophobia will go away. If you don't, then the feelings of xenophobia remain. The decision to allow your ToM to change is what differentiates a non-racist from a racist.
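The threshold model described above can be sketched as a toy program (all names, categories, and the threshold value are hypothetical illustrations, not claims about actual physiology): a listener predicts the other person's responses from their theory of mind, and if the prediction error rate crosses some threshold, the model flags the uncanny-valley/xenophobia response.

```python
# Toy sketch (hypothetical names and threshold): a conversational
# "Turing Test" where too many failed predictions trigger the
# xenophobia flag described in the comment above.

def prediction_error_rate(predicted, actual):
    # Fraction of conversational moves the listener's ToM got wrong.
    mismatches = sum(p != a for p, a in zip(predicted, actual))
    return mismatches / len(actual)

def triggers_xenophobia(predicted, actual, threshold=0.5):
    return prediction_error_rate(predicted, actual) > threshold

predicted = ["greeting", "smalltalk", "joke", "farewell"]
actual    = ["ritual",   "ritual",    "silence", "farewell"]
print(triggers_xenophobia(predicted, actual))  # 3/4 errors -> True
```

Allowing one's ToM to change would correspond, in this sketch, to updating the prediction model until the error rate falls back below the threshold.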

I think this idea is essentially correct, but instead of near-mode vs far-mode, I think the balance is more between a "theory of mind" and a "theory of reality" which I have written about.

http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

The only things that can be communicated are mental concepts. To communicate a concept, the concept needs to be converted into the communication data stream using a communication protocol that can be decoded at the other end of the communication link. The communication pro... (read more)

-1daedalus2u
When the ToMs don't match, I think it triggers xenophobia. http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html Effectively when people meet and try to communicate, they do a Turing Test, and if the error rate is too high, it triggers feelings of xenophobia via the uncanny valley effect. If you allow your ToM to change to accommodate and understand the person you feel xenophobia for, then the xenophobia will go away. If you don't, then the feelings of xenophobia remain. The decision to allow your ToM to change is what differentiates a non-racist from a racist.

I think the 416,000 US military dead and their families would disagree that the war made them better off.

1Blueberry
That's irrelevant. Of course you can always cherry-pick people whom some event made worse off. The question was whether the war made the country better as a whole, not whether any individuals suffered.
0puls
Of course I agree with you. I am merely thinking in dollars and cents here, since that is the primary measure of value in the "civilized" world.

To me, a reasonable utility function has to have a degree of self-consistency. A reasonable utility function wouldn't value both doing and undoing the same action simultaneously.

If an entity is using a utility function to determine its actions, then for every action the entity can perform, its utility function must be able to determine a utility value which then determines whether the entity does the action or not. If the utility function does not return a value, then the entity still has to act or not act, so the entity still has a utility function fo... (read more)
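The argument above can be sketched as a toy decision loop (all names and payoff values are hypothetical): even if the utility function returns nothing for some action, the agent must still act or not act, so "no value" is effectively treated as a value.

```python
# Toy sketch (hypothetical names/payoffs): an agent driven by a utility
# function must still decide even when that function returns no value,
# so a missing value is treated as a value (here, 0.0).

def choose_action(actions, utility):
    def effective_utility(action):
        value = utility(action)
        return 0.0 if value is None else value
    # The agent performs whichever available action scores highest.
    return max(actions, key=effective_utility)

# "undo_x" here is the action that exactly cancels "do_x"; a
# self-consistent function shouldn't rank both above doing nothing.
payoffs = {"do_x": 1.0, "undo_x": None, "do_nothing": 0.5}
best = choose_action(payoffs, payoffs.get)
print(best)  # "do_x" scores highest here
```

In this sketch the fallback of 0.0 is itself part of the (implicit) utility function, which is the point the comment is making.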

-1DanArmak
To the extent humans have utility functions (e.g. derived from their behavior), they are often contradictory, yet few humans try to change their utility functions (in any of several applicable senses of the word) to resolve such contradictions. This is because human utility functions generally place negative value on changing your own utility function. This is what I think of when I think "reasonable utility function": they are evolutionarily stable. Returning to your definition, just because humans have inconsistent utility functions, I don't think you can argue that they are not 'intelligent' (enough). Intelligence is only a tool; utility is supreme. AIs too have a high chance of undergoing evolution, via cloning and self-modification. In a universe where AIs were common, I would expect a stranger AI to have a self-preserving utility function, i.e., one resistant to changes.
2FAWS
I said as many times, not as much as possible. The AI might value that particular kind and degree of annoyance uniquely, say as a failed FAI that was programmed to maximize rich, not strongly negative human experience according to some screwed up definition of rich experiences, and according to this definition your state of mind between reading and replying to that message scores best, so the AI spends as many computational resources as possible on simulating you reacting to that message. Or perhaps it was supposed to value telling the truth to humans, there is a complicated formula for evaluating the value of each statement, due to human error it values telling the truth without being believed higher (the programmer thought non-obvious truths are more valuable), and simulating you reacting to that statement is the most efficient way to make a high scoring true statement that will not be believed. Or it could value something else entirely that's just not obvious to a human. There should be an infinite number of non-contradictory utility functions valuing doing what it supposedly did, even though the prior for most of them is pretty low (and only a small fraction of them should value still simulating you now, so by now you can be even more sure the original statement was wrong than you could be then for reasons unrelated to your deduction)

I agree if the utility function was unknown and arbitrary. But an AI that has already done 3^^^3 simulations and believes it then derives further utility from doing 3^^^3+1 simulations while sending (for the 3^^^3+1th time) an avatar to influence the entities it is simulating through intimidation and fear while offering no rationale for those fears and to a website inhabited by individuals attempting to be ever more rational does not have an unknown and arbitrary utility function.

I don't think there is any reasonable utility function that is consistent ... (read more)

3DanArmak
What is your definition of 'reasonable' utility functions, which doesn't reference any other utility functions (such as our own)?
8FAWS
There is no connection between the intelligence or power of an agent and its values, other than its intelligence functioning as an upper bound on the complexity of its values. An omnipotent actor can have just as stupid values as everyone else. An omnipotent AI could have a positive utility for annoying you with stupid and redundant tests as many times as possible, either as part of a really stupid utility function that it somehow ended up with by accident, or a non-stupid (if there even is such a thing) utility function that just looks like nonsense to humans.

I deduce you are lying.

If you were an AI and had simulated me for 3^^^3 times, there would be no utility in running my simulation 3^^^3+1 times because it would simply be a repetition of an earlier case. Either you don't appreciate this and are running the simulation again anyway, or you and your simulation of me are so imperfect that you are unable to appreciate that I appreciate it. In the most charitable case, I can deduce you are far from omnipotent.

That must be quite torturous for you, to have a lowly simulation deduce your feet of clay.

0Richard_Kennaway
Consider the scenario suitably modified.
1FAWS
Your deduction is faulty even though your conclusion is doubtlessly correct. The argument that there is no utility in running the simulation one more time requires that the utility of running an exact repetition is lower the second time and that there is an alternative course of action that offers more utility. Neither is necessarily a given for a completely unknown utility function.
Load More