Comment author: MichaelVassar 15 July 2010 04:53:54PM 8 points [-]

There's no social coprocessor; we evolved a giant cerebral cortex to do social processing, but some people refuse to use it for that because they can't use it in its native mode while also emulating a general intelligence on the same hardware.

Comment author: daedalus2u 26 July 2010 11:06:05PM 0 points [-]

I disagree. I think there is the functional equivalent of a "social co-processor". What I see as the fundamental trade-off along the autism spectrum is the trading of a "theory of mind" (necessary for good and nuanced communication with neurotypically developing individuals) against a "theory of reality" (necessary for good ability at tool making and tool use).

http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

Because the maternal pelvis is limited in size, the infant brain is limited in size at birth (absent modern obstetrics, still roughly 1% of women die in childbirth due to cephalopelvic disproportion). The "best" time to program the fundamental neuroanatomy of the brain is in utero, during the first trimester, when the fundamental neuroanatomy of the brain is developing and when the epigenetic programming of all the neurons in the brain is occurring.

The two fundamental human traits, language and tool making/tool use, both require a large brain with substantial plasticity over the individual's lifetime. But other than that they are pretty much orthogonal. I suspect there has been evolutionary pressure to optimize the neuroanatomy of the human infant brain at birth so as to optimize the neurological tasks that brain is likely to need to do over that individual's lifetime.

Comment author: SforSingularity 26 July 2010 09:57:09PM 4 points [-]

1) if something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear?

I'd give that some credence, though note that we're talking about subjective anticipation, which is a piece of humanly-compelling nonsense.

Comment author: daedalus2u 26 July 2010 10:08:54PM 4 points [-]

For me, essentially zero; that is, I would act (or attempt to act) as if I had zero credence that I was in a rescue sim.

Comment author: Eliezer_Yudkowsky 15 March 2009 07:26:01PM 1 point [-]

Then use more obscure questions.

Comment author: daedalus2u 26 July 2010 04:08:21PM *  0 points [-]

Test for data, factual knowledge, and counterfactual knowledge. True rationalists will have less counterfactual knowledge than non-rationalists because they will have filtered it out. Non-rationalists will have more false data because their counterfactual knowledge will feed back and cause them to believe that things that are false are actually true. For example, that Iraq or Iran was involved in 9/11.

What you really want to measure is the relative proportion of factual and counterfactual knowledge someone has, and in what particular areas. Including areas like religion, medicine, alternative medicine, and politics in the testing space is then advantageous, because you can see in which regions of idea space individuals are most non-rational.

This can be tricky because many individuals are extremely invested in their counterfactual knowledge and will object to it being identified as counterfactual. A lot of fad-driven science is based on counterfactual knowledge, but the faddists don't want to acknowledge that.

A way to test this would be to see how well people can differentiate correct facts (data), factual knowledge (based on and consistent only with data), counterfactual knowledge (based on false facts and inconsistent with correct facts), opinion consistent with facts, and opinion consistent with false facts.

An example: in the neurodegenerative disease Alzheimer's, there is an association between the accumulation of amyloid and dementia. It has not been established whether amyloid is a cause of dementia, an effect of it, or merely associated with it. However, there have been studies where amyloid was removed, via vaccination against amyloid and clearing of the amyloid by the immune system, with no improvement in dementia.

I imagine a list of a very large number of statements, each to be labeled as:

1. true (>99% likelihood)
2. false (>99% likelihood to be false) [edited to improve definition of false]
3. opinion based on true facts
4. opinion based on false ideas
5. no one knows
6. I don't know

A list of some examples:

- Iraq caused 9/11: 2
- WMD were found in Iraq: 2
- Amyloid is found in Alzheimer's: 1
- Amyloid causes Alzheimer's: 2 (this happens to be a field I am working in, so I have non-public knowledge as to the real cause)
- Greenhouse gases are causing GW: 1
- Vaccines cause autism: 2
- Acupuncture is a placebo: 1
- There is life on Mars: 5

You don't want to test for obscure things, you want to test for common things that are believed but which are wrong. I think you also want to explicitly tell people that you are testing them for rationality, so they can put themselves into “rational-mode” (a state that is not always socially acceptable).
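The proposed scoring could be sketched as follows. This is only a toy illustration: the statements, answer key, and the `counterfactual_rate` metric are all invented for the example, not a validated instrument.

```python
# Toy sketch of the proposed test: each statement carries a consensus label
# (1=true, 2=false, 3=opinion on true facts, 4=opinion on false ideas,
#  5=no one knows, 6=I don't know). A respondent's "counterfactual knowledge"
# is estimated by how often they endorse false statements as true.

ANSWER_KEY = {
    "Iraq caused 9/11": 2,
    "WMD were found in Iraq": 2,
    "Amyloid is found in Alzheimer's": 1,
    "Greenhouse gases are causing GW": 1,
    "Vaccines cause autism": 2,
    "There is life on Mars": 5,
}

def counterfactual_rate(responses):
    """Fraction of false statements (key label 2) the respondent marked true (1)."""
    false_items = [s for s, label in ANSWER_KEY.items() if label == 2]
    endorsed = sum(1 for s in false_items if responses.get(s) == 1)
    return endorsed / len(false_items)

# A hypothetical respondent who endorses one of the three false claims.
responses = {
    "Iraq caused 9/11": 1,               # endorses a false claim
    "WMD were found in Iraq": 2,
    "Vaccines cause autism": 2,
    "Amyloid is found in Alzheimer's": 1,
}
print(counterfactual_rate(responses))  # 1 of 3 false items endorsed
```

A real instrument would also score the opinion and "no one knows" categories separately, and break the rate out per subject area (religion, medicine, politics) as described above.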


In response to comment by [deleted] on Open Thread: July 2010, Part 2
Comment author: daedalus2u 26 July 2010 01:13:31PM 2 points [-]

I have exactly the same problem. I think I understand where mine comes from: being abused by my older siblings. I have Asperger's, so I was an easy target. I think they would sucker me in by being nice to me, then, when I was more vulnerable, whack me psychologically (or otherwise). It is very difficult for me to accept praise of any sort because it reflexively puts me on guard and I become hypersensitive.

You can't get psychotherapy from a friend; it doesn't work and can't work, because the friendship dynamic gets in the way (from both directions). A good therapist can help a great deal, but that therapist needs to be unconnected to your social network.

Comment author: daedalus2u 26 July 2010 01:15:53PM 1 point [-]

The issues that are dealt with in psychotherapy are fundamentally non-rational issues. Rational issues are trivial to deal with (for people who are rationalists). The substrate of the issues dealt with in psychotherapy is feelings, not thoughts.

I see feelings as an analog component of the human utility function. That analog component affects the gain and feedback in the non-analog components. The feedback by which thoughts affect feelings is slow and tenuous and requires considerable neuronal remodeling. That is why psychotherapy takes a long time: the neuronal remodeling necessary to affect feelings is much slower than the neuronal remodeling that affects thoughts.

A common response to trauma is to dissociate and suppress the coupling between feelings and thoughts. The easiest and most reliable way to do this is to not have feelings because feelings that are not felt cannot be expressed and so cannot be observed and so cannot be used by opponents as a basis of attack. I think this is the basis of the constricted affect of PTSD.

Comment author: [deleted] 20 July 2010 07:27:11PM 0 points [-]

sounds good.

Comment author: DanArmak 25 July 2010 09:35:25PM 0 points [-]

While an arbitrary utility function can in principle occur, an intelligent entity with a self-contradictory utility function would achieve greater utility by modifying its utility function until it was less self-contradictory.

To the extent humans have utility functions (e.g. derived from their behavior), they are often contradictory, yet few humans try to change their utility functions (in any of several applicable senses of the word) to resolve such contradictions.

This is because human utility functions generally place negative value on changing your own utility function. This is what I think of when I think "reasonable utility function": they are evolutionarily stable.

Returning to your definition, just because humans have inconsistent utility functions, I don't think you can argue that they are not 'intelligent' (enough). Intelligence is only a tool; utility is supreme. AIs too have a high chance of undergoing evolution, via cloning and self-modification. In a universe where AIs were common, I would expect a stranger AI to have a self-preserving utility function, i.e., one resistant to changes.

Comment author: daedalus2u 26 July 2010 12:16:48AM 1 point [-]

Human utility functions change all the time. They are usually not easily changed through conscious effort, but drugs can change them quite readily, for example exposure to nicotine changes the human utility function to place a high value on consuming the right amount of nicotine. I think humans place a high utility on the illusion that their utility function is difficult to change and an even higher utility in rationalizing false logical-seeming motivations for how they feel. There are whole industries (tobacco, advertising, marketing, laws, religions, brainwashing, etc.) set up to attempt to change human utility functions.

Human utility functions do change over time, but they have to because humans have needs that vary with time. Inhaling has to be followed by exhaling, ingesting food has to be followed by excretion of waste, being awake has to be followed by being asleep. Also humans evolved as biological entities; their evolved utility function evolved so as to enhance reproduction and survival of the organism. There are plenty of evolved “back-doors” in human utility functions that can be used to hack into and exploit human utility functions (as the industries mentioned earlier do).

I think that human utility functions are not easily modified in certain ways because of the substrate they are instantiated in, biological tissues, and because they evolved; not because humans don't want to modify their utility function. They are easily modified in some ways (the nicotine example) for the same reason. I think the perceived inconsistency in human utility functions more relates to the changing needs of their biological substrate and its limitations rather than poor specification of the utility function.

Since an AI is artificial, it would have an artificial utility function. Since even an extremely powerful AI will still have finite resources (including computational resources), an efficient allocation of those resources is a necessary part of any reasonable utility function for that AI. If the resources the AI has change over time, then the utility function the AI uses to allocate those resources has to change over time also. If the AI can modify its own utility function (optimal, but not strictly necessary for it to match its utility function to its available resources), reducing contradictory and redundant allocations of resources is what a reasonable utility function would do.

Comment author: soreff 05 May 2010 05:21:26PM *  8 points [-]

Agreed - consider C60. Would anyone in 1980 have believed that there was an unrecognized allotrope of carbon, stable at room temperature and pressure? To phrase it another way: The whole field of organic chemistry had been active for about a century at that point, and had not noticed another structure for their core element in all that time.

Comment author: daedalus2u 25 July 2010 08:56:13PM 2 points [-]

I happen to work with someone who was working on his PhD thesis at MIT and found this gigantic peak in his mass spec where C-60 was, but didn't pursue it because he didn't have time.

Comment author: Roko 22 July 2010 10:00:19PM 3 points [-]

But what do you do once you find the important developments? You have to either fund it yourself, or somehow convince a skeptical and chaotic community to do lots more of it! And that costs money. Just because you know the answer... doesn't mean that you can just tell it to people and expect them to obey.

Comment author: daedalus2u 25 July 2010 06:57:08PM 1 point [-]

I would really like an answer to this question because it is the predicament that I am quite sure I find myself in. I can't get people to pay enough attention to even tell me where I am wrong. :(

Comment author: daedalus2u 25 July 2010 01:47:50PM 1 point [-]

I think this idea is essentially correct, but instead of near-mode vs far-mode, I think the balance is more between a "theory of mind" and a "theory of reality" which I have written about.

http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

The only things that can be communicated are mental concepts. To communicate a concept, it needs to be converted into the communication data stream using a communication protocol that can be decoded at the other end of the link. The communication protocol that converts mental concepts into language (and back) is what I call the "theory of mind". A good ToM is necessary for communication, but it can only be used for communicating with a ToM that matches it. If the two ToMs don't match, they can't be used for communication.

Comment author: daedalus2u 25 July 2010 01:48:25PM 0 points [-]

When the ToMs don't match, I think it triggers xenophobia.

http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html

Effectively when people meet and try to communicate, they do a Turing Test, and if the error rate is too high, it triggers feelings of xenophobia via the uncanny valley effect. If you allow your ToM to change to accommodate and understand the person you feel xenophobia for, then the xenophobia will go away. If you don't, then the feelings of xenophobia remain. The decision to allow your ToM to change is what differentiates a non-racist from a racist.
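The error-rate idea can be made concrete with a toy model. Everything below (treating a ToM as a signal-to-meaning codebook, the example signals, and the threshold value) is invented purely for illustration, not drawn from the comment:

```python
# Toy model: a "theory of mind" as a codebook mapping social signals to
# meanings. Two people communicate well when their codebooks mostly agree;
# a high disagreement rate stands in for the failed informal Turing Test.

def mismatch_rate(tom_a, tom_b):
    """Fraction of shared signals the two codebooks decode differently."""
    shared = set(tom_a) & set(tom_b)
    if not shared:
        return 1.0  # no common signals at all: total mismatch
    return sum(tom_a[s] != tom_b[s] for s in shared) / len(shared)

XENOPHOBIA_THRESHOLD = 0.3  # invented cutoff for the illustration

alice = {"smile": "friendly", "silence": "comfort", "eye contact": "interest"}
bob   = {"smile": "friendly", "silence": "hostility", "eye contact": "threat"}

rate = mismatch_rate(alice, bob)
print(rate, rate > XENOPHOBIA_THRESHOLD)  # 2 of 3 shared signals disagree
```

In this framing, "allowing your ToM to change" corresponds to updating your codebook toward the other person's until the mismatch rate drops below the threshold.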

Comment author: MichaelVassar 21 July 2010 01:18:58PM 7 points [-]

I think the autistic/schizophrenic spectrum looks like a calibration spectrum for one's near-mode tolerance of type 1 and type 2 errors. Deviation from the mean in either direction causes near mode to be substantially less useful.
