One of the best posts I've read here on LW, congratulations. I think that the most important algorithms the brain implements will probably be less complex than anticipated. Epigenesis and early ontogenetic adaptation are heavily dependent on feedback from the environment and are probably very general, even if the 'evolution of learning' and genetic complexity provide some of the domain specifications ab initio. Results concerning bounded computation (limited computational resources and limited information) will probably show that the ULM viewpoint cluster is compatible with the existence of cognitive biases and heuristics in our cognition: http://www.pnas.org/content/103/9/3198
Pinker offers several complementary explanations for his thesis, including game-theoretic ones (asymmetric growth, comparative advantage, and overall economic interdependence), which could be considered "not really nice reasons for measuring our (lack of) willingness to destroy each other". As SA said, Braumoeller seems to conflate 'not very nice reasons to maintain cooperation' with 'our willingness to engage in war hasn't changed'. This is also one of the reasons why Taleb et al. missed the point of Pinker's thesis. To test if it's bu...
Just to update this thread with recent discussions on EP: the list of all commentaries and responses to SWT's 'The Ape That Thought It Was a Peacock: Does Evolutionary Psychology Exaggerate Human Sex Differences?' is here, and most of the papers are freely available online by googling them. Very easy:
"The “subjective system” evolved from something like a basic reinforcement learning architecture, and it models subjective expectation and this organism's immediate rewards, and isn't too strongly swayed by abstract theories and claims."
I think this overestimates the degree to which a) (primitive) subjective systems are reward-seeking, and b) "personal identity" is really a definable, non-volatile, static entity rather than a folk-psychological dualistic concept in a Cartesian theater (cf. Dennett). For sufficiently complex adapt...
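For concreteness, here is a minimal sketch of the kind of "basic reinforcement learning architecture" the quoted passage gestures at: an agent that maintains a subjective expectation of reward per state and nudges that expectation toward observed feedback via a TD(0) update. The names and the toy two-state environment are illustrative assumptions, not anything from the original discussion.

```python
def td0_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD(0) step: move the value estimate toward the observed
    reward plus the discounted estimate of the next state's value."""
    target = reward + gamma * next_value
    return value + alpha * (target - value)

# A 'subjective expectation' over two hypothetical states, shaped
# only by experienced rewards: A transitions to B with reward 1,
# B is terminal with reward 0.
values = {"A": 0.0, "B": 0.0}
for _ in range(100):
    values["A"] = td0_update(values["A"], 1.0, values["B"])
    values["B"] = td0_update(values["B"], 0.0, 0.0)

print(round(values["A"], 2))  # expectation for A converges near 1.0
```

Note that nothing in this update rule requires (or provides) a persistent "self" doing the expecting, which is part of the point above: the reward-tracking machinery is much more primitive than the folk-psychological entity we project onto it.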
In 1948 Norbert Wiener, in the book Cybernetics: Or the Control and Communication in the Animal and the Machine, said: "Prefrontal lobotomy... has recently been having a certain vogue, probably not unconnected with the fact that it makes the custodial care of many patients easier. Let me remark in passing that killing them makes their custodial care still easier."
How about a machine that maximizes your own concept of pleasure and makes you believe that it is probably not a machine simulation (or that the machine-simulation worry is an irrelevant argument)?
Now I know why most LWers reported being aspies. I feel at home, I think :)
Great response.
What we value as good and fun may increase in volume, because we can discover new spaces with increasing intelligence. Will what we want to protect be preserved if we extrapolate human intelligence? Yes, if this new intelligence is not some kind of mind-blind autistic savant 2.0 who clearly can't preserve high levels of empathy and share the same "computational space". If we are going to live as separate individuals, then cooperation demands some fine-tuned empathic algorithms, so we can share our values with others and respect the qualitative spa...
Yes. That paper has been cited in Stuart J. Russell's "Rationality and Intelligence: A Brief Update" and in Valiant's second paper on evolvability.