
In 1999 we hadn't yet experienced the dotcom crash or 9/11. Those events may have slowed consumer technology adoption by a few years. On the other hand, military robots, remotely piloted aircraft, and "social" software for tracking terrorist groups have seen accelerated development. Concerns about global warming and oil shortages will likely accelerate nanotech and biotech associated with energy production while slowing development in other fields. Computational power continues to increase by a factor of a thousand per decade. Biotech continues to advance exponentially.
If humanity were really approaching a technological singularity, I'd expect to see rapid increases in average wealth. Stock market performance in the last decade doesn't reflect growth in real wealth. Also, death rates for common diseases aren't showing a significant decline.
Memories decay exponentially, both over time and over the number of items to remember. Remembering also requires the brain's attention. The vast majority of memories in a brain will never be activated strongly enough to reach conscious awareness. As memories accumulate, the fraction we actively access decreases.
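The pattern can be sketched as a toy model. This is purely illustrative: the decay rate, the recall threshold, and the one-memory-per-day assumption are arbitrary choices, not measured values.

```python
import math

def recall_probability(age_days, decay_rate=0.1):
    """Toy exponential forgetting curve: P(recall) = exp(-rate * age)."""
    return math.exp(-decay_rate * age_days)

def active_fraction(num_memories, threshold=0.5, decay_rate=0.1):
    """Fraction of a memory store still above a recall threshold.

    Assumes one memory is laid down per day, so a store of N memories
    has ages 0..N-1 days; older memories have decayed further.
    """
    active = sum(1 for age in range(num_memories)
                 if recall_probability(age, decay_rate) >= threshold)
    return active / num_memories

print(active_fraction(100))    # 7 of 100 memories above threshold -> 0.07
print(active_fraction(1000))   # same 7, now of 1000 -> 0.007
```

Because decay caps the number of readily accessible memories at a roughly constant count, the accessible *fraction* shrinks as the store grows, which is the point above.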
The human mind is a flashlight that dimly illuminates the path behind. Moving forward, we lose sight of where we've been. Living a thousand years wouldn't make that flashlight any brighter.
Terren Suydam: "So genetics is not the whole story, and that's what I mean by group selection."
I use the term "multilevel selection" for what you are describing. I agree it has been important.
E.g., there has been selection between different species. Species with genomes that supported rapid adaptation to changing environments, and quick diversification when expanding into new niches, spread far and wide. (Beetles have been extremely successful, with around 350,000 known species.) Other species branches died out. The genetic mechanisms and the animal body plans that persist to the present are the winners of a long between-species selection process.
My intuition is that selection operating at the individual level, whether genetic or cultural, suffices to produce cooperation and moral behavior. Multilevel selection probably played a supporting role.
Terren Suydam: "The first is that one has to adopt the group-selection stance."
(Technical jargon nitpick.)
In evolutionary biology, "group selection" has a specific meaning: an individual sacrifices its own fitness in order to improve the group's fitness. I.e., individual loss for a group gain. E.g., suppose a species consists of many small family groups, and a mutation produces a self-sacrificing individual in one of them. His fitness is slightly lower, but his family group's fitness is higher. His group tends to grow faster than other groups, so it produces more splinter groups, some of which will carry his alleles. Within any one group his allele tends to... (read more)
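The verbal model above can be sketched numerically. This is a minimal deterministic toy, not a population-genetics simulation: the cost and benefit parameters are arbitrary assumptions, and "groups" are just (altruist, non-altruist) count pairs.

```python
def step(groups, cost=0.05, benefit=0.2):
    """One generation. Each group is (altruists, non-altruists).

    Every member's growth scales with the group's altruist fraction
    (the between-group advantage), but altruists also pay an individual
    fitness cost (the within-group disadvantage).
    """
    new_groups = []
    for a, n in groups:
        frac = a / (a + n)
        group_growth = 1 + benefit * frac   # group gain from altruists
        new_groups.append((a * group_growth * (1 - cost),  # altruists pay cost
                           n * group_growth))
    return new_groups

def altruist_fraction(groups):
    """Altruist share of the whole population, across all groups."""
    total_a = sum(a for a, n in groups)
    total = sum(a + n for a, n in groups)
    return total_a / total

# One altruist-rich group and one altruist-poor group.
groups = [(9.0, 1.0), (1.0, 9.0)]
groups_next = step(groups)
```

Running one generation shows the Simpson's-paradox structure of the argument: the altruist fraction falls *inside each group* (individual loss) while the altruist fraction of the *whole population* rises, because the altruist-rich group grows faster (group gain).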
Caledonian: "It is much, much more elegant - and more compatible with what we know about cognition - to hold that the complex systems are built out of smaller, simpler systems over which the complex has no control."
The brain has feedback loops to even the earliest processing stages. Thus, I might choose to look for a lost contact lens. With that goal in mind, my unconscious visual processing systems will be primed to recognize signals that could be a contact lens. (The feedback loops can be observed in neural tissue. Cognitive science experiments demonstrate that high-level conscious decisions can affect neural processing at earlier stages.)
The conscious... (read more)
EY: "human cognitive psychology has not had time to change evolutionarily over that period"
Under selective pressure, human populations can change significantly in less than two thousand years, and have. Various behavioral traits are highly heritable. Genghis Khan spread his behavioral genotype throughout Asia. (For this discussion this is a nitpick, but I dislike seeing false memes spread.)
re: FAI and morality
From my perspective, morality is a collection of rules that make cooperative behavior beneficial. Some rules should apply to any entities that compete for resources or can cooperate for mutual benefit. Other rules improved fitness in our animal predecessors and have become embedded in the brain structure... (read 417 more words →)
"...if there was a pill that would make the function of the mirror neurons go away, in other words, a pill that would make you able to hurt people without feeling remorse or anguish, would you take it?"
The mirror neurons also help you learn from watching other humans. They help you intuit the feelings of others, which makes social prediction possible. They help communication. They also allow you to share in the joy and pleasure of others, e.g., a young child playing in a park.
I would like more control over how my mind functions. At times it would be good to turn off some emotional responses, especially when someone is manipulating my emotions. So if the pill were safe, its effects were temporary, and it would help me achieve my goals, then yes, I'd take the pill.
roko: "Game theory doesn't tell you what you should do, it only tells you how to do it. E.g. in the classic prisoner's dilemma, defection is only an optimal strategy if you've already decided that the right thing to do is to minimize your prison sentence."
Survival and growth affect the trajectory of a particle in mind space. Some "ethical systems" may act as attractors. Particles interact, clumps interact, higher level behaviors emerge. A super AI might be able to navigate the density substructures of mind space guided by game theory. The "right" decision would be the one that maximizes persistence/growth. (I'm not saying that this would be good for humanity. I'm only suggesting that a theory of non-human ethics is possible.)
(Phil Goetz, I wrote the above before reading your comment: "...variation in possible minds, for sufficiently intelligent AIs, is smaller than the variation in human minds." Yes, this is what I was trying to convey by "attractors" and the navigation of density substructures in mind space.)
"Are there morally justified terminal (not instrumental) values, that don't causally root in the evolutionary history of value instincts?"
Such a morality should confer survival benefit. E.g., a tit-for-tat strategy.
Suppose an entity is greedy: it tries to garner all resources. In one-on-one competitions against weaker entities it thrives. But other entities see it as a major threat. A stronger entity will eliminate it; a group of weaker entities will cooperate to eliminate it.
A super intelligent AI might deduce or discover that other powerful entities exist in the universe and that they will adjust their behavior based on the AI's history. The AI might see some value in displaying non-greedy behavior to competing entities.... (read more)
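The tit-for-tat point above can be sketched with an iterated prisoner's dilemma, pitting it against an always-defect strategy standing in for the "greedy" entity. The payoff values are the standard textbook ones; the round count is an arbitrary assumption.

```python
# Iterated prisoner's dilemma: C = cooperate, D = defect.
# Standard payoffs (per player): mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5, cooperating against a defector 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The 'greedy' entity: defect no matter what."""
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Play an iterated game; each strategy sees only the other's moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (99, 104): defector wins narrowly
print(play(tit_for_tat, tit_for_tat))    # (300, 300): cooperators thrive
```

One-on-one the defector edges out tit-for-tat (it steals one round, then both stall at mutual defection), but a pair of tit-for-tat players scores far higher together — the game-theoretic version of weaker entities doing better by cooperating than the greedy entity does by exploiting.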
Ape trounces the best of the human world in memory competition
http://www.dailymail.co.uk/news/article-510260/Im-chimpion--Ape-trounces-best-human-world-memory-competition.html