Comments

Fly

In 1999 we hadn't experienced the dotcom crash or 9/11. Those events may have slowed the adoption of consumer technology by a few years. On the other hand, military robots, remotely piloted aircraft, and "social" software for tracking terrorist groups have seen accelerated development. Concerns about global warming and oil shortages will likely accelerate nanotech and biotech associated with energy production while reducing the pace of development in other fields. Computational power continues to increase by a factor of a thousand per decade. Biotech continues to advance exponentially.

If humanity were really approaching a technological singularity, I'd expect to see rapid increases in average wealth. Stock market performance over the last decade doesn't reflect growth in real wealth, and death rates for common diseases aren't showing a significant decline.

Fly

Memories decay exponentially, both over time and over the number of items to remember. Remembering also requires attention, and the vast majority of memories in a brain will never be activated strongly enough to reach conscious awareness. As memories accumulate, the fraction we actively access decreases.
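
A rough sketch of the decay claim above; the time constant and interference rate are illustrative assumptions, not measured values:

```python
import math

# Toy forgetting model: recall probability falls exponentially with elapsed
# time and with the number of competing stored items. The parameters are
# placeholders chosen only to illustrate the shape of the claim.
def recall_probability(days_elapsed, items_stored,
                       time_constant=30.0, interference_rate=1e-5):
    return math.exp(-days_elapsed / time_constant) * math.exp(-interference_rate * items_stored)

print(round(recall_probability(days_elapsed=7, items_stored=10_000), 3))     # recent memory, modest load
print(round(recall_probability(days_elapsed=365, items_stored=100_000), 6))  # old memory, heavy load
```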

The human mind is a flashlight that dimly illuminates the path behind. Moving forward, we lose sight of where we've been. Living a thousand years wouldn't make that flashlight any brighter.

Fly

Terren Suydam: "So genetics is not the whole story, and that's what I mean by group selection."

I use the term "multilevel selection" for what you are describing. I agree it has been important.

E.g., there has been selection between different species. Species with genomes that supported rapid adaptation to changing environments and quick diversification when expanding into new niches spread far and wide. (Beetles have been extremely successful, with around 350,000 known species.) Other species branches died out. The genetic mechanisms and animal body plans that persist to the present are the winners of a long between-species selection process.

My intuition is that selection operating at the individual level, whether genetic or cultural, suffices to produce cooperation and moral behavior. Multilevel selection probably played a supporting role.

Fly

Terren Suydam: "The first is that one has to adopt the group-selection stance."

(Technical jargon nitpick.)

In evolutionary biology, "group selection" has a specific meaning: an individual sacrifices its own fitness in order to improve the group's fitness. I.e., individual loss for a group gain. E.g., suppose a species consists of many small family groups, and a mutation produces a self-sacrificing individual in one of the groups. His fitness is slightly lower, but his family group's fitness is higher. His group tends to grow faster than other groups, so it produces more splinter groups, some of which carry his alleles. Within any one group his allele tends to die out, but the overall population frequency of the allele increases because of the larger number of splinter groups containing it. This is an example of group selection.
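
To make the splinter-group bookkeeping concrete, here is a minimal toy simulation of that mechanism (my own illustrative sketch; the cost, benefit, and group counts are assumptions, not from the comment). Within every group the altruistic allele loses ground, while altruist-rich groups found more splinter groups; the printed frequencies vary run to run, and the point is only to show the two opposing forces.

```python
import random
from statistics import mean

COST = 0.03       # fitness cost paid by a self-sacrificing individual
BENEFIT = 1.0     # boost to a group's growth rate per unit altruist fraction
N_GROUPS = 200
GENERATIONS = 50

# Each group is represented only by its altruist allele frequency.
groups = [random.uniform(0.0, 0.3) for _ in range(N_GROUPS)]

def next_generation(groups):
    frequencies, growth_weights = [], []
    for p in groups:
        # Within-group selection: altruists have relative fitness (1 - COST),
        # so the allele frequency inside every group drifts downward.
        p_next = p * (1 - COST) / (p * (1 - COST) + (1 - p))
        frequencies.append(p_next)
        # Between-group selection: altruist-rich groups grow faster and
        # therefore found more splinter groups in the next generation.
        growth_weights.append(1.0 + BENEFIT * p_next)
    return random.choices(frequencies, weights=growth_weights, k=N_GROUPS)

print("initial altruist frequency:", round(mean(groups), 3))
for _ in range(GENERATIONS):
    groups = next_generation(groups)
print("final altruist frequency:  ", round(mean(groups), 3))
```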

Much more common is cooperation that doesn't lower the individual's fitness. In that case it is win-win: individual gain and group gain. Symbiosis is an example where the cooperation is between different species; both individuals gain, so it is not group selection.

There are a few known examples of group selection, but they are the rare exception, not the rule. Often something appears to be group selection but on closer analysis turns out to be regular selection. E.g., suppose a hunter shares his meat with the tribe. He isn't lowering his fitness, because he already has enough meat for himself, and by publicly displaying his prowess as a food provider he increases his mating success. His generosity is a fitness-increasing status display that directly improves his fitness.
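
A toy illustration of why this is ordinary selfish selection rather than group selection (the numbers are assumptions made up for the example): once the hunter is fed, extra meat is worth almost nothing to him, while the status display still pays.

```python
def meat_value(kg):
    # Diminishing returns: the first 5 kg keep the hunter fed; extra meat
    # adds almost nothing to his own fitness.
    return min(kg, 5.0) + 0.05 * max(kg - 5.0, 0.0)

def hunter_fitness(kept_kg, shared_kg, status_value_per_kg=0.3):
    # Sharing converts nearly worthless surplus meat into mating opportunities.
    return meat_value(kept_kg) + status_value_per_kg * shared_kg

print(hunter_fitness(kept_kg=20, shared_kg=0))   # hoards the whole kill: 5.75
print(hunter_fitness(kept_kg=5, shared_kg=15))   # keeps enough, shares the rest: 9.5
```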

Cooperation can and usually does arise through regular selfish selection.

(I see EY also noted this.)

Fly

Caledonian: "It is much, much more elegant - and more compatible with what we know about cognition - to hold that the complex systems are built out of smaller, simpler systems over which the complex has no control."

The brain has feedback loops reaching even the earliest processing stages. Thus, I might choose to look for a lost contact lens; with that goal in mind, my unconscious visual processing systems will be primed to recognize signals that could be a contact lens. (The feedback loops can be observed in neural tissue, and cognitive science experiments demonstrate that high-level conscious decisions can affect neural processing in the earlier stages.)

The conscious mind may be a dim reflection of the top-level computation that makes choices, but it does reflect some of the processing that occurs. It is aware of possible future outcomes and of potential paths to preferred outcomes. It isn't aware of the total brain mechanism that makes decisions, but it is aware of important pieces of that computation.

Fly

EY: "human cognitive psychology has not had time to change evolutionarily over that period"

Under selective pressure, human populations can change, and have changed, significantly in less than two thousand years. Various behavioral traits are highly heritable. Genghis Khan spread his behavioral genotype throughout Asia. (For this discussion this is a nitpick, but I dislike seeing false memes spread.)

re: FAI and morality

From my perspective, morality is a collection of rules that make cooperative behavior beneficial. Some rules should apply to any entities that compete for resources or can cooperate for mutual benefit. Some rules improved fitness in our animal predecessors and have become embedded in the brain structure of the typical human. And some rules are culture specific and change rapidly as the environment changes. (When your own children are likely to die of starvation, your society is much less concerned about children starving in distant lands. Much of modern Western morality is an outcome of the present wealth and security of Western nations.)

As a start, I suggest that an FAI should first discover those three types of rules, including how the rules vary among different animals and different cultures. (This would be an ongoing analysis that would evolve as the FAI's capabilities increased.) For cultural rules, the FAI would look for a subset of rules that permit different cultures to interact and prosper. Rules such as "kill all strangers" would be discarded. Rules such as "forgive all trespasses" would be discarded as well, since they don't permit defense against aggressive memes. A modified form of tit-for-tat might emerge: some punishment, some forgiveness, recognition that bad events happen with no one to blame, some allowance for misunderstandings, some allowance for penance or regret, some tolerance for diversity. Another good rule might be to provide everyone with a potential path to a better existence, i.e., use carrots as well as sticks. Look for a consistent set of cultural rules that furthers happiness, diversity, sustainability, growth, and increased prosperity. Look for rules that are robust, i.e., that give acceptable results under a variety of societal environments.
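
A minimal sketch of what such a modified tit-for-tat could look like in an iterated prisoner's dilemma (the payoff values, forgiveness rate, and noise level are my own illustrative assumptions): punish defection most of the time, but forgive occasionally so that accidental defections don't trigger endless retaliation.

```python
import random

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def generous_tit_for_tat(their_last_move, forgiveness=0.2):
    """Cooperate by default; usually punish a defection, sometimes forgive it."""
    if their_last_move in (None, "C"):
        return "C"
    return "C" if random.random() < forgiveness else "D"

def play(rounds=500, noise=0.05):
    """Two generous tit-for-tat players whose moves occasionally misfire."""
    last_a = last_b = None
    total_a = 0
    for _ in range(rounds):
        a = generous_tit_for_tat(last_b)
        b = generous_tit_for_tat(last_a)
        # Noise stands in for "bad events with no one to blame."
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        total_a += PAYOFFS[(a, b)]
        last_a, last_b = a, b
    return total_a / rounds

print(f"mean payoff per round: {play():.2f}")
```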

A similar analysis of animal morality would produce another set of rules, as would an analysis of rules for transactions between any entities. The FAI would then use a weighted sum of the three types of moral rules. The weights would change as society changed; e.g., when most of society consists of humans, human cultural rules would get the greatest weight. The FAI would plan for future changes in society by choosing rules that permit a smooth transition from a human-centered society, to an enhanced-human-plus-AI society, and finally to a future of AIs with human origins.
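
A hypothetical sketch of the weighted-sum idea (the rule scores, weight formulas, and population fractions below are my own illustrative assumptions, not anything from the comment): each class of rules scores a candidate action, and the weights shift with the composition of society.

```python
from dataclasses import dataclass

@dataclass
class RuleScores:
    universal: float   # rules for any entities that compete or cooperate
    biological: float  # rules evolution embedded in animal/human brains
    cultural: float    # culture-specific rules that change quickly

def moral_score(scores, human_fraction, other_bio_fraction):
    """Weighted sum of the three rule types; weights track who makes up society."""
    w_universal = 1.0                                   # always applies
    w_biological = human_fraction + other_bio_fraction  # fades as biology does
    w_cultural = human_fraction                         # fades as humans become a minority
    total_weight = w_universal + w_biological + w_cultural
    return (w_universal * scores.universal
            + w_biological * scores.biological
            + w_cultural * scores.cultural) / total_weight

action = RuleScores(universal=0.7, biological=0.9, cultural=0.4)
print(moral_score(action, human_fraction=0.95, other_bio_fraction=0.04))  # human-centered society
print(moral_score(action, human_fraction=0.10, other_bio_fraction=0.05))  # mostly-AI society
```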

Humans might only understand the rules that applied to humans. The FAI would enforce a different subset of rules for non-human biological entities and another subset for AIs, with other rules guiding interactions between different types of entities. (My mental model is of a body made up of cells, each expressing proteins in a manner appropriate to its specific tissue while contributing to and benefiting from the complete animal system: rules for each specific cell type, and rules for cells interacting.)

The transition shouldn't feel too bad to the citizens at any stage, and the FAI wouldn't be locked into an outdated morality. We might not recognize or like our children, but at least we wouldn't feel our throats being cut.

Fly

"...if there was a pill that would make the function of the mirror neurons go away, in other words, a pill that would make you able to hurt people without feeling remorse or anguish, would you take it?"

The mirror neurons also help you learn from watching other humans. They help you intuit the feelings of others, which makes social prediction possible. They help communication. And they allow you to share in the joy and pleasure of others, e.g., a young child playing in a park.

I would like more control over how my mind functions. At times it would be good to turn off some emotional responses, especially when someone is manipulating my emotions. So if the pill had only temporary effects, was safe, and would help me achieve my goals, then yes, I'd take the pill.

Fly

roko: "Game theory doesn't tell you what you should do, it only tells you how to do it. E.g. in the classic prisoner's dilemma, defection is only an optimal strategy if you've already decided that the right thing to do is to minimize your prison sentence."

Survival and growth affect the trajectory of a particle in mind space. Some "ethical systems" may act as attractors. Particles interact, clumps interact, higher level behaviors emerge. A super AI might be able to navigate the density substructures of mind space guided by game theory. The "right" decision would be the one that maximizes persistence/growth. (I'm not saying that this would be good for humanity. I'm only suggesting that a theory of non-human ethics is possible.)

(Phil Goetz, I wrote the above before reading your comment: "...variation in possible minds, for sufficiently intelligent AIs, is smaller than the variation in human minds." Yes, this is what I was trying to convey by "attractors" and navigation of density substructures in mind space.)

Fly

"Are there morally justified terminal (not instrumental) values, that don't causally root in the evolutionary history of value instincts?"

Such a morality should confer survival benefit. E.g., a tit-for-tat strategy.

Suppose an entity is greedy. It tries to garner all resources. In one-on-one competitions against weaker entities it thrives. But other entities see it as a major threat. A stronger entity will eliminate it. A group of weaker entities will cooperate to eliminate it.

A super intelligent AI might deduce or discover that other powerful entities exist in the universe and that they will adjust their behavior based on the AI's history. The AI might see some value in displaying non-greedy behavior to competing entities. I.e., it might let humanity have a tiny piece of the universe if it increases the chance that the AI will also be allowed its own piece of the universe.

Optimal survival strategy might be a basis for moral behavior that is not rooted in evolutionary biology. Valued behaviors might be cooperation, trade, self-restraint, limited reprisal, consistency, honesty, or clear signaling of intention.
