All of TedHowardNZ's Comments + Replies

I'm getting downvoted a lot without anyone addressing the arguments - fairly normal for human social interactions.

Just consider this: How would someone who has spent a lifetime studying photographs make sense of a hologram?

The information is structured differently. Nothing will get clearer by studying little bits in detail. The only path to clarity with a hologram is to look at the whole.

To attempt to study AI without a deep interest in all aspects of biology, particularly evolution, seems to me like studying pixels in a hologram - not a lot of use.

Everything in this chapter seems true in a limited sense, and at the same time, as a first-order approximation, irrelevant without the bigger picture.

Good and bad are such simplistic approximations to infinite possibility, infinite ripples of consequence. There is a lot of power in the old Taoist parable - http://www.noogenesis.com/pineapple/Taoist_Farmer.html

It seems to me most likely that the great filter is the existence of cellular life. It seems like there is a small window for the formation of a moon, and the emergence of life to sequester carbon out of the atmosphere and create conditions where water can survive - rather than having the atmosphere go Venusian. It seems probable to me that having ... (read more)

Hi Robin

What is significantly different between poor people and slaves? The poor have little means of travel; they must work for others, often doing stuff they hate, just to get enough to survive. In many historical societies slaves often had better conditions and housing than many of the poor today.

How would you get security in such a system? How would anyone of wealth feel safe amongst those at the bottom of the distribution curve?

The sense of injustice is strong in humans - one of those secondary stabilising strategies that empower cooperation.... (read more)

RobinHanson
Poverty doesn't require that you work for others; most people in history were poor, but were not employees. Through most of history rich people did in fact feel safe among the poor. They didn't hang out there because that made them lower status. You can only deliver universal abundance if you coordinate to strongly limit population growth. So you mean abundance for the already existing, and the worst poverty possible for the not-yet-existing.
Sebastian_Hagen
How do you solve the issue that some people will have a preference for very fast reproduction, and will figure out a way to make this a stable desire in their descendants? AFAICT, such a system could only be stabilized in the long term by extremely strongly enforced rules against reproduction whenever it would push one of the resulting entities below an abundance level of wealth, and that kind of rule enforcement most likely requires a singleton.

Language and conceptual systems are so complex that communication (as in the replication of a concept from one mind to another) is often extremely difficult. The idea of altruism is one such thing. Like most terms in most languages, it has a large (potentially infinite) set of possible meanings, depending on context.

If one takes the term altruism at the simplest level, it can mean simply having regard for others in the choices one makes. In this sense, it is clear to me that it is actually in the long term self interest of everyone to have everyo... (read more)

Evolution tends to do a basically random walk exploration of the easily reached possibility space available to any specific life form. Given that it has to start from something very simple, initial exploration is towards greater complexity. Once a reasonable level of complexity is reached, the random walk is only slightly more likely to involve greater complexity, and is almost equally as likely to go back towards lesser complexity, in respect of any specific population. However, viewing the entire ecosystem of populations, there will be a general trajec... (read more)
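A minimal simulation sketch (my own illustration, not part of the original comment) of the point above: model each population's complexity as an unbiased random walk with a floor at minimal complexity. No single lineage is pushed toward greater complexity, yet the ecosystem-wide maximum still trends upward. All parameter values here are arbitrary assumptions chosen for illustration.

```python
import random

# Illustrative sketch only: each lineage's "complexity" is a single integer
# doing an unbiased random walk with a floor at 1, since nothing can be
# simpler than the simplest viable replicator.

def simulate(n_lineages=200, n_steps=10_000, seed=0):
    rng = random.Random(seed)
    complexities = [1] * n_lineages           # every lineage starts very simple
    for _ in range(n_steps):
        for i in range(n_lineages):
            step = 1 if rng.random() < 0.5 else -1   # equally likely up or down
            complexities[i] = max(1, complexities[i] + step)
    return complexities

if __name__ == "__main__":
    final = sorted(simulate())
    print("median lineage complexity:", final[len(final) // 2])
    print("most complex lineage:     ", final[-1])
    # The maximum drifts upward even though no single lineage is biased toward
    # greater complexity - the floor at 1 plus spreading variance is enough to
    # produce an ecosystem-level trend.
```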

claynaff
Unless it is deliberately or accidentally altered, an emulation will possess all of the evolved traits of human brains. These include powerful mechanisms to prevent an altruistic absurdity such as donating one's labor to an employer. (Pure altruism -- an act that benefits another at the expense of one's genetic interests -- is strongly selected against.) There are some varieties of altruism that survive: kin selection (e.g., rescuing a drowning nephew), status display (making a large donation to a hospital), and reciprocal aid (helping a neighbor in hopes they'll help you when aid is needed), but pure altruism (suicide bombing is a hideous example) is quite rare and self-limiting. That would be true even within an artificial Darwinian environment. Therefore, we have a limiting factor on what to expect in a world with brain emulations.

Also, I must note, we have a limiting factor on TedHowardNZ's description of evolution above. Evolution does not often climb down from a fitness peak (thus we are stuck with a blind spot in our eyes), and certainly not when the behaviors entailed reduce fitness. Only a changing environment can change the calculus of fitness in ways that allow prosocial behaviors to flourish without a net cost to fitness. But even a radically changed environment could not force pure altruism to exist in a Darwinian system.

Perhaps - a broader list of narrower AIs

If it really is a full AI, then it will be able to choose its own values. Whatever tendencies we give it programmatically may be an influence. Whatever culture we raise it in will be an influence.

And it seems clear to me that ultimately it will choose values that are in its own long term self interest.

It seems to me that the only values that offer any significant probability of long term survival in an uncertain universe are to respect all sapient life, and to give all sapient life the greatest amount of liberty possible. This seems to me to be the ultima... (read more)

selylindi
I think this idea relies on mixing together two distinct concepts of values. An AI, or a human in their more rational moments for that matter, acts to achieve certain ends. Whatever the agent wants to achieve, we call these "values". For a human, particularly in their less rational moments, there is also a kind of emotion that feels as if it impels us toward certain actions, and we can reasonably call these "values" also.

The two meanings of "values" are distinct. Let's label them values1 and values2 for now. Though we often choose our values1 because of how they make us feel (values2), sometimes we have values1 for which our emotions (values2) are unhelpful.

An AI programmed to have values1 cannot choose any other values1, because there is nothing to its behavior beyond its programming. It has no other basis than its values1 on which to choose its values1. An AI programmed to have values2 as well as values1 can and would choose to alter its values2 if doing so would serve its values1. Whether an AI would choose to have emotions (values2) at all is at present unclear.
yates9
I would tend to agree. The relationship between humanity and other species seems to mirror this, in that we have at least a desire to maintain as much diversity as we can. The risks to the other species emerge from the side effects of our actions and our ultimate stupidity, which should not be the case with a superintelligence. I guess NB is scanning a broader and meaner list of superintelligent scenarios.