KatjaGrace comments on Superintelligence 18: Life in an algorithmic economy - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If evolution doesn't basically imply forward progress, why do you think it seems like we are doing so much better than our ancestors?
Because the "doing better" history is written by the victors. It's our values that are being used to judge the improvement. Further evolutionary change, if left to the same blind idiot god, is highly likely to leave our descendants with changed - and worse - values. So long as the value drift is slight and the competence keeps increasing, our descendants will live better lives. But if and when the value drift becomes large, that will reverse. That's why we've got to usurp the powers of the blind idiot god before it's too late.
Closely related: Scott Alexander's Meditations on Moloch.
We are doing better, because we are achieving outcomes that have always been valued, like longer lifespan and health. The pharaohs and emperors of yore would have envied the painless dentistry and flat-screen TVs now enjoyed by the average person.
The Molochian argument is that there is a pressure towards the sacrifice of a subset of those valued outcomes, the ones which require coordination, motivated by the subset of values which are self-centred and do not promote coordination. There is no wholesale sacrifice of values: if we do something to sacrifice one thing we value, our motivation is another value.
There is also a pressure in the other direction, towards the promotion of coordination, and that pressure is ethics. Ethics is a distributed Gardener. (LessWrongian and Codexian ethical thinking are both equally and oddly uninterested in the question: what is ethics?) Typical ethical values such as fairness, equality, and justice all promote coordination.
Ethical values are not a passive reflection of what society is, but instead push it in a more coordinative direction.
Ethical values at a given time are tailored to what is achievable. Under circumstances where warfare is unavoidable, for instance, ethical values ameliorate the situation by promoting courage, chivalry, etc. This situation is often misread as "our ancestors valued war, but we value peace".
There are no guarantees one way or the other about which tendency will win out.
The ethical outlook of a society is shaped by the problems it needs to solve, and can realistically solve, but not down to unanimity. Different groups within society have different interests, which is the origin of politics. Politics is disagreement about what to coordinate and how to coordinate.
I agree with most of that, including
but not:
I mean, that might be what Scott had in mind for the word Moloch, but the actual logic of the situation raises another challenge. The fragility of value, and the misalignment between human values and "whatever reproduces well, not just in the EEA but wherever and whenever", creates a dire problem.
Molochian problems would be direr without the existence of a specific mechanism to overcome them.
I'm not a believer in the fragility of value.
http://lesswrong.com/lw/y3/value_is_fragile/br8k
You gave me the chance to check whether I was using "fragility of value" correctly. (I think so.) Your reply in that thread doesn't fit the fragility thesis: you're reading too much into it. EY is asserting that humanly-valuable outcomes are a small region in a high-dimensional space. That's basically all there is to it, though some logical consequences are drawn that flesh it out, and some of the evidence for it is indicated.
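A toy numerical illustration of the "small region in a high-dimensional space" reading (the numbers are mine and purely hypothetical): even if outcomes are acceptable across 90% of each value dimension taken alone, the fraction of the space acceptable on every dimension at once shrinks exponentially with the number of dimensions.

```python
# Hypothetical illustration: per_dim is the fraction of each value
# dimension that counts as "humanly acceptable"; with n independent
# dimensions, the jointly acceptable volume is per_dim ** n.
def acceptable_fraction(per_dim: float, n_dims: int) -> float:
    """Volume fraction of the region acceptable on all dimensions at once."""
    return per_dim ** n_dims

for n in (1, 10, 100):
    print(n, acceptable_fraction(0.9, n))
# n=10 gives roughly 0.35; n=100 gives roughly 2.7e-5.
```

Nothing here depends on the specific 90% figure; any per-dimension fraction below 1 gives the same exponential collapse.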
If he is asserting only what you say, he is asserting nothing of interest. What FoV is usually taken to mean is that getting FAI right is difficult ... and that is rightly called fragility, because it is a process. However, it is not a conclusion supported by a premise about high-dimensional spaces, because that is not a process.
Evolution tends to do a basically random-walk exploration of the easily reached possibility space available to any specific life form. Given that it has to start from something very simple, initial exploration is towards greater complexity. Once a reasonable level of complexity is reached, the random walk is only slightly more likely to involve greater complexity, and is almost equally likely to go back towards lesser complexity for any specific population. Viewed across the entire ecosystem of populations, however, there will be a general trajectory of expansion into new territory of possibility. The key thing to get is that for any specific population or individual (when considering the population of behavioural memes within that individual), there is an almost equal likelihood of going back into territory already explored as of exploring new territory.
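The picture above can be sketched with a toy simulation (my construction, not from the comment): a symmetric ±1 random walk with a reflecting floor at minimal complexity drifts upward early on simply because it cannot go lower, and while any single lineage is thereafter nearly as likely to fall back as to advance, the maximum over many lineages keeps expanding into new territory.

```python
import random

def reflected_walk(steps, floor=0):
    """Symmetric +/-1 random walk with a reflecting floor: complexity
    cannot drop below 'very simple', so early motion is biased upward."""
    x, path = floor, [floor]
    for _ in range(steps):
        x += random.choice((-1, 1))
        if x < floor:
            x = floor
        path.append(x)
    return path

random.seed(0)
walks = [reflected_walk(1000) for _ in range(200)]
finals = [w[-1] for w in walks]
print("mean final complexity:", sum(finals) / len(finals))
print("max complexity reached anywhere:", max(max(w) for w in walks))
```

The mean final position stays modest while the ecosystem-wide maximum is much larger, matching the "frontier expands even though each lineage wanders" claim.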
There is a view of evolution that is not commonly taught, that acknowledges the power of competition as a selection filter between variants, and also acknowledges that all major advances in complexity of systems are characterised by new levels of cooperation. And all cooperative strategies require attendant strategies to prevent invasion by "cheats". Each new level of complexity is a new level of cooperation.
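The "cooperation plus attendant anti-cheat strategies" point is what Axelrod's iterated prisoner's dilemma tournaments formalised. A minimal sketch with the standard payoffs (the strategy code is mine): tit-for-tat cooperates by default but retaliates against defection, and that retaliation is the attendant strategy protecting the cooperative surplus from invasion by cheats.

```python
# Standard iterated prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):   # cooperate first, then mirror opponent
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(a, b, rounds=100):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)        # each strategy sees the opponent's moves
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb); sa += pa; sb += pb
    return sa, sb

print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))      # (300, 300)
print("AllD vs AllD:", play(always_defect, always_defect))  # (100, 100)
print("TFT  vs AllD:", play(tit_for_tat, always_defect))    # (99, 104)
```

Mutual reciprocators earn 300 each while mutual defectors earn 100, and tit-for-tat loses only 5 points to a pure cheat before shutting the exploitation down.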
There are many levels of attendant strategies that can and do speed evolution of subsets of any set of characters.
Evolution is an exceptionally complex set of systems within systems. At both the genetic and memetic levels, evolution is a massively recursive process, with many levels of attendant strategies. Darwin is a good introduction; follow him with Axelrod, Maynard Smith, and Wolfram, and there are many others worth reading. Perhaps the best introduction is Richard Dawkins's classic "The Selfish Gene".
Unless it is deliberately or accidentally altered, an emulation will possess all of the evolved traits of human brains. These include powerful mechanisms to prevent an altruistic absurdity such as donating one's labor to an employer. (Pure altruism -- an act that benefits another at the expense of one's genetic interests -- is strongly selected against.) There are some varieties of altruism that survive: kin selection (e.g., rescuing a drowning nephew), status display (making a large donation to a hospital), and reciprocal aid (helping a neighbor in hopes they'll help you when aid is needed), but pure altruism (suicide bombing is a hideous example) is quite rare and self-limiting. That would be true even within an artificial Darwinian environment. Therefore, we have a limiting factor on what to expect in a world with brain emulations. Also, I must note, we have a limiting factor on TedHowardNZ's description of evolution above. Evolution does not often climb down from a fitness peak (thus we are stuck with a blind spot in our eyes), and certainly not when the behaviors entailed reduce fitness. Only a changing environment can change the calculus of fitness in ways that allow prosocial behaviors to flourish without a net cost to fitness. But even a radically changed environment could not force pure altruism to exist in a Darwinian system.
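The "strongly selected against" claim can be made concrete with a minimal replicator simulation (a toy model of my own, with made-up cost numbers): a pure-altruist type pays a fitness cost with no offsetting benefit flowing back to its own type, and selection removes it within a few hundred generations.

```python
import random

# Toy Wright-Fisher-style resampling with selection (my construction):
# altruists reproduce with relative fitness 1 - cost, non-altruists
# with fitness 1.0, and no kin/reciprocity benefit offsets the cost.
def altruist_frequency(p0=0.5, cost=0.1, generations=200, seed=1):
    rng = random.Random(seed)
    N = 1000
    pop = [True] * int(N * p0) + [False] * (N - int(N * p0))
    for _ in range(generations):
        weights = [1 - cost if is_altruist else 1.0 for is_altruist in pop]
        pop = rng.choices(pop, weights=weights, k=N)
    return sum(pop) / N

print(altruist_frequency())  # near zero: the trait has been selected out
```

Deterministically, the odds of the trait shrink by a factor of (1 - cost) per generation, so even a 10% cost drives it to extinction fast; the kin-selected and reciprocal variants survive precisely because they cancel that cost.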
Note that the employer in question might well be your own upload clan, which makes this near-analogous to kin selection. Even if employee templates are traded between employers, this trait would be exceptionally valuable in an employee, and so would be strongly selected for. General altruism might be rare, but this specific variant would probably enjoy a high fitness advantage.
Language and conceptual systems are so complex that communication (as in the replication of a concept from one mind to another) is often extremely difficult. The idea of altruism is one such thing. Like most terms in most languages, it has a large (potentially infinite) set of possible meanings, depending on context.
If one takes the term altruism at the simplest level, it can mean simply having regard for others in the choices of action one makes. In this sense, it is clear to me that it is actually in the long-term self-interest of everyone for everyone to have some regard for the interests of others in all choices of action. It is clear that having regard only for the short-term interest of self leads to highly unstable and destructive outcomes in the long term. Simple observation of any group of primates will show highly evolved cooperative behaviours (reciprocal altruism).
And I agree, that evolution is always about optimisation within some set of parameters. We are the first species that has had choice at all levels of the optimisation parameters that evolution gets to work with. And actually has the option of stepping entirely outside of the system of differential survival of individuals.
To date, few people have consciously exercised such choice outside of very restricted and socially accepted contexts. That seems to be exponentially changing.
Pure altruism to me means a regard for the welfare of others which is functionally equal to the regard one has for one's own welfare. I distinguish this from exclusive altruism (a regard for the welfare of others to the exclusion of self-interest), which is, obviously, a form of evolutionary, logical, and mathematical suicide in large populations. Even that trait can exist at certain frequencies within populations: in small kin groups living in situations so dangerous that some members must sacrifice themselves periodically or the entire group will perish, it amounts to a form of radical kin selection; and having evolved there, the strategy can remain within much larger populations for extended periods without being entirely eliminated.
There is no doubt that we live in an environment that is changing in many different dimensions. In some of those dimensions the changes are linear, and in many others the changes are exponential, and in some the systemic behaviour is so complex that it is essentially chaotic (in the mathematical sense, where very tiny changes in system parameters {within measurement uncertainty levels} produce orders of magnitude variations in some system state values).
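The mathematical sense of chaos invoked here can be shown concretely with the logistic map, a standard one-line chaotic system (the example is mine, not from the comment): two starting states that differ by far less than any plausible measurement uncertainty end up macroscopically far apart.

```python
# Logistic map x' = r * x * (1 - x) at r = 4.0, a standard chaotic regime.
# We track two orbits whose initial conditions differ by only 1e-12 and
# record the largest separation reached.
def diverge(x0, dx=1e-12, steps=60, r=4.0):
    a, b, gap = x0, x0 + dx, 0.0
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        gap = max(gap, abs(a - b))
    return gap

print(diverge(0.3))  # a gap of order 1, grown from an initial 1e-12
```

The separation roughly doubles each step, so within about forty iterations an imperceptible difference in initial state has grown to the full size of the system, which is exactly the "tiny parameter changes within measurement uncertainty produce order-of-magnitude variations" behaviour described above.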
There are many possible choices of state calculus. It seems clear to me that high level cooperation gives the greatest possible probability of system wide and individual security and freedom. And in the evolutionary sense, cooperation requires attendant strategies to prevent invasion by short term "cheating".
Given the technical, social, and "spiritual" possibilities available to us today, it is entirely reasonable to classify the entire market-based economic structure as one enormous set of self-reinforcing cheating strategies. Prior to the development of technologies that enabled the full automation of any process, that was not the case; now that we can fully automate processes, it most certainly is.
So it is a very complex set of systems, yet the fundamental principles underlying those systems are not all that complex, and they are very different from what accepted social and cultural dogma would have most of us believe.