I previously wrote a post hypothesizing that inter-group conflict is more common when most humans belong to readily identifiable, discrete factions.
This seems relevant to the recent human gene editing advance. Full human gene editing capability probably won't come soon, but this got me thinking anyway. Consider the following two scenarios:
1. Designer babies become socially acceptable and widespread some time in the near future. Because our knowledge of the human genome is still maturing, they initially aren't much different from regular humans. As our knowledge matures, they get better and better. Fortunately, there's a large population of "semi-enhanced" humans from the early days of designer babies to keep the peace between the "fully enhanced" and "not at all enhanced" factions.
2. Designer babies are considered socially unacceptable in many parts of the world. Meanwhile, the technology needed to produce them continues to advance. At a certain point people start having them anyway. By then the technology has advanced far enough that designer babies clearly outclass regular babies at everything, and there's a schism between "fully enhanced" and "not at all enhanced" humans.
Of course, there's another scenario where designer babies just never become widespread. But that seems like an unstable equilibrium given the 100+ sovereign countries in the world, each with its own set of laws, and the desire of parents everywhere to give birth to the best kids possible.
We already see tons of drama related to the current inequalities between individuals, especially inequality that's allegedly genetic in origin. Designer babies might shape up to be the greatest internet flame war of this century. This flame war could spill over into real-world violence. But since one of the parties hasn't arrived at the flame war yet, maybe we can prepare.
One way to prepare might be differential technological development. In particular, maybe it's possible to decrease the cost of gene editing/selection technologies while retarding advances in our knowledge of which genes contribute to intelligence. This could allow designer baby technology to become socially acceptable and widespread before "fully enhanced" humans were possible. Just as with emulations, a slow societal transition seems preferable to a fast one.
Other ideas (edit: speculative!):

- Extend the benefits of designer babies to everyone for free, regardless of their social class.
- Push for mandatory birth control technology so unwanted, and therefore unenhanced, babies are no longer a thing. (Imagine how lousy it would be to be born as an unwanted child in a world where everyone was enhanced except you.)
- Require by law that designer babies possess genes for compassion, benevolence, and reflectiveness, and try to discover those genes before we discover genes for intelligence. (Edit: leaning towards reflectiveness being the most important of these.) (Researching the genetic basis of psychopathy to prevent enhanced psychopaths also seems like a good idea... although I guess this would also create the knowledge necessary to deliberately create psychopaths?)
- Regulate the modification of genes for traits like height if game theory suggests allowing arbitrary modifications to them would be a bad idea (see the sketch after this list).
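To spell out the game-theoretic worry about traits like height: height is largely positional, so enhancing it pays off relative to people who don't enhance, while universal enhancement leaves relative standing unchanged and everyone paying the cost. Here's a minimal sketch with made-up payoffs (the benefit b and cost c are invented for illustration, not claims about actual genetics or costs):

```python
# Toy model: height as a positional good. Each family chooses whether to
# enhance. Enhancing costs c (money, medical risk); being taller than the
# other family yields a relative benefit b; equal heights yield nothing.

b, c = 3, 1  # assumed: relative benefit outweighs cost, so enhancing tempts

def payoff(me_enhances: bool, other_enhances: bool) -> int:
    """My payoff: relative-height benefit (if any) minus enhancement cost."""
    if me_enhances and not other_enhances:
        relative = b       # I'm taller
    elif other_enhances and not me_enhances:
        relative = -b      # I'm shorter
    else:
        relative = 0       # same relative standing either way
    return relative - (c if me_enhances else 0)

# Find pure-strategy Nash equilibria by checking best responses.
for mine in (False, True):
    for theirs in (False, True):
        mine_is_best = all(payoff(mine, theirs) >= payoff(alt, theirs)
                           for alt in (False, True))
        theirs_is_best = all(payoff(theirs, mine) >= payoff(alt, mine)
                             for alt in (False, True))
        if mine_is_best and theirs_is_best:
            print(f"Equilibrium: I enhance={mine}, they enhance={theirs}, "
                  f"payoffs = ({payoff(mine, theirs)}, {payoff(theirs, mine)})")
```

With these numbers the unique equilibrium is mutual enhancement at a payoff of -1 each, versus 0 each if both abstain: a classic prisoner's dilemma, which is the structure that would justify regulation. Whether height (or any given trait) actually has payoffs like these is an empirical question.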
I don't know very much about the details of these technologies, and I'm open to radically revising my views if I'm missing something important. Please tell me in the comments if I've gotten anything wrong.
Sure… and if they operate using reason and evidence, we call them “scientists”, “economists”, etc. (Making the world better is an implicit value premise in lots of academic work, e.g. there’s lots of Alzheimer’s research being done because an aging population is going to mean lots of Alzheimer’s patients. Most economists write papers on how to facilitate economic growth, not economic crashes. Etc.) I agree that releasing a bunch of average-intelligence, average-reflectiveness altruists on the world is not necessarily a good idea, and I didn’t propose it.
I mean, the Allied soldiers who died during WWII were sacrificed for the greater good in a certain sense, right? I feel like the real problem here might be deeper, e.g. the willingness of a population to accept any proposal that authorities say is for the greater good (not necessarily quite the same thing as altruism… see below).
I think there are a bunch of related but orthogonal concepts that it's important to separate:
Individualism vs collectivism (as a sociological phenomenon, e.g. "America's culture is highly individualistic"). Maybe the only genetic tinkering that's possible would also increase collectivism and cause problems.
Looking good vs being good. Maybe due to the conditions human altruism evolved in (altruistic punishment etc.), altruists tend to be more interested in seeming good (e.g. obsessing about not saying anything offensive) than in being good (e.g. figuring out who's most in need and helping that person without telling anyone). It could be that you are sour on altruism because you associate it with people who try to look good (self-proclaimed altruists), which isn't necessarily the same group as people who actually are altruists (anyone from someone secretly volunteering at an animal shelter to a Fed chairman who thinks carefully, is good at their job, and helps more poor people than 100 Mother Teresas). Again, in principle these axes seem orthogonal, but maybe in practice they're genetically related.
Utilitarianism vs deontology (do you flip the lever in the trolley problem). EY wrote a sequence about how deontological injunctions are a useful safeguard on utilitarianism. I specified that my utopia would have people who were highly reflective, so they should understand this suggestion and either follow it or improve on it.
Whatever dimension this quiz measures. Orthogonal in theory, maybe related in practice.
A little knowledge is a dangerous thing: sometimes people are just wrong about things. Even non-communists thought communist economies would outdo capitalist ones. I think in a certain sense the failure of communism says more about the fact that society design is a hard problem than about the dangers of altruism. It's probably a good consideration against tinkering with society in general, which includes genetic engineering. However, it sounds like we both agree that genetic engineering is going to happen, and the default seems bad. I think the fundamental consideration here is how much to favor the status quo vs some new, unproven but promising idea. Again, seems theoretically orthogonal to altruism but might be related in practice.
Gullibility. I’d expect that agreeable people are more gullible. Orthogonal in theory, maybe related in practice.
And finally, altruism vs selfishness (insofar as you're a utilitarian, what's the balance of your own personal utility vs that of others). I don't think making people more altruistic along this axis is problematic ceteris paribus (as long as you don't get into pathological self-sacrifice territory), but maybe I'm wrong.
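To make that last axis concrete, here's one toy way to formalize it (my own notation, nothing standard): suppose person $i$ acts to maximize

$$U_i = (1 - \alpha)\,u_i + \alpha \sum_{j \neq i} u_j, \qquad \alpha \in [0, 1],$$

where $u_i$ is $i$'s own welfare and $\alpha$ is the altruism weight. $\alpha = 0$ is pure selfishness, $\alpha = 1$ ignores your own welfare entirely (the pathological self-sacrifice end), and the ceteris-paribus change I mean is nudging $\alpha$ upward somewhere in between, while holding all the other dimensions on this list fixed.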
This is a useful list of failure modes to watch for when modifying genes that seem to increase altruism but might change other stuff, so thanks. Perhaps it’d be wise to prioritize reflectiveness over altruism. (Need for cognition might be the construct we want. Feel free to shoot holes in that proposal if you want to continue talking :P)
I am relieved :-P
And yes, I think the subthread has drifted sufficiently far so I'll bow out and leave you to figure out by yourself the orthogonality of being altruistic and being gullible :-)