Sorry for the late response! I was avoiding LW.
Here's a copy of an e-mail where I summarized the argument in TWDHN before, and suggested some directions for the paper (note that this was written before I had read the post on Moloch):
In a nutshell, the argument goes something like:
- Evolution adapts creatures to the regularities of their environment, with organisms evolving to use those regularities to their advantage.
- A special case of such regularities are constraints: things an organism must adapt to even when the adaptation is costly. For example, very cold weather forces an organism to spend part of its energy reserves on growing fur or other forms of insulation.
- If a constraint disappears from the environment, evolution will gradually eliminate the costly adaptations that developed in response to it. If the Arctic Circle were to become warm, polar bears would eventually lose their fur, or be outcompeted by organisms that never had thick fur in the first place.
- Many fundamental features of human nature are likely adaptations to various constraints: e.g. the notion of distinct individuals and personal identity may only exist because we are incapable of linking our brains directly together and merging into one vast hive mind. Conscious thought may only exist because consciousness acts as an "error handler" for situations where our learned habits are incapable of doing the job right, and might become unnecessary if there were a way of pre-programming us with such good habits that they always got the job done. Etc.
- The process of technological development acts to remove various constraints in our environment: for example, it may one day become possible to actually link minds together directly.
- If technology does remove previous constraints from our environment, then the things we consider fundamental human values would become costly and unnecessary, and would be gradually eliminated, as organisms without those "burdens" would do better.
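The dynamic in the last few points can be sketched with a toy replicator model (a minimal illustration; the trait names and fitness numbers are invented for this example, not drawn from any real population):

```python
# Toy replicator dynamics: a costly adaptation (e.g. thick fur) dominates
# while the constraint (cold) is present, but is selected away once the
# constraint disappears and only the maintenance cost remains.
# All fitness values are invented for illustration.

def step(pop, fitness):
    """One generation of selection: types reproduce in proportion to fitness."""
    total = sum(pop[t] * fitness[t] for t in pop)
    return {t: pop[t] * fitness[t] / total for t in pop}

# Two types: 'furred' pays a maintenance cost, 'bare' does not.
pop = {"furred": 0.99, "bare": 0.01}

# While it is cold, fur is worth its cost: furred strongly outcompetes bare.
cold = {"furred": 1.0, "bare": 0.5}
# After warming, the constraint is gone; fur is now a pure cost.
warm = {"furred": 0.9, "bare": 1.0}

for _ in range(20):        # generations under the constraint
    pop = step(pop, cold)
assert pop["furred"] > 0.999   # fur dominates while the constraint holds

for _ in range(300):       # generations after the constraint is removed
    pop = step(pop, warm)
assert pop["bare"] > 0.999     # the costly adaptation is gradually eliminated
```

The point of the sketch is only that the elimination is a statistical consequence of differential reproduction, not of any agent deciding the trait is obsolete, and that it can take many generations when the cost is small.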
What I'd like to do in the paper would be to state the above argument more rigorously and clearly, provide evidence in favor of it, clarify things that I'm uncertain about (Does it make sense to distinguish constraints from just regularities in general? Should one make a distinction between constraints in the environment and constraints from what evolution can do with biological cells?), discuss various possible constraints as well as what might eliminate them and how much of an advantage that would give to entities that didn't need to take them into account, raise the possibility of some of this actually being a good thing, etc. Stuff like that. :-)
Does that argument sound sensible (rather than something that represents a total misunderstanding of evolutionary biology) and something that you'd like to work on? Thoughts on how to expand it to take Moloch into account?
Also, could you say a little more about your background and amount of experience in the field?
This argument seems like something I would need to think long and hard about, which I see as a good thing: it seems rare to me that non-trivial things are simple and apparent. I don't see any glaring misinterpretation of natural selection. I would be interested in working on it in a "dialogue intellectually and hammer out more complete and concrete ideas" sense. I'm answering this quickly in a tired state because I'm not on LW as much as I used to be and I don't want to forget.
I'm getting a PhD in a biological field that is not Evolution. Both th...
Go read Yvain/Scott's Meditations On Moloch. It's one of the most beautiful, disturbing, poetic looks at the future that I've ever seen.
Go read it.
Don't worry, I can wait. I'm only a piece of text, my patience is infinite.
De-dum, de-dum.
You sure you've read it?
Ok, I believe you...
Really.
I hope you wouldn't deceive an innocent and trusting blog post? You wouldn't be monster enough to abuse the trust of a being as defenceless as a constant string of ASCII symbols?
Of course not. So you'll have read that post before proceeding to the next paragraph, won't you? Of course you will.
Academic Moloch
Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").
The project hasn't started yet, but a few caveats to the Moloch idea have already occurred to me. First of all, it's not obligatory for an optimisation process to trample everything we value into the mud. This is likely to happen with an AI's motivation, but it isn't a universal feature of optimisation processes.
One way of seeing this is the difference between "or" and "and". Take the democratic election optimisation process. It's clear, as Scott argues, that this optimises badly in some ways. It encourages appearance over substance, some types of corruption, etc... But it also optimises along some positive axes, with some clear, relatively stable differences between the parties which reflect some voters' preferences, and punishment for particularly inept behaviour from leaders (I might argue that the main benefit of democracy is not the final vote between the available options, but the filtering out of many pernicious options because they'd never be politically viable). The question is whether these two strands of optimisation can be traded off against each other, or whether a minimum of each is required. So can a campaign win that is purely appearance-based, without any substantive position ("or": a maximum on one axis is enough), or do you need a minimum of substance and a minimum of appearance to buy off different constituencies ("and": you need some achievements on all axes)? And no, I'm not interested in discussing current political examples.
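The "or" versus "and" distinction can be made concrete with a toy selection rule (a sketch only; the candidate names, scores, and the 0.3 floor are invented assumptions, not a model of any real election):

```python
# Toy model of the "or" vs "and" question: does the process reward the
# maximum along any single axis, or demand a minimum on every axis first?
# Candidate scores are invented for illustration.

candidates = {
    "all_appearance": {"appearance": 1.0, "substance": 0.0},
    "all_substance":  {"appearance": 0.0, "substance": 1.0},
    "balanced":       {"appearance": 0.6, "substance": 0.6},
}

def or_winner(cands):
    """'Or' regime: the best score on any one axis is enough to win."""
    return max(cands, key=lambda c: max(cands[c].values()))

def and_winner(cands, floor=0.3):
    """'And' regime: only candidates above a minimum on every axis are
    viable; among those, the best total score wins."""
    viable = {c: v for c, v in cands.items()
              if all(score >= floor for score in v.values())}
    return max(viable, key=lambda c: sum(viable[c].values()))

print(or_winner(candidates))   # a pure-appearance strategy can win here
print(and_winner(candidates))  # only 'balanced' clears every floor
```

Which regime a real optimisation process resembles is exactly the empirical question above; the toy model just shows that the two rules pick different winners from the same field.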
Another example Scott gave was of the capitalist optimisation process, and how it in theory matches customers' and producers' interests, but could go very wrong:
This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer. "Our food is harming us!" isn't exactly a hard story to publicise. This certainly doesn't work in every case, but increased information is something that technological progress would bring, and this needs to be considered when asking whether optimisation processes will inevitably tend to a bad equilibrium as technology improves. An accurate theory of nutrition, for instance, would have great positive impact if its recommendations could be measured.
Finally, Zack Davis's poem about the em stripped of (almost all) humanity got me thinking. The end result of that process is tragic for two reasons: first, the em retains enough humanity to have curiosity, only to get killed for this. And secondly, that em once was human. If the em were entirely stripped of human desires, the situation would be less tragic. And if the em were further constructed in a process that didn't destroy any humans, this would be even more desirable. Ultimately, if the economy could be powered by entities developed non-destructively from humans, and which were clearly not conscious or suffering themselves, this would be no different from powering the economy with the non-conscious machines we use today. This might happen if certain pieces of a human-em could be extracted, copied and networked into an effective, non-conscious entity. In that scenario, humans and human-ems could be the capital owners, and the non-conscious modified ems could be the workers. The connection of this with the Moloch argument is that it shows that certain nightmare scenarios could, in some circumstances, be adjusted into much better outcomes with a small amount of coordination.
The point of the post
The reason I posted this is to get people's suggestions about ideas relevant to a "Moloch" research project, and what they thought of the ideas I'd had so far.