PhilGoetz comments on Holden's Objection 1: Friendliness is dangerous - LessWrong
Would it be fair to say that your philosophy is similar to davidad's? Both of you seem to ultimately value some hard-to-define measure of complexity. He thinks the best way to maximize complexity is to develop technology, whereas you think the best way is to preserve evolution.
I think that evolution will lead to a local maximum of complexity, which we can't "help" it avoid. The reason is that the universe contains many environmental niches that are essentially duplicates of one another, leading to convergent evolution. For example, Earth contains lots of species that are similar to each other, and within each species there is a huge amount of redundancy. Evolution creates complexity, but nothing remotely close to maximum complexity. Imagine if each individual plant or animal had a radically different design, which would be possible if they weren't constrained by "survival of the fittest".
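(A toy illustration of the redundancy point, not the commenter's own argument: treating compressed size as a crude proxy for complexity, a population of near-duplicate genomes takes far fewer bytes to describe than an equally large population of independent designs. The genome encoding and the zlib proxy are assumptions made purely for illustration.)

```python
# Sketch: compressed description length as a crude complexity proxy.
# A "convergent" population of near-duplicate genomes compresses far
# better (i.e., carries less information) than a "divergent" population
# of radically different designs. Toy encoding; not a real biological measure.
import random
import zlib

random.seed(0)

def compressed_size(population):
    """Bytes needed to describe the whole population, via zlib."""
    return len(zlib.compress("".join(population).encode()))

def mutate(genome, rate=0.02):
    """Copy a genome with a small per-site mutation rate."""
    return "".join(random.choice("ACGT") if random.random() < rate else g
                   for g in genome)

# Convergent case: 100 individuals, all small mutations of one base design.
base = "".join(random.choice("ACGT") for _ in range(200))
convergent = [mutate(base) for _ in range(100)]

# Divergent case: 100 individuals, each an independent random design.
divergent = ["".join(random.choice("ACGT") for _ in range(200))
             for _ in range(100)]

print("convergent:", compressed_size(convergent))  # much smaller: redundant
print("divergent: ", compressed_size(divergent))   # near the entropy limit
```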
Huh? The purpose of FAI is to achieve the global maximum of whatever utility function we give it. If that utility function contains a term for "complexity", which seems plausible given that people like you and davidad value it (and even I'd probably prefer greater complexity to less, all else being equal), then it ought to get at least somewhat close to the global complexity maximum, since the constraint of simultaneously maximizing the other values doesn't seem too burdensome (unless there are people who actively disvalue complexity).
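(A minimal sketch of this reasoning, under assumed toy numbers: with a scalarized utility that includes a complexity term, the maximizer lands near the complexity maximum whenever the other terms are roughly indifferent to it. The candidate "worlds", scores, and weights below are all hypothetical.)

```python
# Sketch: a scalarized utility with a complexity term. If the other values
# don't penalize complexity much, the optimum sits near the complexity
# maximum. All names and numbers here are illustrative assumptions.
candidate_worlds = [
    # complexity and other_values are both toy scores on a 0..1 scale
    {"name": "simple, pleasant",  "complexity": 0.20, "other_values": 0.90},
    {"name": "complex, pleasant", "complexity": 0.90, "other_values": 0.85},
    {"name": "complex, bleak",    "complexity": 0.95, "other_values": 0.10},
]

def utility(world, w_complexity=0.3, w_other=0.7):
    """Weighted sum: one term for complexity, one for everything else."""
    return (w_complexity * world["complexity"]
            + w_other * world["other_values"])

best = max(candidate_worlds, key=utility)
print(best["name"])  # "complex, pleasant": complexity wins when it's cheap
```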
This is true, but I favor systems that can evolve, because they are evolutionarily stable. Systems that can't evolve are likely to be unstable and vulnerable to collapse, and typically have the ethically undesirable property of punishing "virtuous behavior" within that system.
True. I spoke imprecisely. Life is increasing in complexity, in a meaningful way that is not simply the negative of entropy, and which I feel comfortable calling "progress" despite Stephen Jay Gould's strident imposition of his sociological agenda onto biology. This is the thing I'm talking about maximizing. Whatever utility function an FAI is given, it will only involve concepts that we already have, which represent a small fraction of possible concepts; and so complexity is not going to keep increasing in that way.