jacobt comments on Holden's Objection 1: Friendliness is dangerous - Less Wrong
If you can convince people that something is better than present human values, then CEV will implement these new values. I mean, if you just took CEV(PhilGoetz), and you have the desire to see the universe adopt "evolved" values, then CEV will extrapolate this desire. The only issue is that other people might not share this desire, even when extrapolated. In that case insisting that values "evolve" is imposing minority desires on everyone, mostly people who could never be convinced that these values are good. Which might be a good thing, but it can be handled in CEV by taking CEV(some "progressive" subset of humans).
This seems a nice place to link to Marcello's objection to CEV, which says you might be able to convince people of pretty much anything, depending on the order of arguments.
I think Marcello's objection dissolves when the subject becomes aware of the order-of-arguments effects. After all, those effects are part of the factual information that the subject considers in refining its values. Most people don't like to have values that change depending on the order in which arguments are presented, so they will reflect further until they each find a stable value set. At least, that would be my hypothesis.
I think it would be impossible to convince people (assuming suitably extrapolated intelligence and knowledge) that the total obliteration of all life on Earth is a good thing, no matter the order of arguments. And this is a very good value for an FAI: if it optimizes for this (preserving life) and otherwise interferes as little as possible, it has already done excellently.
There are nihilists who at least claim that position.
Lots of people honestly wish for the literal end of the universe to come, because they believe in an afterlife/prophecy/etc.
You might say they would change their minds given better or more knowledge (e.g. that there is no afterlife and the prophecy was false/fake/wrong). But such people are often exposed to such arguments and reject them; and they make great efforts to preserve their current beliefs in the face of evidence. And they say these beliefs are very important to them.
There may well be methods of "converting" them anyway, but how are these methods ethically or practically different from "forcibly changing their minds" or their values? And if you're OK with forcibly changing their minds, why do you think that's ethically better than just ignoring them and building a partial-CEV that only extrapolates your own wishes and those of people similar to yourself?
I (and CEV) do not propose changing their minds or their values. What happens is that their current values (as modeled within FAI) get corrected in the presence of truer knowledge and lots of intelligence, and these corrected values are used for guiding the FAI.
If someone's mind and values are so closed as to be unextrapolatable - completely incompatible with truth - then I'm ok with ignoring those particular people. But I don't believe there are actually any such people.
So the future is built to optimize different values. And their original values aren't changed. Wouldn't they suffer living in such a future?
Even if they do, it will be the best possible thing for them, according to their own (extrapolated) values.
Who cares about their extrapolated values? Not them (they keep their original values). Not others (who have different actual and extrapolated values). Then why extrapolate their values at all? You could very easily build a much happier life for them just by allocating some resources (land, computronium, whatever) and going by their current values.
Well... ok, let's assume a happy life is their single terminal value. Then, by the definition of their extrapolated values, you couldn't build a happier life for them by doing anything other than following their extrapolated values!
This is completely wrong. People are happy, by definition, if their actual values are fulfilled; not if some conflicting extrapolated values are fulfilled. CEV was supposed to get around this by proposing (without saying how) that people would actually grow to become smarter etc. and thereby modify their actual values to match the extrapolated ones, and then they'd be happy in a universe optimized for the extrapolated (now actual) values. But you say you don't want to change other people's values to match the extrapolation. That makes CEV a very bad idea - most people will be miserable, probably including you!
I think the standard sort of response for this is The Hidden Complexity of Wishes. Just off the top of my (non-superintelligent) head, the AI could notice a method for near-perfect continuation of life by preserving some bacteria at the cost of all other life forms.
I did not mean the comment that literally. I dropped too many steps for brevity, thinking they were clear; I apologize.
It would be just as impossible (or even more impossible) to convince people that total obliteration of people is a good thing. On the other hand, people don't care much about bacteria, even whole species of them, and as long as a few specimens remain in laboratories, people will be ok about the rest being obliterated.
There are lots of people who do think that's a good thing, and I don't think those people are trolling or particularly insane. There are entire communities where people have sterilized themselves as part of a mission to end humanity (for the sake of Nature, or whatever).
I think those people do have insufficient knowledge and intelligence. For example, the Skoptsy sect, who believed they were following God's will, were, presumably, factually wrong. And people who want to end humanity for the sake of Nature want that instrumentally - because they believe that otherwise Nature will be destroyed. Assuming FAI is created, this belief is also probably wrong.
You're right in there being people who would place "all non-intelligent life" before "all people", if there was such a choice. But that does not mean they would choose "non-intelligent life" before "non-intelligent life + people".
That depends a lot on what I understand Nature to be.
If Nature is something incompatible with artificial structuring, then as soon as a superhuman optimizing system structures my environment, Nature has been destroyed... no matter how many trees and flowers and so forth are left.
Personally, I think caring about Nature as something independent of "trees and flowers and so forth" is kind of goofy, but there do seem to be people who care about that sort of thing.
What if particular arrangements of flowers, trees and so forth are complex and interconnected, in ways that can be undone to the net detriment of said flowers, trees and so forth? I'm thinking here of attempts at scientifically "managing" forest resources in Germany with the goal of making them as accessible and productive as possible. The resulting tree farms were far less resistant to disease, climatic aberration, and so on, and generally not very healthy, because it turns out that the illegible, sloppy factor that made the forests seem less conveniently organized for human use was a non-negligible portion of what allowed them to be so productive and robust in the first place.
No individual tree or flower is all that important, but the arrangement is, and you can easily destroy it without necessarily destroying any particular tree or flower. I'm not sure what to call this, and it's definitely not independent of the trees and flowers and so forth, but it can be destroyed to the concrete and demonstrable detriment of what's left.
That's an interesting question, actually.
I don't know forestry from my elbow, but I used to read a blog by someone who was pretty into saltwater fish tanks. Now, one property of these tanks is that they're really sensitive to a bunch of feedback loops that can most easily be stabilized by approximating a wild reef environment; if you get the lighting or the chemical balance of the water wrong, or if you don't get a well-balanced polyculture of fish and corals and random invertebrates going, the whole system has a tendency to go out of whack and die.
This can be managed to some extent with active modification of the tank, and the health of your tank can be described in terms of how often you need to tweak it. Supposing you get the balance just right, so that you only need to provide the right energy inputs and your tank will live forever: is that Nature? It certainly seems to have the factors that your ersatz German forest lacks, but it's still basically two hundred enclosed gallons of salt water hooked up to an aeration system.
That's something like my objection to CEV-- I currently believe that some fraction of important knowledge is gained by blundering around and (or?) that the universe is very much more complex than any possible theory about it.
This means that you can't fully know what your improved (by what standard?) self is going to be like.
I'm not quite sure what you mean to ask by the question. If maintaining a particular arrangement of flowers, trees and so forth significantly helps preserve their health relative to other things I might do, and I value their health, then I ought to maintain that arrangement.
Presumably, because their knowledge and intelligence are not extrapolated enough.
Well, I certainly agree that increasing my knowledge and intelligence might have the effect of changing my beliefs about the world in such a way that I stop valuing certain things that I currently value, and I find it likely that the same is true of everyone else, including the folks who care about Nature.
Not that I'm a proponent of voluntary human extinction, but that's an awfully big conditional.
It's not even strictly true. It's entirely conceivable that FAI will lead to the Sol system being converted into a big block of computronium to run human brain simulations. Even if those simulations have trees and animals in them, I think that still counts as the destruction of nature.
But if FAI is based on CEV, then this will only happen if it is the extrapolated wish of everybody. Assuming the existence of people truly (even after extrapolation) valuing Nature in its original form, such computronium won't be forcibly built.
But it's the only relevant one, when we're talking about CEV. CEV is only useful if FAI is created, so we can take it for granted.
Ah, the FAI problem in a nutshell.