Phil_Goetz6

"I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation."

Does this boundary even exist? It's a distinction we can make for purposes of discussion, but not a hard boundary we can draw. You can find examples that fall clearly into one category (reflex) or another (addition), but you can also find examples that don't. This is just the sort of thing I was talking about in my post on false false dichotomies: a dichotomy we can sometimes use in discussion, but not a true in-the-world binary distinction.

Eliezer responds yes: "Anna, you're talking about a messiness of the human system, not a difficulty in drawing hard distinctions between human-style messiness and evolutionary-style messiness."

I can't figure out what that's supposed to mean. I think it means Eliezer didn't understand what she said. The "messiness" is that you can't draw that hard distinction.

The entire discussion is cast in terms that imply Eliezer thinks evolutionary psychology deals with issues of conscious vs. subconscious motivations. AFAIK it sidesteps the issue whenever possible. Psychologists don't want to ask whether behavior comes from conscious or subconscious motivations. They want to observe behavior, record it, and explain it. Not trying to slice it up into conscious vs. subconscious pieces is the good part of behaviorism.

Michael Vassar: "Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?"

Huh? No need. Why would you think I'm unaware of that?

I notice that several people replied to my question, "Why not colonize America?", yet no one addressed it. I think they fail to see the strength of the analogy. Humans use many more resources than ems or AIs. If you take the resources from the humans and give them to the AI, you will at some point be able to support 100 times as many "equivalent", equally happy people. Make an argument for not doing that. And don't, as komponisto did, just say that it's the right thing to do.

Everybody says that not taking the land from the Native Americans would have been the right thing to do; but nobody wants to give it back.

An argument against universe-tiling would also be welcome.

"You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?"
Oh, snap.

"Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral ... to talk about human-level minds still running around the day after the Singularity."
We're offended by the inequity - why does that big hunk of meat get to use 2,000 W plus 2,000 square feet that it doesn't know what to do with, while the poor, hardworking, higher-social-value em gets 5 W and one square inch? And by the failure to maximize social utility.

Fun is a cognitive phenomenon. Whatever your theory of fun is, I predict that more fun will be better than less fun, and the moral thing to do seems to be to pack in as much fun as you can before the heat death of the universe. Following that line of thought could lead to universe-tiling.

Suppose you develop a theory of fun/good/morality. What are arguments for not tiling the universe in a way that maximizes it? Are there any such arguments that don't rely on either diversity as an inherent good, or on the possibility that your theory is wrong?

Your post seems to say that fun and morality are the same. But we use the term "moral" only in cases when the moral thing to do isn't fun. I think morality = fun only if it's a collective fun. If that collective fun is also summed over hypothetical agents you could create, then we come back to moral outrage at humans.

The problem brings to mind the colonization of America. Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization that can support a population of about 100 times as many people, who think they are living more pleasurable and interesting lives, and hardly ever cut out their neighbors' hearts on the tops of temples to the sun god? Intellectuals today unanimously say "yes". But I don't think they've allowed themselves to actually consider the question.

What is the moral argument for not colonizing America?

"I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python."
And the twenty lines are from the "spam" sketch. :)

Ben: "There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things."

Are you actually hoping that won't happen? That we'll still be human a million years from now?

"Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those question wrong and then using those wrong answers to run my life for a while."

I think that asking what you want the universe to look like in the long run has little or no bearing on how to live your life in the present. (Except insofar as you direct your life to planning the universe's future.) The problems confronted are different.

To ask what God should do to make people happy, I would begin by asking whether happiness or pleasure are coherent concepts in a future in which every person had a Godbot to fulfill their wishes. (This question has been addressed many times in science fiction, but with little imagination.) If the answer is no, then perhaps God should be "unkind", and prevent desire-saturation dynamics from arising. (But see the last paragraph of this comment for another possibility.)

What things give us the most pleasure today? I would say, sex, creative activity, social activity, learning, and games.

Elaborating on sexual pleasure probably leads to wireheading. I don't like wireheading, because it fails my most basic ethical principle, which is that resources should be used to increase local complexity. Valuing wireheading qualia also leads to the conclusion that one should tile the universe with wireheaders, which I find revolting, although I don't know how to justify that feeling.

Social activity is difficult to analyze, especially if interpersonal boundaries, and the level of the cognitive hierarchy to relate to as a "person", are unclear. I would begin by asking whether we would get any social pleasure from interacting with someone whose thoughts and decision processes were completely known to us.

Creative activity and learning may or may not have infinite possibilities. Can we continue constructing more and more complex concepts, to infinity? If so, then knowledge is probably also infinite, for as soon as we have constructed a new concept, we have something new to learn about. If not, then knowledge - not specific knowledge of what you had for lunch today, but general knowledge - may be limited. Creative activity may have infinite possibilities, even if knowledge is finite.

(The answer to whether intelligence has infinite potential has many other consequences; notably, Bayesian reasoners are likely only in a universe with finitely many useful concepts, because otherwise it will be preferable to be a non-Bayesian reasoner, working over more complex concepts with faster algorithms.)

Games largely rely on uncertainty, improving mastery, and competition. Most of what we get out of "life", besides relationships and direct hormonal pleasures like sex, food, and fighting, is a lot like what we get from playing a game. One fear is that life will become like playing chess when you already know the entire game tree.

If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge. A future of endless war may be preferable to a future in which someone has won. It may even be preferable to a future of endless peace.

If you study the Middle Ages of Europe, you will probably at some point ask, "Why did these idiots spend so much time fighting, when they could have all become wealthier if they simply stopped fighting long enough for their economies to grow?" Well, those people didn't know that economies could grow. They didn't believe that there was any further progress to be made in any domain - art, science, government - until Jesus returned. They didn't have any personal challenges; the nobility often weren't even allowed to do work. If you read what the nobles wrote, some of them said clearly that they fought because they loved fighting. It was the greatest thrill they ever had. I don't like this option for our future, but I can't rule out the possibility that war might once again be preferable to peace, if there actually is no more progress to be made and nothing to be done.

The answers to these questions also have a bearing on whether it is possible for God, in the long run, to be selfish. It seems that God would be the first person to have his desires saturated, and enter into this difficult position where it is hard to imagine how to want anything. I can imagine a universe, rather like the Buddhist universe, in which various gods, like bubbles, successively float to the top and then burst into nothingness, from not caring anymore. I can also imagine an equilibrium in which there are many gods, because the greater the power one acquires, the less interest one has in preserving it.

It would have been better of me to reference Eliezer's Al Qaeda argument, and explain why I find it unconvincing.

Vladimir:

"Phil, in suggesting to replace an unFriendly AI that converges on a bad utility by a collection of AIs that never converge, you are effectively trying to improve the situation by injecting randomness in the system."
You believe evolution works, right?

You can replace randomness only once you understand the search space. Eliezer wants to replace the evolution of values without understanding what it is that that evolution is optimizing. He wants to replace evolution, which works, with a theory that has so many weak links in its long chain of logic that there is very little chance it will do what he wants it to, even supposing that what he wants it to do is the right thing to do.

Vladimir:

"Your perception of lawful extrapolation of values as 'stasis' seems to stem from intuitions about free will."
That's a funny thing to say in response to what I said, including: 'One question is where "extrapolation" fits on a scale between "value stasis" and "what a free wild-type AI would think of on its own."' It's not that I think "extrapolation" is supposed to be stasis; I think it may be incoherent to talk about an "extrapolation" that is less free than "wild-type AI", and yet doesn't keep values out of some really good areas in value-space. Any way you look at it, it's primates telling superintelligences what's good.

As I just said, clearly "extrapolation" is meant to impose restrictions on the development of values. Otherwise it would be pointless.

Vladimir:

"it could act as a special 'luck' that in the end results in the best possible outcome given the allowed level of interference."
Please remember that I am not assuming that FAI-CEV is an oracle that magically works perfectly to produce the best possible outcome. Yes, an AI could subtly change things so that we're not aware that it is RESTRICTING how our values develop. That doesn't make it good for the rest of all time to be controlled by the utility functions of primates (even at a meta level).

Here's a question whose answer could diminish my worries: Can CEV lead to the decision to abandon CEV? If smarter-than-humans "would decide" (modulo the gigantic assumption CEV makes that it makes sense to talk about what "smarter than humans would decide", as if greater intelligence made agreement more rather than less likely - and, no, they will not be perfect Bayesians) that CEV is wrong, does that mean an AI guided by CEV would then stop following CEV?

If this is so, isn't it almost probability 1 that CEV will be abandoned at some point?
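
To make the "almost probability 1" intuition concrete, here is a minimal sketch. It assumes, purely for illustration, that each time the extrapolation re-evaluates its own framework there is some small, independent chance p of deciding against CEV; nothing in CEV specifies such a number, so both p and the independence assumption are hypothetical.

```python
# Toy model (an assumption for illustration, not a claim about how CEV works):
# if each re-evaluation carries an independent probability p of abandoning CEV,
# the probability that CEV survives n re-evaluations is (1 - p)**n, which
# shrinks toward 0 as n grows.

def prob_cev_survives(p: float, n: int) -> float:
    """Probability that CEV is never abandoned across n independent re-evaluations."""
    return (1.0 - p) ** n

if __name__ == "__main__":
    p = 0.01  # hypothetical per-re-evaluation chance of abandonment
    for n in (10, 100, 1_000, 10_000):
        print(f"n = {n:>6}: P(still following CEV) = {prob_cev_survives(p, n):.3g}")
```

Under that (admittedly crude) independence assumption, the survival probability falls off geometrically, which is the sense in which abandonment at some point looks nearly certain.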

Eliezer: "Tim probably read my analysis using the self-optimizing compiler as an example, then forgot that I had analyzed it and thought that he was inventing a crushing objection on his own. This pattern would explain a lot of Phil Goetz too."

No; the dynamic you're thinking of is that I raise objections to things that you have already analyzed, because I think your analysis was unconvincing. E.g., the recent Attila the Hun / Al Qaeda example. The fact that you have written about something doesn't mean you've dealt with it satisfactorily.

Eliezer: "and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever)."

Why do the values freeze? Because there is no more competition? And if that's the problem, why not try to plan a transition from pre-AI to an ecology of competing AIs that will not converge to a singleton? Or spell out the problem clearly enough that we can figure out whether one can achieve a singleton that doesn't have that property?

(Not that Eliezer hasn't heard me say this before. I made a bit of a speech about AI ecology at the end of the first AGI conference a few years ago.)

Robin: "In a foom that took two years, if the AI was visible after one year, that might give the world a year to destroy it."

Yes. The timespan of the foom is important largely because it changes what the AI is likely to do: it changes the level of danger the AI is in, and the urgency of its actions.

Eliezer: "When I try myself to visualize what a beneficial superintelligence ought to do, it consists of setting up a world that works by better rules, and then fading into the background."

There are many sociological parallels between Eliezer's "movement", and early 20th-century communism.

Eliezer: "I truly do not understand how anyone can pay any attention to anything I have said on this subject, and come away with the impression that I think programmers are supposed to directly impress their non-meta personal philosophies onto a Friendly AI."

I wonder if you're thinking that I meant that. You can see that I didn't in my first comment on Visions of Heritage. But I do think you're going one level too few meta. And I think that CEV would make it very hard to escape the non-meta philosophies of the programmers. It would be worse at escaping them than the current, natural system of cultural evolution is.

Numerous people have responded to some of my posts by saying that CEV doesn't restrict the development of values (or equivalently, that CEV doesn't make AIs less free). Obviously it does. That's the point of CEV. If you're not trying to restrict how values develop, you might as well go home and watch TV and let the future spin out of control. One question is where "extrapolation" fits on a scale between "value stasis" and "what a free wild-type AI would think of on its own." Is it "meta-level value stasis"?

I think that evolution and competition have been pretty good at causing value development. (That's me going one more level meta.) Having competition between different subpopulations with different values is a key part of this. Taking that away would be disastrous.

Not to mention the fact that value systems are local optima. If you're doing search, it might make sense to average together some current good solutions and test the results out, in competition with the original solutions. It is definitely a bad idea to average together your current good solutions and replace them with the average.
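
A toy sketch of that last point, using a made-up two-peaked fitness function (none of this is from the original discussion): averaging two solutions that sit on different local optima can land you in the valley between them, whereas keeping the average only if it out-competes its parents leaves the good solutions intact.

```python
# Toy example: two locally optimal solutions on a two-peaked fitness landscape.
# Replacing them with their average lands in the valley between the peaks;
# letting the average compete with the originals, and keeping only the winners,
# avoids that mistake.

def fitness(x: float) -> float:
    # A made-up landscape with peaks at x = -1 and x = +1.
    return max(1 - (x + 1) ** 2, 1 - (x - 1) ** 2)

solutions = [-1.0, 1.0]                      # two current good solutions (local optima)
average = sum(solutions) / len(solutions)    # 0.0, the valley between the peaks

print("fitness of originals:", [fitness(s) for s in solutions])  # [1.0, 1.0]
print("fitness of average:  ", fitness(average))                 # 0.0

# "Test the average in competition with the originals": keep the best performers.
candidates = solutions + [average]
survivors = sorted(candidates, key=fitness, reverse=True)[:len(solutions)]
print("survivors:", survivors)               # the two peaks survive; the average does not
```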
