All of Tim_Tyler's Comments + Replies

Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.

Looking at:

http://google.com/search?q=Marshall+site:lesswrong.com

...there were about 500 comments involving "Marshall" - and now they all appear to have been deleted - leaving a trail like this:

http://lesswrong.com/lw/9/the_most_important_thing_you_learned/53

Did you delete your account there?

Tim_Tyler-30

I don't pay much attention to karma - but it is weird what gets voted up and down.

For a rationalist community, people seem to go for conformity and "applause signs" much more than I would have expected - while criticisms and disagreements seem to be punished more than I would have thought.

Anyway, interesting raw material for groupthink studies - some day.

Re: First, foremost, fundamentally, above all else: Rational agents should WIN.

When Deep Blue beat Garry Kasparov, did that prove that Garry Kasparov was "irrational"?

It seems as though it would be unreasonable to expect even highly rational agents to win - if pitted against superior competition. Rational agents can lose in other ways as well - e.g. by not having access to useful information.

Since there are plenty of ways in which rational agents can lose, "winning" seems unlikely to be part of a reasonable definition of rationality.

But what good reason is there not to? How can you be worse off from knowing in advance what you'll do in the worst cases?

The answer seems trivial: you may have wasted a bunch of time and energy performing calculations relating to what to do in a hypothetical situation that you might never face.

If the calculations can be performed later, then that will often be better - since then more information will be available - and possibly the calculations may not have to be performed at all.

Calculating in advance can be good - if you fear that you may not have tim... (read more)

For the same reason that when you're buying a stock you think will go up, you decide how far it has to decline before it means you were wrong

Do any investors actually do that? I don't mean to be rude - but why haven't they got better things to do with their time?

Tim_Tyler-10

I didn't find "Engines" very positive. I agree with Moravec:

"I found the speculations absurdly anthropocentric. Here we have machines millions of times more intelligent, plentiful, fecund, and industrious than ourselves, evolving and planning circles around us. And every single one exists only to support us in luxury in our ponderous, glacial, antique bodies and dim witted minds. There is no hint in Drexler's discussion of the potential lost by keeping our creations so totally enslaved."

IMO, Drexler's proposed future is an unlikely nightmare world.

Anon, you are arguing for "incorrect", not "cynical". Please consider the difference.

Like it or not, biologists are basically correct in identifying the primary goal of organisms as self-reproduction. That is the nature of the attractor to which all organisms' goal systems are drawn (though see also this essay of mine). Yes, some organisms break, and other organisms find themselves in unfamiliar environments - but if anything can be said to be the goal of organisms, then that is it. The exceptions (like your contraceptives) just prove... (read more)

3[anonymous]
What about religious people who take vows of celibacy? I think people care more about self-preservation than reproduction, honestly. I know I do!

Edit: Upon reflection, and receiving some replies here, I actually think Tim made a pretty strong case. However, though "playing chess" may be the "single most helpful simple way to understand" Deep Blue's behavior, it is wrong to say this of "trying to reproduce" for human behavior. You could predict only a very small percentage of my behavior using that information (I've never kissed a girl or had sex, despite wanting to) - whereas, using "self-preservation" and "seeking novelty", you could predict quite a bit of it. I suspect this is not just true of me, but of many people.

Edit 2: Though you could poke holes in my first edit. Like, maybe the reason I don't try to reproduce now is only because I've failed in the past. But this hinges on being a violation of Tim's point. Also, see this later comment I made, which I think is pretty much a knockdown refutation.

Edit 3: See this comment by memoridem for a succinct summary of my position. Somehow, none of my comments in this conversation came out clearly.
Tim_Tyler-20
Consider the hash that some people make of evolutionary psychology in trying to be cynical - assuming that humans have a subconscious motive to promote their inclusive genetic fitness.

What is "cynical" about that? It is a central organising principle in biology that organisms tend to act in such a way to promote their own inclusive genetic fitness. There are a few caveats - but why would viewing people like that be "cynical"? I do not see anything wrong with promoting your own genetic fitness - rather it seems like a perfectly natural... (read more)

8pnrjulius
The cynicism lies in thinking that this is a motive (it's not, not in any ordinary sense; do I have a "motive" of being drawn toward the center of the Earth?), and also in thinking that this somehow devalues actual human capacities for rationality, love, altruism, etc.
Tim_Tyler-40

Re: The parental grief is not even subconsciously about reproductive value - otherwise it would update for Canadian reproductive value instead of !Kung reproductive value.

I think that a better way to put this would be to say that the Canadian humans miscalculate reproductive value - using subconscious math more appropriate for bushmen.

If you want to look at the importance of reproductive value represented by children to humans, the most obvious studies to look at are the ones that deal with adopted kids - comparing them with more typical ones. For example, look at the statistics about how much such kids get beaten, suffer from child abuse, die or commit suicide.

Re: Parents do not care about children for the sake of their reproductive contribution. Parents care about children for their own sake [...]

Except where paternity suits are involved, presumably.

[Tim, you post this comment every time I talk about evolutionary psychology, and it's the same comment every time, and it doesn't add anything new on each new occasion. If these were standard theories I could forgive it, but not considering that they're your own personal versions. I've already asked you to stop. --EY]

Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion) - we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.

One big problem is that they tend to systematically ignore memes.

Human brains are parasitised by replicators that hijack them for their own ends. The behaviour of a Catholic priest has relatively little to do with the inclusive genetic fitness of the priest - and a... (read more)

-4Nanani
Actually, no. The priest's siblings and the descendants of those siblings got a boost from being related to a priest. The same is true of families with a daughter who became a nun. Also, the relatives who now find themselves with a member of a church in their immediate circle will feel a much higher pressure to have a lot of kids, as per the Catholic doctrine. It really is about inclusive fitness.

Wasn't there some material in CFAI about solving the wirehead problem?

The analogy between the theory that humans behave like expected utility maximisers - and the theory that atoms behave like billiard balls could be criticised - but it generally seems quite appropriate to me.

In dealing with your example, I didn't "change the space of states or choices". All I did was specify a utility function. The input states and output states were exactly as you specified them to be. The agent could see what choices were available, and then it picked one of them - according to the maximum value of the utility function I specified.

The corresponding real world example is an agent that prefers Boston to Atlanta, Chicago to Boston, and Atlanta to Chicago. I simply showed how a utility maximiser could represent such preferences. Su... (read more)

The core problem is simple. The targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere, is the hard part.

The utility function of Deep Blue has 8,000 parts - and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered - but the eventual functional outcome... (read more)

The "targeting information" is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.

It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. Similarly with human values: those are a bunch of implementation details - not the real target.

If Deep Blue had emotions and desires that were attached to th... (read more)

I note that filial cannibalism is quite common on this planet.

Gamete selection has quite a few problems. It only operates on half the genome at a time - and selection is performed before many of the genes can be expressed. Of course gamete selection is cheap.

What spiders do - i.e. produce lots of offspring, and have many die as infants - has a huge number of evolutionary benefits. The lost babies do not cost very much, and the value of the selection that acts on them is great.

Human beings can't easily get there - since they currently rely on gestation... (read more)

Tim_Tyler-20
Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

That is silly - the associated utility function is the one you have just explicitly given. To rephrase:

if (senses contain (A,B)) selecting A has high utility;
else if (senses contain (B,C)) selecting B has high utility;
else if (senses contain (C,A)) selecting C has high utility;
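To make that concrete, here is a minimal sketch (illustrative Python; the representation and names are my own invention, not anything from the original program). The offered menu is folded into the input state, so each observed choice maximises utility without any contradiction:

    # Sketch: a utility function over (offered menu, option) pairs.
    # The menu itself is part of the input state, so each of the three
    # observed choices gets high utility without any contradiction.

    def utility(menu, option):
        preferred = {
            frozenset("AB"): "A",
            frozenset("BC"): "B",
            frozenset("CA"): "C",
        }
        return 1.0 if preferred[frozenset(menu)] == option else 0.0

    def choose(menu):
        # Pick the option with maximum utility for this menu.
        return max(menu, key=lambda option: utility(menu, option))

    assert choose("AB") == "A"
    assert choose("BC") == "B"
    assert choose("CA") == "C"

Whether a menu-indexed function like this should still count as a utility function over the bare options A, B and C is what the reply below disputes.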

Here's another example:
... (read more)
0tut
No it isn't. It is a list of preferences. The corresponding utility function would be a function U(X) from {A,B,C} to the real numbers such that 1) U(A) > U(B), 2) U(B) > U(C), and 3) U(C) > U(A). But only some lists of preferences can be described by utility functions, and this one can't, because 1) and 2) imply that U(A) > U(C), which contradicts 3).
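A quick way to see this (an illustrative check in Python, not part of the original reply): only the ordering of the U-values matters for the three strict inequalities, and ties can never satisfy them, so it is enough to try every ranking of the three options.

    # Sketch: brute-force check that no utility function over the bare
    # options {A, B, C} satisfies U(A) > U(B), U(B) > U(C) and U(C) > U(A).
    # Only the ordering matters for strict inequalities (and ties fail
    # them automatically), so checking every assignment of the ranks
    # 1, 2, 3 to the three options covers all cases.

    from itertools import permutations

    def satisfies_all(U):
        return U["A"] > U["B"] and U["B"] > U["C"] and U["C"] > U["A"]

    solutions = [
        dict(zip("ABC", ranks))
        for ranks in permutations((1, 2, 3))
        if satisfies_all(dict(zip("ABC", ranks)))
    ]
    assert solutions == []  # no such utility function exists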
Tim_Tyler-10
But there is no principled way to derive a utility function from something that is not an expected utility maximizer!

You can model any agent as an expected utility maximizer - with a few caveats about things such as uncomputability and infinitely complex functions.

You really can reverse-engineer their utility functions too - by considering them as Input-Transform-Output black boxes - and asking what expected utility maximizer would produce the observed transformation.

A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
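As a sketch of that reverse-engineering step (illustrative Python; the helper names are invented), one can take any deterministic input-to-choice black box, assign utility 1 to whatever it actually outputs and 0 to everything else, and recover its behaviour from the resulting maximiser:

    # Sketch: recover a utility function from any deterministic
    # input -> choice black box, then check that maximising that
    # utility reproduces the original behaviour.

    def induced_utility(agent):
        def utility(observation, action):
            # Utility 1 for the action the agent actually takes, 0 otherwise.
            return 1.0 if agent(observation) == action else 0.0
        return utility

    def as_maximizer(agent, available_actions):
        utility = induced_utility(agent)
        def maximizer(observation):
            return max(available_actions(observation),
                       key=lambda action: utility(observation, action))
        return maximizer

    # Example black box: always picks the lexicographically smallest option.
    agent = lambda menu: min(menu)
    maximizer = as_maximizer(agent, available_actions=lambda menu: menu)
    assert maximizer(("B", "A", "C")) == agent(("B", "A", "C"))

The construction is deliberately simple: it just records the box's observed behaviour as a utility function.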

Tim_Tyler-10
Another way of saying this is that human beings are not expected utility maximizers, not as individuals and certainly not as societies.

They are not perfect expected utility maximizers. However, no expected utility maximizer is perfect. Humans approach the ideal at least as well as other organisms. Fitness maximization is the central explanatory principle in biology - and the underlying idea is the same. The economic framework associated with utilitarianism is general, of broad applicability, and deserves considerable respect.

Tim_Tyler-10
I agree with Eliezer that an imprecisely chosen value function, if relentlessly optimized, is likely to yield a dull universe.

So: you think a "paperclip maximiser" would be "dull"?

How is that remotely defensible? Do you think a "paperclip maximiser" will master molecular nanotechnology, artificial intelligence, space travel, fusion, the art of dismantling planets and stellar farming?

If so, how could that possibly be "dull"? If not, what reason do you have for thinking that those technologies would not help with t... (read more)

Tim_Tyler-20

Thanks for the probability assessments. What is missing are the supporting arguments: what you think is relatively clear, but why you think it is not.

...and what's the deal with mentioning a "sense of humour"? What has that to do with whether a civilization is complex and interesting? Whether our distant descendants value a sense of humour or not seems like an irrelevance to me. I am more concerned with whether they "make it" or not - factors affecting whether our descendants outlast the exploding sun - or whether the seed of human civilisation is obliterated forever.

Tim_Tyler-20

This post seems almost totally wrong to me. For one thing, its central claim - that without human values the future would, with high probability, be dull - is not even properly defined.

To be a little clearer, one would need to say something like: if you consider a specified enumeration over the space of possible utility functions, a random small sample from that space would be "dull" (it might help to say a bit more about what dullness means too, but that is a side issue for now).

That claim might well be true for typical "shortest-first"... (read more)

3AgentME
We don't particularly value copying DNA sequences for its own sake either though. Imagine a future where an unthinking strain of bacteria functioned like grey goo and replicated itself using all matter in its light cone, and it was impervious to mutations. I wouldn't rate that future as any more valuable than a future where all life went extinct. The goals of evolution aren't necessarily our goals.
4benelliott
But you don't need very many, and you're free to enslave them while they work then kill them once they're done. They might not need to be conscious, and they certainly don't need to enjoy their work. Probably, they will just be minor subroutines of the original AI, deleted and replaced once they learn everything necessary, which won't take long for a smart AI.
6Roko
Yes, but there would be no persons. There would be no scientists, no joy of discovery, no feeling of curiosity. There would just be a "process" that, from the outside, would look like an avalanche of expanding machinery, and on the inside would have no subjective experience. It would contain a complex intelligence, but there would be no-one to marvel at the complex intelligence, not even itself, because there would be "no-one home" in all likelihood. For me, what proved decisive in coming to a low estimate of the value of such a system was the realization that the reason that I liked science, technology, etc, was because of my subjective experiences of finding out the answer. Interestingness is in the eye of the beholder, but this piece argues that the beholder would have no eye; that there would be an optimizing process that lacked the ability to experience joy over any of its discoveries.
2akshatrathi
Making a DNA sequence will count as an extremely low-level activity [http://lesswrong.com/lw/xr/in_praise_of_boredom/] which is necessary to support non-boring activities. It is a very simple argument that these are the very activities we stop thinking about so that we can concentrate on novel ones.

The core of most of my disagreements with this article find their most concentrated expression in:

"Happiness" is an idiom of policy reinforcement learning, not expected utility maximization.

Under Omohundro's model of intelligent systems, these two approaches converge. As they do so, the reward signal of reinforcement learning and the concept of expected utility also converge. In other words, it is rather inappropriate to emphasize the differences between these two systems as though they were fundamental.

There are differences - but they are... (read more)
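One way to illustrate the convergence being claimed (a rough sketch in Python, not Omohundro's actual formalism): treat the discounted sum of rewards as the utility of a trajectory, and "maximise expected utility" becomes the same objective as "maximise expected return" in reinforcement learning.

    # Sketch: a reward signal induces an expected-utility objective.
    # Utility of a trajectory = discounted sum of its rewards, so
    # "maximise expected utility" and "maximise expected return" coincide.

    def trajectory_utility(rewards, gamma=0.9):
        return sum(r * gamma ** t for t, r in enumerate(rewards))

    def expected_utility(trajectory_distribution, gamma=0.9):
        # trajectory_distribution: list of (probability, rewards) pairs.
        return sum(p * trajectory_utility(rewards, gamma)
                   for p, rewards in trajectory_distribution)

    # Two possible trajectories under some policy.
    policy_outcomes = [
        (0.5, [1.0, 0.0, 1.0]),
        (0.5, [0.0, 1.0, 0.0]),
    ]
    print(expected_utility(policy_outcomes))  # expected discounted return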

Dictionaries disagree with Ferris - e.g.:

"Happiness [...] antonym: sadness" - Encarta

"Boredom" makes a terrible opposite of "happiness". What is the opposite of boredom? Something interesting, to be sure, but many more things than just happiness fit that description.

David Pearce has written extensively on the topic of the elimination of suffering - e.g. see: THE ABOLITIONIST PROJECT and Paradise Engineering.

Eliezer, I think you have somehow gotten very confused about the topic of my now-deleted post.

That post was entirely about cultural inheritance - contained absolutely nothing about sexual selection.

Please don't delete my posts - unless you have a good reason for doing so.

[Deleted. Tim, you've been requested to stop talking about your views on sexual selection here. --EY]

As I recall, Arnold's character faced pretty much this dilemma in Total Recall.

There's a broadly-similar episode of Buffy the Vampire Slayer.

Both characters wind up going on with their mission of saving the planet.

General Optimizer, you seem like a prospect for responding to this question: "in the interests of transparency, would anyone else like to share what they think their utility function is?"

Intelligent machines will not really be built "from scratch" because augmentation of human intelligence by machines makes use of all the same technology as is present in straight machine intelligence projects, plus a human brain. Those projects have the advantage of being competitive with humans out of the box - and they interact synergetically with traditional machine intelligence projects. For details see my intelligence augmentation video/essay.

The thing that doesn't make much sense is building directly on the human brain's wetware with more ... (read more)

It seems like a general property of an intelligent system that it can't know everything about how it would react to everything. That falls out of the halting theorem (and for that matter Gödel's first incompleteness theorem) fairly directly.

Er, no, it doesn't.

Robin, it sounds as though you are thinking about the changes that could be made after brain digitalisation.

That seems like a pretty different topic to me. Once you have things in a digital medium, it is indeed much easier to make changes - even though you are still dealing with a nightmarish mess of hacked-together spaghetti code.

This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn't the brain produce more acetylcholine already?

There's considerable scope for the answer to this question being: "because of resource costs". Resource costs for nutrients today are radically different from those in the environment of our ancestors.

We are not designed for our parts to be upgraded. Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they
... (read more)

My guess is that it's a representation of my position on sexual selection and cultural evolution. I may still be banned from discussing this subject - and anyway, it seems off-topic on this thread, so I won't go into details.

If this hypothesis about the comment is correct, the main link that I can see would be: things that Eliezer and Tim disagree about.

Well, that is so vague as to hardly be worth the trouble of responding to - but I will say that I do hope you were not thinking of referring me here.

However, I should perhaps add that I overspoke. I did not literally mean "any sufficiently-powerful optimisation process". Only that such things are natural tendencies - that tend to be produced unless you actively wire things into the utility function to prevent their manifestation.

One of the primary principles of evolutionary psychology is that "Our modern skulls house a stone age mind"

Our minds are made by (essentially) stone-age genes, but they import up-to-date memes - and are a product of influences from both sources.

So: our minds are actually pretty radically different from stone-age minds - because they have downloaded and are running a very different set of brain-software routines. This influence of memes explains why modern society is so different from the societies present in the stone age.

And that, to this end, we would like to know what is or isn't a person - or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands.

So: define such a function - as is done by the world's legal systems. Of course, in a post-human era, it probably won't "carve nature at the joints" much better than the "how many hairs make a beard" function manages to.
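A toy rendering of the predicate as specified above (hypothetical Python; the whitelist is invented purely for illustration): it may return 1 for things that are not people, but it must never return 0 for a person, so a 0 certifies a definite nonperson.

    # Toy illustration of the asymmetric spec quoted above. False alarms
    # (returning 1 for nonpersons) are fine; returning 0 for a person is not.

    def person_predicate(candidate: str) -> int:
        # Hypothetical whitelist of models known to be far too simple to
        # be people; anything unfamiliar conservatively gets a 1.
        known_safe = {"lookup_table", "linear_regression", "thermostat"}
        return 0 if candidate in known_safe else 1

    assert person_predicate("thermostat") == 0       # definite nonperson
    assert person_predicate("uploaded_human") == 1   # stay conservative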

And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity. I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions.

A possible problem here is that your high entry requirement specifications may well, with a substantial probability, allow others with lower standards to create a superintelligence before you do.

So: since you seem to think that would be pretty bad, and since you say you are a consequentialist - and belie... (read more)

CEV runs once on a collection of existing humans then overwrites itself [...]

Ah. My objection doesn't apply, then. It's better than I had thought.

You've already said the friendly AI problem is terribly hard, and there's a large chance we'll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be "friendly", making your design task all that harder?

While we are on the topic, the problem I see in this area is not that friendliness has too many extra conditions appended on it. It's that the concept is so vague and amorphous that only Yudkowsky seems to know what it means.

When I last asked what it meant, I was pointed to the CEV document - whi... (read more)

FWIW, Phil's point there seems to be perfectly reasonable - and not in need of correction: if a moral system tells you to do what you were going to do anyway, it isn't going to be doing much work.

Moral systems usually tell you not to do things that you would otherwise be inclined to do - on the grounds that they are bad. Common examples include taking things you want - and having sex.

0taryneast
I'd say that moral systems explain the deeper consequences of an action you may not have thought deeply about.
If you would balk at killing a million people with a nuclear weapon, you should balk at this.

The main problem with death is that valuable things get lost.

Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value.

In summary, I don't see why this issue would be much of a problem.

5taryneast
The AI has scanned you and decided that your expert knowledge of Scandinavian Baseball scores is genuinely valuable... but nothing else is. It erases you and keeps the scores on file somewhere. Are you ok with this?
Tim_Tyler-20

Re: Barry Schwartz's The Paradox of Choice [...] talks about how offering people more choices can make them less happy. A simple intuition says this shouldn't ought to happen to rational agents: If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn't do worse by having more options. The more available actions you have, the more powerful you become - that's how it should ought to work.

This makes no sense to me. A blind choice between lady and ... (read more)

Re: Vassar advocates that rationalists should learn to lie, I advocate that rationalists should practice telling the truth more effectively, and we're still having that argument.

Uh huh. What are the goals of these hypothetical rational agents?
