Looking at:
http://google.com/search?q=Marshall+site:lesswrong.com
...there were about 500 comments involving "Marshall" - and now they all appear to have been deleted - leaving a trail like this:
http://lesswrong.com/lw/9/the_most_important_thing_you_learned/53
Did you delete your account there?
I don't pay much attention to karma - but it is weird what gets voted up and down.
For a rationalist community, people seem to go for conformity and "applause signs" much more than I would have expected - while criticisms and disagreements seem to be punished more than I would have thought.
Anyway, interesting raw material for groupthink studies - some day.
Re: First, foremost, fundamentally, above all else: Rational agents should WIN.
When Deep Blue beat Garry Kasparov, did that prove that Garry Kasparov was "irrational"?
It seems as though it would be unreasonable to expect even highly rational agents to win - if pitted against superior competition. Rational agents can lose in other ways as well - e.g. by not having access to useful information.
Since there are plenty of ways in which rational agents can lose, "winning" seems unlikely to be part of a reasonable definition of rationality.
But what good reason is there not to? How can you be worse off from knowing in advance what you'll do in the worst cases?
The answer seems trivial: you may have wasted a bunch of time and energy performing calculations relating to what to do in a hypothetical situation that you might never face.
If the calculations can be performed later, then that will often be better - since then more information will be available - and possibly the calculations may not have to be performed at all.
Calculating in advance can be good - if you fear that you may not have tim...
For the same reason that when you're buying a stock you think will go up, you decide how far it has to decline before it means you were wrong
Do any investors actually do that? I don't mean to be rude - but why haven't they got better things to do with their time?
I didn't find "Engines" very positive. I agree with Moravec:
"I found the speculations absurdly anthropocentric. Here we have machines millions of times more intelligent, plentiful, fecund, and industrious than ourselves, evolving and planning circles around us. And every single one exists only to support us in luxury in our ponderous, glacial, antique bodies and dim witted minds. There is no hint in Drexler's discussion of the potential lost by keeping our creations so totally enslaved."
IMO, Drexler's proposed future is an unlikely nightmare world.
Anon, you are arguing for "incorrect", not "cynical". Please consider the difference.
Like it or not, biologists are basically correct in identifying the primary goal of organisms as self-reproduction. That is the nature of the attractor to which all organisms' goal systems are drawn (though see also this essay of mine). Yes, some organisms break, and other organisms find themselves in unfamiliar environments - but if anything can be said to be the goal of organisms, then that is it. The exceptions (like your contraceptives) just prove...
Consider the hash that some people make of evolutionary psychology in trying to be cynical - assuming that humans have a subconscious motive to promote their inclusive genetic fitness.
What is "cynical" about that? It is a central organising principle in biology that organisms tend to act in such a way to promote their own inclusive genetic fitness. There are a few caveats - but why would viewing people like that be "cynical"? I do not see anything wrong with promoting your own genetic fitness - rather it seems like a perfectly natural...
Re: The parental grief is not even subconsciously about reproductive value - otherwise it would update for Canadian reproductive value instead of !Kung reproductive value.
I think that a better way to put this would be to say that the Canadian humans miscalculate reproductive value - using subconscious math more appropriate for bushmen.
If you want to look at the importance of the reproductive value represented by children to humans, the most obvious studies to look at are the ones that deal with adopted kids - comparing them with more typical ones. For example, look at the statistics about how much such kids get beaten, suffer from child abuse, die or commit suicide.
Re: Parents do not care about children for the sake of their reproductive contribution. Parents care about children for their own sake [...]
Except where paternity suits are involved, presumably.
[Tim, you post this comment every time I talk about evolutionary psychology, and it's the same comment every time, and it doesn't add anything new on each new occasion. If these were standard theories I could forgive it, but not considering that they're your own personal versions. I've already asked you to stop. --EY]
Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion) - we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.
One big problem is that they tend to systematically ignore memes.
Human brains are parasitised by replicators that hijack them for their own ends. The behaviour of a Catholic priest has relatively little to do with the inclusive genetic fitness of the priest - and a...
Wasn't there some material in CFAI about solving the wirehead problem?
The analogy between the theory that humans behave like expected utility maximisers - and the theory that atoms behave like billiard balls could be criticised - but it generally seems quite appropriate to me.
In dealing with your example, I didn't "change the space of states or choices". All I did was specify a utility function. The input states and output states were exactly as you specified them to be. The agent could see what choices were available, and then it picked one of them - according to the maximum value of the utility function I specified.
The corresponding real world example is an agent that prefers Boston to Atlanta, Chicago to Boston, and Atlanta to Chicago. I simply showed how a utility maximiser could represent such preferences. Su...
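To make the point concrete, here is a minimal sketch (my own illustrative Python, not anything from the original discussion) of how a utility maximiser can represent such cyclic pairwise preferences - the trick being that the utility function is allowed to condition on which pair of options is on offer:

```python
# Hypothetical sketch: cyclic preferences (Boston > Atlanta, Chicago > Boston,
# Atlanta > Chicago) represented by a utility function over (offered pair, option).

# For each offered pair, record which member is preferred.
PREFERRED = {
    frozenset({"Atlanta", "Boston"}): "Boston",
    frozenset({"Boston", "Chicago"}): "Chicago",
    frozenset({"Atlanta", "Chicago"}): "Atlanta",
}

def utility(offered, option):
    # High utility for the preferred member of the offered pair, low otherwise.
    return 1.0 if PREFERRED[frozenset(offered)] == option else 0.0

def choose(offered):
    # A utility maximiser: pick whichever available option scores highest.
    return max(offered, key=lambda option: utility(offered, option))
```

The agent then picks Boston from {Atlanta, Boston}, Chicago from {Boston, Chicago}, and Atlanta from {Atlanta, Chicago} - so the "intransitive" behaviour is reproduced once the pair on offer is treated as part of the input state.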
The core problem is simple. When the targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere, is the hard part.
The utility function of Deep Blue has 8,000 parts - and contained a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered - but the eventual functional outcome...
I note that filial cannibalism is quite common on this planet.
Gamete selection has quite a few problems. It only operates on half the genome at a time - and selection is performed before many of the genes can be expressed. Of course gamete selection is cheap.
What spiders do - i.e. produce lots of offspring, and have many die as infants - has a huge number of evolutionary benefits. The lost babies do not cost very much, and the value of the selection that acts on them is great.
Human beings can't easily get there - since they currently rely on gestation...
Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.
That is silly - the associated utility function is the one you have just explicitly given. To rephrase:
if (senses contain (A,B)) selecting A has high utility;
else if (senses contain (B,C)) selecting B has high utility;
else if (senses contain (C,A)) selecting C has high utility;
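The pseudocode above can be written out as a short runnable sketch (my own illustrative Python - the names are hypothetical) to check that a utility function over sensed states does reproduce the program's behaviour:

```python
# Sketch of the rephrasing above: the agent's senses include which
# choice set was presented, and utility is defined over (senses, action).

# For each sensed choice set, the action that gets high utility.
HIGH_UTILITY_ACTION = {
    frozenset({"A", "B"}): "A",
    frozenset({"B", "C"}): "B",
    frozenset({"C", "A"}): "C",
}

def utility(senses, action):
    # "if (senses contain (A,B)) selecting A has high utility" - and so on.
    return 1.0 if HIGH_UTILITY_ACTION[frozenset(senses)] == action else 0.0

def program(senses):
    # A utility maximiser over the available actions.
    return max(senses, key=lambda action: utility(senses, action))
```

Running it on (A,B), (B,C) and (C,A) in turn yields A, then B, then C - exactly the behaviour that was claimed to be impossible for a utility function.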
Here's another example:...
Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.