Looking at:
http://google.com/search?q=Marshall+site:lesswrong.com
...there were about 500 comments involving "Marshall" - and now they all appear to have been deleted - leaving a trail like this:
http://lesswrong.com/lw/9/the_most_important_thing_you_learned/53
Did you delete your account there?
I don't pay much attention to karma - but it is weird what gets voted up and down.
For a rationalist community, people seem to go for conformity and "applause lights" much more than I would have expected - while criticisms and disagreements seem to be punished more than I would have thought.
Anyway, interesting raw material for groupthink studies - some day.
Re: First, foremost, fundamentally, above all else: Rational agents should WIN.
When Deep Blue beat Garry Kasparov, did that prove that Garry Kasparov was "irrational"?
It seems as though it would be unreasonable to expect even highly rational agents to win - if pitted against superior competition. Rational agents can lose in other ways as well - e.g. by not having access to useful information.
Since there are plenty of ways in which rational agents can lose, "winning" seems unlikely to be part of a reasonable definition of rationality.
Re: But what good reason is there not to? How can you be worse off from knowing in advance what you'll do in the worst cases?
The answer seems trivial: you may waste a bunch of time and energy calculating what to do in a hypothetical situation that you might never face.
If the calculations can be performed later, that will often be better - since more information will be available by then - and the calculations may turn out not to be needed at all.
Calculating in advance can be good - if you fear that you may not have time to calculate later - or (obviously) if the calculations affect the choices you make now. However, performing calculations has time and energy costs - so it is best to use your "calculating" time wisely.
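To illustrate the trade-off - a minimal sketch with made-up contingencies and planning costs, not anyone's actual decision procedure: defer the expensive contingency planning until a contingency actually arises, so the cost is only paid for situations you really face, and with the latest information in hand.

    import functools

    @functools.lru_cache(maxsize=None)
    def plan_for(situation):
        # Stand-in for an expensive deliberation: it only runs if the
        # situation is actually encountered, and it only runs once.
        print(f"planning for {situation}...")
        return f"plan for {situation}"

    # No planning cost is paid up front; plans are computed on demand.
    for situation in ["fire", "fire"]:   # hypothetical contingencies
        plan_for(situation)              # computed the first time, reused after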
Re: For the same reason that when you're buying a stock you think will go up, you decide how far it has to decline before it means you were wrong
Do any investors actually do that? I don't mean to be rude - but why haven't they got better things to do with their time?
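For what it's worth, the rule the quoted post describes is just a stop-loss. A minimal sketch, with made-up numbers rather than any real investor's policy:

    buy_price = 100.00
    stop_price = buy_price * 0.92   # decided at purchase time: an 8% decline means "I was wrong"

    def should_sell(current_price):
        # Mechanical exit rule, fixed in advance rather than improvised later.
        return current_price <= stop_price

    print(should_sell(95.00))   # False - hold
    print(should_sell(91.50))   # True - thesis falsified, exit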
I didn't find "Engines" very positive. I agree with Moravec:
"I found the speculations absurdly anthropocentric. Here we have machines millions of times more intelligent, plentiful, fecund, and industrious than ourselves, evolving and planning circles around us. And every single one exists only to support us in luxury in our ponderous, glacial, antique bodies and dim witted minds. There is no hint in Drexler's discussion of the potential lost by keeping our creations so totally enslaved."
IMO, Drexler's proposed future is an unlikely nightmare world.
Anon, you are arguing for "incorrect", not "cynical". Please consider the difference.
Like it or not, biologists are basically correct in identifying the primary goal of organisms as self-reproduction. That is the nature of the attractor to which all organisms' goal systems are drawn (though see also this essay of mine). Yes, some organisms break, and other organisms find themselves in unfamiliar environments - but if anything can be said to be the goal of organisms, then that is it. The exceptions (like your contraceptives) just prove the rule. Such organisms are acting in a way that is intended to promote their genetic fitness. It is just that some of their assumptions about the environment might be wrong. Alas, contraceptives are not a very good example, because they prevent disease, make sex easier (thus helping to create pair bonds), and have other positive effects.
Organisms tend to act as though their number one motive is self-reproduction. Philosophers may be able to debate whether that motive is "explicitly represented in their brains" - but if it looks like a duck and quacks like a duck, whether philosophers are prepared to call it a duck seems like a side issue.
It is the same as with Deep Blue. Deep Blue acts as though its number one motive is to win games of chess (thus inflating IBM's stock price). That is the single most helpful simple way in which to understand its behaviour. If you actually look at its utility function, it has thousands of elements, not one of which refers to winning games of chess - but so what? It is not "cynical" to treat Deep Blue as trying to win games of chess. That is what it is doing!
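To make that concrete - a minimal sketch with made-up features and weights, not Deep Blue's actual evaluation function: nothing in the code mentions "winning", yet a search that maximises the score behaves exactly as if winning were its goal.

    # Hypothetical feature weights; Deep Blue's real function had thousands.
    weights = {"material": 9.0, "mobility": 0.5, "king_safety": 2.0}

    def evaluate(features):
        # No feature here refers to winning games of chess; the goal is
        # implicit in the weights plus a search that maximises the score.
        return sum(weights[f] * features[f] for f in weights)

    # Made-up feature values for the positions reached by two candidate moves.
    candidates = {
        "Nf3":  {"material": 0.0, "mobility": 0.4, "king_safety": 0.1},
        "Qxb7": {"material": 1.0, "mobility": -0.2, "king_safety": -0.6},
    }
    print(max(candidates, key=lambda move: evaluate(candidates[move])))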
Re: Consider the hash that some people make of evolutionary psychology in trying to be cynical - assuming that humans have a subconscious motive to promote their inclusive genetic fitness.
What is "cynical" about that? It is a central organising principle in biology that organisms tend to act in such a way to promote their own inclusive genetic fitness. There are a few caveats - but why would viewing people like that be "cynical"? I do not see anything wrong with promoting your own genetic fitness - rather it seems like a perfectly natural thing to do to me.
Looking at the population explosion, I would say that the world appears to be full of people who are acting in a manner that is highly effective at promoting their own genetic fitness. They are doing something wrong? What makes you think that?
Re: The parental grief is not even subconsciously about reproductive value - otherwise it would update for Canadian reproductive value instead of !Kung reproductive value.
I think that a better way to put this would be to say that the Canadian humans miscalculate reproductive value - using subconscious math more appropriate for bushmen.
If you want to gauge the importance to humans of the reproductive value represented by children, the most obvious studies to look at are the ones dealing with adopted kids - comparing them with more typical ones. For example, look at the statistics on how often such kids are beaten, suffer child abuse, die, or commit suicide.
Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.
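To spell that out with numbers - a minimal sketch using one standard version of the Allais payoffs (in millions of dollars), plus a made-up "trust" parameter q: the probability that a promised gamble payout is actually honoured. A certain payment changes hands on the spot, so it is not discounted.

    def ev(gamble, q):
        # Expected value, discounting any non-certain payout by trust level q.
        return sum(p * x * (1.0 if p == 1.0 else q) for p, x in gamble)

    g1a = [(1.0, 1.0)]                                  # $1M for sure
    g1b = [(0.89, 1.0), (0.10, 5.0), (0.01, 0.0)]
    g2a = [(0.11, 1.0), (0.89, 0.0)]
    g2b = [(0.10, 5.0), (0.90, 0.0)]

    q = 0.7   # only 70% confident the experimenter pays out on a gamble
    print(ev(g1a, q) > ev(g1b, q))   # True: take the sure thing
    print(ev(g2b, q) > ev(g2a, q))   # True: still prefer 2B

With enough distrust (here, any q below about 0.72), picking 1A and 2B together maximises expected value - no irrationality required.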