
Comment author: Will_Pearson 27 February 2009 03:04:09PM 2 points [-]

All competitive situations against ideal learning agents are anti-inductive in this sense. They can note regularities in their own behaviour and eliminate them just as well as you can note those regularities and exploit them. The usefulness of induction depends on the relative learning speeds of the competing agents.

As such, anti-induction appears in situations like bacterial resistance to antibiotics. We spot a chink in the bacteria's armour, and we can predict that that chink will become less prevalent and our strategy less useful.

So I wouldn't mark markets as special, just the most extreme example.
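The relative-speeds point can be illustrated with a toy simulation (my own sketch, not anything from the original discussion): a frequency-counting exploiter does well against an opponent with a fixed regularity, but against an opponent that runs the same induction on its own history and deliberately removes the regularity, the exploiter's edge disappears.

```python
import random

random.seed(0)

def frequency_predict(history):
    """Predict the opponent's next move as their most common past move."""
    if history.count(1) > history.count(0):
        return 1
    return 0

def play(opponent, rounds=2000):
    """Matching pennies: the exploiter wins a round when it matches the
    opponent's move. Returns the exploiter's win rate."""
    opp_history = []
    wins = 0
    for _ in range(rounds):
        guess = frequency_predict(opp_history)  # the exploiter's induction
        move = opponent(opp_history)
        if guess == move:
            wins += 1
        opp_history.append(move)
    return wins / rounds

# A static opponent with a regularity: plays 1 with 70% probability.
static = lambda hist: 1 if random.random() < 0.7 else 0

# An adaptive opponent: runs the exploiter's own induction on its own
# history and plays the move the exploiter will not predict.
adaptive = lambda hist: 1 - frequency_predict(hist)

print(play(static))    # well above 0.5: the regularity is exploitable
print(play(adaptive))  # at or below 0.5: the regularity has been removed
```

The adaptive opponent here learns exactly as fast as the exploiter, so induction buys the exploiter nothing; any edge in the general case comes only from learning faster than the other side.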

In response to Wise Pretensions v.0
Comment author: Will_Pearson 21 February 2009 01:01:39PM 0 points [-]

I find neither that convincing. Justice is not a terminal value for me, so I might sacrifice it for Winning. I preferred reading the first, but that is no indication of what a random person might prefer.

Comment author: Will_Pearson 19 February 2009 11:19:19PM 1 point [-]

With international affairs, isn't stopping the aggression the main priority? That is, stopping the death and suffering of humans on both sides. Sure, it would be good to punish the aggressors rather than the retaliators, but if that doesn't stop the fighting it just means more people are dying.

Also, there is a difference between the adult and the child: the adult relies on the law of the land for retaliation, while the child takes it upon himself when he continues the fight. That is, the child is a vigilante, and he may punish disproportionately, e.g. breaking a leg in return for a dead leg.

Comment author: Will_Pearson 03 February 2009 11:50:20AM 2 points [-]

I don't really have a good enough grasp on the world to predict what is possible; it all seems too unreal.

One possibility is to jump one star away back towards earth and then blow up that star, if that is the only link to the new star.

Comment author: Will_Pearson 31 January 2009 01:11:28AM 1 point [-]

Re: "MST3K Mantra"

Illustrative fiction is a tricky business. If this is to be part of your message to the world, it should be as coherent as possible, so that you aren't accidentally lying to make a better story.

If it is just a bit of fun, I'll relax.

Comment author: Will_Pearson 30 January 2009 07:24:13PM 1 point [-]

I wonder why the babies don't eat each other. There must be huge selective pressure to winnow down your fellows to the point where you don't need to be winnowed. This would in turn select for being small-brained, large, and quick-growing, at the least. There might also be selective pressure to be partially distrusting of your fellows (assuming there was some cooperation), which might carry over into adulthood.

I also agree with the points Carl raised. It doesn't seem very evolutionarily plausible.

In response to Value is Fragile
Comment author: Will_Pearson 29 January 2009 10:32:01AM 2 points [-]

"Except to remark on how many different things must be known to constrain the final answer."

What would you estimate the probability is that each of those things is correct?

In response to Failed Utopia #4-2
Comment author: Will_Pearson 24 January 2009 03:05:00PM 1 point [-]

Reformulate to least regret after a certain time period, if you really want to worry about the resource usage of the genie.

Comment author: Will_Pearson 22 January 2009 10:37:43AM 1 point [-]

Personally I believe in the long slump. However, I also believe in the human optimism that will make people rally the market every so often. The very fact that most people believe the stock market will rise will make it rise at least once or twice before people start to get the message that we are in the long slump.

In response to Failed Utopia #4-2
Comment author: Will_Pearson 22 January 2009 01:15:00AM 7 points [-]

Eliezer, didn't you say that humans weren't designed as optimizers, but as satisficers? The reaction you got is probably a reflection of that. The scenario ticks most of the boxes humans have: existence, self-determination, happiness and meaningful goals. The paperclipper scenario ticks none. It makes complete sense for a satisficer to pick it instead of annihilation. I would expect that some people would even be satisfied by a singularity scenario that kept death, as long as it removed the chance of existential risk.
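The satisficer/optimizer distinction can be sketched in a few lines. The scenario names and scores below are purely illustrative assumptions of mine, not anything from the post: the point is only that an agent with an aspiration threshold accepts the first "good enough" option, while an optimizer holds out for the maximum.

```python
# Hypothetical scores for how well each outcome meets basic human criteria
# (existence, self-determination, happiness, meaningful goals).
scenarios = {
    "status quo": 0.6,
    "failed utopia": 0.7,
    "true utopia": 1.0,
    "paperclipper": 0.0,
    "annihilation": 0.0,
}

def optimize(options):
    """An optimizer accepts only the single best option."""
    return max(options, key=options.get)

def satisfice(options, aspiration=0.5):
    """A satisficer accepts the first option that clears its threshold,
    then stops searching."""
    for name, score in options.items():
        if score >= aspiration:
            return name
    return None

print(optimize(scenarios))   # "true utopia"
print(satisfice(scenarios))  # "status quo": good enough, search stops
```

On this toy model, a satisficer happily takes the failed utopia over annihilation, since both it and the status quo clear the threshold while the paperclipper and annihilation do not.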
