Nornagest comments on Open thread, Oct. 27 - Nov. 2, 2014 - Less Wrong

5 Post author: MrMind 27 October 2014 08:58AM




Comment author: gattsuru 27 October 2014 05:56:20PM * 5 points

General :

  • There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or cannot be true.

  • /Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we're desperately trying to Section Eight.

  • Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Political :

  • Network Neutrality aims at a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.

  • Privacy policies focused on preventing collection of identifiable data are ultimately doomed.

LessWrong-specific:

  • "Karma" is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing whether it rewards a top-level post that breaks into new levels of philosophy or a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data analysis it's disappointing.

  • The risks and costs of "raising the sanity waterline" are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but we haven't really looked at what this would mean on a national scale. "Nuclear Winter" as argued by Sagan was a very, very overt Pascal's Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear-war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective... several hundred pages of reading later.

  • "Rationality" is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you're competing with RationalWiki, the universe is trying to give you a Hint.

  • The type of Atheism that is certain it will win, won't. There's a fascinating post describing how religion was driven from its controlling aspects in history, in science, in government, in cleanliness... and it then goes on to describe how religion /will/ be driven from such a place on matters of ethics. Do not ask why, however much it surprises you, religion remains on a pedestal for ethics, no matter how much it's poked and prodded by the blasphemy of actual practice. Lest you find the answer.

  • ((I'm /also/ not convinced that Atheism is a good hill for improved rationality to spend its capital on, any more than veganism is a good hill for improved ethics to spend its capital on. This may be opinion rather than right/wrong.))

MIRI-specific:

  • MIRI dramatically weakens its arguments by focusing on special-case scenarios because those special-case situations are personally appealing to a few of its sponsors. Recursively self-improving Singularity-style AI is very dangerous... and it's several orders of complexity more difficult to describe that danger, while even minimally self-improving AI still has the potential to be an existential risk, requires many fewer leaps to discuss, and leads to similar concerns anyway.

  • MIRI's difficulty providing a coherent argument for its value to predisposed insiders is more worrying than its difficulty working with outsiders, or even than its actual value. Note: that's a value of "difficulty working with outsiders" that assumes it takes over six to nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))

Comment author: Nornagest 27 October 2014 06:55:21PM 5 points

> Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Isn't this basically Goodhart's law?

Comment author: gattsuru 28 October 2014 12:11:33AM 2 points

It's related. Goodhart's Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but it doesn't predict how that decoupling will occur. The common story of Goodhart's Law tells how the Soviet Union measured factory output in pounds of machinery and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would have changed if, say, there had not been very strict and severe punishments for falsifying machinery-weight production reports.

Sometimes this is a good thing: it's why, for example, companies don't instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly.
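The decoupling can be sketched in a toy simulation. This is my own illustrative example, not anything from the thread: the payoff numbers are invented, with "gaming" the metric assumed to be cheaper per unit of measured score than genuine work (as with pounds of machinery). A greedy optimizer that climbs the measured statistic ends up optimizing the pretense of the goal exclusively:

```python
def true_value(effort_real, effort_fake):
    # Only genuine work contributes to the actual goal.
    return effort_real

def proxy(effort_real, effort_fake):
    # The measured statistic also counts gamed output -- and gaming is
    # assumed to yield 3x the measured score per unit of effort
    # (heavier machinery, karma-bait puns, etc.).
    return effort_real + 3.0 * effort_fake

# Greedy hill-climbing on the proxy: at each step, spend one unit of
# budget wherever it raises the *measured* score the most.
real, fake = 0.0, 0.0
for step in range(100):
    if proxy(real + 1, fake) > proxy(real, fake + 1):
        real += 1
    else:
        fake += 1

print(proxy(real, fake))       # measured score climbs to 300.0
print(true_value(real, fake))  # the actual goal stays at 0.0
```

Under these assumptions the optimizer never does any genuine work at all; making falsification costly (the strict Soviet punishments) amounts to lowering the gaming payoff below 1.0, at which point the same loop spends its whole budget on the real goal.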

That said, while I'm convinced that's the pattern, it's not the only one or even the most obvious one; most people seem to settle on different formalizations, and I can't find the evidence to demonstrate mine.