Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Less Wrong on Twitter

16 Grognor 22 June 2012 03:51PM

List of members of Less Wrong who are on Twitter:


I Stand by the Sequences

14 Grognor 15 May 2012 10:21AM

Edit, May 21, 2012: Read this comment by Yvain.

Forming your own opinion is no more necessary than building your own furniture.

- Peter de Blanc

There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely. (This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion. Now that I have admitted this, you don't have to point it out a dozen times in the comments.) Even the controversial things, like:

  • I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
  • I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs.
  • I think mainstream science is too slow and we mere mortals can do better with Bayes.
  • I am a utilitarian consequentialist and think that if you allow someone to die through inaction, you're just as culpable as a murderer.
    • I completely accept the conclusion that it is worse to put dust specks in 3^^^3 people's eyes than to torture one person for fifty years. I came up with it independently, so maybe it doesn't count; whatever.
  • I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?).
  • "People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
  • Edit, May 27, 2012: You know what? I forgot one: Gödel, Escher, Bach is the best.

There are two tiny notes of discord on which I disagree with Eliezer Yudkowsky. One is that I'm not so sure as he is that a rationalist is only made when a person breaks with the world and starts seeing everybody else as crazy, and two is that I don't share his objection to creating conscious entities in the form of an FAI or within an FAI. I could explain, but no one ever discusses these things, and they don't affect any important conclusions. I also think the sequences are badly-organized and you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.

Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and I don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.

I write this because I'm feeling more and more lonely, in this regard. If you also stand by the sequences, feel free to say that. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.

Holden Karnofsky said:

I believe I have read the vast majority of the Sequences, including the AI-foom debate, and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.

I can't understand this. How could the sequences not be relevant? Half of them were created when Eliezer was thinking about AI problems.

So I say this, hoping others will as well:
I stand by the sequences.

And with that, I tap out. I have found the answer, so I am leaving the conversation.

Even though I am not important here, I don't want you to interpret my silence from now on as indicating compliance.

After some degree of thought and nearly 200 comment replies on this article, I regret writing it. I was insufficiently careful, didn't think enough about how it might alter the social dynamics here, and didn't spend enough time clarifying, especially regarding the third bullet point. I also dearly hope that I have not entrenched anyone's positions, turning them into allied soldiers to be defended, especially not my own. I'm sorry.

[link] TEDxYale - Keith Chen - The Impact of Language on Economic Behavior

2 Grognor 07 April 2012 05:20PM


The short version is that if the language you speak requires different verbs for the present and the future, it causes you to think about the future differently. Depending on the magnitude of the effect, this has important implications for construal level theory. If your language allows you to think about the future in Near mode, it may allow you to think about it more rationally.

Previous discussion on one of Keith Chen's papers here.

The Best Comments Ever

19 Grognor 18 March 2012 01:02PM

We already have an approximation of what this post's title describes: the most-voted-for lists for Main and Discussion. There are many problems with this metric, however, a subset of which are:

  • Both sections are cluttered. The top comments for Main are full of rationality quotes (which are better accessed elsewhere), and the top comments for Discussion are full of polls. (Can we please have a non-stupid way of putting polls in comments?)
  • Both sections are biased by exposure. A comment that a lot of people see generally gets more karma than comments which not many people see. As Less Wrong's growth rate increases, and as time goes by, this will increasingly bias these sections toward newer comments. Also, comments made by well-known LWers will be seen more often and correspondingly upvoted more.
  • Joke comments often get far more upvotes than insightful comments.
  • Entire threads can have weird voting patterns that don't match quality.
  • These sections list number of upvotes, so extremely controversial comments appear in between unanimously good ones. (May not actually be a problem at all.)
  • People use karma as behavior reinforcers, so comments like "I'll transcribe this" get lots of votes.
  • Dozens more little things I won't try to list
So instead of lists of comments that made a lot of Less Wrong accounts go "eh, have a karma", let's have a list of Less Wrong comments that are so good you'll never forget them, and you wish you could do more than merely upvote.

This list will have its own problems, like being biased toward memorable comments, but I hope the two lists will complement each other and whatever's missing isn't terribly important.

I got the idea from a note about how "Discussion is better than Main, comments are better than Discussion" because social norms prevent certain types of Main post from being made.

Anyway, here are some proposed rules, of which all but the first two are mere suggestions:
  • Don't post your own comments.
  • Don't post more than one comment in this thread. If you find more comments by other people that you want to add, edit yours to include them.
  • Say why you think the comment is good, even if it's only a line.
  • Bonus points for finding very old comments, or those from threads that got very little exposure.
  • No joke comments or comments that solely quote people.
  • Don't retrieve your comment from the list of top comments.

I like my idea. Let's see how well it works.

[Pile of links] Miscommunication

4 Grognor 21 February 2012 10:02PM

Humans obviously can't communicate.

-Peter de Blanc

Miscommunication is something we talk about on Less Wrong a lot, what with the illusion of transparency, the double illusion of transparency, and the 37 Ways Words Can Be Wrong sequence and better disagreement and levels of communication and mental metadata and this (not to mention Robin Hanson's Disagreement is Near/Far Bias). I thought about writing a new top-level post about it with some of the links I've found, but I figure they say all I could have said.

Here you go:

On Saying the Obvious

82 Grognor 02 February 2012 05:13AM

Related to: Generalizing from One Example, Connecting Your Beliefs (a call for help), Beware the Unsurprised

The idea of this article is something I've talked about a couple of times in comments. It seems to require more attention.

As a general rule, what is obvious to some people may not be obvious to others. Is this obvious to you? Maybe it was. Maybe it wasn't, and you thought it was because of hindsight bias.

Imagine a substantive Less Wrong comment. It's insightful, polite, easy to understand, and otherwise good. Ideally, you upvote this comment. Now imagine the same comment, only with "obviously" in front. This shouldn't change much, but it does. This word seems to change the comment in multifarious bad ways that I'd rather not try to list.

Uncharitably, I might reduce this whole phenomenon to an example of the mind projection fallacy. The implicit deduction goes like this: "I found <concept> obvious. Thus, <concept> is inherently obvious." The problem is that obviousness, like probability, is in the mind.

The stigma of "obvious" ideas has another problem in preventing things from being said at all. I don't know how common this is, but I've actually been afraid of saying things that I thought were obvious, even though ignoring this fear and just posting has yet to result in a poorly-received comment. (That is, in fact, why I'm writing this.)

Even tautologies, which are always obvious in retrospect, can be hard to spot. How many of us would have explicitly realized the weak anthropic principle without Nick Bostrom's help?

And what about implications of beliefs you already hold? These should be obvious, and sometimes are, but our brains are notoriously bad at putting two and two together. Luke's example was not realizing that an intelligence explosion was imminent until he read the I.J. Good paragraph. I'm glad he provided that example, as it has saved me the trouble of making one.

This is not (to paraphrase Eliezer) a thunderbolt of insight. I bring it up because I propose a few community norms based on the idea:

  • Don't be afraid of saying something because it's "obvious". It's like how your teachers always said there are no stupid questions.
  • Don't burden your awesome ideas with "obvious but it needs to be said".
  • Don't vote down a comment because it says something "obvious" unless you've thought about it for a while. Also, don't shun "obvious" ideas.
  • Don't call an idea obvious as though obviousness were an inherent property of the idea. Framing it as a personally obvious thing can be a more accurate way of saying what you're trying to say, but it's hard to do this without looking arrogant. (I suspect this is actually one of the reasons we implicitly treat obviousness as impersonal.)

I'm not sure if these are good ideas, but I think implementing them would decrease the volume of thoughts we cannot think and things we can't say.

[Transcript] Richard Feynman on Why Questions

61 Grognor 08 January 2012 07:01PM

I thought this video was a really good question dissolving by Richard Feynman. But it's in 240p! Nobody likes watching 240p videos. So I transcribed it. (Edit: That was in jest. The real reasons are because I thought I could get more exposure this way, and because a lot of people appreciate transcripts. Also, Paul Graham speculates that the written word is universally superior to the spoken word for the purpose of ideas.) I was going to post it as a rationality quote, but the transcript was sufficiently long that I think it warrants a discussion post instead.

Here you go:


[Transcript] Tyler Cowen on Stories

65 Grognor 17 December 2011 05:42AM

I was shocked, absolutely shocked, to find that Tyler Cowen's excellent TEDxMidAtlantic talk on stories had not yet been transcribed. It generated a lot of discussion in the thread about it where it was first introduced, so I went ahead and transcribed it. I added hyperlinks to background information where I thought it was due. Here you go:

