
Comment author: pjeby 23 July 2014 11:22:47PM 7 points [-]

This belongs in Discussion, not Main. It's barely connected to rationality at all. Is there some lesson we're supposed to take from this, besides booing or yaying various groups for their smartness or non-smartness?

Downvoted for being trivia on Main.

Comment author: pjeby 13 July 2014 08:11:20PM 42 points [-]

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go.

That's further than I go. Heck, what else is there, and why worry about whether you're going there or not?

Comment author: chaosmage 21 March 2014 09:40:12AM *  5 points [-]

I model procrastination entirely differently. When I procrastinate, I seem to be temporarily unaware of my priorities. Whatever I do instead of what I should be doing eats up my attentional resources and pushes out elements of my rational reasoning for why I should be doing what I should be doing.

And this is why forcing myself to go cognitively idle (e.g. five minutes of mindfulness meditation), which frees up attentional resources, helps me stop procrastinating. If procrastination was (always) caused by internal conflict, freeing up attentional resources shouldn't help, but it does.

My personal experience is that things that easily eat up a lot of attention, e.g. reddit, are much more likely to draw me into procrastination mode than highly rewarding things that do not need as much attention, e.g. masturbation.

Comment author: pjeby 22 March 2014 10:14:10PM *  3 points [-]

My personal experience is that things that easily eat up a lot of attention, e.g. reddit, are much more likely to draw me into procrastination mode

Are you sure it's not the reverse? i.e., that you procrastinate in order to "eat up" those attentional resources?

Data point: I'm on LW right now in order to not think about something that I'd otherwise have to think about right now. ;-)

Comment author: pjeby 17 March 2014 01:32:27AM 4 points [-]

Other possible expansions of "I should X", which I find applicable at various times:

  • I could X
  • I might like the results of doing X
  • I think it might be a good idea to X
  • I wish I wanted to X
  • I think [bad thing] will happen if I don't X
Comment author: jimrandomh 14 March 2014 11:42:56PM -2 points [-]

I just went ahead and made the Wikipedia page (as a stub article with no content except a sentence, a link to LW, and a link to establish notability). Please feel free to add content to it.

Comment author: pjeby 15 March 2014 05:37:47AM 7 points [-]

Please feel free to add content to it.

Please don't, then maybe it can get speedily deleted. There is nothing that gets Wikipedians more up in arms than other sites doing clueless advocacy campaigns like this one. It's viewed on a par with the way certain countries view human rights advocates discussing their "matters internal to their country", and with much better justification for doing so.

Remember that as soon as you add positive content to this page, you are simply creating the opportunity for other people to say negative things, backed up by even more citations than you had for the positive things. Then where are you? Smack dab in the middle of arguments-as-soldiers territory, that's where.

Repeat after me: if I live in a world where LessWrong is positively notable by Wikipedia's standards (not LessWrong's standards), then I want to know that. But if I live in a world where it isn't, then I want to know that, too.

Guess which world we actually live in?

Would we want Wikipedians to come over here and tell us what they think should be considered quality content on LessWrong? I don't think so.

Comment author: pjeby 13 March 2014 07:58:34PM 20 points [-]

The flipside of this is that the page will not necessarily say things about LW that we would want it to say about LW; rather, it will say about LW what can be cited, which means that you would have to somehow wrangle positive coverage from the mundane media. This seems a rather quixotic goal, at best.

In response to Failed Utopia #4-2
Comment author: pjeby 27 February 2014 07:28:42PM 2 points [-]

I can't believe it took me five years to think of making this comment, but judging from the thread, nobody else has thought of it either.

If Stephen's utility function actually includes a sufficiently high-weighted term for Helen's happiness -- and vice versa -- then both Stephen and Helen will accept the situation and be happy, as their partner would want them to be. They might still be angry that the situation occurred, and still want to get back together, not because of some sort of noble sacrifice to honor the symbolic or signaling value of love, but because they actually care about each other.

Ironically, the only comment so far that even comes close to considering Stephen's utility in relation to what's happening to Helen is one that proposes her increased happiness would cause him pain, which is not the shape I would expect from a utility function that can be labeled "love" in the circumstances described here.

None of that makes this a successful utopia, of course, nor do I suggest that Stephen is overreacting in the moment of revelation -- you can want somebody else to be happy, after all, and still grieve their loss. But, dang it, the AI is right: the human race will be happier, and there's nothing horrific about the fact that they'll be happier, at least to people whose utility function weights their own or others' happiness sufficiently highly in comparison to their preference to be happy in a different way.
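(To make the weighting explicit with a toy model -- the symbols here are mine, not anything from the story: write Stephen's utility as U_S = u_S + w * h_H, where u_S is how much he values his own circumstances, h_H is Helen's happiness, and w is the weight he puts on it. The claim above is just arithmetic: if w is large enough, the gain in h_H outweighs the drop in u_S, so U_S goes up overall, even while he grieves.)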

(Which of course means that this comment is actually irrelevant to the main point of the article, but it seemed to me that this was a point that should still be raised: the relevance of others' happiness as part of one's utility function gets overlooked often enough in discussions here as it is.)

In response to White Lies
Comment author: shware 08 February 2014 05:53:03PM *  38 points [-]

I find it takes a great deal of luminosity in order to be honest with someone. If I am in a bad mood, I might feel that it's my honest opinion that they are annoying, when in fact what is going on in my brain has nothing to do with their actions. I might have been able to like the play in other circumstances, but was having a bad day, so flaws I might otherwise have been able to overlook were magnified in my mind, etc.

This is my main fear with radical honesty, since it seems to promote thinking that negative thoughts are true just because they are negative. The reasoning goes 'I would not say this if I were being polite, but I am thinking it, therefore it is true', without realizing that your brain can make your thoughts more negative than the truth just as easily as it can make them more positive than the truth.

In fact, saying you enjoyed something you didn't enjoy, and signalling enjoyment with the appropriate facial muscles (smiling etc.), can improve your mood by itself, especially if it makes the other person smile.

Many intelligent people get lots of practice pointing out flaws, and it is possible that this trains the brain into a mode where one's first thoughts on a topic will be critical regardless of the 'true' reaction. If your brain automatically looks for flaws in something and then a friend asks your honest opinion, you would tell them the flaws; but if you look for things to compliment, your 'honest' opinion might be different.

tl;dr: honesty is harder than many naively think, because our brains are not perfect reporters of their own state, and even if they were, good luck explaining your inner feelings about something across the inferential distance. Better to just adjust all your reactions slightly in the positive direction to reap the benefits of happier interactions (but only slightly; don't say you liked activities you loathed, or you'll be asked back -- say they were OK but not your cup of tea, etc.)

In response to comment by shware on White Lies
Comment author: pjeby 13 February 2014 11:55:59PM 11 points [-]

This is my main fear with radical honesty, since it seems to promote thinking that negative thoughts are true just because they are negative. The reasoning goes 'I would not say this if I were being polite, but I am thinking it, therefore it is true', without realizing that your brain can make your thoughts more negative than the truth just as easily as it can make them more positive than the truth.

My own (very limited) observation of trying to be radically honest has been that until I first say (or at least admit to myself) the reaction of annoyance, I can't become aware of what lies beyond it. If I'm angry at my wife because of something else that happened to me, I usually won't know that it's because of something else until I first express (even just to myself) that I am angry at my wife.

Until I actually tried being honest about such things, I didn't know this, and practicing such expression seemed beneficial in increasing my general awareness of thoughts and emotions in the present or near-present moment. I don't even remotely attempt to practice radical honesty even in my relationship with my wife, but we've both definitely benefited from learning to express what we feel... even if what we're feeling often changes in the very moment we express it. That change is kind of the point of the exercise: if you've completely expressed what you're resenting, it suddenly becomes much easier to notice what you appreciate.

I think that even Blanton's philosophy kind of misses or overstates the point: the point isn't to be honest about every damn thing, it's to avoid the sort of emotional constipation that keeps you stuck being resentful about things because you never want to face or admit that resentment, and so can never get past it.

Comment author: pjeby 05 January 2014 06:40:02PM 2 points [-]

From the original study:

Importantly, this result was not due to a general change in cognitive function, but rather a specific effect on a sensory task associated with a critical-period.

Comment author: Sophronius 26 December 2013 08:55:59PM 0 points [-]

Making jokes is fine, and I do like his style of writing for the most part (I have read earlier books of his). The issue is whether or not I can trust his claims at face value. If I read advice of his which is based on scientific claims, I don't want to be left wondering if perhaps his advice is terrible because the current body of academic knowledge points in the opposite direction. Giving self-help advice based on common sense/experience alone is not as useful to me if I have no reason to believe his common sense on the matter is any better than mine. In that case, I have to evaluate the trustworthiness of his self-help advice in terms of his overall rationality, in which case believing in nice things because they are nice to believe in (I am firmly on board with the Bayesian view of evidence, FYI) is a very bad sign. It means that if he says things like "you have to believe in yourself" I have to ask myself if he is just saying that because it sounds nice, or because it is known to be an effective strategy.

So basically, what I would like to know is how you determine that his advice is good. Is it basically a good summary of existing thought on the matter, and is that why you recommend it? Or is it that it just jibes well with your own intuition? Or does it fare well on objective measures of quality, such as accuracy of scientific claims?

Comment author: pjeby 27 December 2013 02:52:58AM 7 points [-]

I don't know what point you're really trying to make here; I find it irritating when people basically say, "I'm not convinced; convince me," because it puts social pressure on me to overstate my case. (It's also an example of the trap mentioned in HPMOR where continually answering someone's interrogatories leads to the impression of subordinate status.)

I don't agree with your arguments in Adams' case, for a number of reasons, but because of the adversarial position you're taking, an onlooker would likely confuse my attacks on your errors to be in support of Adams, which isn't really my intent.

As I said before, I support the book, not Adams' writing, beliefs, or opinions in general. It contains many practical points that are highly in agreement with the LW zeitgeist, backed with extensive study citations, along with many non-obvious and unique suggestions that appear to make more sense than the usual sort of suggestions.

Many of those suggestions could be thought of as rooted in a "shut up and multiply" frame of mind, like Adams' notion that it's worth using small amounts of "bad" or high-calorie foods to tempt one to eat more good foods -- like dipping carrots in ranch dressing or cooking broccoli in regular butter -- if one would otherwise not have eaten the "good" food at all.

This is the type of idea one usually doesn't see in diet literature, because it appears to be against a deontological morality of "good" and "bad" foods, whereas Adams is making a consequentialist argument.

Quite a lot of the book is like that, actually, in the sense that Adams presents ideas that should occur to people -- but usually don't -- due to biases of these sorts. He talks a lot about how a big purpose of the book is to give people permission to do these things, or to set an example. (He mentions that the example of one of his coworkers becoming published was a huge influence on his future path, and that his example of being published inspired coworkers at his next job. "Permission", in the sense of peer examples or explicit encouragement, is a powerful tool of influence.)

At this point, I think I've said all I'm willing to on this subthread. If you want to know more, read the book and look at the citations yourself. The book is physically available in hundreds of libraries, and electronically available from dozens of library systems, so you needn't spend a penny to look at the references (or advice) for yourself.
