
Comment author: pjeby 23 September 2014 09:49:55PM 5 points [-]

I still act as if the thing is not true. Then, by some mysterious process, the thing will "click" and it feels like system 1 really gets it. After this the belief in the thing is reflected in behavior.

The "mysterious process" is the translation from an abstract concept to a specific experience (or set of experiences), either real or imaginary. That's why "fictional evidence" influences people's behavior more than abstract discussion, and why simulation games are better still.

Comment author: notsonewuser 09 September 2014 02:19:22AM 6 points [-]

This seems to be an extremely powerful method for handling decision fatigue - it's one of the few (maybe the only?) things I've seen on Less Wrong that I'm going to start applying immediately because of the potential I see in it. On the other hand, I doubt it would be so effective for me for handling social anxiety or other emotion-laden situations. A voice in my head telling me to do something that I already know I should do won't make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.

Comment author: pjeby 17 September 2014 05:02:55AM 4 points [-]

A voice in my head telling me to do something that I already know I should do won't make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.

A voice in your head isn't a simulation of what the idealized person would do. What you want your simulation to be is the experience of observing that idealized person actually doing it. Otherwise, you are just thinking (system 2) instead of simulating (system 1).

To put it another way: a voice in your head is Far, a simulated experience is Near -- and Near has far more influence over your emotions (no pun intended).

Comment author: John_Maxwell_IV 21 August 2014 08:38:35PM 3 points [-]

I can't speak for Matt, but after he mentioned this in our conversation, I started reading the book The Goal, a "business novel" which is supposed to teach you the theory of constraints. I've found it to be a reasonably good read, but I'm not sure how broad its applicability is outside of manufacturing. If you don't work in manufacturing, I think you could plausibly get a large fraction of the value you'd get from reading The Goal by understanding the ideas in this Wikipedia article.

Comment author: pjeby 23 August 2014 05:11:44AM 3 points [-]

I'm not sure how broad its applicability is outside of manufacturing

Technically, the book The Goal only addresses one application of TOC, not the full body of TOC or its techniques. (Certainly, the five focusing steps are generally applicable problem-solving tools.)

Most of the TOC body of knowledge is actually a set of tools for doing systems analysis and planning in group settings, based on formal cause-effect logic represented in diagram form. The details of such tools can be more readily found in textbooks like Thinking for a Change or The Logical Thinking Process. (Neither is a novel, and both are written by people other than Goldratt. Personally I find Goldratt's novels the more enjoyable reads, but they necessarily leave out lots of details you need in order to do anything besides apply the specific generic solutions they derive.)

And TOC's Drum-Buffer-Rope scheduling model (as described in The Goal) is only one of TOC's "generic business solutions" -- there are others for other aspects of business, including project management, accounting, inventory management, and even marketing. They can generally be applied without needing to reconstruct them from first principles, though the business novels that introduce those solutions will generally show a portion of the working needed to derive them.

The two thinking tools, though, that I've personally found most valuable are the Prerequisite Tree and the Evaporating Cloud. The first one is basically the idea that you can make a plan simply by listing all the reasons why you can't do something, and then turning those around to identify subgoals. (Which you can then continue objecting to, recursively!) If you are as inclined to negative thinking as I am, this is no small thing. ;-)
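
For concreteness, here's that obstacle-to-subgoal loop as a toy Python sketch. It's my own illustration, not anything from the TOC literature, and `list_obstacles` is a hypothetical stand-in for whatever process you use to voice your objections:

```python
# Hypothetical sketch of the Prerequisite Tree loop: every "reason I
# can't" becomes a subgoal, which can itself be objected to, recursively.

def build_prerequisite_tree(goal, list_obstacles, max_depth=3):
    """Turn each obstacle blocking `goal` into a subgoal, then recurse."""
    if max_depth == 0:
        return {"goal": goal, "subgoals": []}
    return {
        "goal": goal,
        "subgoals": [
            build_prerequisite_tree("overcome: " + obstacle,
                                    list_obstacles, max_depth - 1)
            for obstacle in list_obstacles(goal)
        ],
    }

# Toy usage: one objection, which itself raises no further objections.
def toy_obstacles(goal):
    return ["no quiet time to work"] if goal == "write the report" else []

print(build_prerequisite_tree("write the report", toy_obstacles))
```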

The second one is a method for surfacing and questioning your assumptions about the incompatibility of your own (or your own and someone else's) conflicting goals, and about the available means of satisfying your preferences. I have taught it to others as a creativity tool, because essentially that's what it is. By forcing you to clarify the conceptual relationships that lead to a conflict, it gives you a handful of specific points to question your assumptions with.
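
To make the structure concrete: the cloud is usually drawn as five boxes (a shared objective, two requirements, and two conflicting prerequisites), with an assumption behind each arrow. Here's a minimal Python sketch of that structure; the field names and the example conflict are my own illustration, not official TOC notation:

```python
# Hypothetical sketch of an Evaporating Cloud: five boxes plus the
# assumptions behind each arrow, which are the points to challenge.

from dataclasses import dataclass, field

@dataclass
class EvaporatingCloud:
    objective: str        # A: the shared goal both sides want
    requirement_b: str    # B: what one side needs to get A
    requirement_c: str    # C: what the other side needs to get A
    prerequisite_d: str   # D: action believed necessary for B
    prerequisite_d2: str  # D': action believed necessary for C; conflicts with D
    assumptions: dict = field(default_factory=dict)  # arrow -> belief behind it

    def points_to_challenge(self):
        """Invalidating any one assumption 'evaporates' the conflict."""
        return list(self.assumptions.items())

cloud = EvaporatingCloud(
    objective="ship a good product",
    requirement_b="move quickly",
    requirement_c="avoid defects",
    prerequisite_d="skip code review",
    prerequisite_d2="review every change",
    assumptions={
        "B -> D": "review always slows us down",
        "C -> D'": "only full review catches defects",
        "D <-> D'": "we can't do both at once",
    },
)
print(cloud.points_to_challenge())
```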

(I have used the other tools on occasion as well, and adapted some of the generic business solutions to improve business situations before, but far less frequently.)

Comment author: pjeby 31 July 2014 03:33:53AM 1 point [-]

Congratulations. There is now a page on Wikipedia about how weird we all are, with the basilisk and our "weird and unconventional" ideas being front and center on it, since there's little in the way of secondary sources for anything else about the site.

Which, of course, is what I and several other people warned would happen, several months ago. Nice going.

Comment author: pjeby 28 July 2014 08:39:12PM 5 points [-]

What’s true is already true, and even though thinking about it being true makes me feel like I must be a bad person, it can’t cause me to be more of a bad person than I already am.

A lot of what I do about this lately amounts to tabooing "bad person", and discovering that most of my evidence for my System 1 definitions of "bad person" amounts to things said and done by idiots with ulterior motives, which I uncritically absorbed before I was old enough to know better.

Our brains tend to link feelings of "bad person" to whatever made other people speak or act towards us or others as if they were bad, and then we just think that those things make us bad. A kind of self-applied fundamental attribution error, as though a single act can have that much weight in determining your character.

Comment author: pjeby 23 July 2014 11:22:47PM 7 points [-]

This belongs in Discussion, not Main. It's barely connected to rationality at all. Is there some lesson we're supposed to take from this, besides booing or yaying various groups for their smartness or non-smartness?

Downvoted for being trivia on Main.

Comment author: pjeby 13 July 2014 08:11:20PM 44 points [-]

I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But that's as far as I go.

That's further than I go. Heck, what else is there, and why worry about whether you're going there or not?

Comment author: chaosmage 21 March 2014 09:40:12AM *  5 points [-]

I model procrastination entirely differently. When I procrastinate, I seem to be temporarily unaware of my priorities. Whatever I do instead of what I should be doing eats up my attentional resources and pushes out elements of my rational reasoning for why I should be doing what I should be doing.

And this is why forcing myself to go cognitively idle (e.g. five minutes of mindfulness meditation), which frees up attentional resources, helps me stop procrastinating. If procrastination were (always) caused by internal conflict, freeing up attentional resources shouldn't help, but it does.

My personal experience is that things that easily eat up a lot of attention, e.g. reddit, are much more likely to draw me into procrastination mode than highly rewarding things that do not need as much attention, e.g. masturbation.

Comment author: pjeby 22 March 2014 10:14:10PM *  3 points [-]

My personal experience is that things that easily eat up a lot of attention, e.g. reddit, are much more likely to draw me into procrastination mode

Are you sure it's not the reverse? i.e., that you procrastinate in order to "eat up" those attentional resources?

Data point: I'm on LW right now in order to not think about something that I'd otherwise have to think about right now. ;-)

Comment author: pjeby 17 March 2014 01:32:27AM 4 points [-]

Other possible expansions of "I should X", which I find applicable at various times:

  • I could X
  • I might like the results of doing X
  • I think it might be a good idea to X
  • I wish I wanted to X
  • I think [bad thing] will happen if I don't X

Comment author: jimrandomh 14 March 2014 11:42:56PM -2 points [-]

I just went ahead and made the wikipedia page (as a stub article with no content except a sentence, a link to LW and a link to establish notability). Please feel free to add content to it.

Comment author: pjeby 15 March 2014 05:37:47AM 7 points [-]

Please feel free to add content to it.

Please don't; then maybe it can get speedily deleted. There is nothing that gets Wikipedians more up in arms than other sites running clueless advocacy campaigns like this one. It's viewed on a par with the way certain countries view human rights advocates discussing "matters internal to their country", and with much better justification for doing so.

Remember that as soon as you add positive content to this page, you are simply creating the opportunity for other people to say negative things, backed up by even more citations than you had for the positive things. Then where are you? Smack dab in the middle of arguments-as-soldiers territory, that's where.

Repeat after me: if I live in a world where LessWrong is positively notable by Wikipedia's standards (not LessWrong's standards), then I want to know that. But if I live in a world where it isn't, then I want to know that, too.

Guess which world we actually live in?

Would we want Wikipedians to come over here and tell us what they think should be considered quality content on LessWrong? I don't think so.
