Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Viliam_Bur 22 October 2014 07:08:39PM *  3 points [-]

The idea that religion is primarily about belief is very popular among atheists.

Exactly. Belief itself is merely an opinion. I may believe the universe was created by a Great Lizard in the sky, but per se that doesn't mean anything; it only means I have a weird belief.

Some beliefs push people to action. If I believe the Great Lizard will punish me unless I eat a potato every day, I will pay attention to eating potatoes, and perhaps I will even vote for subsidies for potato producers. But that still is not a religion.

Religion is the social behavior connected with the belief system. They are mutually dependent. A part of the behavior is that you should study the belief, debate it with other believers, and maybe even try to convince non-believers. A part of the belief is that you should do the behavior, and make other people in your group do it. This is the central loop; and then there are additional behaviors and beliefs growing from it.

So, back to the original question -- even rationality in practice has this loop at its core. To believe rationally, there are some things you need to do: study, avoid mindkilling, et cetera. And to behave rationally, you have to know what is rational.

Sometimes you are in this loop without being aware of it; without reflecting on it. You were taught rational behavior; you were taught rational beliefs. But there is a risk that something will throw you out of the loop, either by a sudden change, or slowly step by step.

I think that LW rationality is about being in this loop and being aware of it. Not only do we happen not to be chronically mindkilled; we also actively try to avoid mindkilling, and we know that we are doing it to keep ourselves in the rationality loop. Not only do we happen to have relatively correct beliefs about the physical universe and about humans; we also actively try to understand them better, and we know that we are doing that to keep ourselves in the rationality loop. In short, it's not just that we happen to be rational at this moment; we are also trying to remain rational, and preferably to become even more rational.

Like a religion, reflective rationality is a self-preserving set of behaviors and beliefs. If you believe that X is good, you probably also believe that preserving X is good; so yes, I support attempts to make rationality self-preserving.

And now we are exploring what exactly it takes for a system of behaviors and beliefs to be self-preserving, on both the individual and the social level. Not only because we have the goal of "raising the sanity waterline", but also because these levels interact. We are a social species; the more of us are rational, the easier it becomes for each of us, because we can discuss our beliefs, help each other with our behaviors, solve common problems, learn from others' failures, and find companions for projects that require the cooperation of multiple people.

Comment author: pjeby 24 October 2014 05:37:21PM 1 point [-]

And to behave rationally, you have to know what is rational.

...for your goals and the situation and other variables not stated. "Rational" is not a function with only one variable, which is why "Rational X" posts are unwelcome.

In fact, treating "rational" as a one-argument function is precisely what would make rationalism appear to be a religion. If anything, one-place functions are what religion is famous for. ;-) Good(thing), Evil(thing), Christian(thing), un-Christian(thing). When we speak as though there exists Rational(thing), we should not be surprised if people think a new religion is being proposed.
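The arity point can be made concrete with a toy sketch (hypothetical function names and made-up utility numbers, purely to illustrate the type signature, borrowing the Great Lizard example from upthread):

```python
# One-place predicate, religion-style: a thing simply is Rational or it isn't.
def rational_one_place(action):
    return action in {"study", "exercise"}  # a fixed list of blessed actions

# Multi-argument version: the same action can be rational or irrational
# depending on the agent's goal and situation.
def rational(action, goal, situation):
    # Toy utility table; an action counts as "rational" here iff it scores
    # at least as well as every alternative for this goal and situation.
    utility = {
        ("eat a potato", "appease the Lizard", "believer"): 1.0,
        ("skip the potato", "appease the Lizard", "believer"): -1.0,
        ("eat a potato", "maximize health", "low-carb diet"): -0.5,
        ("skip the potato", "maximize health", "low-carb diet"): 0.5,
    }
    alternatives = {a for (a, g, s) in utility if g == goal and s == situation}
    score = utility[(action, goal, situation)]
    return all(score >= utility[(a, goal, situation)] for a in alternatives)

# Same action, opposite verdicts once the hidden arguments are made explicit:
print(rational("eat a potato", "appease the Lizard", "believer"))    # True
print(rational("eat a potato", "maximize health", "low-carb diet"))  # False
```

The one-place version has to smuggle the goal and situation into its definition somewhere; making them explicit arguments is the whole difference.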

Comment author: DavidLS 19 October 2014 07:05:18AM *  3 points [-]

Thank you. I had forgotten about that.

So let's say the two groups were, as you suggest:

  • Tracking food & exercise on Beeminder
  • Tracking food & exercise in a journal

Do you have any thoughts on what questions we should be asking about this product? Somehow the data collection, and the analysis once we have the time-series data, don't seem so hard... but the protocol and question design seem very difficult to me.

Comment author: pjeby 24 October 2014 05:24:26PM 2 points [-]

I wonder if there should be a group where they still get Beeminder's graph, but they don't pay anything for going off their road. (In order to test whether the pledge system is actually necessary.)

Comment author: pjeby 23 October 2014 03:19:14AM 3 points [-]

FWIW, there's one book -- "How To Do Everything and Be Happy" -- that gives this practice a name: "Boxing Day", arising from the author's experience of doing this on Boxing Day (the day after Christmas).

IIRC the author suggests having at least one a month, but on the other hand cautions against doing so many of them as to allow them to become routine.

He also offered three rules: 1) no pre-planning of what you'll do on the day, but 2) always schedule the day in advance, and 3) once scheduled, the day can be moved but not canceled entirely. (Otherwise, normal life pressures will gradually extinguish the practice.)

(I have not actually tried any of this myself as yet, however.)

Comment author: pjeby 23 October 2014 02:57:07AM 2 points [-]

Wow. This is the simplest/shortest explanation I've seen yet for how an AI can become unfriendly, without reference to Terminator-style outcomes.

Of course, per the illusion of transparency, it may be that it only seems so clear to me because of my long term exposure to the idea of FAI... Still, it looks like an important step in subdividing the problem, and one that I expect would be more intuitively obvious to outsiders: "we're studying ways to make sure the sorcerer's apprentice can turn the magic mop off." ;-)

Comment author: pjeby 23 September 2014 09:49:55PM 5 points [-]

I still act as if the thing is not true. Then, by some mysterious process, the thing will "click" and it feels like system 1 really gets it. After this the belief in the thing is reflected in behavior.

The "mysterious process" is the translation from an abstract concept to a specific experience (or set of experiences), either real or imaginary. That's why "fictional evidence" influences people's behavior more than abstract discussion, and why simulation games are better still.

Comment author: notsonewuser 09 September 2014 02:19:22AM 6 points [-]

This seems to be an extremely powerful method for handling decision fatigue - it's one of the few (maybe the only?) things I've seen on Less Wrong that I'm going to start applying immediately because of the potential I see in it. On the other hand, I doubt it would be so effective for me for handling social anxiety or other emotion-laden situations. A voice in my head telling me to do something that I already know I should do won't make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.

Comment author: pjeby 17 September 2014 05:02:55AM 4 points [-]

A voice in my head telling me to do something that I already know I should do won't make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.

A voice in your head isn't a simulation of what the idealized person would do. What you want is for your simulation to be the experience of observing that idealized person actually doing it. Otherwise, you are just thinking (system 2) instead of simulating (system 1).

To put it another way: a voice in your head is Far, a simulated experience is Near -- and Near has far more influence over your emotions (no pun intended).

Comment author: John_Maxwell_IV 21 August 2014 08:38:35PM 3 points [-]

I can't speak for Matt, but after he mentioned this in our conversation, I started reading the book The Goal, a "business novel" which is supposed to teach you the theory of constraints. I've found it to be a reasonably good read, but I'm not sure how broad its applicability is outside of manufacturing. If you don't work in manufacturing, I think you could plausibly get a large fraction of the value you'd get from reading The Goal by understanding the ideas in this Wikipedia article.

Comment author: pjeby 23 August 2014 05:11:44AM 3 points [-]

I'm not sure how broad its applicability is outside of manufacturing

Technically, the book The Goal only addresses one application of TOC, rather than the sum of TOC or its techniques. (Certainly, the five focusing steps are generally applicable problem-solving tools.)

Most of the TOC body of knowledge is actually a set of tools for doing systems analysis and planning in group settings, based on formal cause-effect logic represented in diagram form. The details of such tools can be more readily found in textbooks like Thinking for a Change or The Logical Thinking Process. (Neither is a novel, and both are written by people other than Goldratt. Personally I find Goldratt's novels the more enjoyable reads, but they necessarily leave out lots of details you need in order to do anything besides apply the specific generic solutions they derive.)

And TOC's Drum-Buffer-Rope scheduling model (as described in The Goal) is only one of TOC's "generic business solutions" -- there are others for other aspects of business, including project management, accounting, inventory management, and even marketing. They can generally be applied without needing to reconstruct them from first principles, though the business novels that introduce those solutions will generally show a portion of the working needed to derive them.

The two thinking tools, though, that I've personally found most valuable are the Prerequisite Tree and the Evaporating Cloud. The first one is basically the idea that you can make a plan simply by listing all the reasons why you can't do something, and then turning those around to identify subgoals. (Which you can then continue objecting to, recursively!) If you are as inclined to negative thinking as I am, this is no small thing. ;-)
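That recursive turn-each-objection-into-a-subgoal move can be sketched in a few lines (a toy illustration with made-up example data, not TOC's actual notation or tooling):

```python
def prerequisite_tree(goal, objections_for, max_depth=3):
    """Plan by listing the reasons a goal can't be done yet, turning each
    objection into a subgoal ("overcome: <obstacle>"), and recursing.

    objections_for maps a goal to its current obstacles; an empty list
    means nothing is left blocking that goal. max_depth bounds recursion.
    """
    if max_depth == 0:
        return (goal, [])
    return (goal, [prerequisite_tree("overcome: " + obs, objections_for, max_depth - 1)
                   for obs in objections_for(goal)])

# Hypothetical example data, purely for illustration:
obstacles = {
    "write the book": ["no outline", "no time to write"],
    "overcome: no outline": [],
    "overcome: no time to write": ["evenings taken by chores"],
    "overcome: evenings taken by chores": [],
}
plan = prerequisite_tree("write the book", lambda g: obstacles.get(g, []))
# plan == ("write the book",
#          [("overcome: no outline", []),
#           ("overcome: no time to write",
#            [("overcome: evenings taken by chores", [])])])
```

The leaves of the resulting tree are the objections nobody can object to further, i.e. the things you can actually start on.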

The second one is a method for surfacing and questioning your assumptions about the incompatibility of your conflicting goals (or of your goals and someone else's), and about the available means of satisfying your preferences. I have taught it to others as a creativity tool, because essentially that's what it is. By forcing you to clarify the conceptual relationships that lead to a conflict, it gives you a handful of specific points at which to question your assumptions.

(I have used the other tools on occasion as well, and adapted some of the generic business solutions to improve business situations before, but far less frequently.)

Comment author: pjeby 31 July 2014 03:33:53AM 1 point [-]

Congratulations. There is now a page on Wikipedia about how weird we all are, with the basilisk and our "weird and unconventional" ideas being front and center on it, since there's little in the way of secondary sources for anything else about the site.

Which, of course, is what I and several other people warned would happen, several months ago. Nice going.

Comment author: pjeby 28 July 2014 08:39:12PM 5 points [-]

What’s true is already true, and even though thinking about it being true makes me feel like I must be a bad person, it can’t cause me to be more of a bad person than I already am.

A lot of what I do about this lately amounts to tabooing "bad person" and discovering that most of my evidence for my System 1 definitions of "bad person" consists of things said and done by idiots with ulterior motives, which I uncritically absorbed before I was old enough to know better.

Our brains tend to link feelings of "bad person" to whatever made other people speak or act towards us or others as if they were bad, and then we just think that those things make us bad. A kind of self-applied fundamental attribution error, as though a single act can have that much weight in determining your character.

Comment author: pjeby 23 July 2014 11:22:47PM 7 points [-]

This belongs in Discussion, not Main. It's barely connected to rationality at all. Is there some lesson we're supposed to take from this, besides booing or yaying various groups for their smartness or non-smartness?

Downvoted for being trivia on Main.
