Ben_LandauTaylor comments on Better Rationality Through Lucid Dreaming - Less Wrong

Post author: katydee 18 October 2013 08:48PM




Comment author: katydee 19 October 2013 09:37:01PM 2 points

I fully admit that I do not have strong outside-view evidence that this method will objectively improve your rationality-- if I did, I would post it. But many (most?) rationality techniques discussed here lack such evidence as well.

Anecdotally, I can say that it seems to have been quite effective for me and there are many inside-view elements pointing towards this as a strong method.

That may not be fully convincing, and I agree it's a problem. Indeed, one of the main reasons that I posted this is that I hope others will attempt the same or similar and we can get a broader picture of this space.

Comment author: Ben_LandauTaylor 20 October 2013 04:57:07AM 6 points

Anecdotally, I can say that it seems to have been quite effective for me and there are many inside-view elements pointing towards this as a strong method.

Can you give examples?

Comment author: katydee 21 October 2013 08:45:28PM 0 points

Sure, what sorts of examples are you looking for?

Comment author: Ben_LandauTaylor 22 October 2013 12:14:14AM 2 points

Here are examples of the sort I'm looking for:

Brienne's post

—Writing fiction has improved my rationality because, in writing about characters who don't know all the information I know, I've come to viscerally understand the distinction between map and territory.

—Surrounding myself with rationalists has improved my rationality because social incentives push me to actually do the things we all agree are good ideas.

How does lucid dreaming improve rationality? You've asserted that it does, but I don't know what relevant skills it trains, or how. (You mention the phrase "noticing confusion," but that's all I could find.)

Comment author: katydee 22 October 2013 12:19:18AM 6 points

Lucid dreaming has improved my rationality because one of the key skills of rationality is noticing that you are confused, and one of the key skills that can be used to induce lucid dreaming is noticing that you are confused.

Further, lucid dreaming gives me the opportunity to practice coming to the correct conclusion in spite of my brain's efforts to the contrary.

Further, lucid dreaming is an opportunity for deliberate practice with high aliveness.

Is any of the above not clear from the original post? If so, I should probably rewrite it-- the reason I asked what you meant is that I thought the above was apparent.

Comment author: Nornagest 22 October 2013 12:36:25AM 2 points

Further, lucid dreaming is an opportunity for deliberate practice with high aliveness.

Could you expand on "aliveness", please? I haven't heard the term before, and Google's mostly giving me obviously unrelated stuff mixed in with a bit of fluff that I don't trust.

Comment author: katydee 22 October 2013 12:43:35AM 5 points

Ack. Sorry, I thought that was fundamental to LW but I got my communities mixed up. It definitely merits a post of its own, which I'll put up within the week.

Comment author: [deleted] 22 October 2013 12:29:40PM 0 points

Is it related to EY's impression that CEOs of tech companies seem “more alive” than other people?

Comment author: katydee 22 October 2013 08:00:47PM 0 points

Not at all.

Comment author: Ben_LandauTaylor 22 October 2013 01:25:25AM -1 points

The first example is exactly the sort of thing I was hoping for—thanks! That clarifies what you meant in the original post. I'm not sure what the other two examples mean, probably because I know basically nothing about lucid dreaming. What are "your brain's efforts to the contrary"? How does lucid dreaming invoke deliberate practice? What is "high aliveness"? I expect this probably connects to something useful, but the inferential distance is too great for me to get anything from it.