Abuse of Productivity Systems

15 SquirrelInHell 27 March 2016 05:32AM

Example 1.

Bob's dream had always been to learn French, and to live in France after retiring early from his high-paying management job.

Recently, he used the flashcard program Anki to help him learn French, and had considerable success with it.

In fact, he learned French to complete fluency in around a year and a half, and he attributes much of this result to using Anki effectively.

His habit of studying with Anki every day is very strong, and he always does it first thing in the morning without fail.

Now he thinks, "If I could do it with French, what's stopping me from learning, like, 10 languages in the next 15 years? It'd be so cool!"

And so, after his daily French workload has dropped significantly, he downloads and imports a huge database of German flashcards.

Pretty soon, he notices that he is losing his motivation to learn every morning.

"What is wrong with me? Am I becoming lazy?", he thinks, and pushes himself to work hard.

Learning gradually becomes more and more unpleasant.

Bob's resentment builds, and soon becomes too great for him to overcome.

When he finally gives up on Anki altogether, it comes as a huge relief.

 

Example 2.

Sally is very satisfied with how the pomodoro technique helps her with productivity.

She has several projects she wants to work on, and using pomodoros gives her a well-defined framework for dividing her time among them.

Having a more tangible measure of progress (the number of pomodoros done) provides pleasant reinforcement, and she has reduced her procrastination to negligible levels.

In the meantime, she is considering a move to another city, and wants to look for a new job.

With dismay, she discovers that when it comes to looking for jobs, she is not procrastination-free.

It doesn't fit with her new image of herself as a procrastination-free person.

Sally thinks about the problem, and comes up with a great idea: she is going to use pomodoros to search for jobs!

She decides to spend one pomodoro every day browsing job offers on the Internet.

The next day, when she remembers the plan, she feels slight displeasure and annoyance, but pushes those feelings away quickly.

She sets the pomodoro timer and opens her web browser.

25 minutes later, the timer rings and she realizes that she has procrastinated away most of the pomodoro.

This is the first time it has ever happened to her.

But she keeps up her positive attitude, and tries a second time.

She is able to do a little bit more, but it's still nothing like the concentrated work she had been getting out of her pomodoros before.
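As an aside, the structure of the sessions Sally relies on is simple enough to sketch in a few lines of Python. This is a minimal illustration, not anything from the post itself: the 25/5-minute durations and the longer break after every fourth pomodoro follow the common convention, and the function name is my own.

```python
def pomodoro_schedule(cycles, work_min=25, short_break_min=5,
                      long_break_min=15, cycles_per_long_break=4):
    """Return a list of (phase, minutes) pairs for a pomodoro session.

    Common convention: 25-minute work intervals separated by 5-minute
    breaks, with a longer break after every 4th pomodoro.
    """
    schedule = []
    for i in range(1, cycles + 1):
        schedule.append(("work", work_min))
        if i < cycles:  # no trailing break after the last pomodoro
            if i % cycles_per_long_break == 0:
                schedule.append(("long_break", long_break_min))
            else:
                schedule.append(("short_break", short_break_min))
    return schedule

# A morning of four pomodoros, like Sally's:
for phase, minutes in pomodoro_schedule(4):
    print(f"{phase}: {minutes} min")
```

The point of the stories, of course, is not the timer mechanics but what happens when a system like this is pointed at the wrong task.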

 

Questions

What mistakes are Bob and Sally making?

What would you change to turn those mistakes into successes?

(Note: the definition of "success" is broad here. If Bob can decide to not learn German with zero wasted motion, it's a success.)

Is there something in your life that has failed in a similar manner?

To what other domains does this generalize?

How It Feels to Improve My Rationality

5 SquirrelInHell 18 March 2016 09:59AM

Note: this started as a comment reply, but I thought it got interesting (and long) enough to deserve its own post.

Important note: this post is likely to spark some extreme reactions, because of how human brains are built. I'm including warnings, so please read this post carefully and in the order written, or don't read it at all.

I'm going to attempt to describe my subjective experience of progress in rationality.

Important edit: I learned from the responses to this post that there's a group of people with whom this resonates pretty well, and there's also a substantial group with whom it does not resonate at all, to the degree that they don't know if what I'm saying even makes sense or correlates with rationality in any meaningful way. If you find yourself in the second group, please notice that trying to verify whether I'm doing "real rationality" or not is not a way to resolve your doubts. There is no reason why you would need to feel the same. It's OK to have different experiences. How you experience things is not a test of your rationality. It's also not a test of my rationality. All in all, by publishing this and reading the comments, I've found out some interesting things about how certain clusters of people tend to think about this :)

Also, I need to mention that I am not an advanced rationalist, and my rationality background is mostly reading Eliezer's sequences and self-experimentation.

I'm still going to give this a shot, because I think it's going to be a useful reference for a certain level in rationality progress.

I even expect to find everything I write here silly and stupid some time later.

But that's the whole point, isn't it?

What I can say about how rationality feels to me now is going to be pretty irrelevant pretty soon.

I also expect a significant part of readers to be outraged by it, one way or the other.

If you think this has no value, maybe try to imagine a rationality-beginner version of you that would find a description such as this useful. If only as a reference that says: yes, there is a difference. No, rationality does not feel like a lot of abstract knowledge that you remember from a book. Yes, it does change you deeply, probably more deeply than you suspect.

In case you want to downvote this, please do me a favour and write a private message to me, suggesting how I could change this so that it stops offending you.

Please stop any feeling of wanting to compare yourself to me or anyone else, or to prove anyone's superiority or inferiority.

If you can't do this, please bookmark this post and return to it some other time.

...

...

Ready?

So, here we go. If you are free from againstness and competitiveness, you are welcome to read on, and feel free to tell me how this resonates, and how different it feels inside your own head and at your own level.


Part 1. Pastures and fences

Let's imagine a vast landscape, full of vibrant greenery of various sorts.

Now, my visualization of object-level rationality is staking out territories, like small parcels of a pasture surrounded by fences.

Inside the fences, there tends to be more neat grass than anything else. It's never perfect, but when I keep working on an area, it slowly improves. If neglected, weeds will start growing back sooner or later.

Let's also imagine that the ideas and concepts I generalize as I go about my work become seeds of grass, carried by the wind.

What the work feels like is that I'm running back and forth between the object level (my pastures) and the meta level (scattering seeds).

As a result of this running back and forth, I'm able to stake out new territories, or improve previous ones to have better coverage and fewer weeds.

The progress I make in my pastures feeds back into interesting meta-level insights (more seeds carried by the wind), which in turn tend to spread to new areas even when I'm not helping with this process on purpose.

My pastures tend to concentrate in clusters, in areas that I have worked on the most.

When I have lots of activity in one area, the large amounts of seeds generated (meta-techniques) are more often carried to other places, and at those times I experience the most change happening in other areas, especially new and unexplored ones.

However, even when I can reuse some of my meta-ideas (seeds), to have a nice and clear territory I still need to go over there and put in the manual work of clearing it up.

As I'm getting better and more efficient at this, it becomes less work to gain new territories and improve old ones.

But there's always some amount of manual labor involved.


Part 2. Tells of epistemic high ground

Disclaimer: not using this for the Dark Side requires a considerable amount of self-honesty. I'm only posting this because I believe most of you folks reading this are advanced enough not to shoot yourselves in the foot by, e.g., using this in arguments.

Note: If you feel the slightest urge to flaunt your rationality level, pause and catch it. (You are welcome.) Please do not start any discussion motivated by this.

So, what clues do I tend to notice when my rationality level is going up, relative to other people?

Important note: This is not the same as "how do I notice if I'm mistaken" or "how do I know if I'm on the right path". These are things I notice after the fact and judge to be correlates, but they are not to be used to choose a direction in learning or in sorting out beliefs. I wrote the list below exactly because it is the less talked about part, and it's fun to notice things. Somehow everyone seems to have thought this is more than I meant it to be.

Edit: check Viliam's comment for some concrete examples that make this list better.

In a particular field:

  • My language becomes more precise. Where others use one word, I now use two, or six.
  • I see more confusion all around.
  • Polarization in my evaluations increases. E.g. two sensible sounding ideas become one great idea and one stupid idea.
  • I start getting strong impulses to educate people who I now see are clearly confused, and who could be saved from their mistakes in one minute if I could only tell them what I know... (spoiler alert: this doesn't work).

Rationality level in general:

  • I stop having problems in my life that seem to be common all around, and that I used to have in the past.
  • I forget how it is to have certain problems, and I need to remind myself constantly that what seems easy to me is not easy for everyone.
  • Writings of other people move forward on the path from intimidating to insightful to sensible to confused to pitiful.
  • I start to intuitively discriminate between rationality levels of more people above me.
  • Intuitively judging someone's level requires less and less data: from reading a book, to reading ten articles, to reading one article.

Important note: although I am aware that my mind automatically estimates rationality levels of various people, I very strongly discourage anyone (including myself) from ever publishing such scores/lists/rankings. If you ever have an urge to do this, especially in public, think twice, and then think again, and then shut up. The same applies to ever telling your estimates to the people in question.

Note: Growth mindset!


Now let's briefly return to the post I started out replying to. Gram_Stone suggested that:

You might say that one possible statement of the problem of human rationality is obtaining a complete understanding of the algorithm implicit in the physical structure of our brains that allows us to generate such new and improved rules.

After everything I've seen so far, my intuition suggests that Gram_Stone's idealized method wouldn't work from inside a human brain.

A generalized meta-technique could become one of the many seeds that help me in my work, or even a very important one that would spread very widely, but it still wouldn't magically turn raw territory into perfect grassland.


Part 3. OK or Cancel?

The closest I've come to Gram_Stone's ideal is when I witnessed a whole cycle of improving in a certain area being executed subconsciously.

It was only brought to my full attention when an already polished solution in verbal form popped into my head when I was taking a shower.

It felt like a popup on a computer screen that had "Cancel" and "OK" buttons, and after I chose OK the rest continued automatically.

After this single short moment, I found that a subconscious habit was already in place that changed my previous thought patterns, and it proved to work reliably long after.


That's it! I hope I've left you better off for reading this than for not reading it.

Meta-note about my writing agenda: I've developed a few useful (I hope) and unique techniques and ideas for applied rationality, which I don't (yet) know how to share with the community. To get that chunk of data out of me, I need some continued engagement from readers who will give me feedback and generally show interest (this needs to be done slowly and in the right order, so I would have trouble persisting otherwise). So for now I'm writing separate posts noncommittally, to test reactions and (hopefully) gather some folks who could support me in the process of communicating my more developed ideas.

How I infiltrated the Raëlians (and was hugged by their leader)

15 SquirrelInHell 16 March 2016 05:45AM

I was invited by a stranger I met on a plane, and actually went to a meeting of the Raëlians (known in some LW circles as "the flying saucer cult") in Okinawa, Japan. It was right next to Claude Vorilhon's home, and he came himself for the "ceremony" (?), dressed in a theatrical space-y white uniform, complete with a Jewish-style white cap on his head. When giving his "sermon" (?), he spoke in English and his words were translated into Japanese for the benefit of those who didn't understand. And yes, it's true that he talked with me briefly and then hugged me (I understand he does this with all newcomers, and it felt 100% fake to me). I then went on to eat lunch in an izakaya (a Japanese pub) with a group of around 15 members, who were all really friendly and pleasant people. They actually treated me to lunch, and afterwards someone gave me a ~20 minute ride to the town I wanted to be in, despite knowing they would never see me again.

If you have ever wondered how it is possible that a flying saucer cult has more members than EA, now is the time to learn something.

Note: I hope it's clear that I do not endorse creating cults, nor do I proclaim the EA community's inferiority. It hadn't even crossed my mind when I wrote the above line that any LWer would take it as a stab they needed to defend against. I'm merely pointing to the fact that we can learn from anything, whether it's good or bad, and encouraging a fresh discussion of this after I gathered some new data.

Let's do this as a Q&A session (I'm at work now so I can't write a long post).

Please ask questions in comments.

Education as Entertainment and the Downfall of LessWrong

9 SquirrelInHell 04 March 2016 02:06PM

Note 1: I'm not very serious about the second part of the title; I just thought it sounded catchier. I'm a long-time lurker writing here for the first time, and it's not my intention to alienate anyone. Also, hi, nice to meet you. Please leave a comment to achieve the result of making me happy about you having left a comment. But let's get to the point.

I think you might be familiar with TED Talks. Recall the last time you watched one, and how you felt while doing it.

[BZRT BZRT sound of imagination working]

In my case, I often got the feeling that I was learning something valuable while watching most TED Talks. The speakers are (mostly) obviously passionate and intelligent people, speaking about important matters they care about a lot. (Granted, I probably haven't watched more than a dozen TED Talks in all my life, so my sample is quite small, but I don't think it's very unrepresentative.)

But at some point, I started asking myself afterwards:

So, what have I actually learned?

Which translates in my internal dialect to:

For each major point, give a one-sentence summary and at least one example of how I could apply it.

(Note 2: don't treat this "one sentence summary" thing too strictly - of course it's only a reflex/shorthand that is useful in many situations, but not all. I like it because it's simple enough that it's installable as a subconscious trigger-action.)

And I could not state afterwards anything actually useful that I had learned from those "fascinating" videos (with at most one or two small exceptions).

This is exactly what I mean by "Education as Entertainment".

It's getting the enjoyable *feeling* of learning without any real progress.

[DUM DUM DUM sound of increasing dramatism]

And now, what if you use this concept to look at rationality materials?

For me, reading Eliezer's core braindump (basically the content of "Rationality: From AI to Zombies"), as well as the braindumps (in the form of blogs) of several other people from the LW community, had definite learning value.

I take notes when I read those, and I have an accountability system in place that enables me to make sure I follow up on all the advice I give to myself, test the new ideas, and improve/drop/replace/implement as needed.

However, when I read (a significant part of) the content produced by the "modern" community-powered-LessWrong, I classify its actual learning value at around the same level as TED Talks.

Or YouTube videos with cats, only those don't give me the *impression* that I'm learning something.

THE END

Please let me know what you think.

Final Note: Please take my remarks with a grain of salt. What I write is meant to inspire thoughts in you, not to represent my best factual knowledge about the LW community.
