Note 1: I'm not very serious about the second part of the title; I just thought it sounded catchier. I'm a long-time lurker writing here for the first time, and it's not my intention to alienate anyone. Also, hi, nice to meet you. Please leave a comment to achieve a result of making me happy about you having left a comment. But let's get to the point.

I think you might be familiar with TED Talks. Recall the last time you watched one, and how you felt while doing it.

[BZRT BZRT sound of imagination working]

In my case, I often got the feeling that I was learning something valuable while watching most TED Talks. The speakers are (mostly) obviously passionate and intelligent people, speaking about important matters they care about a lot. (Granted, I probably haven't watched more than a dozen TED Talks in all my life, so my sample is quite small, but I don't think it's very unrepresentative.)

But at some point, I started asking myself afterwards:

So, what have I actually learned?

Which translates in my internal dialect to:

For each major point, give a one-sentence summary and at least one example of how I could apply it.

(Note 2: don't treat this "one-sentence summary" thing too strictly - of course it's only a reflex/shorthand that is useful in many situations, but not all. I like it because it's simple enough to be installable as a subconscious trigger-action.)

And afterwards I could not state anything actually useful that I had learned from those "fascinating" videos (with at most one or two small exceptions).

This is exactly what I mean by "Education as Entertainment".

It's getting the enjoyable *feeling* of learning without any real progress.

[DUM DUM DUM sound of mounting drama]

And now, what if you use this concept to look at rationality materials?

For me, reading Eliezer's core braindump (basically the content of "From AI to Zombies"), as well as the braindumps (in the form of blogs) of several other people from the LW community, had definite learning value.

I take notes when I read those, and I have an accountability system in place that enables me to make sure I follow up on all the advice I give to myself, test the new ideas, and improve/drop/replace/implement as needed.

However, when I read (a significant part of) the content produced by the "modern" community-powered LessWrong, I put its actual learning value at around the same level as TED Talks.

Or YouTube videos with cats - only those don't give me the *impression* that I'm learning something.

THE END

Please let me know what you think.

Final Note: Please take my remarks with a grain of salt. What I write is meant to inspire thoughts in you, not to represent my best factual knowledge about the LW community.

30 comments

Your point about TED talks is valid, but I feel you're defining "learning" way too narrowly.

Imagine a 12-year-old city boy who spent a couple of summer months at his grandparents' cottage in the woods. Exploring forests, figuring out relationships with the local kids, fishing, swimming, etc. Did he learn much that can be expressed as "a one-sentence summary" and was directly applicable to his city life? No, not really. Did he learn much? Yes, he did.

I think that "diffused" learning which involves acquiring experience and understanding contexts is very important. But then, I'm a fox :-)

You are of course right about word usage; "learning" can mean various things. Let's say:

1) inspiration and motivation,

2) fuzzy background knowledge and the right mindset,

3) explicit knowledge of a theoretical framework,

4) exploring the framework - filling gaps, building bridges, internalizing concepts,

5) training (theory, offline, field),

6) getting community support and help with problems,

7) organizing and simplification.

But then if I consider each of these separately, the current LessWrong seems lukewarm in all but 2), and I can easily point to (for me) more effective ways to accomplish the same goal for 1), 3), 4), 5) and 6):

1) reading thousands of pages of Eliezer's braindump in a short span of time,

3) working through math and physics textbooks,

4) working together with people who are currently at a similar level and are also learning about that particular topic (it does not matter whether someone else has done it before),

5) following individual people you can treat as your "jedi masters" (their methods and personalities need to resonate with you),

6) meeting folks in real life.

So I'm not denying 2) is important, but I think it rarely needs to be explicitly pursued, and also it comes with the danger of turning into a TED Talk-like disaster.

So maybe it would be a good idea to split/organise the content on LW depending on which part of "learning" it's supposed to help with?

Your list is missing a very important thing: experience. After all that learning, you need to go out and actually do stuff. And in the process of doing you discover many things which your previous education skipped or didn't pay enough attention to.

but I think it rarely needs to be explicitly pursued

Depends on the need, doesn't it?

It's easy to underestimate that need, too. Recognizing you don't understand a particular theorem is not hard, but realizing you need "fuzzy" background context is not trivial at all.

But it seems to me that a good way to get "fuzzy" background context is to bang your head against a variety of concrete problems (rather than explicitly being fuzzy). Example: you don't know how to approach a math problem, so you solve a lot of simpler/related/example problems, and that gives you a better "feel" and intuition for the objects involved. Then you come back to the original problem. I've seen this pattern a lot.

Can you give some examples of the opposite being true?

But it seems to me that a good way to get "fuzzy" background context is to bang your head against a variety of concrete problems (rather than explicitly being fuzzy).

I agree. There is no need to be fuzzy "explicitly". My point is that a lot of important learning will be excluded by the requirement for specific one-sentence summaries. For example, banging your head against a bunch of concrete problems :-) One-sentence summary: "My head hurts" :-D

I am not sure what you mean by "opposite" -- going from a more complex problem to simpler ones?

It's OK, since you agree on the "not being fuzzy explicitly" point, I don't have anything more to say about it.

Don't treat this "one-sentence summary" thing too strictly - it's kind of a reflex/shorthand that is useful in many situations, but not all. I like it because it's simple enough to be installable as a subconscious reaction.

I very much agree, but it seems to me that such learning as LW offers (or could offer) is much more of the explicit theoretically-summarizable kind than the implicit ineffable life-experience kind.

Fair point, though I'd like to add that even the "explicit kind" often needs a lot of context.

This was an entertaining post, thanks!

I think you have correctly pinpointed one reason for the decline of LW: the ratio of actually, practically useful stuff is low. But that is because the most useful stuff has already been said. It may be somewhat hard to find because it doesn't show up prominently as new content each day. And saying it twice in the same form isn't such a good idea (but see the sequence reruns; I got something from those).

What remains? Things that are not yet said. And these are mostly in the area of a) LW core topics, esp. AI safety, b) community stability (by this I mean some kind of echo chamber where people keep the spirit of the posts, if not the amazing insightfulness) and c) news - and probably some lesser d) and e).

There is still a lot to find e.g. in the media thread.

PS. I think you should remove the Downfall thing from the title. It will just net you downvotes.

OK, so let's look at the "ratio" problem: it only exists if you assume that every new addition lands in the same "pool" as the rest, right?

So the way to solve this would be to introduce some organisation. Anything rigid probably wouldn't work, but what could work is something I would call "organic organisation". For example:

  • make it easy for each user to leave a "trail" of concepts, ideas, articles, links etc. that were useful to him at different stages of progress in a given topic,

  • make it easy to follow trails of other users, esp. those bookmarked in the past,

  • (maybe later) use the data for auto-organizing the content, but in a way that adapts as the community discovers new paths that lead to the same knowledge and skills.

This way you don't have to say the same thing twice: you can describe your individual approach to a given topic once, and only add a new item/page when you can't find anything appropriate in the existing material.
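To make the "trail" idea a bit more concrete, here's a minimal sketch of what a trail record and a "who else passed through here" lookup might look like. Everything below (names, fields, the in-memory list) is a hypothetical illustration, not a proposal for LW's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrailItem:
    """One stop on a learning trail: a link plus a note on why it helped."""
    url: str
    note: str = ""

@dataclass
class Trail:
    """An ordered record of what one user found useful while learning a topic."""
    user: str
    topic: str
    items: List[TrailItem] = field(default_factory=list)

    def add(self, url: str, note: str = "") -> None:
        self.items.append(TrailItem(url, note))

def trails_through(trails: List[Trail], url: str) -> List[Trail]:
    """Find other trails that pass through the same item - a crude first
    step toward the 'auto-organizing' idea above."""
    return [t for t in trails if any(item.url == url for item in t.items)]
```

The point of the sketch is just that a trail is cheap to record (an ordered list per user per topic), and overlap between trails is cheap to query, which is all the "organic" part would need to get started.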

Anyway, in the meantime, the rationality blogs kinda work in this way, but they are "heavy" in the sense that they require a lot of resources from one author, so there aren't many of them.

I have the feeling that a lot of the highest quality writers have gradually hived off to their own blogs and websites.

LessWrong itself has sort of dropped below critical mass; interesting new updates are rare. There's still high-quality discussion, but it's often the same discussion.

One additional factor that I think influences people who read The Sequences and then LessWrong on a regular basis is the pace: reading The Sequences can deliver highly useful ideas very quickly, while lurking on LessWrong delivers them much more slowly. It's a move from several new useful ideas per day to one new useful idea per week or per month.

I don't think this means the weekly to monthly content on LessWrong is low in value; it just arrives at a slow pace (which is to be expected). I'll also mention that I've asked long-time members about this before, and they said the pace was actually always slow, with one great thread occurring once a month or so.

Granted, but I don't quite believe I have already extracted all the useful ideas from all the posts and comments so far published. It's just that it's too daunting a task to go find the rest of the good stuff, if it's distributed too sparsely.


Well, yes, but even if everything 'major' has already been written, there remains mastering the timely-recall part. I'm currently working on integrating the 'time-traveler's insight' (about how, a year from now, the petty things will seem insignificant and not worth needling my family about, among other things).

A lot of "learning" amounts to mental masturbation.

You read some novel idea. Get a hit of neurotransmitter reward. And keep clicking through Wikipedia. Or LessWrong. The dangerous part of this addiction is that learning is generally considered commendable.

Of course, most everyone fritters their life away on trivialities. There are worse trivialities. But wouldn't it be an amazing thing to be focused on actually doing things about which, after the fact, you'd say: "I'm really glad I did that"?

The problem with TED Talks is that there are no exercises for the student, and the lectures are too short for human memory, which is based on repetition. Also, in general, non-interactive videos are worse than books, because when reading a book the student can slow down and think about what they read.

The Sequences have lessons building on each other, together crossing a few large inferential distances. Most current posts are merely one step in a given direction.

Yes, yes, yes, that's a crucial insight about inferential distances right there. For rationality content that is useful to its readers, stringing items together is the most important part. Seemingly, no one is seriously doing this now (not even updating old paths to reflect better understanding or better methods of transferring knowledge...).

BZRT BZRT

Upvoted for accurate working imagination sounds.

For each major point, give a one-sentence summary and at least one example of how I can apply it.

I don't think that describes all learning well.

There are ideas that I have due to having been exposed to ideas A + B + C + D. None of A, B, C, or D is strongly valuable on its own, but together they are valuable.

For me, a lot of LW discussions are about exploring ideas that are interesting but for which I don't have a clear use case at the moment I'm speaking about them.

I may not have communicated the intention behind that sentence well enough. If you are exposed to an abstract idea A, and you have no use cases for it, it's totally OK to invent a fictional scenario in which it could be useful, or other ideas it could go well with. I still think there needs to be something.

The important thing, it seems to me, is not only to consume knowledge passively but to do something active with it. When learning calculus it's alright to simply focus on doing calculus problems without thinking about the real-life problems that calculus can solve. On the other hand, simply reading an article about calculus without doing any problems won't get you far.

For myself, a lot of my active engagement with ideas on LW is by writing comments.

Sure, that's the whole point - my claim is that to get to the stage where you do something actively, you need to pass through the stage where you grasp what an application of the thing would even look like. So I guess the post can be seen as pointing out it's useful to notice when this intermediate step is missing, if you care about the quality of your learning. Note that it's a heuristic that can be applied instantly and subconsciously, as opposed to "doing something actively" (which may require resources, waiting for external circumstances etc.).

One of the things that appealed to me when I found LW was a line I read which specifically mentioned that watching TED videos is not going to get us very far after a while. And I find the same thing with lots of the material I find online and even with some books or some of the TTC lectures. There is a lot of repetitive, shallow coverage of the same topics with a failure to go deeper. I can barely watch any documentaries that get shown on TV now because of how painfully basic the content is. I was hoping that LW would be better. I hope you are wrong about the downfall of LW.

Welcome! Glad to see you posting. It sounds like you are describing the difference between instrumental and epistemic rationality. Epistemic stuff is super important as the foundation for instrumental improvements. I feel that a lot of LessWrong has historically been epistemic, which is very important. Of course, now that that exists, I agree that we would do well to encourage more instrumental ideas to be shared.

Would you be interested in sharing your instrumental ideas? Some good examples you mentioned above include asking the question: "so what have I actually learnt?" and taking notes as you read things.

What else do you do? Can you write a top level post about it?

Hmm, sure, my approach is definitely instrumental-rationality-oriented, but I value epistemology a lot and you won't find me complaining about it. As far as I can predict the experience of someone who has a pressing need to learn epistemic rationality efficiently and tries LW, they are going to be very frustrated (beyond the standard Sequences). Eliezer worked not only as an idea-adder, but also as an idea-distiller and sequence-stringer. So maybe it's just that the rest of LW engages in idea-adding only?

About my instrumental ideas: sure, I'm interested in sharing them, but because of excessive lurking I have built up quite some inferential distances in a few areas that are important to me. So for now I feel like it's easier to write about things that I do not know too much about... (It's actually a good meta-example of how "sequence-stringing" could be seen as the real "magic" behind teaching and learning rationality, and it's (maybe?) a separate vital skill not many people have?) I'm generally baffled about how to communicate any of this, especially the stuff related to the "rationality of happiness" - I guess mostly because I know this part would sound utterly uninteresting. Mostly: here's a bunch of methods that work not too bad, if you fine-tune them for a long time... here's some splitting of mental buckets to have more nuanced language... here's a few tricks I stole from various sources and tested empirically... here's my rough model of how to start success spirals of self-change by slowly building confidence and accountability, but who the hell knows how it really works; I only tested this on myself, so there may be dozens of other factors. You get the idea.

All this reminds me of how it typically goes when you try to talk to people about regulating sleep.

Problem 1: everyone is an expert.

Problem 2: there's no single method that works.

Problem 3: no method works instantly.

Problem 4: for anything to work, it needs to be fine-tuned for the individual, and it also depends on all the other factors, so you can't test these things in isolation.

Problem 5: hearing a description of a method that works does not seem to justify the effort, until you experience for yourself what the benefits are.

Problem 6: the benefits are spread over time, so it's hard to notice them even if they are big and obvious in the "big picture" view.

All of this basically applies to teaching/learning instrumental rationality.

Mostly: here's a bunch of methods that work not too bad

Not a terrible way to offer solutions.

about regulating sleep

I wrote a very long list of sleep maintenance suggestions to help. Not so

I really like lists as a way to gather the possible good and possible bad solutions to a problem, so long as people recognise it's a list of ideas, not an instruction manual or the answers. I would like to get around to writing about this: understanding that if this advice worked for someone, there was a way in which it worked; and considering whether there is a way to make it work for you can maybe help you find one too.

I remember reading through that list sometime in the past, and I wanted to point something out to you.

[Disclaimer: all of the below is per my current understanding. It is a strong opinion moderately held.]

Sleep regulation is an example of optimizing a highly non-linear and volatile system with a multi-dimensional parameter space.

And in this class of problems, listing various parameters is good only as a way to know what space we are trying to optimize over. But if you try to gather information about how useful each of those is, you are shooting yourself in the foot before you've even started.

If you hear a report of a method that worked for someone, it merely means it was the last missing piece to reach a local optimum.

In other words, this class of problems inherently does not have stable object-level solutions.

Edit: please tell me if what I'm saying sounds wrong to your ears; I'm afraid I've forgotten myself a little and ignored the possible inferential distances I might have here and there. So from my perspective this simply points to the idea of applying and testing some of the meta-level strategies that work in other contexts, like timeboxing imitations of various people, or upsetting the system on purpose to find a new local optimum, both of which may work better than a random walk on the parameter space.
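To illustrate the "upset on purpose" idea, here's a minimal sketch in Python. The objective is a toy stand-in with many local optima (sleep regulation obviously isn't one-dimensional or this well-behaved); all names and numbers here are hypothetical:

```python
import math
import random

def quality(x):
    """Toy stand-in for 'how well the system works': many local optima."""
    return math.sin(5 * x) - 0.1 * (x - 2) ** 2

def tune(x, steps=200, step_size=0.02):
    """Greedy local fine-tuning: accept a small tweak only if it helps.
    This reliably climbs to the nearest local optimum and then stalls."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if quality(candidate) > quality(x):
            x = candidate
    return x

def upset_and_retune(x, upsets=20, jump=1.5):
    """Deliberately 'upset the system' with a big jump, re-tune locally,
    and keep the result only if the new local optimum is better."""
    x = tune(x)
    for _ in range(upsets):
        candidate = tune(x + random.uniform(-jump, jump))
        if quality(candidate) > quality(x):
            x = candidate
    return x

random.seed(0)
print("local tuning only:", round(quality(tune(0.0)), 3))
print("with deliberate upsets:", round(quality(upset_and_retune(0.0)), 3))
```

The contrast with a pure random walk is that each big jump is followed by local re-tuning and kept only when it lands in a better basin, which matches the "find a new local optimum" framing above.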

does not have stable object-level solutions.

As I said, it's a viable strategy; and as a step in the process, understanding why a piece of advice is applicable can help you in applying it.

Example: the advice "spend less time organising and just get down to it" (offered to me by a student who was borderline OCD and enjoyed the scheduling side of things).

I looked at this advice and realised it is really great advice (for her, or others in her position) for people who spend too much time organising, but entirely unhelpful for me, who spends roughly zero time organising myself. By understanding the reason why (as you said, "a method that worked for someone... to reach a local optimum"), you can better plan and try to apply solutions to your own situation. (I appear to be strongly agreeing with you.)