
I see learning as very dependency-based. I.e., there are a bunch of concepts you have to know. Think of them as nodes. These nodes have dependencies: you have to know A, B, and C before you can learn D.

Spot on. This is a big problem in mathematics education; prior to university a lot of teaching is done without paying heed to the fundamental concepts. For example - here in the UK - calculus is taught well before limits (in fact limits aren't taught until students get to university).

Teaching is all about crossing the inferential distance between the student's current knowledge and the idea being taught. It's my impression that most people who say "you just have to practice" say so because they don't know how to cross that gap. You see this often with professors who don't know how to teach their own subjects because they've forgotten what it was like not knowing how to calculate the expectation of a perturbed Hamiltonian. I suspect that in some cases the knowledge isn't truly a part of them, so that they don't know how to generate it without already knowing it.

Projects are a good way to help students retain information (the testing effect) and also train appropriate recall. Experts in a field are usually experts because they can look at a problem and see where they should be applying their knowledge - a skill that can only be effectively trained by 'real world' problems. In my experience teaching A-level math students, the best students are usually the ones that can apply concepts they've learned in non-obvious situations.

You might find this article I wrote on studying interesting.

I'm struck with Dumbledore's ruthlessness

Actually, I think he was just following his own advice:

While survives any remnant of our kind, that piece is yet in play, though the stars should die in heaven. [...] Know the value of all your other pieces, and play to win.

All things considered I think it was the most compassionate choice he could have made.

No problem, some other things that come to mind are:

  • It's best to start the articles with a 'hook' paragraph rather than an image, particularly when the image only makes sense to the reader if they know what the article is about.
  • Caption your images always and forever.
  • This has been said before, but the title should make sense to an uninitiated reader. Furthermore, to make it more share-able, the title should set up an expectation of what the article is going to tell them. An alternative in this case could be: "What do people really think of you?"; or if you restructure the article, something like "X truths about what people think of you."
  • For popular outreach the inferential distance has to be as low as you can make it; if you can explain something instead of linking to it, do that.
  • Take a look at the most-shared websites (Upworthy, BuzzFeed and the like); you can learn a lot from their methodology.

(As a rule, using non-standard formatting when posting to LessWrong is a bad idea.)

There are some improvements you can make to increase cognitive ease, such as lowering your long-word count, avoiding jargon, and using fewer sentences per paragraph. I'd recommend running parts of your post (one paragraph at a time is best) through a clarity calculator to get a better idea of where you can improve.

You may also want to look into the concept of inferential distance.

Nice article, have a karma!

There's a lot of information there, so I'd suggest using this article as the basis for a four-part series, one post per area. The content is non-obvious, so having the extra space to break the inferential distance down into small steps - so that the conclusions are intuitive to non-rationalists - would be useful.

(As an aside, I suspect that writing for the CFAR blog is right now reasonably high impact for the time investment. Personally I found CFAR's apparent radio silence since September unnerving, and it's possible that it was part of the reason the matching fundraiser struggled. Despite Anna's excellent post on CFAR's progress, the lack of activity may have caused people to feel as though CFAR was stagnating and thus be less inclined to part with their money on a System 1 level.)

Donated $180.

I was planning on donating this money, my yearly 'charity donation' budget (it's meager - I'm an undergraduate), to a typical EA charity such as the Against Malaria Foundation; a cash transaction for the utilons, warm fuzzies and general EA cred. However, the above has forced me to reconsider this course of action in light of the following:

  • The possibility that CFAR may not receive sufficient future funding. CFAR's expenditure last year was $510k (ignoring non-staff workshop costs that are offset by workshop revenue) and their current balance is something around $130k. Without knowing the details, a similarly sized operation this year might therefore require something like $380k in donations (a ballpark guesstimate, don't quote me on that). The winter matching fundraiser has the potential to fund $240k of that, so a significant undershoot would put the organization in a precarious position.

  • A world that has access to a well-written rationality curriculum over the next decade has a significant advantage over one that doesn't. I already accept that 80,000 Hours is a high-impact organization, and they also work by acting as an impact multiplier for individuals. Given that rationality is an exceptionally good impact multiplier, I must accept that CFAR existing is much better than it not existing.

  • While donations to a sufficiently funded CFAR are most likely much lower utility than donations to AMF, donations to ensure CFAR's continued existence are exceptionally high utility. For comparison (as great as AMF is), diverting all donations from Wikipedia to AMF would be a terrible idea, as would overfunding Wikipedia itself. The world gets a large amount of utility out of the existence of at least one Wikipedia, but not a great deal of marginal utility from an overfunded Wikipedia. By my judgement the same applies to CFAR.

  • CFAR isn't a typical EA cause. This means that if I don't donate to keep AMF going, another EA will. However, if I don't donate to keep CFAR going, there's a reasonable chance that someone else won't. In other words, my donations to CFAR aren't replaceable.

  • To put my utilons where my mouth is: it looks like the funding gap for CFAR is something like ~$400k a year. GiveWell reckons that you can save a life for $5k by donating to the right charity. So CFAR costs 80 dead people a year to run, which raises the question: do I think CFAR will save more than 80 lives in the next year? The answer to that might be no, even though CFAR seems to be instigating high-impact good. But if I ask myself, "Do I think CFAR's work over the next decade will save more than 800 lives?" the answer becomes a definite yes.
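The back-of-the-envelope arithmetic above can be sketched as follows. The figures are the rough guesstimates from the bullet points (a ~$400k yearly funding gap, a $5k cost-per-life figure attributed to GiveWell), not audited accounts:

```python
# Rough comparison of CFAR's estimated funding gap against a
# GiveWell-style cost-per-life figure, using the numbers quoted above.

funding_gap_per_year = 400_000   # estimated yearly funding gap in USD (guesstimate)
cost_per_life = 5_000            # estimated USD to save one life via a top charity

# Yearly "opportunity cost" of funding CFAR instead, measured in lives.
lives_per_year = funding_gap_per_year / cost_per_life
print(lives_per_year)            # 80.0

# The same comparison over a decade, as in the final question above.
lives_per_decade = lives_per_year * 10
print(lives_per_decade)          # 800.0
```

The point of the calculation is just to convert dollars into a common unit (statistical lives) so the "is CFAR worth it?" question becomes concrete; the inputs are soft enough that only the order of magnitude matters.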

As far as I understand it, CFAR's current focus is research and developing their rationality curriculum. The workshops exist to facilitate their research: they're a good way to test which bits of rationality work and to determine the best way to teach them.

In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world

In response to the question "Are you trying to make rationality part of primary and secondary school curricula?" the CFAR FAQ notes that:

We’d love to include decisionmaking training in early school curricula. It would be more high-impact than most other core pieces of the curriculum, both in terms of helping students’ own futures, and making them responsible citizens of the USA and the world.

So I'm fairly sure they agree with you on the importance of making broad improvements to education. It's also worth noting that effective altruists are among their list of clients, so you could count that as an effort toward alleviating poverty if you're feeling charitable.

However they go on to say:

At the moment, we don’t have the resources or political capital to change public school curricula, so it’s not a part of our near-term plans.

Additionally, for them to change public-school curricula they have to first develop a rationality curriculum, precisely what they're doing at the moment - building a 'minimum strategic product'. Giving "semi-advanced cognitive self-improvement workshops to the Silicon Valley elite" is just a convenient way to test this stuff.

You might argue for giving the rationality workshops to "people who have not even heard of the basics," but there are a few problems with that. Firstly, the number of people CFAR can teach in the short term is a tiny percentage of the population, nowhere near enough to have a significant impact on society (unless those people are high-impact people, but then they've probably already heard of the basics). Then there's the fact that rationality just isn't viewed as useful in the eyes of the general public, so most people won't care about learning to become more rational. Also, teaching the basics of rationality in a way that sticks is quite difficult.

Mind, if what you're really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.

I don't think CFAR is aiming to propagandize any worldview; they're about developing rationality education, not getting people to believe any particular set of beliefs (other than perhaps those directly related to understanding how the brain works). I'm curious about why you think they might be (intentionally or unintentionally) doing so.
