Less Wrong is a community blog devoted to refining the art of human rationality.

In response to Learning by Doing
Comment author: Arran_Stirton 24 March 2015 11:12:01AM 1 point [-]

I see learning as very dependency-based. I.e. there are a bunch of concepts you have to know. Think of them as nodes. These nodes have dependencies, as in you have to know A, B and C before you can learn D.

Spot on. This is a big problem in mathematics education; prior to university, a lot of teaching is done without paying heed to the fundamental concepts. For example - here in the UK - calculus is taught well before limits (in fact, limits aren't taught until students reach university).

Teaching is all about crossing the inferential distance between the student's current knowledge and the idea being taught. It's my impression that most people who say "you just have to practice" say so because they don't know how to cross that gap. You see this often with professors who don't know how to teach their own subjects because they've forgotten what it was like not knowing how to calculate the expectation of a perturbed Hamiltonian. I suspect that in some cases the knowledge isn't truly a part of them, so they don't know how to generate it without already knowing it.

Projects are a good way to help students retain information (the testing effect) and also train appropriate recall. Experts in a field are usually experts because they can look at a problem and see where they should be applying their knowledge - a skill that can only be effectively trained by 'real world' problems. In my experience teaching A-level math students, the best students are usually the ones that can apply concepts they've learned in non-obvious situations.

You might find this article I wrote on studying interesting.

Comment author: ChristianKl 17 March 2015 10:40:41PM 1 point [-]

Duolingo provides free language courses. They make this financially viable by crowd sourcing translations from their students. Perhaps a similar thing could be implemented - maybe by getting university students involved.

Duolingo doesn't make a profit. Its investors believe in it, but it's still too early to say that it's really financially viable.

Comment author: Arran_Stirton 18 March 2015 01:35:50AM 0 points [-]

Thanks. I've edited the comment to reflect this better.

Comment author: Dues 17 March 2015 03:19:41AM 1 point [-]

Good advice. Since I wanted a lot of things to be weighted when determining the search order, I considered just hiding all the complexity 'under the hood'. But if people don't know what they are voting on they might be less inclined to vote at all.

In response to comment by Dues on Efficient Open Source
Comment author: Arran_Stirton 17 March 2015 05:34:07AM 1 point [-]

Since I wanted a lot of things to be weighted when determining the search order, I considered just hiding all the complexity 'under the hood'.

The way I view it, search rankings are a tool like any other. In my own experience in academic research I've always found that clearly defined search rankings are more useful to me than generic rankings; if you know how the tool works, it's easier to use correctly. That said, there's probably still a place for a complex algorithm alongside other search tools; it just shouldn't be the only search tool.

But if people don't know what they are voting on they might be less inclined to vote at all.

Well, I think it's more a matter of efficiently extracting information from users. Consider the LessWrong karma system: while it serves its purpose of filtering out spam, it's a very noisy indicator of anything other than 'people thought this comment should get karma'. This is because different users think we should vote things up or down based on different criteria, such as: do I agree with this comment? Did this comment contain valuable information for me? Was this an amusing comment? Was this comment well reasoned? And so on.

By clearly defining the voting criteria, you're not just making users more likely to vote, you're also extracting information from them more efficiently. From a user's perspective this can be really useful: knowing that a particular rating reflects the popularity or the importance of a project, they can choose whether to pay attention to that metric or ignore it.

Comment author: Arran_Stirton 16 March 2015 02:40:18PM *  2 points [-]

In case it helps, here's a rough list of the thoughts that have come to mind:

  • Simplicity is usually best with voting systems. It may be worth looking at a reddit-style up/down system for popularity. With importance you probably want high/mid/low. If you track the 'importance profile' of a user, you could use that to bring projects to their attention that other users with similar profiles find important. Also, in all these rankings it should be clear to the user exactly what metric is being used.

  • Make use of the wisdom of crowds by getting users to evaluate projects/tasks/comments for things like difficulty, relevance, utility, marginal utility - along the lines of this xkcd comic.

  • It seems to me that a good open-source management tool should direct attention to the right places. Having inbuilt problem flags that users can activate to have the problem brought to the attention of someone who can solve it seems like a good idea.

  • Skill matching. Have detailed skill profiles for users and have required skills flagged up for tasks.

  • Could try breaking projects up into a set of tasks, sub-tasks and next actions à la Getting Things Done.

  • Duolingo provides free language courses. They plan to make this financially viable by crowdsourcing translations from their students. Perhaps a similar thing could be implemented - maybe by getting university students involved.

  • Gamification across a broad range of possible tasks. Give points for things like participation, research, providing information. While rewarding programmers for coding is good, we should seek to reward anything that lowers the activation energy of a task for someone else.

  • Keep a portfolio of work that each user has completed in a format that is easy for them to access, customize, print out, and reference in job applications.

  • Encourage networking between users with similar skills, areas of interest and the like. This would provide a benefit to being part of the community.

  • You could have a Patreon like pledging system where people pledge a small amount to projects they consider important. When the project reaches a milestone the contributors then get rewarded a portion of the pledge.
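The 'importance profile' idea in the first bullet amounts to simple collaborative filtering. Here's a minimal sketch of one way it could work; the function names, the 2/1/0 encoding of high/mid/low ratings, and the 0.8 similarity threshold are all hypothetical choices for illustration, not a prescribed design:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two users' importance-rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def recommend(target_profile, other_users, threshold=0.8):
    """Surface projects rated 'high' by users whose importance profiles
    resemble the target user's (ratings encoded as high=2, mid=1, low=0)."""
    recommendations = set()
    for profile, high_rated_projects in other_users:
        if cosine_similarity(target_profile, profile) >= threshold:
            recommendations |= high_rated_projects
    return recommendations
```

A real system would also need to handle users who have rated different (and differently sized) sets of projects, but the core "similar profile, similar interests" step is just a vector-similarity comparison like this.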

Comment author: buybuydandavis 12 March 2015 10:05:30PM 15 points [-]

I'm struck with Dumbledore's ruthlessness.

Pretend to kill someone to keep your enemies in line, but really just stash them away to be used as a trump card again later, whether as a hostage or a way to reconcile with your enemy. That's good.

"There's only one way to hurt a man who's lost everything. Give him back something broken."

  - Stephen Donaldson
Comment author: Arran_Stirton 12 March 2015 11:51:11PM 8 points [-]

I'm struck with Dumbledore's ruthlessness

Actually I think he was just following his own advice:

While survives any remnant of our kind, that piece is yet in play, though the stars should die in heaven. [...] Know the value of all your other pieces, and play to win.

All things considered I think it was the most compassionate choice he could have made.

Comment author: Gleb_Tsipursky 23 February 2015 01:11:04AM 0 points [-]

Excellent resource on the clarity calculator, and thanks for the feedback overall.

Comment author: Arran_Stirton 23 February 2015 02:32:01AM 1 point [-]

No problem, some other things that come to mind are:

  • It's best to start the articles with a 'hook' paragraph rather than an image, particularly when the image only makes sense to the reader if they know what the article is about.
  • Caption your images always and forever.
  • This has been said before, but the title should make sense to an uninitiated reader. Furthermore, to make it more share-able, the title should set up an expectation of what the article is going to tell them. An alternative in this case could be: "What do people really think of you?"; or, if you restructure the article, something like "X truths about what people think of you."
  • For popular outreach the inferential distance has to be as low as you can make it; if you can explain something instead of linking to it, do that.
  • Take a look at the most-shared websites (Upworthy, Buzzfeed and the like); you can learn a lot from their methodology.
Comment author: Arran_Stirton 22 February 2015 11:36:48PM *  1 point [-]

(As a rule, using non-standard formatting when posting to LessWrong is a bad idea.)

There are some improvements you can make to increase cognitive ease, such as lowering your long-word count, avoiding jargon, and using fewer sentences per paragraph. I'd recommend running parts of your post (one paragraph at a time is best) through a clarity calculator to get a better idea of where you can improve.
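For a sense of what a clarity calculator measures under the hood, here is a rough sketch of one common readability metric, the Flesch Reading Ease score. The syllable counter is a crude vowel-group heuristic (real calculators use pronunciation dictionaries), so treat the numbers as indicative only:

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels.
    Real readability tools use pronunciation dictionaries instead."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier reading.
    Penalizes long sentences and long (many-syllable) words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Running one paragraph at a time, as suggested above, keeps the per-sentence statistics meaningful; a whole post averages out exactly the problem spots you want to find.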

You may also want to look into the concept of inferential distance.

Comment author: Arran_Stirton 06 February 2015 04:01:27AM 5 points [-]

Nice article, have a karma!

There's a lot of information here, so I'd suggest using this article as the basis for a four-part series, one on each area. The content is non-obvious, so having the extra space to break the inferential distance down into steps small enough that the conclusions are intuitive to non-rationalists would be useful.

(As an aside, I suspect that writing for the CFAR blog is reasonably high impact for the time investment right now. Personally I found CFAR's apparent radio silence since September unnerving, and it's possible that it was part of the reason the matching fundraiser struggled. Despite Anna's excellent post on CFAR's progress, the lack of activity may have caused people to feel as though CFAR was stagnating, and thus be less inclined to part with their money on a System 1 level.)

Comment author: Arran_Stirton 28 January 2015 01:56:23PM *  41 points [-]

Donated $180.

I was planning on donating this money, my yearly 'charity donation' budget (it's meager - I'm an undergraduate), to a typical EA charity such as the Against Malaria Foundation; a cash transaction for the utilons, warm fuzzies and general EA cred. However, the above has forced me to reconsider this course of action in light of the following:

  • The possibility CFAR may not receive sufficient future funding. CFAR's expenditure last year was $510k (ignoring non-staff workshop costs that are offset by workshop revenue) and their current balance is somewhere around $130k. Without knowing the details, a similarly sized operation this year might therefore require something like $380k in donations (a ballpark guesstimate, don't quote me on that). The winter matching fundraiser has the potential to fund $240k of that, so a significant undershoot would put the organization in a precarious position.

  • A world that has access to a well-written rationality curriculum over the next decade has a significant advantage over one that doesn't. I already accept that 80,000 Hours is a high-impact organization, and they also work by acting as an impact multiplier for individuals. Given that rationality is an exceptionally good impact multiplier, I must accept that CFAR existing is much better than it not existing.

  • While donations to a sufficiently funded CFAR are most likely much lower utility than donations to AMF, donations to ensure CFAR's continued existence are exceptionally high utility. For comparison (as great as AMF is), diverting all donations from Wikipedia to AMF would be a terrible idea, as would overfunding Wikipedia itself. The world gets a large amount of utility out of the existence of at least one Wikipedia, but not a great deal of marginal utility from an overfunded Wikipedia. By my judgement the same applies to CFAR.

  • CFAR isn't a typical EA cause. This means that if I don't donate to keep AMF going, another EA will. However, if I don't donate to keep CFAR going, there's a reasonable chance that someone else won't either. In other words, my donations to CFAR aren't replaceable.

  • To put my utilons where my mouth is: it looks like the funding gap for CFAR is something like ~$400k a year. GiveWell reckons you can save a life for $5k by donating to the right charity, so CFAR costs the equivalent of 80 lives a year to run. That raises the question: do I think CFAR will save more than 80 lives in the next year? The answer might be no, even though CFAR seems to be instigating high-impact good. But if I ask whether CFAR's work over the next decade will save more than 800 lives, the answer becomes a definite yes.
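The back-of-the-envelope numbers in the last two bullets can be laid out explicitly. All figures below are the comment's own guesstimates, not CFAR's actual accounts:

```python
# Guesstimated funding picture (from the bullet points above).
expenditure = 510_000    # last year's spend, net of offset workshop costs
balance = 130_000        # approximate current balance
matching_cap = 240_000   # maximum the winter matching fundraiser can cover

funding_gap = expenditure - balance           # donations needed this year
shortfall_after_match = funding_gap - matching_cap  # gap beyond the fundraiser

# Utilon comparison against direct life-saving charity.
cost_per_life = 5_000    # GiveWell's rough figure for saving a life
annual_gap = 400_000     # the ~$400k/year figure used in the comparison
lives_equivalent = annual_gap / cost_per_life    # lives/year of opportunity cost
decade_equivalent = lives_equivalent * 10        # the 800-lives decade benchmark
```

So the fundraiser alone leaves roughly a $140k shortfall on these assumptions, and the donation is worthwhile iff CFAR's decade of work beats the 800-lives benchmark.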

Comment author: [deleted] 28 December 2014 01:05:46PM 9 points [-]

CFAR seems to many of us to be among the efforts most worth investing in. This isn’t because our present workshops are all that great. Rather, it is because, in terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust.

Why? You tend to be marketing your workshops to people who've already got significant training in much of Traditional Rationality. In my view, much of the world's irrationality comes from people who have not even heard of the basics or people whose resource constraints do not allow them to apply what they know, or both. In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world through prevention of Dumb Moves than giving semi-advanced cognitive self-improvement workshops to the Silicon Valley elite.

Mind, if what you're really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.

Comment author: Arran_Stirton 29 December 2014 02:34:44PM 7 points [-]

As far as I understand it, CFAR's current focus is research and developing their rationality curriculum. The workshops exist to facilitate their research; they're a good way to test which bits of rationality work and determine the best way to teach them.

In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world

In response to the question "Are you trying to make rationality part of primary and secondary school curricula?" the CFAR FAQ notes that:

We’d love to include decisionmaking training in early school curricula. It would be more high-impact than most other core pieces of the curriculum, both in terms of helping students’ own futures, and making them responsible citizens of the USA and the world.

So I'm fairly sure they agree with you on the importance of making broad improvements to education. It's also worth noting that effective altruists are among their list of clients, so you could count that as an effort toward alleviating poverty if you're feeling charitable.

However they go on to say:

At the moment, we don’t have the resources or political capital to change public school curricula, so it’s not a part of our near-term plans.

Additionally, for them to change public-school curricula they first have to develop a rationality curriculum, which is precisely what they're doing at the moment - building a 'minimum strategic product'. Giving "semi-advanced cognitive self-improvement workshops to the Silicon Valley elite" is just a convenient way to test this stuff.

You might argue for giving the rationality workshops to "people who have not even heard of the basics", but there are a few problems with that. Firstly, the number of people CFAR can teach in the short term is a tiny percentage of the population, nowhere near enough to have a significant impact on society (unless those people are high-impact people, but then they've probably already heard of the basics). Then there's the fact that rationality just isn't viewed as useful in the eyes of the general public, so most people won't care about learning it. Also, teaching the basics of rationality in a way that sticks is quite difficult.

Mind, if what you're really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.

I don't think CFAR is aiming to propagandize any worldview; they're about developing rationality education, not getting people to adopt any particular set of beliefs (other than perhaps those directly related to understanding how the brain works). I'm curious why you think they might be (intentionally or unintentionally) doing so.
