Comment author: Alex_Altair 19 June 2015 05:09:08PM 0 points [-]

Sorry I can't be there for this! The Bay stole me three years ago. But it's cool to see a meetup there.

Comment author: Vaniver 20 December 2014 07:27:04AM *  0 points [-]

The NYC celebration is on the 20th. I imagine the Bay Area one is offset so people can go to both?

Comment author: Alex_Altair 23 December 2014 02:14:26PM 1 point [-]

That is correct!

In response to Why CFAR?
Comment author: Alex_Altair 29 December 2013 04:56:48AM 8 points [-]

In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.

I'd love to hear more detailed plans or ideas for achieving these.

we’ll be devoting more resources to epistemic curriculum development

This is really exciting! I think people tend to have a lot more epistemic rationality than instrumental rationality, but that they still don't have enough epistemic rationality to care about x-risk or other EA goals.

In response to comment by CarlShulman on Why CFAR?
Comment author: Benquo 29 December 2013 02:44:20AM *  13 points [-]

On reflection, this is an opportunity for me to be curious. The relevant community-builders I'm aware of are:

  • CFAR
  • 80,000 Hours / CEA
  • GiveWell
  • Leverage Research

Whom am I leaving out?

My model for what they're doing is this:

GiveWell isn't trying to change people much at all directly, except by helping them find efficient charities to give to. It's selecting for people who are already interested in this exact thing.

80,000 Hours is trying to intervene in specific high-impact life decisions, like career choice and charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.

CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but so far has mainly produced improvements in personal effectiveness at solving one's own life problems.

Leverage has tried to tackle the problem of creating a hero-level community directly, but doesn't seem to have a track record of concrete, specific successes, replicable methods for making people awesome, or a measure of effectiveness.

Do any of these descriptions seem off? If so, how?

PS: Before the recent CFAR workshop I attended, I don't think I would have stuck my neck out and made these guesses just to find out whether I was right.

In response to comment by Benquo on Why CFAR?
Comment author: Alex_Altair 29 December 2013 03:31:18AM 9 points [-]

MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.

In response to Why CFAR?
Comment author: Alex_Altair 28 December 2013 11:18:25PM 7 points [-]

Excellent post! I wish my donation didn't have to wait a few months.

Comment author: Louie 03 October 2013 07:52:49PM 0 points [-]

Do you think Causality is a superior recommendation to Probabilistic Graphical Models?

Comment author: Alex_Altair 03 October 2013 08:16:54PM 1 point [-]

The material covered in Causality is more like a subset of that in PGM. PGM is like an encyclopedia, and Causality is a comprehensive introduction to one application of PGMs.
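
To make the relationship concrete, here is a minimal, self-contained Python sketch (my own toy illustration, not taken from either book; the rain/sprinkler network and function names are made up for the example). The Bayesian network and its joint/marginal machinery are the generic PGM part; the graph-surgery do() intervention is the kind of causal application Causality focuses on.

    import itertools

    # A tiny Bayesian network over binary variables:
    # Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass.
    # Each CPT maps a tuple of parent values to P(variable = True).
    network = {
        "Rain":      {"parents": [],                    "cpt": {(): 0.2}},
        "Sprinkler": {"parents": ["Rain"],              "cpt": {(True,): 0.01, (False,): 0.4}},
        "WetGrass":  {"parents": ["Rain", "Sprinkler"], "cpt": {(True, True): 0.99, (True, False): 0.8,
                                                                (False, True): 0.9, (False, False): 0.0}},
    }

    def joint(net, assignment):
        """Probability of a full assignment under the network (the generic PGM machinery)."""
        p = 1.0
        for var, spec in net.items():
            parent_vals = tuple(assignment[pa] for pa in spec["parents"])
            p_true = spec["cpt"][parent_vals]
            p *= p_true if assignment[var] else 1.0 - p_true
        return p

    def marginal(net, var):
        """P(var = True) by brute-force summation over all full assignments."""
        names = list(net)
        total = 0.0
        for values in itertools.product([True, False], repeat=len(names)):
            a = dict(zip(names, values))
            if a[var]:
                total += joint(net, a)
        return total

    def do(net, var, value):
        """Graph surgery for an intervention do(var = value): cut incoming edges, fix the CPT."""
        new = {v: dict(spec) for v, spec in net.items()}
        new[var] = {"parents": [], "cpt": {(): 1.0 if value else 0.0}}
        return new

    print("P(WetGrass)                 =", round(marginal(network, "WetGrass"), 3))
    print("P(WetGrass | do(Sprinkler)) =", round(marginal(do(network, "Sprinkler", True), "WetGrass"), 3))

Everything except do() is plain PGM bookkeeping; the causal step is just one extra operation layered on top, which is roughly the subset relationship I mean.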

Comment author: fowlertm 23 June 2013 08:58:24PM 2 points [-]

Understood. To my knowledge there really isn't much research on this topic, period. As I noted, I thought about going into considerably more depth, but at the time I felt the result would have a poor tedium-to-value ratio. I felt I could accomplish most of what I wanted by simply pointing out the issue and giving a few examples.

Perhaps I was wrong about that.

Comment author: Alex_Altair 23 June 2013 09:18:59PM 3 points [-]

Maybe just add a section with a few more examples or some advice. The post was a quick read for me; I could have handled more.

Comment author: Alex_Altair 23 June 2013 09:17:38PM 4 points [-]

I love this post.

spatial arrangements that simplify perception

This is why you should make your bed in the morning. It's also why paragraphs exist. And why math notation isn't linear. And why parentheses look like they're encircling the text. And periods, and kerning, and oh god I can't stop coming up with examples.

I'm an extremely visual thinker, and I think I think about these things all the time. I wonder, though, whether this stuff is as useful to people who aren't visual thinkers. I've had disagreements with people about which heuristics to use that came down to the other person not being a visual thinker.

I also find that this has huge applications to my computer usage. For example, I always keep my cursor off the text. I always keep the line I'm reading at the top of the window. I always place my chat windows strategically in the margins of websites so I can see them while reading the site.

Comment author: Vaniver 29 April 2013 08:54:09PM 10 points [-]

My favorite intellectually is "choose well," but I haven't successfully used it much because in the moment it sounds too ominous.

Comment author: Alex_Altair 29 April 2013 09:50:33PM 6 points [-]

I prefer the variant, "May you choose wisely." Also "May your premises be sound."

Comment author: Alex_Altair 23 April 2013 03:01:24AM 9 points [-]

Saw the video before this post, thought to make a prediction, and was correct! :D