A bit about our last few months:
- We’ve been working on getting a simple, clear mission and an organization that actually works. We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
- As part of that, we’ll need to find a way to be intelligible.
- This is the first of several blog posts aimed at making our new form visible from the outside. (If you're in the Bay Area, you can also come meet us at tonight's open house.) We'll say more in an upcoming post about the causes of this mission change, and about the extent to which it is in fact a change.
- We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
- Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.
- Our aim is therefore to find ways of improving both individual thinking skill and the modes of thinking and social fabric that allow people to think together, and to do this among the relatively small sets of people tackling existential risk.
Existential wins and AI safety
Who we’re focusing on, why
- AI and machine learning graduate students, researchers, project managers, etc., who care; who can think; and who are interested in thinking better;
- Students and others affiliated with the “Effective Altruism” movement, who are looking to direct their careers in ways that can do the most good;
- Rationality geeks, who are interested in seriously working to understand how the heck thinking works when it works, and how to make it work even in domains as confusing as AI safety.
Brier-boosting, not Signal-boosting
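
For readers unfamiliar with the term: the Brier score is a standard measure of the accuracy of probabilistic forecasts (the mean squared error between stated probabilities and actual outcomes), so "Brier-boosting" means helping people make better-calibrated predictions, as opposed to amplifying ("signal-boosting") a fixed message. A minimal illustrative sketch, not from the original post; the function name and example numbers are hypothetical:

```python
# Illustrative sketch (not from the original post): the Brier score is the mean
# squared error between probabilistic forecasts and binary outcomes.
# Lower is better; "Brier-boosting" means improving this kind of accuracy.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0 or 1 for each event."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated forecaster scores better (lower) than an overconfident one:
print(brier_score([0.8, 0.7, 0.1], [1, 1, 0]))  # ~0.047
print(brier_score([1.0, 1.0, 0.0], [1, 0, 0]))  # ~0.333
```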

- Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”
- CFAR's mission statement (link post, linking to our website).
There's a difference between optimizing for truth and optimizing for interestingness. Interestingness is valuable for truth in the long run because the more hypotheses you have, the better your odds of stumbling on the correct hypothesis. But naively optimizing for truth can decrease creativity, which is critical for interestingness.
I suspect "having ideas" is a skill you can develop, kind of like making clay pots. In the same way your first clay pots will be lousy, your first ideas will be lousy, but they will get better with practice.
Source.
If this is correct, this also gives us clues about how to solve Less Wrong's content problem.
Online communities do not have a strong comparative advantage in compiling and presenting facts that are well understood. That's the sort of thing academics and journalists are already paid to do. If online communities have a comparative advantage, it's in exploring ideas that are neglected by the mainstream--things like AI risk, or CFARish techniques for being more effective.
Unfortunately, LW's culture has historically been pretty antithetical to creativity. It's hard to tell in advance whether an idea you have is a good one or not. And LW has often been hard on posts it considers bad. This made the already-scary process of sharing new ideas even more fraught with the possibility of embarrassment.
Same source.
I recommend recording ideas in a private notebook. I've been doing this for a few years, and I now have way more ideas than I know what to do with.
Relevant: http://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html
Oh yes. For example, Physical Review Letters is mostly interested in the former, while HuffPo is mostly interested in the latter.
That's not true, because you must also evaluate all these hypotheses, and evaluation is costly. For a trivial example: given a question X, would you find it easier to identify the correct hypothesis if I presented you with five candidates or with five million candidates?