A bit about our last few months:
- We’ve been working on getting a simple, clear mission and an organization that actually works. We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
- As part of that, we’ll need to find a way to be intelligible.
- This is the first of several blog posts aimed at making our new form visible from the outside. (If you're in the Bay Area, you can also come meet us at tonight's open house.) We'll say more about the causes of this mission-change, and about the extent to which it is in fact a change, in an upcoming post.
We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.

Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.

Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.
Existential wins and AI safety
Who we’re focusing on, why
- AI and machine learning graduate students, researchers, project-managers, etc. who care; who can think; and who are interested in thinking better;
- Students and others affiliated with the “Effective Altruism” movement, who are looking to direct their careers in ways that can do the most good;
- Rationality geeks, who are interested in seriously working to understand how the heck thinking works when it works, and how to make it work even in domains as confusing as AI safety.
Brier-boosting, not Signal-boosting
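(Aside, for readers new to the term: a Brier score measures the accuracy of probabilistic forecasts, so "Brier-boosting" means making people's beliefs more accurate, as opposed to "signal-boosting", i.e. amplifying a message. A minimal sketch of the computation in Python, with made-up forecasts; nothing here is from the post itself:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better: 0.0 is a perfect score, and 0.25 is what always
    guessing 50% earns.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 90% on two events that happened and 20% on one that didn't:
print(brier_score([0.9, 0.9, 0.2], [1, 1, 0]))  # -> 0.02
```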
For more, see:
- Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”
- CFAR's mission statement (link post, linking to our website).
Yes. This is unfortunate, but I cannot help you here.
I think it's a bad idea. I can't anticipate your responses well enough (in other words, I don't have a good model of you) -- for example, I did not expect you to take five million candidate hypotheses. And if I want to have a conversation with myself, why, there is no reason to involve you in the process.
We haven't gotten to an average LessWronger generating hypotheses yet. You've introduced a new term -- "interestingness" -- and set it in opposition to truth (or should it have been truthiness?). As far as I can see, clickbait is just a subtype of "interestingness" -- if you optimize for "interestingness", you tend to end up with clickbait of some sort. And I'm not quite sure what it has to do with the propensity to generate hypotheses.
Then, if a correct hypothesis were guaranteed to be included in your set, you would discard the true one in 99.9999% of cases.
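(A minimal sketch of the arithmetic behind the 99.9999% figure, using the five million candidates mentioned above -- the number kept is hypothetical, since the comment doesn't give it:)

```python
# Hypothetical numbers: five million candidate hypotheses, of which a
# plausibility filter keeps only five. If the filter is no better than
# chance at spotting the truth, the true hypothesis survives with
# probability 5 / 5_000_000 = 1e-06.
candidates = 5_000_000
kept = 5
p_discard_true = 1 - kept / candidates
print(f"{p_discard_true:.4%}")  # -> 99.9999%
```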
Let's try it. "Earth rotates around the Sun" -- ha-ha, what do I look like, an idiot? Implausible. Next!
Where "it" is "writing fiction"?
LOL. Kids are naturally playful -- they don't need a kindergarten for that. In fact, kindergartens tend to use their best efforts to shut down kids' creativity and make them "less disruptive", "respectful", "calm", and all the other things required of a docile shee... err... member of society.
I neither see much reason to do so, nor do I take my own opinion seriously, anyway :-P
Do you want playfulness or seriousness? Pick a side.
Is this due to lack of ability or lack of desire? If lack of ability, why do you think you lack this ability?