
John_Maxwell_IV comments on CFAR’s new focus, and AI Safety - LessWrong

Post author: AnnaSalamon | 03 December 2016 06:09PM | 30 points


Comments (88)

You are viewing a single comment's thread.

Comment author: John_Maxwell_IV 03 December 2016 12:58:17PM 11 points

If Alyssa Vance is correct that the community is bottlenecked on idea generation, I think this is exactly the wrong way to respond. My current view is that increasing hierarchy has the advantage of helping people coordinate better, but it has the disadvantage that people are less creative in a hierarchical context. Isaac Asimov on brainstorming:

If a single individual present has a much greater reputation than the others, or is more articulate, or has a distinctly more commanding personality, he may well take over the conference and reduce the rest to little more than passive obedience. The individual may himself be extremely useful, but he might as well be put to work solo, for he is neutralizing the rest.

I believe this has already happened to the community through the quasi-deification of people like Eliezer, Scott, and Gwern. It's odd, because I generally view the LW community as quite nontraditional. But when I look at academia, I get the impression that college professors are significantly closer in status to their students than our intellectual leadership is to the rest of the community.

This is my steelman of people who say LW is a cult. It's not a cult, but large status differences might be a sociological "code smell" for intellectual communities. Think of the professor who insists that they always be addressed as "Dr. Jones" instead of being called by their first name. This is rarely the sort of earnest, energetic, independent-minded person who makes important discoveries. "The people I know who do great work think that they suck, but that everyone else sucks even more."

The problem is compounded by the fact that Eliezer, Scott, and Gwern are not actually leaders. They're high status, but they aren't giving people orders. This leads to leadership vacuums.

My current guess is that we should work on idea generation at present, then transform into a more hierarchical community when it's obvious what needs to be done. I don't know what the best community structure for idea generation is, but I suspect the university model is a good one: have a selective admissions process, while keeping the culture egalitarian for people who are accepted. At least this approach is proven.

Comment author: FeepingCreature 03 December 2016 01:58:39PM 6 points

I shall preface by saying that I am neither a rationalist nor an aspiring rationalist. Instead, I would classify myself as a "rationality consumer" - I enjoy debating philosophy and reading good competence/insight porn. My life is good enough that I don't anticipate much subjective value from optimizing my decision-making.

I don't know how representative I am. But I think that if you want to reach "people who have something to protect", you need different approaches than for "people who like competence porn". While a site like LW can serve both groups, we may be running into issues because our population is largely the latter rather than the former - people admire Gwern, but who wants to be Gwern? Who wants to be like Eliezer or lukeprog? We may not want leaders, but we don't even have heroes.

I think possibly what's missing, and this is especially relevant in the case of CFAR, is a solid, empirical, visceral case for the benefit of putting the techniques into action. At the risk of being branded outreach, and at the very real risk of significantly skewing their post-workshop stats gathering, CFAR should possibly put more effort into documenting stories of success through applying the techniques. I think the main focus of research should be full System-1 integration, not just for the techniques themselves but also for CFAR's advertisement. I believe it's possible to do this responsibly if one combines it with transparency and System-2 relevant statistics. Contingent, of course, on CFAR delivering the proportionate value.

I realize that there is a chicken-and-egg problem here where for reasons of honesty, you want to use System-1-appealing techniques that only work if the case is solid, which is exactly the thing that System-1 is traditionally bad at! I'm not sure how to solve that, but I think it needs to be solved. To my intuition, rationality won't take off until it's value-positive for S1 as well as S2. If you have something to protect you can push against S1 in the short-term, but the default engagement must be one of playful ease if you want to capture people in a state of idle interest.

Comment author: Vaniver 03 December 2016 09:35:06PM 7 points

CFAR should possibly put more effort into documenting stories of success through applying the techniques.

They do put effort into this; I do wonder how communicable it is, though.

For example, at one point Anna described a series of people all saying something like "well, I don't know if it had any relationship to the workshop, but I did X, Y, and Z" during followups that, across many followups, seemed obviously due to the workshop. But it might be a vague thing that's easier to see when you're actually doing the followups rather than communicating statistics about followups.

Comment author: Viliam 12 December 2016 03:11:29PM 2 points

I shall preface by saying that I am neither a rationalist nor an aspiring rationalist. Instead, I would classify myself as a "rationality consumer" - I enjoy debating philosophy and reading good competence/insight porn. My life is good enough that I don't anticipate much subjective value from optimizing my decision-making.

Thanks so much for saying this! Thinking about this distinction you made, I feel there may actually be four groups of LW readers, with different needs or expectations of the website:

"Science/Tech Fans" -- want more articles about new scientific research and new technologies. "Has anyone recently discovered a new particle, or built a new machine? Give me a popular science article about it!"

"Competence/Insight Consumers" -- want more articles about pop psychology theories and life hacks. They feel they are already doing great, and only want to improve small details. "What do you believe is the true source of human motivation, and how do you organize your to-do lists? But first, give me your credentials: are you a successful person?"

"Already Solving a Problem" -- want feedback on their progress, and information speficially useful for them. Highly specific; two people in the same category working on completely different problems probably wouldn't benefit too much from talking to each other. If they achieve critical mass, it would be best to make a subgroup for them (except that LW currently does not support creating subgroups).

"Not Started Yet" -- inspired by the Sequences, they would like to optimize their lives and the universe, but... they are stuck in place, or advancing very very slowly. They hope for some good advice that would make something "click", and help them leave the ground.

Maybe it's poll time... what do you want to read about?
