
Comment author: Mass_Driver 23 December 2016 10:08:49AM 0 points [-]

And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...

"I know [evolution] sounds crazy -- it didn't make sense to me at first either. I can explain how it works if you're curious, but it will take me a long time, because it's a complicated idea with lots of moving parts that you probably haven't seen before. Sometimes even simple questions like 'where did the first humans come from?' turn out to have complicated answers."

Comment author: Qiaochu_Yuan 13 December 2016 10:04:46PM *  3 points [-]

do you think it's more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it's more important for rationalists to broaden their network so that rationalists have more examples to learn from?

I think this question implicitly assumes as a premise that CFAR is the main vehicle by which the rationality community grows. That may be more or less true now, and plausibly it will become less true in the future, but most interestingly it suggests that you already understand the value of CFAR as a coordination point (for rationality in general). That's the kind of value I think CFAR is trying to generate in the future as a coordination point for AI safety in particular, because it might in fact turn out to be that important.

I sympathize with your concerns - I would love for the rationality community to be more diverse along all sorts of axes - but I worry they're predicated on a view of existential risk-like topics as luxuries that maybe deserve a little time but aren't particularly urgent. If you had a stronger sense of urgency around these risks as a group (not necessarily around any of them individually), you might be able to have more sympathy for people (such as the CFAR staff) who really, really just want to focus on them, even though they're highly uncertain and even though there are no obvious feedback loops, because they're important enough to work on anyway.

Comment author: Mass_Driver 14 December 2016 01:28:31AM 1 point [-]

I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart's calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who's longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.

That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.

Comment author: Qiaochu_Yuan 13 December 2016 06:07:59AM 2 points [-]

What is the marginal benefit gained by moving further along the road to specialization, from "roughly half our efforts these days happen to go to running an AI research seminar series" to "our mission is to enlighten AI researchers?" The only marginal benefit I would expect is the potential for an even more rapid reduction in AI risk, caused by being able to run, e.g., 4 seminars a quarter for AI researchers, instead of 2 for AI researchers and 2 for the general public.

Yes, I agree that this is the important question. I think there are benefits around stronger coordination among 1) CFAR staff, 2) CFAR supporters, and 3) CFAR participants around AI safety that are not captured by a quantitative increase in the number of seminars being run or whatever.

In the ideal situation, you can try to create a group of people who have common knowledge that everyone else in the group is actually dedicated to AI safety. That common knowledge allows them to coordinate better, because they can act and make plans under the assumption that everyone else is dedicated to AI safety, at every level of meta (e.g. when you make plans which are contingent on someone else's plans). If CFAR instead continues to publicly present as approximately cause-neutral, these assumptions shatter, and people can't rely on each other or coordinate as well. I think it would be pretty difficult to quantify the benefit of this kind of coordination, but I'd be skeptical of anyone confidently placing a low upper bound on it.

There are also benefits from CFAR signaling that it cares enough about AI safety in particular to drop cause neutrality; that could encourage some people who otherwise might not have to take the cause more seriously.

Comment author: Mass_Driver 13 December 2016 08:25:10AM 3 points [-]

Yeah, that pretty much sums it up: do you think it's more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it's more important for rationalists to broaden their network so that rationalists have more examples to learn from?

Shockingly, as a lawyer who's working on homelessness and donating to universal income experiments, I prefer a more general focus. Just as shockingly, the mathematicians and engineers who have been focusing on AI for the last several years prefer a more specialized focus. I don't see a good way for us to resolve our disagreement, because the disagreement is rooted primarily in differences in personal identity.

I think the evidence is undeniable that rationality memes can help young, awkward engineers build a satisfying social life and increase their productivity by 10% to 20%. As an alum of one of CFAR's first minicamps back in 2011, I'd hoped that rationality would amount to much more than that. I was looking forward to seeing rationalist tycoons, rationalist Olympians, rationalist professors, rationalist mayors, rationalist DJs. I assumed that learning how to think clearly and act accordingly would fuel a wave of conspicuous success, which would in turn attract more resources for the project of learning how to think clearly, in a rapidly expanding virtuous cycle.

Instead, five years later, we've got a handful of reasonably happy rationalist families, an annual holiday party, and a couple of research institutes dedicated to pursuing problems that, by definition, will provide no reliable indicia of their success until it is too late. I feel very disappointed.

Comment author: Qiaochu_Yuan 13 December 2016 04:17:12AM 4 points [-]

I see here a description of several potential costs of the new focus but no attempt to weigh those costs against the potential benefit.

Comment author: Mass_Driver 13 December 2016 05:13:07AM *  1 point [-]

Well, like I said, AI risk is a very important cause, and working on a specific problem can help focus the mind, so running a series of AI-researcher-specific rationality seminars would offer the benefit of (a) reducing AI risk, (b) improving morale, and (c) encouraging rationality researchers to test their theories using a real-world example. That's why I think it's a good idea for CFAR to run a series of AI-specific seminars.

What is the marginal benefit gained by moving further along the road to specialization, from "roughly half our efforts these days happen to go to running an AI research seminar series" to "our mission is to enlighten AI researchers?" The only marginal benefit I would expect is the potential for an even more rapid reduction in AI risk, caused by being able to run, e.g., 4 seminars a quarter for AI researchers, instead of 2 for AI researchers and 2 for the general public. I would expect any such potential to be seriously outweighed by the costs I describe in my main post (e.g., losing out on rationality techniques that would be invented by people who are interested in other issues), such that the marginal effect of moving from 50% specialization to 100% specialization would be to increase AI risk. That's why I don't want CFAR to specialize in educating AI researchers to the exclusion of all other groups.

Comment author: Mass_Driver 12 December 2016 04:23:30PM 7 points [-]

I dislike CFAR's new focus, and I will probably stop my modest annual donations as a result.

In my opinion, the most important benefit of cause-neutrality is that it safeguards the integrity of the young and still-evolving methods of rationality. If it is official CFAR policy that reducing AI risk is the most important cause, and CFAR staff do almost all of their work with people who are actively involved with AI risk, and then go and do almost all of their socializing with rationalists (most of whom also place a high value on reducing AI risk), then there will be an enormous temptation to discover, promote, and discuss only those methods of reasoning that support the viewpoint that reducing AI risk is the most important value. This is bad partly because it might stop CFAR from changing its mind in the face of new evidence, but mostly because the methods that CFAR will discover (and share with the world) will be stunted -- students will not receive the best-available cognitive tools; they will only receive the best-available cognitive tools that encourage people to reduce AI risk. You might also lose out on discovering methods of (teaching) rationality that would only be found by people with different sorts of brains -- it might turn out that the sort of people who strongly prioritize friendly AI think in certain similar ways, and if you surround yourself with only those people, then you limit yourself to learning only what those people have to teach, even if you somehow maintain perfect intellectual honesty.

Another problem with focusing exclusively on AI risk is that it is such a Black Swan-type problem that it is extremely difficult to measure progress, which in turn makes it difficult to assess the value or success of any new cognitive tools. If you work on reducing global warming, you can check the global average temperature. More importantly, so can any layperson, and you can all evaluate your success together. If you work on reducing nuclear proliferation for ten years, and you haven't secured or prevented a single nuclear warhead, then you know you're not doing a good job. But how do you know if you're failing to reduce AI risk? Even if you think you have good evidence that you're making progress, how could anyone who's not already a technical expert possibly assess that progress? And if you propose to train all of the best experts in your methods, so that they learn to see you as a source of wisdom, then how many of them will retain the capacity to accuse you of failure?

I would not object to CFAR rolling out a new line of seminars that are specifically intended for people working on AI risk -- it is a very important cause, and there's something to be gained in working on a specific problem, and as you say, CFAR is small enough that CFAR can't do it all. But what I hear you saying is that the mission is now going to focus exclusively on reducing AI risk. I hear you saying that if all of CFAR's top leadership is obsessed with AI risk, then the solution is not to aggressively recruit some leaders who care about other topics, but rather to just be honest about that obsession and redirect the institution's policies accordingly. That sounds bad. I appreciate your transparency, but transparency alone won't be enough to save the CFAR/MIRI community from the consequences of deliberately retreating into a bubble of AI researchers.

LINK: Quora brainstorms strategies for containing AI risk

5 Mass_Driver 26 May 2016 04:32PM

In case you haven't seen it yet, Quora hosted an interesting discussion of different strategies for containing / mitigating AI risk, boosted by a $500 prize for the best answer. It attracted sci-fi author David Brin, U. Michigan professor Igor Markov, and several people with PhDs in machine learning, neuroscience, or artificial intelligence. Most people from LessWrong will disagree with most of the answers, but I think the article is useful as a quick overview of the variety of opinions that ordinary smart people have about AI risk.


Comment author: RomeoStevens 10 February 2016 03:38:58AM *  5 points [-]

"The remedy lies, indeed, partly in charity, but more largely in correct intellectual habits, in a predominant, ever-present disposition to see things as they are, and to judge them in the full light of an unbiased weighing of evidence applied to all possible constructions, accompanied by a withholding of judgment when the evidence is insufficient to justify conclusions.

I believe that one of the greatest moral reforms that lies immediately before us consists in the general introduction into social and civic life of that habit of mental procedure which is known in investigation as the method of multiple working hypotheses."

-T. C. Chamberlin from: http://www.mantleplumes.org/WebDocuments/Chamberlin1897.pdf

Comment author: Mass_Driver 19 February 2016 05:03:15PM 2 points [-]

Does anyone know what happened to TC Chamberlin's proposal? In other words, shortly after 1897, did he in fact manage to spread better intellectual habits to other people? Why or why not?

Comment author: EngineerofScience 08 August 2015 01:30:21PM 0 points [-]

Also, can I write in my asteroid essay about the potential helpfulness of asteroids? We believe that one asteroid (just one!) could be worth $1,000,000,000,000. In other words, catching one asteroid could be worth one trillion dollars. Could I mention that in my hundred-word blurb?

Comment author: Mass_Driver 09 August 2015 04:44:13PM 0 points [-]


Comment author: Dorikka 06 August 2015 03:37:27AM *  5 points [-]

Do you know anyone who has done website design, like as an actual job? May want to ask them. I can really just say whether something does or doesn't look right to me - honestly wouldn't know where to start recommending fonts and stuff.

Comment author: Mass_Driver 07 August 2015 06:38:05PM 1 point [-]

Again, fair point -- if you are reading this, and you have experience designing websites, and you are willing to donate a couple of hours to build a very basic website, let us know!

Comment author: CCC 06 August 2015 08:17:59AM 2 points [-]

I agree with Dorikka - that banner image is, well, not the best. I did not even notice that the workshop was flooded until I saw you point it out in this post; I thought it merely had a shiny floor and a low workbench (and took no particular notice of either detail).

If I may make a recommendation, I would suggest a mostly-black banner, with a few stars (i.e. a view of space) with, on the far right, a picture of Earth blowing up (something along the lines of this image - though, of course, not exactly that image because of copyright, but along those lines).

Have the text white, in one image, with a transparent background, left-aligned; and the space/Earth image as a different image behind it, right-aligned; then your banner will still look good on any screen resolution.

I think that would make a good, attention-grabbing banner.

Comment author: Mass_Driver 07 August 2015 06:36:48PM 2 points [-]

Sounds good to me. I'll keep an eye out for public domain images of the Earth exploding. If the starry background takes up enough of the image, then the overall effect will probably still hit the right balance between alarm and calm.

A really fun graphic would be an asteroid bouncing off a shield and not hitting Earth, but that might be too specific.
