
Comment author: TheAncientGeek 15 December 2016 11:02:54AM *  2 points [-]

If we live in a world where the discoverer of the planning fallacy can fall victim to it, we live in a world where teachers of rationality fail to improve anyone's rationality skills.

Comment author: owencb 15 December 2016 11:51:06AM 0 points [-]

This conclusion is way too strong. To give just one example: there's a big space of possibilities where discovering the planning fallacy in fact makes you less susceptible to it, but not immune.

Comment author: owencb 11 December 2016 01:24:27PM 12 points [-]

I don't know who the intended audience for this is, but I think it's worth flagging that it seemed extremely jargon-heavy to me. I expect this to be off-putting to at least some people you actually want to attract (if it were one of my first interactions with CFAR I would be less inclined to engage again). In several cases you link to explanations of the jargon. This helps, but doesn't really solve the problem that you're asking the reader to do a large amount of work.

Some examples from the first few paragraphs:

  • clear and unhidden
  • original seeing
  • original making
  • existential risk
  • informational content [non-standard use]
  • thinker/doer
  • know the right passwords
  • double crux
  • outreach efforts

Comment author: owencb 11 December 2016 01:14:42PM 5 points [-]

I found this document kind of interesting, but it felt less like what I normally understand as a mission statement, and more like "Anna's thoughts on CFAR's identity". I think there's a place for the latter, but I'd be really interested in seeing (a concise version of) the former, too.

If I had to guess right now I'd expect it to say something like:

We want to develop a community with high epistemic standards and good rationality tools, at least part of which is devoted to reducing existential risk from AI.

... but I kind of expect you to think I have the emphasis there wrong in some way.

Comment author: owencb 11 December 2016 01:00:05PM 6 points [-]

I like your (A)-(C), particularly (A). This seems important, and something that isn't always found by default in the world at large.

Because it's somewhat unusual, I think it's helpful to give strong signals that this is important to you. For example I'd feel happy about it being a core part of the CFAR identity, appearing in even short statements of organisational mission. (I also think this can help organisation insiders to take it even more seriously.)

On (i), it seems clearly a bad idea for staff to pretend they have no viewpoints. And if the organisation has viewpoints, it's a bad idea to hide them. I think there is a case for keeping organisational identity small -- not taking views on things it doesn't need views on. Among other things, this helps to make sure that it actually delivers on (A). But I thought the start of your post (points (1)-(4)) did a good job of explaining why there are in fact substantive benefits to having an organisational view on AI, and I'm more supportive of this than before. I still think it is worth trying to keep organisational identity relatively small, and I'm still not certain whether it would be better to have separate organisations.

In response to comment by owencb on Be secretly wrong
Comment author: Benquo 10 December 2016 07:23:51PM *  5 points [-]

I think this clarifies an important area of disagreement:

I claim that there are lots of areas where people have implicit strong beliefs, and it's important to make those explicit to double-check. Credences are important for any remaining ambiguity, but for cognitive efficiency, you should partition off as much as you can as binary beliefs first, so you can do inference on them - and change your mind when your assumptions turn out to be obviously wrong. This might not be particularly salient to you because you're already very good at this in many domains.

This is what I was trying to do with my series of blog posts on GiveWell, for instance - partition off some parts of my beliefs into a disjunction I could be confident enough in to treat as a set of beliefs I could reason logically about. (For instance, Good Ventures has either increasing, diminishing, or constant returns to scale at its given endowment.) What remains is substantial uncertainty about which branch of the disjunction we're in, and that should be parsed as a credence - but scenario analysis requires crisp scenarios, or at least crisp axes to simulate variation along.

Another way of saying this is that from many epistemic starting points it's not even worth figuring out where you are in credence-space on the uncertain parts, because examining your comparatively certain premises will lead to corrections that fundamentally alter your credence-space.

In response to comment by Benquo on Be secretly wrong
Comment author: owencb 10 December 2016 11:32:40PM 1 point [-]

This was helpful to me, thanks.

I think I'd still endorse a bit more of a push towards thinking in credences (where you're at a threshold of that being a reasonable thing to do), but I'll consider further.

Comment author: AnnaSalamon 10 December 2016 06:32:31PM 1 point [-]
Comment author: owencb 10 December 2016 11:28:25PM *  1 point [-]

Thanks. I'll dwell more on these. Quick thoughts from a first read:

  • I generally liked the "further discussion" doc.
  • I do think it's important to strongly signal the aspects of cause neutrality that you do intend to pursue (as well as pursuing them). These are unusual and important.
  • I found the mission statement generally opaque and extremely jargony. I think I could follow what you were saying, but in some cases this required a bit of work and in some cases I felt like it was perhaps only because I'd had conversations with you. (The FAQ at the top was relatively clear, but an odd thing to lead with.)
  • I was bemused by the fact that there didn't appear to be a clear mission statement highlighted anywhere on the page!

ETA: Added some more in-depth comments on the relevant comment threads: here on "further thoughts", and here and here on the mission statement.

Comment author: AnnaSalamon 10 December 2016 08:02:59AM 3 points [-]

Thanks for the thoughts; I appreciate it.

I agree with you that framing is important; I just deleted the old ETA. For anyone interested, it used to read:

ETA: Having talked just now to people at our open house, I would like to clarify: Even though our aim is explicitly AI Safety...
CFAR does still need an art of rationality, and a community of rationality geeks that supports that. We will still be investing at least some in that community. We will also still be running some "explore" workshops of different sorts aiming at patching gaps in the art (funding permitting), not all of which will be deliberately and explicitly backchained from AI Safety (although some will). Play is generative of a full rationality art. (In addition to sometimes targeting things more narrowly at particular high-impact groups, and otherwise more directly backchaining.) (More in subsequent posts.)

I'm curious where our two new docs leave you; I think they make clearer that we will still be doing some rationality qua rationality.

Will comment later re: separate organizations; I agree this is an interesting idea; my guess is that there isn't enough money and staff firepower to run a good standalone rationality organization in CFAR's stead, and also that CFAR retains quite an interest in a standalone rationality community and should therefore support it... but I'm definitely interested in thoughts on this.

Julia will be launching a small spinoff organization called Convergence, facilitating double crux conversations between EAs and EA-adjacent people in, e.g., tech and academia. It'll be under the auspices of CFAR for now but will not have opinions on AI. I'm not sure if that hits any of what you're after.

Comment author: owencb 10 December 2016 03:49:55PM *  4 points [-]

Thanks for engaging. Further thoughts:

I agree with you that framing is important; I just deleted the old ETA.

For what it's worth, I think even without saying that your aim is explicitly AI safety, a lot of people reading this post will take that away unless you do more to cancel the implicature. Even the title does this! It's a slightly odd grammatical construction which looks an awful lot like "CFAR’s new focus: AI Safety"; I think without being more up-front about the alternative interpretation it will sometimes be read that way.

I'm curious where our two new docs leave you

Me too! (I assume that these have not been posted yet, but if I'm just failing to find them please let me know.)

I think they make clearer that we will still be doing some rationality qua rationality.

Great. Just to highlight that I think there are two important aspects of doing rationality qua rationality:

  • Have the people pursuing the activity have this as their goal. (I'm less worried about you failing on this one.)
  • Have external perceptions be that this is what you're doing. I have some concern that rationality-qua-rationality activities pursued by an AI safety org will be perceived as having an underlying agenda relating to that, and that this could e.g. make some people less inclined to engage than if the activities were run by a rationality org which has a significant project on AI safety.

my guess is that there isn't enough money and staff firepower to run a good standalone rationality organization in CFAR's stead

I feel pretty uncertain about this, but my guess goes the other way. Also, I think if there are two separate orgs, the standalone rationality one should probably retain the CFAR brand! (as it seems more valuable there)

I do worry about transition costs and losing synergies of working together from splitting off a new org. Though these might be cheaper earlier than later, and even if it's borderline right now whether there's enough money and staff to do both I think it won't be borderline within a small number of years.

Julia will be launching a small spinoff organization called Convergence

This sounds interesting! That's a specialised enough remit that it (mostly) doesn't negate my above concerns, but I'm happy to hear about it anyway.

In response to Be secretly wrong
Comment author: Benquo 09 December 2016 06:23:21PM *  3 points [-]

Claim 1: "Be wrong." Articulating your models and implied beliefs about the world is an important step in improving your understanding. The simple act of explicitly constraining your anticipations so that you'll be able to tell if you're wrong will lead to updating your beliefs in response to evidence.

If you want to discuss this claim, I encourage you to do it as a reply to this comment.

In response to comment by Benquo on Be secretly wrong
Comment author: owencb 10 December 2016 02:10:39PM 3 points [-]

I'm not sure exactly what you meant, so not ultimately sure whether I disagree, but I at least felt uncomfortable with this claim.

I think it's because:

  • Your framing pushes towards holding beliefs rather than credences in the sense used here.
  • I think it's generally inappropriate to hold beliefs about the type of things that are important and that you're likely to turn out to be wrong about. (Of course for boundedly rational agents it's acceptable to hold beliefs about some things as a time/attention-saving matter.)
  • It's normally right to update credences gradually as more evidence comes in. There isn't so much an "I was wrong" moment.

On the other hand I do support generating explicit hypotheses, and articulating concrete models.

Comment author: owencb 08 December 2016 02:45:30PM *  11 points [-]

I had mixed feelings towards this post, and I've been trying to process them.

On the positive side:

  • I think AI safety is important, and that collective epistemology is important for this, so I'm happy to know that there will be some attention going to this.
  • There may be synergies to doing some of this alongside more traditional rationality work in the same org.

On the negative side:

  • I think there is an important role for pursuing rationality qua rationality, and that this will be harder to do consistently under an umbrella with AI safety as an explicit aim. For example one concern is that there will be even stronger pressure to accept community consensus that AI safety is important rather than getting people to think this through for themselves. Since I agree with you that the epistemology matters, this is concerning to me.
  • With a growing community, my first inclination would be that one could support both organisations, and that it would be better to have something new focus on epistemology-for-AI, while CFAR in a more traditional form continues to focus more directly on rationality (just as Open Phil split off from GiveWell rather than replacing the direction of GiveWell). I imagine you thought about this; hopefully you'll address it in one of the subsequent posts.
  • There is potential reputational damage by having these things too far linked. (Though also potential reputational benefits. I put this in "mild negative" for now.)

On the confused side:

  • I thought the post did an interesting job of saying more reasonable things than the implicature. In particular I thought it was extremely interesting that it didn't say that AI safety was a new focus. Then in the ETA you said "Even though our aim is explicitly AI Safety..."

I think framing matters a lot here. I'd feel much happier about a CFAR whose aim was developing and promoting individual and group rationality in general and particularly for important questions, one of whose projects was focusing on AI safety, than I do about a CFAR whose explicit focus is AI safety, even if the basket of activities they might pursue in the short term would look very similar. I wonder if you considered this?

Comment author: AnnaSalamon 27 November 2016 09:11:52PM 6 points [-]

It seems to me that for larger communities, there should be both: (a) a central core that everyone keeps up on, regardless of subtopical interest; and (b) topical centers that build in themselves, and that those contributing to that topical center are expected to be up on, but that members of other topical centers are not necessarily up on. (So folks contributing to a given subtopical center should be expected to keep up with both that subtopic and the central canon.)

It seems to me that (a) probably should be located on LW or similar, and that, if/as the community grows, the number of posts within (a) can remain capped by some "keep up withable" number, with quality standards rising as needed.

Comment author: owencb 27 November 2016 10:39:09PM 3 points [-]

Your (a) / (b) division basically makes sense to me.[*] I think we're already at the point where we need this fracturing.

However, I don't think that the LW format makes sense for (a). I'd probably prefer curated aggregation of good content for (a), with fairly clear lines about what's in or out. It's very unclear what the threshold for keeping up on LW should be.

Also, I quite like the idea of the topical centres being hosted in the same place as the core, so that they're easy to find.

[*] A possible caveat is dealing with new community members nicely; I haven't thought about this enough so I'm just dropping a flag here.
