Independent research in inspiring and scaling collective intelligence
Previously Head of Procurement at Anthropic, starting in 2021
Executive Director at CFAR, starting in 2018; Productivity Coach in 2017; Senior Research Analyst and Operations Associate at GiveWell, 2013–2017
[I’m coming back to the comments in this post now and feeling grateful for all the engagement with our podcast. Apologies for being very low engagement on LW for the last decade.]
I just (re-?)noticed that I never addressed your curiosity about machine editing.
Yes, we made heavy use of an automatic transcript/audio editing tool. We can get you the name of it if desired, though I recommend reaching out to me offline if I don’t reply here sufficiently quickly to any follow-up questions you or anyone might have.
Your curiosity helps me realize that I think we should consider flagging all use of automatic tools, even ones that don’t use AI, @Elizabeth (I’m not even sure whether this tool does or doesn’t 😬).
One of Elizabeth’s and my ongoing curiosities is what format best optimizes clarity/transparency, completeness of context, and ease of engagement (among other things). My current best guess is that we will consider publishing at least two versions of each podcast, at least once we have sufficient editorial capacity:
[Additional procedural comment: In service of my own learning and increasing my engagement with LW, I’m going to try to “lean in” to writing/publishing fast at the risk of falling on my face a couple times. Thank you for your patience.]
Awesome, thank you! I'm not sure if we're going to correct this; it's a pain in the butt to fix, especially in the YouTube version, and Elizabeth (who has been doing all the editing herself) is sick right now.
The group I played with (same as Mark Xu's group from the comment above) decided that "S2 counting is illegal (you have to let your gut 'feel' the right amount of time)" and that "repeating some elaborate ritual that takes the same amount of time before your card is due is illegal" (e.g. sticking your hand 10% of the way toward the pile when the number's 10 off from your card, and 50% of the way when it's 5 off).
Metaphors We Live By by George Lakoff — Totally changed the way I think about language and metaphor and frames when I read it in college. Helped me understand that there are important kinds of knowledge that aren't explicit.
What I get from Duncan’s FB post is (1) an attempt to disentangle his reputation from CFAR’s after he leaves, (2) a prediction that things will change due to his departure, and (3) an expression of frustration that more of his knowledge than necessary will be lost.
All of these answers so far (Luke, Adam, Duncan) resonate for me.
I want to make sure I’m hearing you right though, Duncan. Putting aside the ‘yes’ or ‘no’ of the original question, do the scenes/experiences that Luke and Adam describe match what you remember from when you were here?
Agreed I wouldn’t take the ratanon post too seriously. For another example, I know from living with Dario that his motives do not resemble those ascribed to him in that post.
+1 (I'm the Executive Director of CFAR)
What do you recommend if good data is too costly to collect?
I think that if someone has made a claim without good data or an empirical model, it shouldn't require good data or an empirical model to convince them that they were wrong. Great if you have it, but I'm not going to ignore an argument just because it fails to use a model.
[I have only read Elizabeth’s comment that I’m responding to here (so far); apologies if it would have been less confusing for me to read the entire thread before responding.]
I have always capitalized both EA and Rationality, and I had never thought about it before. The first justification for capitalizing the R that comes to mind is all the intentionality and intelligence that I perceive was invested in the proto-“AI Safety” community under EY’s (and others’) leadership. Isn’t it fair to describe the “Rationalist/Rationality” community as the branch of AI Safety/X-risk that is downstream of MIRI, LW, the Sequences, 🪄HPMOR, etc.?