Timothy Telleen-Lawton

Independent research in inspiring and scaling collective intelligence

Previously Head of Procurement at Anthropic, starting 2021
Executive Director at CFAR, starting 2018; Productivity Coach in 2017; Senior Research Analyst and Operations Associate at GiveWell 2013-2017

Comments
[I have only read Elizabeth’s comment that I’m responding to here (so far); apologies if it would have been less confusing for me to read the entire thread before responding.]

I have always capitalized both EA and Rationality, and have never thought about it before. The first justification for capitalizing R that comes to mind is all the intentionality/intelligence that I perceive was invested into the proto-“AI Safety” community under EY’s (and others’) leadership. Isn’t it fair to describe the “Rationalist/Rationality” community as the branch of AI Safety/X-risk that is downstream of MIRI, LW, the Sequences, HPMOR, etc?

[I’m coming back to the comments in this post now and feeling grateful for all the engagement with our podcast. Apologies for being very low engagement on LW for the last decade.]

I just (re-?)noticed that I never addressed your curiosity about machine editing.


Yes, we made heavy use of an automatic transcript/audio editing tool. We can get you the name of it if desired, though I recommend reaching out to me offline if I don’t reply here sufficiently quickly to any follow-up questions you or anyone might have.

Your curiosity helps me realize that I think we should consider flagging all use of automatic tools even if they don’t make use of AI @Elizabeth (I’m not even sure if this tool does or doesn’t 😬).

One of Elizabeth’s and my ongoing curiosities is what the best format is for optimizing clarity/transparency, completeness of context, and ease of engagement (among other things). My current best guess is that we will consider publishing at least two versions of each podcast, at least once we have sufficient editorial capacity:

  1. “Complete” unedited audio with automatically generated transcript—ideally with some cursory review for accuracy, including (in my favorite world, which would require more talent than I currently have) a community transcript-editing process so that people like you could make these changes (and add more footnotes) directly, as on Wikipedia. (“Complete” is in quotes ‘only’ because it feels impossible to include all context in EVN/my conversations unless we record all of our interactions, which is rough because she’s become one of my top 5 friends these days.)
  2. Easy edit for fast publishing, without all the tricky bits that feel like they need more context to publish in a non-confusing way.
  3. Pithy version to optimize productive engagement (without sacrificing integrity, obvi). Ideally makes heavy use of footnotes and links that link everywhere including to the relevant portion of the full transcript. If we had this I imagine it should be the most visible/clickable version.
  4. Crowdsourced Dank EA Memes tournament based on clips? (See caption contests for a fun version of this.)
  5. Other (please suggest)


[Additional procedural comment: In service of my own learning and increasing my engagement with LW, I’m going to try to “lean in” to writing/publishing fast at the risk of falling on my face a couple times. Thank you for your patience.]

Awesome, thank you! I'm not sure if we're going to correct this; it's a pain in the butt to fix, especially in the YouTube version, and Elizabeth (who has been doing all the editing herself) is sick right now.

The group I played with (same as Mark Xu's group from comment above) decided that "S2 counting is illegal (you have to let your gut 'feel' the right amount of time)" and "repeating some elaborate ritual that takes the same amount of time before your card is due is illegal" (e.g. you can stick your hand 10% of the way towards the pile when the number's 10 off from your card, and 50% of the way when it's 5 off.)

Metaphors We Live By by George Lakoff — Totally changed the way I think about language and metaphor and frames when I read it in college. Helped me understand that there are important kinds of knowledge that aren't explicit.

What I get from Duncan’s FB post is (1) an attempt to disentangle his reputation from CFAR’s after he leaves, (2) a prediction that things will change due to his departure, and (3) an expression of frustration that more of his knowledge than necessary will be lost.

  1. It's a totally reasonable choice.
  2. At the time I first saw Duncan’s post, I was more worried about big changes to our workshops from losing Duncan than what I have actually observed since then. A year later, I think the change is less than one would expect from reading Duncan’s post alone. That doesn’t speak to the cost of not having Duncan—filling in for his absence means we have less attention to spend on other things, and I believe some things Duncan brought have not been replaced.
  3. I am also sad about this, and believe that I was the person best positioned to have caused a better outcome (smaller loss of Duncan’s knowledge and values). In other words I think Duncan’s frustration is not only understandable, but also pointing at a true thing.

All of these answers so far (Luke, Adam, Duncan) resonate for me.

I want to make sure I’m hearing you right though, Duncan. Putting aside the ‘yes’ or ‘no’ of the original question, do the scenes/experiences that Luke and Adam describe match what you remember from when you were here?

Agreed I wouldn’t take the ratanon post too seriously. For another example, I know from living with Dario that his motives do not resemble those ascribed to him in that post.

+1 (I'm the Executive Director of CFAR)

What do you recommend if good data is too costly to collect?

I think that if someone has made a claim but failed to use good data or an empirical model, it should not require good data or an empirical model to convince that person that they were wrong. Great if you have it, but I'm not going to ignore an argument just because it fails to use a model.
