brook

https://forum.effectivealtruism.org/users/brook

Comments

Answer by brook · Aug 25, 2023 · 10

Quick thoughts:

I'd say it looks a shade long, but I could well be wrong about how long a survey people will actually complete. Some suggestions for cutting it down a little:

  • Questions 2-4 in section 1 seem somewhat redundant with one another to me (i.e. you could probably have just one or at most two of them). 
  • The list in question 1 (section 2) seems too long to ask people to rate in full. Could you drop a few items? (I'm thinking "self-help" is too broad, epistemics & uncertainty could maybe be merged, etc.). 

You might also want to ask people to rate different parts of the course (moderation, content, structure, etc.) so you have an idea of what needs improving.

Overall, looks good! Thanks for running the project; I strongly believe that evaluation is a really important part of any course/organisation/whatever. 

brook · 1y · 20

"This is an internal document written for the LessWrong/Lightcone teams. I'm posting as "available by link only" post to share in a limited way, because I haven't reviewed the post for making sense to a broader audience, or thoroughly checked for sensitive things."


This post appears in search results and to people who have followed you on LW. I didn't read it, but you may want to take it down if that visibility is unwanted. 

brook · 1y · 10

ShareX does look like a more powerful version (for some use-cases)! I think the key benefits of Loom are its extreme ease of use & its automatic upload of the video, which makes sharing feel very streamlined. 

Unfortunately, I'm on macOS currently, so I can't test ShareX myself. 

brook · 1y · 30

Really great post! The concept I have in my head looks broadly applicable, though slippery.

The section below sounded a lot to me like "you form a model from a set of words, and then later on you Directly Observe the Territory™, and this shifts the mental model associated with the words in an important way". 

Running on this model, I think a lot of the Sequences were like this for me: it wasn't until 1-2 years after reading them that I noticed concrete, major changes in my behaviour. Possibly this time was spent observing the part of the territory I call my brain.

"But in fact, there really is a kind of deeper, fuller, contextualized understanding, a kind of getting-it-in-your-bones, that often doesn’t show up until later. Because when you first hear the wisdom, it doesn’t really matter to you. You’re usually not in the sort of situation where the wisdom applies, so it’s just this random fact floating around in your brain.

Often, it’ll be years later, and you’ll be in the middle of a big, stressful situation yourself, and that little snippet of wisdom will float back up into your thoughts, and you’ll go “ohhhhhhhh, so that’s what that means!”

You already knew what it meant in a sort of perfunctory, surface-level, explicit sense, but you didn't really get it, on a deep level, until there was some raw experiential data for it to hook up to."

brook · 1y · 30

Thanks for running this survey! I'm looking to move into AI alignment, and this is a useful aggregation of recommendations from professionals and from other newcomers. I was already focussing on AGISF, but it's useful to see that many of the resources advertised as 'introductory' on the Alignment Forum (e.g. the Embedded Agency sequence) are not rated as very useful. 

I was also surprised that conversations with researchers ranked quite low as a recommendation for newcomers, but I guess it makes sense that most alignment researchers are not as good at 'interpreting' research as e.g. Rob Miles or Richard Ngo. 

brook · 1y · 10

I think "speech of appropriate thought-like-ness" is, unfortunately, wildly contextual. I would have predicted that the precise, lengthy take would go down well on LW, and especially with ACX readers. This specific, causal, gears-level type of explanation is common and accepted here, but for audiences that aren't expecting it, it can be jarring and derail a discussion. 

Similarly, many audiences are not curious about the subject! Appropriate is the operative word. Sometimes it will be appropriate to gloss over details either because the person is not likely to be interested (and will tune out lengthy sentences about causal models of how doctors behave), or because it's non-central to the discussion at hand. 

For instance, if I were chatting to a friend with a medical (but non-rationalist) background about marijuana legalisation, the lengthy take is probably unwise; benzodiazepines are only peripherally relevant to the discussion, and the gears-level take easily leads us into one of several rabbit holes (Are they actually unlikely to cause withdrawal symptoms? What do you mean by unlikely? Does psychological addiction mean precisely that? Is that why those guidelines exist? Why are you modelling doctors in this way at all, and is that useful? Should I be using gears-level models?).

Any of these questions can lead to a fruitful discussion (especially the last few!), but if you have a specific reason to keep the discussion on track, I would save gears-level explanations for cruxes and the like. 

brook · 1y · 10

This is good for some formats. In verbal communication, though, I like to track this because the key variable I'm optimising for is listener attention/time, and giving both loses a lot of it. I find it can be useful to save the gears-level stuff for the cruxes and keep the rest brief.

brook · 1y · 100

I strongly agree with johnswentworth's point! I think my most productive discussions have come from a gears-level/first-example style of communication. 

What I'm arguing in this post is very much not that this communication style is bad. I'm arguing that many people will stop listening if you jump straight to this, and you should explicitly track this variable in your head when communicating. 

Obviously 'know your audience and adjust complexity appropriately' is quite a trivial point, but thinking about it with a 'thought-like-ness' frame helps me actually implement it, by asking "how much translating do I need to do for this audience?" 

Maybe I should rewrite the post as "Gears in Conversation" or something similar.

brook · 2y · 140

I think it's good to experiment, but I actually found the experience of being on the site over the last week pretty unpleasant, and I've definitely spent much less time here. I initially went through some old ideas I had and tried posting one, but ended up just avoiding LessWrong until the end of the week. 

I'm not totally sure right now why I felt this way. Something like: I'm very sensitive to feeling that my normal motivation system is being hijacked? I spent all of my time thinking about the best way to act differently given GHW, rather than just reading the content and enjoying it. This was pretty uncomfortable for me. 

brook · 2y · 50

I'm sure this happens in many areas (maths, for one), but medical language is a pretty well-optimised system I know well. You might like to use it for inspiration:

Medicine: "72yo F BIBA with 3/7 hx SOB, CP. Chest clear, HS I+II+0. IMP: IECOPD"

English: 72-year-old woman brought in by ambulance because she's been short of breath and had chest pain for the past 3 days. No added sounds were audible over her lungs with a stethoscope, and both of her heart sounds were clearly audible with no added sounds. My impression is that this is most likely an infective exacerbation of her chronic obstructive pulmonary disease.
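For concreteness, the shorthand is regular enough to write down as a lookup table. Here's a minimal, purely illustrative sketch in Python (the SHORTHAND table and expand helper are made-up names, and the expansions are just the ones spelled out above):

```python
# Purely illustrative: the shorthand -> English mapping from the example above.
SHORTHAND = {
    "72yo F": "72-year-old woman",
    "BIBA": "brought in by ambulance",
    "3/7": "3 days",  # x/7 = days, x/52 = weeks, x/12 = months
    "hx": "history of",
    "SOB": "shortness of breath",
    "CP": "chest pain",
    "Chest clear": "no added sounds over the lungs",
    "HS I+II+0": "heart sounds I and II present, no added sounds",
    "IMP": "impression (working diagnosis)",
    "IECOPD": "infective exacerbation of COPD",
}

def expand(note: str) -> str:
    """Naively expand a shorthand note, longest abbreviation first."""
    for abbr in sorted(SHORTHAND, key=len, reverse=True):
        note = note.replace(abbr, SHORTHAND[abbr])
    return note

print(expand("72yo F BIBA with 3/7 hx SOB, CP."))
# -> 72-year-old woman brought in by ambulance with 3 days history of shortness of breath, chest pain.
```

Expanding longest-first is just a safeguard so shorter keys never match inside longer ones. The real optimisation, of course, is in the notation itself: each token compresses a whole clinical phrase.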
