All of Saul Munn's Comments + Replies

I really enjoy this post, for two reasons: as a slice out of the overall aesthetic of the Bay Area Rationalist; and, as an honest-to-goodness reference for a number of things related to good interior decorating.

I'd enjoy seeing other slices of anthropology on the Rationalist scene, e.g. about common verbal tics ("this seems true" vs "that seems true," or "that's right," or "it wouldn't be crazy"), or about some element of history.

"The ants and the grasshopper" is a beautifully written short fiction piece that plays around with the structure and ending of the classic Aesop fable: the ants who prepare for winter, and the grasshopper who does not.

I think there's often a gap between how one thinks through the implications that a certain decision process would have on various difficult situations in the abstract, and how one actually feels while following through (or witnessing others follow through). It's pretty easy to point at that gap's existence, but pretty hard to reason well abou... (read more)

MCE is a clear, incisive essay. Much of it clarified thoughts I already had, but framed them in a more coherent way; the rest straightforwardly added to my process of diagnosing interpersonal harm. I now go about making sense of most interpersonal issues through its framework. 

Unlike Ricki/Avital, I haven't found much use for its terminology with others, though I often come to internal conclusions generated by explicitly using its terminology, then communicate those conclusions in more typical language. I wouldn't be surprised if I found greater ... (read more)

ohh, this is great — agreed on all fronts. thanks shri!

The numbers I have in my Anki deck, selected for how likely I am to find practical use of them:

  • total # hours in a year — 8760
  • ${{c1::200}}k/year = ${{c2::100}}/hour
  • ${{c1::100}}k/year = ${{c2::50}}/hour
  • # of hours in a working year — 2,000
  • miles per time zone — ~1,000 miles
  • california top-to-bottom — 900 miles
  • US coast-to-coast — 3,000 miles
  • equator circumference — (before you show the answer, i always find it fun that i can quickly get an approximation by multiplying the # of time zones by the # of miles per time zone!) :::25,000::: miles
  • US GDP in 2022 — $
... (read more)
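the salary and geography cards above are just quick arithmetic — here's a minimal sketch of the back-of-envelope math (the constants are the rounded values from my cards, not exact figures):

```python
# Back-of-envelope math behind a few of the cards above.
# Constants are the rounded card values, not exact figures.

WORKING_HOURS_PER_YEAR = 2_000   # ~50 weeks * 40 hours
MILES_PER_TIME_ZONE = 1_000      # rough average at mid-latitudes
TIME_ZONES = 24

def salary_to_hourly(annual_salary: float) -> float:
    """Approximate hourly rate from an annual salary."""
    return annual_salary / WORKING_HOURS_PER_YEAR

print(salary_to_hourly(200_000))  # 100.0  ($200k/year ~ $100/hour)
print(salary_to_hourly(100_000))  # 50.0   ($100k/year ~ $50/hour)

# equator ~ time zones * miles per time zone
print(TIME_ZONES * MILES_PER_TIME_ZONE)  # 24000 (card rounds to ~25,000)
```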
Answer by Saul Munn

memento — shows a person struggling to figure out the ground truth; figuring out to whom he can defer (including different versions of himself); figuring out what his real goals are; etc.

hmm, that's fair — i guess there's another, finer distinction here between "active recall" and chaining the mental motion of recalling something to some triggering mental motion. i usually think of "active recall" as the process of:

  • mental-state-1
  • ~stuff going on in your brain~
  • mental-state-2

over time, you build up an association between mental-state-1 and mental-state-2. doing this with active recall looks like being shown something that automatically triggers mental-state-1, then being forced to actively recall mental-state-2.
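a minimal sketch of how i think the two pieces come apart (hypothetical names, toy scheduling rule — not anyone's actual algorithm): spaced repetition is the rule for *when* you review; active recall is the retrieval step *at* review time.

```python
# spaced repetition = *when* you review (a scheduling rule);
# active recall = *how* you review (retrieve mental-state-2 before seeing it).

def next_interval_days(last_interval_days: int, recalled: bool) -> int:
    """toy spaced-repetition rule: double the gap on success, reset on failure."""
    return max(1, last_interval_days * 2) if recalled else 1

def active_recall(attempt: str, answer: str) -> bool:
    """the retrieval step: compare what you actively recalled to the answer."""
    return attempt.strip().lower() == answer.strip().lower()

interval = 1
for success in [True, True, False, True]:
    interval = next_interval_days(interval, success)
print(interval)  # intervals went 2 -> 4 -> 1 -> 2
```

note that you could run either piece without the other — review on a fixed schedule without retrieval, or retrieve with no schedule at all — which is the whole point of keeping the terms separate.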

with names/faces, i think th... (read more)

Saul Munn

Active Recall and Spaced Repetition are Different Things

EDIT: I've slightly edited this and published it as a full post.

Epistemic status: splitting hairs.

There’s been a lot of recent work on memory. This is great, but popular communication of that progress consistently mixes up active recall and spaced repetition. That kept bugging me — hence this piece.

If you already have a good understanding of active recall and spaced repetition, skim sections I and II, then skip to section III.

Note: this piece doesn’t meticulously cite sources, and will probably... (read more)

PhilosophicalSoul
Glad somebody finally made a post about this. I experimented with the distinction in my trio of posts on photographic memory a while back.
Parker Conley
Useful clarification and thanks for writing this up! Inspired by and building on this, I decided to clean up some thoughts of my own in a similar direction. Here they are on my short forum: What are the actual use cases of memory systems like Anki?

is there a handy label for “crux(es) on which i’m maximally uncertain”? there are infinite cruxes that i have for any decision, but the ones i care about are the ones about which i’m most uncertain. it'd be nice to have a reference-able label for this concept, but i haven't seen one anywhere.

there's also an annoying issue that feels analogous to "elasticity" — how much does a marginal change in my doxastic attitude toward some crux affect my conative attitude toward the decision?

if no such concepts exist for either, i'd propose: crux uncertainty, crux elasticity (respectively)

Shri Samson
Crux elasticity might be better phrased as 'crux sensitivity'. There is a large literature on Sensitivity Analysis, which gets at how much a change in a given input changes an output. I'd wager saying 'my most sensitive crux is X' gets the meaning across with less explanation, whereas elasticity requires some background econ knowledge.
Dagon
I've sometimes used "crux weight" for a related but different concept - how important that crux is to a decision. I'd propose "crux belief strength" for your topic - that part of it fits very well into a Bayesian framework for evidence. Most decisions (for me, as far as I can tell) are overdetermined - there are multiple cruxes, with different weights and credences, which add up to more than 51% "best". They're inter-correlated, but not perfectly, so it's REALLY tricky to be very explicit or legible in advance about what would actually change my mind.

I wish more LW users had Patreons linked to from their profiles/posts. I would like people to have the option of financially supporting great writers and thinkers on LessWrong.

is this something you’ve considered building into LW natively?

TsviBT
https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods
TsviBT
DM'd

gotcha. what would be the best way to send you feedback? i could do:

  • comments here
  • sent directly to you via LW DM, email, [dm through some other means] or something else if that's better

(while it's top-of-mind: the feedback that generated this question was that the chat interface pops up every single time i open a tab of LW, including every time i open a post in a new tab. this gets really annoying very quickly!)

Ruby
Cheers! Comments here are good, so is LW DM, or Intercom.

great! how do i access it on mobile LW?

Ruby
Not available on mobile at this time, I'm afraid.

i’d love access! my guess is that i’d use it like — elicit:research papers::[this feature]:LW posts

solved: i think you mean it as this wikipedia article describes:

The word albatross is sometimes used metaphorically to mean a psychological burden (most often associated with guilt or shame) that feels like a curse.

Elizabeth
Correct. I basically meant people who can only attend college with a lot of debt, and won't obviously have a career that makes it an easy burden, but didn't want to go on a time-consuming tangent about the conditionals.

what do you mean by "financial albatross"?

Domain: Various: Startups, Events, Project Management, etc

Link: Manifund, Manifold, and Manifest {2023, 2024: meeting notes, docs, budget}

Person: Various: generally, the Manifold, Manifund, and Manifest teams

Why: This isn't a video, but it's probably relevantly close. All Manifold-sphere things are public — all meeting notes, budgets/finances, strategy docs, etc. I think that someone could learn a lot of tacit knowledge about how the Manifold-sphere teams work by skimming e.g. our meeting notes docs, which are fairly comprehensive/extensive.

this interview between alexey guzey and dwarkesh patel gets into it a bit!

Saul Munn

i only spent a few minutes browsing, but i thought this was surprisingly well-made!

…when I say “integrity” I mean something like “acting in accordance with your stated beliefs.” Where honesty is the commitment to not speak direct falsehoods, integrity is the commitment to speak truths that actually ring true to yourself, not ones that are just abstractly defensible to other people. It is also a commitment to act on the truths that you do believe, and to communicate to others what your true beliefs are.

your handle of “integrity” seems to point at something quite similar to the thing at which joe carlsmith’s handle of “sincerity” (https... (read more)

habryka
Yeah, the two sure share some structure, though IDK, I feel like I would use the terms in practice differently. I like it as a different lens on a bunch of the same underlying dynamics and problems.

I associate the idea of "integrity" with a certain kind of hardness. An unwillingness to give into external pressure, or to give into local convenience. "Sincerity" feels softer, more something that you do when you are alone or in an environment that is safe.

If I had to treat these as separate (though they seem certainly heavily overlapping), I would say something like "integrity is being able to speak what you truly believe even when facing opposition" and "sincerity is being able to sense what you truly believe", but idk, it's not perfect.

ah, lovely! maybe add that link as an edit to the top-level shortform comment?

fourthed. oli, do you intend to post this?

if not, could i post this text as a linkpost to this shortform?

habryka
It's long been posted! Integrity and accountability are core parts of rationality 

i quite liked this post. thanks!

Saul Munn

i'll give two answers, the Official Event Guidelines and the practical social environment.[1] i will say that i have a bit of a COI in that i'm an event organizer; it'd be good for someone who isn't organizing the event, but who e.g. attended the event last year, to either second my thoughts or give their own.

  1. Official Event Guidelines
    1. Unsafe drug use of any kind is disallowed and strongly discouraged, both by the venue and by us.
    2. Illegal drug use is disallowed and strongly discouraged, both by the venue and by us.
    3. Alcohol use during the event is discoura
... (read more)
Saul Munn

thanks oli, and thanks for editing mine! appreciate the modding <3

Saul Munn

thanks for writing this — also, some broad social encouragement for the practice of doing quick/informal lit reviews + posting them publicly! well done :)

Saul Munn

that is... wild. thanks for sharing!

I think institutional market makers are basically not pricing [slow takeoff, or the expectation of one] in

why do you think they're not pricing this in?

lc
  • The market makers don't seem to be talking about it at all, and conversations I have with e.g. commodities traders say the topic doesn't come up at work. Nowadays they talk about AI, but in terms of its near-term effects on automation, not to figure out if it will respect their property rights or something.

  • Large public AI companies like NVDA, which I would expect to be priced mostly based on long-run projections of AI usage, have been consistently bid up after earnings, as if the stock market is constantly readjusting their expectations of AGI takeof

... (read more)

Really love this post. Thanks for writing it!

thanks for the feedback on the website. here's the explanation we gave on the manifest announcement post:

In the week between LessOnline and Manifest, come hang out at Lighthaven with other attendees! Cowork and share meals during the day, attend casual workshops and talks in the evening, and enjoy conversations by the (again, literal) fire late into the night.

Summer Camp will be pretty lightweight: we’ll provide the space and the tools, and you & your fellow attendees will bring the discussions, workshops, tournaments, games, and whatever else you’re e

... (read more)
imoatama
Do you have a sense for what the week of SummerCamp will be like? I have taken the week off from my (completely remote-friendly) job; do you think there'll be enough going on that this is the right move (vs spending some time working)? Or is it really hard to say until we're there?

For a bit more context, I am flying in from Australia for LessOnline/SummerCamp/Manifest. I'll be staying at Lighthaven for the whole period of the three events. I have enough leave to take the whole period, but I try to use my leave as parsimoniously as possible, so if coworking from Lighthaven for some of the week is possible whilst still soaking up the summer camp vibes / feeling present, I'd probably do that.

I really enjoyed this — thank you for writing. I also think the updated version is a lot better than the previous version, and I appreciate the work you put in to update it. I'm really, really looking forward to the other posts in this sequence.

I'd also really enjoy a post that's on this exact topic, but one that I'd feel comfortable sending to my mom or something, cf "Broad adverse selection (for poets)."

  1. the name of this post was really confusing for me. i thought it would be about "how to stop defeating akrasia," not "how to defeat akrasia by stopping." consider renaming it to be a bit more clear?
  2. the part at the end really reminded me of this piece by dr maciver: https://notebook.drmaciver.com/posts/2022-12-20-17:21.html

+1 on Things You're Allowed To Do, it's really really great

Answer by Saul Munn

here are some specific, random, generally small things that i do quite often:

  • sit on the floor. i notice myself wanting to sit, and i notice myself lacking a chair. fortunately, the floor is always there.
  • explicitly babble! i babble about thoughts that are bouncing around in my head, no matter the topic! open a new doc — docs.new works well, or whatever you use — set a 5 minute timer, and just babble. write whatever comes to mind.
  • message effective/competent people to cowork with them. i'm probably not the most effective/competent person you know, but feel fr
... (read more)

If you and your audience have smartphones, we suggest making use of a copy of this spreadsheet and google form.

are "spreadsheet" and "google form" meant to be linked to something?

I think a lot of what I write for rationalist meetups would apply straightforwardly to EA meetups.

agreed. this sort of thing feels completely missing from the EA Groups Resources Centre, and i'd guess it would be a big/important contribution.

This may be a silly question, but- how does cross posting usually work?

iirc, when you're publishing a post on {LessWrong, the EA forum}, one of the many settings at the bottom is "Cross-Post to {the EA forum, LessWrong}," or something along those lines. there's some karma requirement for both the EA forum and for LW — ... (read more)

This is great! Have you cross-posted this to the EA Forum? If not, may I?

Screwtape
I have not. There's no particular reason why, other than I tend to view myself as a noncentral EA and so hang out more in LW and ACX spaces. I think a lot of what I write for rationalist meetups would apply straightforwardly to EA meetups.

This may be a silly question, but- how does cross posting usually work? I have a bit of a preference to stick my handle or name on things I write, so maybe I should make an account over on EA forum. It sounds like you spend more time over there; are there norms on EA forum around, say, pseudonyms and real names, or being a certain amount aligned with EA?

Thanks for the response!

Re: concerns about bad incentives, I agree that you can depict the losses associated with manipulating conditional prediction markets as paying a "cost" — even though you'll probably lose a boatload of money, it might be worth it to lose a boatload of money to manipulate the markets. In the words of Scott Alexander, though:

If you’re wondering why people aren’t going to get an advantage in the economy by committing horrible crimes, the answer is probably the same combination of laws, ethics, and reputational concerns that works every

... (read more)
lunatic_at_large
So I've been thinking a little more about the real-world-incentives problem, and I still suspect that there are situations in which rules won't solve this. Suppose there's a prediction market question with a real-world outcome tied to the resulting market probability (i.e. a relevant actor says "I will do XYZ if the prediction market says ABC"). Let's say the prediction market participants' objective functions are of the form play_money_reward + real_world_outcome_reward. If there are just a couple of people for whom real_world_outcome_reward is at least as significant as play_money_reward, and if you can reliably identify those people (i.e. the people with a meaningful conflict of interest), then you can create rules preventing them from betting on the prediction market.

However, I think that there are some questions where the number of people with real-world incentives is large and/or it's difficult to identify those people with rules. For example, suppose a sports team is trying to determine whether to hire a star player, and they create a prediction market for whether the star athlete will achieve X performance if hired. There could be millions of fans of that athlete all over the world who would be willing to waste a little prediction market money to see that player get hired. It's difficult to predict who those people are without massive privacy violations -- in particular, they have no publicly verifiable connection to the entities named in the prediction market.

Thanks for the response!

This applies to roughly the entire post, but I see an awful lot of magical thinking in this space.

Could you point to some specific areas of magical thinking in the post? and/or in the space?[1] (I'm not claiming that there aren't any, I definitely think there are. I'm interested to know where I & the space are being overconfident/thinking magically, so that I/it can do less magical thinking.)

What is the actual mechanism by which you think prediction markets will solve these problems?

The mechanism that Manifold Love uses. In... (read more)

SimonM
This post triggered me a bit, so I ended up writing one of my own. I agree the entire thing is about how to subsidise the markets, but I think you're overestimating how good markets are as a mechanism for subsidising forecasting (in general). Specifically for your examples:

  1. Direct subsidies are expensive relative to the alternatives (the point of my post)
  2. Hedging doesn't apply in lots of markets, and in the ones where it does make sense, those markets already exist (e.g. insurance)
  3. New traders is a terrible idea, as you say. It will work in some niches (e.g. where there's lots of organic interest), but it won't work at scale for important things