I really enjoy this post, for two reasons: as a slice of the overall aesthetic of the Bay Area Rationalist, and as an honest-to-goodness reference for a number of things related to good interior decorating.
I'd enjoy seeing other slices of anthropology on the Rationalist scene, e.g. about common verbal tics ("this seems true" vs "that seems true," or "that's right," or "it wouldn't be crazy"), or about some element of history.
"The ants and the grasshopper" is a beautifully written short fiction piece that plays around with the structure and ending of the classic Aesop fable: the ants who prepare for winter, and the grasshopper who does not.
I think there's often a gap between how one thinks through the implications that a certain decision process would have on various difficult situations in the abstract, and how one actually feels while following through (or witnessing others follow through). It's pretty easy to point at that gap's existence, but pretty hard to reason well abou...
MCE is a clear, incisive essay. Much of it clarified thoughts I already had, but framed them in a more coherent way; the rest straightforwardly added to my process of diagnosing interpersonal harm. I now go about making sense of most interpersonal issues through its framework.
Unlike Ricki/Avital, I haven't gotten much use out of its terminology with others, though I often come to internal conclusions generated by explicitly using its terminology and then communicate those conclusions in more typical language. I wouldn't be surprised if I found greater ...
ohh, this is great — agreed on all fronts. thanks shri!
The numbers I have in my Anki deck, selected for how likely I am to find practical use for them:
memento — shows a person struggling to figure out the ground truth; figuring out to whom he can defer (including different versions of himself); figuring out what his real goals are; etc.
hmm, that's fair — i guess there's another, finer distinction here between "active recall" and chaining the mental motion of recalling something to some triggering mental motion. i usually think of "active recall" as the process of:
over time, you build up an association between mental-state-1 and mental-state-2. doing this with active recall looks like being shown something that automatically triggers mental-state-1, then being forced to actively recall mental-state-2.
with names/faces, i think th...
EDIT: I've slightly edited this and published it as a full post.
Epistemic status: splitting hairs.
There’s been a lot of recent work on memory. This is great, but popular communication of that progress consistently mixes up active recall and spaced repetition. That mix-up bugged me — hence this piece.
If you already have a good understanding of active recall and spaced repetition, skim sections I and II, then skip to section III.
Note: this piece doesn’t meticulously cite sources, and will probably...
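To make the distinction concrete, here's a minimal sketch in Python. The scheduler below is a toy doubling-interval rule of my own, not SM-2 or anything Anki actually uses; the point is just that active recall is the retrieval step, while spaced repetition is the schedule on which you perform that step.

```python
from dataclasses import dataclass, field
import datetime

@dataclass
class Card:
    prompt: str             # shown to you; triggers mental-state-1
    answer: str             # what you must actively retrieve: mental-state-2
    interval_days: int = 1  # current gap between reviews
    due: datetime.date = field(default_factory=datetime.date.today)

def active_recall(card: Card) -> bool:
    """The retrieval step: see only the prompt, force yourself to recall, then check."""
    print(card.prompt)
    input("recall the answer, then press enter to reveal > ")
    print(f"answer: {card.answer}")
    return input("did you get it? [y/n] > ").strip().lower() == "y"

def schedule(card: Card, recalled: bool) -> None:
    """The spacing step: push the next review further out on success (toy doubling rule)."""
    card.interval_days = card.interval_days * 2 if recalled else 1
    card.due = datetime.date.today() + datetime.timedelta(days=card.interval_days)

def review(deck: list[Card]) -> None:
    """One review session: active recall on every due card, then respace it."""
    today = datetime.date.today()
    for card in deck:
        if card.due <= today:
            schedule(card, active_recall(card))
```

Note that the two are independent knobs: you could do active recall with no spacing (drilling the same card over and over in one sitting), or spacing with no recall (passively rereading on an expanding schedule).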
is there a handy label for “crux(es) on which i’m maximally uncertain”? i have infinitely many cruxes for any decision, but the ones i care about are the ones about which i’m most uncertain. it'd be nice to have a reference-able label for this concept, but i haven't seen one anywhere.
there's also an annoying issue that feels analogous to "elasticity" — how much does a marginal change in my doxastic attitude toward some crux affect my conative attitude toward the decision?
if no such concepts exist for either, i'd propose: crux uncertainty, crux elasticity (respectively)
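to be concrete, here's one toy formalization (my own sketch, not a standard definition; the symbols below are all assumptions of the sketch). write $p_i$ for my credence in crux $i$, and $a(p_1, \ldots, p_n)$ for the strength of my conative attitude toward the decision. then:

$$H(p_i) = -p_i \log p_i - (1 - p_i) \log(1 - p_i) \qquad \text{(crux uncertainty)}$$

$$\varepsilon_i = \frac{\partial a}{\partial p_i} \cdot \frac{p_i}{a} \qquad \text{(crux elasticity)}$$

$H(p_i)$ is the binary entropy of my credence, maximized at $p_i = 0.5$, which matches "maximally uncertain"; $\varepsilon_i$ mirrors price elasticity: the proportional change in my attitude per proportional change in my credence. the cruxes worth investigating first are the ones where both are large.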
I wish more LW users had Patreons linked to from their profiles/posts. I would like people to have the option of financially supporting great writers and thinkers on LessWrong.
is this something you’ve considered building into LW natively?
I can give some context for that
please do!
gotcha. what would be the best way to send you feedback? i could do:
(while it's top-of-mind: the feedback that generated this question was that the chat interface pops up every single time i open a tab of LW, including every time i open a post in a new tab. this gets really annoying very quickly!)
great! how do i access it on mobile LW?
i’d love access! my guess is that i’d use it like — elicit:research papers::[this feature]:LW posts
solved: i think you mean it as this wikipedia article describes:
The word albatross is sometimes used metaphorically to mean a psychological burden (most often associated with guilt or shame) that feels like a curse.
what do you mean by "financial albatross"?
Domain: Various: Startups, Events, Project Management, etc.
Link: Manifund, Manifold, and Manifest {2023, 2024: meeting notes, docs, budget}
Person: Various: generally, the Manifold, Manifund, and Manifest teams
Why: This isn't a video, but it's probably relevantly close. All Manifold-sphere things are public — all meeting notes, budgets/finances, strategy docs, etc. I think that someone could learn a lot of tacit knowledge about how the Manifold-sphere teams work by skimming e.g. our meeting-notes docs, which are fairly comprehensive/extensive.
i only spent a few minutes browsing, but i thought this was surprisingly well-made!
…when I say “integrity” I mean something like “acting in accordance with your stated beliefs.” Where honesty is the commitment to not speak direct falsehoods, integrity is the commitment to speak truths that actually ring true to yourself, not ones that are just abstractly defensible to other people. It is also a commitment to act on the truths that you do believe, and to communicate to others what your true beliefs are.
your handle of “integrity” seems to point at something quite similar to the thing at which joe carlsmith’s handle of “sincerity” (https...
ah, lovely! maybe add that link as an edit to the top-level shortform comment?
fourthed. oli, do you intend to post this?
if not, could i post this text as a linkpost to this shortform?
how did this go? any update?
i quite liked this post. thanks!
i'll give two answers: the Official Event Guidelines, and the practical social environment.[1] i will say that i have a bit of a COI in that i'm an event organizer; it'd be good for someone who isn't organizing the event, but who e.g. attended last year, to either second my thoughts or give their own.
thanks oli, and thanks for editing mine! appreciate the modding <3
thanks for writing this — also, some broad social encouragement for the practice of doing quick/informal lit reviews + posting them publicly! well done :)
that is... wild. thanks for sharing!
I think institutional market makers are basically not pricing [slow takeoff, or the expectation of one] in
why do you think they're not pricing this in?
The market makers don't seem to be talking about it at all, and conversations I have with e.g. commodities traders suggest the topic doesn't come up at work. Nowadays they talk about AI, but in terms of its near-term effects on automation, not to figure out if it will respect their property rights or something.
Large public AI companies like NVDA, which I would expect to be priced mostly based on long-run projections of AI usage, have been consistently bid up after earnings, as if the stock market is constantly readjusting its expectations of AGI takeoff.
Really love this post. Thanks for writing it!
thanks for the feedback on the website. here's the explanation we gave on the manifest announcement post:
...In the week between LessOnline and Manifest, come hang out at Lighthaven with other attendees! Cowork and share meals during the day, attend casual workshops and talks in the evening, and enjoy conversations by the (again, literal) fire late into the night.
Summer Camp will be pretty lightweight: we’ll provide the space and the tools, and you & your fellow attendees will bring the discussions, workshops, tournaments, games, and whatever else you’re excited about.
I really enjoyed this — thank you for writing. I also think the updated version is a lot better than the previous version, and I appreciate the work you put in to update it. I'm really, really looking forward to the other posts in this sequence.
I'd also really enjoy a post that's on this exact topic, but one that I'd feel comfortable sending to my mom or something, cf "Broad adverse selection (for poets)."
+1 on Things You're Allowed To Do, it's really really great
here are some specific, random, generally small things that i do quite often:
If you and your audience have smartphones, we suggest making use of a copy of this spreadsheet and google form.
are "spreadsheet" and "google form" meant to be linked to something?
I think a lot of what I write for rationalist meetups would apply straightforwardly to EA meetups.
agreed. this sort of thing feels completely missing from the EA Groups Resources Centre, and i'd guess it would be a big/important contribution.
This may be a silly question, but how does cross-posting usually work?
iirc, when you're publishing a post on {LessWrong, the EA forum}, one of the many settings at the bottom is "Cross-Post to {the EA forum, LessWrong}," or something along those lines. there's some karma requirement for both the EA forum and for LW — ...
This is great! Have you cross-posted this to the EA Forum? If not, may I?
Thanks for the response!
Re: concerns about bad incentives, I agree that you can depict the losses associated with manipulating conditional prediction markets as paying a "cost" — even though you'll probably lose a boatload of money, it might still be worth it in order to manipulate the markets. In the words of Scott Alexander, though:
...If you’re wondering why people aren’t going to get an advantage in the economy by committing horrible crimes, the answer is probably the same combination of laws, ethics, and reputational concerns that works every
Thanks for the response!
This applies to roughly the entire post, but I see an awful lot of magical thinking in this space.
Could you point to some specific areas of magical thinking in the post? and/or in the space?[1] (I'm not claiming that there aren't any; I definitely think there are. I'm interested to know where I & the space are being overconfident/thinking magically, so that I/it can do less magical thinking.)
What is the actual mechanism by which you think prediction markets will solve these problems?
The mechanism that Manifold Love uses. In...
Damn good post. Pretty fucking funny, too.