I mean, I might be being dumb on all these points. But I personally disagree about:
I think my problem with the last section is only that it is not up to the very high standard that the rest of the post seems to me to hit, in which things are made unusually clear to even a young/inexperienced reader who is happy to believe relayed events but who wants to see the why of things for themself. (And I'm not providing these 'disagreements' because I think the article would be better with my opinions instead of the author's; I don't think I'm especially correct about these matters; I'm providing them as evidence that this part of the article is less visibly-true-to-all-readers, e.g. to me.)
I appreciate this post for spelling out an unsolved problem that IMO is a major reason it's hard to build good community gatherings among large groups of people, and for including enough detail/evidence that I expect many, after reading it, can see how the trouble works in their own inside views. I slightly wish the author had omitted the final section ("What would be the elements of a good system?"), as it seems less evidence-backed than the rest (and I personally agree with its claims less), and its inclusion makes it a bit harder for me to recommend the article to those needing a problem-description.
I love this post and suspect its content is true and underappreciated. (Though I admittedly haven't found any new ways to test it / etc since it came out.)
I like it, but I wish its main point would stick better in my mind somehow. (This was true when I read it last year, and again when I re-skimmed it now.) I, too, like the ladder metaphor; I agree that it helps get people thinking about on-ramps, and that this is valuable; I like the examples and techniques about remembering how you got there, imagining a new early-you who showed up today, etc. But: I still feel there's a "whole" you're gesturing at that's not quite sticking in my head, and I wonder if a slight rewrite could get it to?
I read this once when Sarah wrote it, just over a year ago, and I still think about it ~every two weeks or so. It convinced me that it's possible and desirable to be neutral along some purpose-relevant axes, and that I should keep my eye on where and how this is accomplished, and what it does. (I stayed convinced.) Hoping it makes it in.
I appreciate the explicit, fairly clear discussion of a likely gap in what I'm reading about parenting and kids. I was aware of a gap near here, but the post added a bit of detail to my model, and I like having it in common knowledge; I also hope it may encourage other such posts. (Plus, it's short and easy to read.)
Nominating this for 2024 review. It seems like an accurate (in many cases, at least) model of a phenomenon I care about (and encounter fairly frequently, in myself and in people I end up trying to help with things) that I didn't previously have an accurate model of.
A further wrinkle / another example: during the design process, a question like "what should I think about (in particular, what should I gather information about, or update on)?" wants these predictions.
Yes; this (or something similar) is why I suspect that "'believing in' atoms" may involve the same cognitive structure as "'believing in' this bakery I am helping to create" or "'believing in' honesty" (and a different cognitive structure, at least for ideal minds, from predictions about outside events). The question of whether to "believe in" atoms can be a question of whether to invest in building out and maintaining/tuning an ontology that includes atoms.
“Prediction and planning remain incredibly distinct as structures of cognitive work,”
I disagree. (Partially.) For a unitary agent who is working with a small number of possible hypotheses (e.g., 3), and a small number of possible actions, I agree with your quoted sentence.
But let’s say you’re dealing with a space of possible actions that’s much too large to let you consider each exhaustively, e.g. what blog post to write (considered concretely, as a long string of characters).
It’d be nice to have some way to consider recombinable pieces, e.g. “my blog post could include idea X”, “my blog post could open with joke J”, “my blog post could be aimed at a reader similar to Alice”.
Now consider the situation as seen by the line of thinking that is determining: “should my blog post be aimed mostly at readers similar to Alice, or at readers similar to Bob?”. For this line of thinking to make a good estimate of ExpectedUtility(post is aimed at Alice), it needs predictions about whether the post will contain idea X. However, the line of thinking that is determining whether to include idea X (or the unified agent, at those moments when it is actively considering this) will of course need good plans (not predictions) about whether to include X, and how exactly to include X.
I don’t fully know what a good structure is for navigating this sort of recombinable plan space, but it might involve a lot of toggling between “this is a planning question, from the inside: shall I include X?” and “this is a prediction question, from the outside: is it likely that I’m going to end up including X, such that I should plan other things around that assumption?”.
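To make the toggling concrete, here is a minimal sketch (my own illustration, not something from the post or this thread) of one such structure: coordinate-ascent over plan components, where each component takes a turn as the planning variable while the current values of the other components are held fixed, like outside-view predictions to plan around. All component names and the utility function are hypothetical stand-ins for ExpectedUtility.

```python
# Sketch only: alternate between "planning from the inside" (choosing one
# component) and "predicting from the outside" (holding the others fixed).
# All names and numbers below are made-up illustrations.

COMPONENTS = {
    "target_reader": ["Alice", "Bob"],
    "include_idea_X": [True, False],
    "opening_joke": ["J1", "J2", None],
}

def utility(plan):
    """Toy stand-in for ExpectedUtility(plan); the numbers are arbitrary."""
    score = 0.0
    if plan["target_reader"] == "Alice" and plan["include_idea_X"]:
        score += 2.0   # Alice-like readers care about idea X
    if plan["target_reader"] == "Bob" and plan["opening_joke"] == "J2":
        score += 1.5   # Bob-like readers like joke J2
    if plan["include_idea_X"] and plan["opening_joke"] is None:
        score += 0.5   # idea X lands better without a joke up front
    return score

def refine_plan(plan, sweeps=10):
    plan = dict(plan)
    for _ in range(sweeps):
        changed = False
        for name, options in COMPONENTS.items():
            # Planning question, from the inside: shall I include/choose this?
            # The other components stay at their current values: prediction-like
            # assumptions that this choice gets planned around.
            best = max(options, key=lambda value: utility({**plan, name: value}))
            if best != plan[name]:
                plan[name] = best
                changed = True
        if not changed:  # no component wants to move; a local optimum
            return plan
    return plan

print(refine_plan({"target_reader": "Bob", "include_idea_X": False, "opening_joke": "J1"}))
```

This obviously flattens the interesting part, since the "predictions" here are just current values rather than genuinely uncertain forecasts, but it shows the alternating inside/outside structure.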
My own cognition seems to me to toggle many combinatorial pieces back and forth between planning-from-the-inside and predicting-from-the-outside, like this. I agree with your point that human brains and bodies have all kinds of silly entanglements. But this part seems to me like a plausible way for other intelligences to evolve/grow too, not a purely one-off human idiosyncrasy like having childbirth through the hips.
Thanks for building this; I'm looking forward to trying it. A main thing I keep wanting from LLM writing assistance (I'm not sure how hard this is; I've tried prompting LLMs myself, and failed to get the quality I wanted, but I didn't try with much patience or skill) is help applying Strunk and White's "The Elements of Style" to my writing. That is, I want help flagging phrases/words/sentence constructions that fail to be short and to the point.