I agree, although I also think we ran with this where it was convenient instead of hashing it out properly (like, we asked "what can we say that'll sound good and be true" when writing fundraiser posts, rather than "what are we up for committing to in a way that will build a high-integrity relationship with whichever community we actually want to serve, and will let any other communities who we don't want to serve realize that and stop putting their hopes in us.")
But I agree re: Julia.
I think many of us, during many intention-minutes, had fairly sincere goals of raising the sanity of those who came to events, and took many actions backchained from these goals in a fairly sensible fashion. I also think I and some of us worked to: (a) bring to the event people who were unusually likely to help the world, such that raising their capability would help the world; (b) influence people who came to be more likely to do things we thought would help the world; and (c) draw people into particular patterns of meaning-making that made them easier to influence and control in these ways, although I wouldn't have put it that way at the time, and I now think this was in tension with sanity-raising in ways I didn't realize at the time.
I would still tend to call the sentence "we were trying to raise the sanity waterline of smart rationality hobbyists who were willing and able to pay for workshops and do practice and so on" basically true.
I also think we actually helped a bunch of people get a bunch of useful thinking skills, in ways that were hard and required actual work/iteration/attention/curiosity/etc (which we put in, over many years, successfully).
IMO, our goal was to raise the sanity of the particular smallish groups who attended workshops, not so much to have effects on millions or billions (we would've been in favor of that, but most of us mostly didn't think we had enough of a shot to try backchaining from it). Usually when people say "raise the sanity waterline" I interpret them as discussing stuff that happens to millions.
I agree the "tens of thousands" in the quoted passage is more than was attending workshops, and so pulls somewhat against my claim.
I do think our public statements were deceptive, in a fairly common but nevertheless bad way: we had many conflicting visions, tended to avoid contradicting people who thought we were gonna do all the good things that at least some of us had at least some desire/hope to do, and tended in our public statements/fundraisers to avoid alienating any of those hopes. The higher-integrity / more honorable approach would have been to come to a coherent view of which priorities we held and how strongly, and to help people not put unrealistic hopes in us or hold inaccurate views of our priorities.
I agree with the sentence you quote from Vervaeke ("[myths] are symbolic stories of perennial patterns that are always with us") but mostly-disagree with "myths ... encapsulate some eternal and valuable truths" (your paraphrase).
As an example, let's take the story of Cain and Abel. IMO, it is a symbolic story containing many perennial patterns:
I suspect this story and its patterns (especially back when there were few stories passed down and held in common) helped many to make conscious sense of what they were seeing, and to share their sense with those around them ("it's like Cain and Abel"). But this help (if I'm right about it) would've been similar to the way words in English (or other natural languages) help people make conscious sense of what they're seeing, and communicate that sense -- myths helped people have short codes for common patterns, helped make those patterns available for including in hypotheses and discussions. But myths didn't much help with making accurate predictions in one shot, the way "eternal and valuable truths" might suggest.
(You can say that useful words are accurate predictions, a la "cluster structures in thingspace". And this is technically true, which is why I am only mostly disagreeing with "myths encapsulate some eternal and valuable truths". But a good word helps differently than a good natural law or something does).
To take a contemporary myth local to our subculture: I think HPMOR is a symbolic story that helps make many useful patterns available to conscious thought/discussion. But it's richer as a place to see motifs in action (e.g. the way McGonagall initially acts out the picture of herself that lives in her head; the way she learns to break her own bounds) than as a source of directly stateable truths.
A friend recently complained to me about this post: he said most people do much nonsense under the heading “belief”, and that this post doesn’t acknowledge this adequately. He might be right!
Given his complaint, perhaps I ought to say clearly:
1) I agree — there is indeed a lot of nonsense out there masquerading as sensible/useful cognitive patterns. Some of it aims to wirehead or mislead the self; some aims to deceive others for local benefit; lots of it is simple error.
2) I agree also that a fair chunk of nonsense adheres to the term “belief” (and the term “believing in”). This is because there’s a real, useful pattern of possible cognition near our concepts of “belief”, and because nonsense (/lies/self-deception/etc) likes to disguise itself as something real.
3) But — to sort sense from nonsense, we need to understand what the real (useful, might be present in the cogsci books of alien intelligences) pattern near our “beliefs” is. If we don’t:
4) I’m pretty sure that LessWrong’s traditional concept of “beliefs” as “accurate Bayesian predictions about future events” is only half-right, and that we want the other half too, both for (3a) type reasons, and for (3b) type reasons.
The old LessWrong Sequences-reading crowd *sort of* knew about this — folks talked about how beliefs about matters directly affected by the beliefs could be self-fulfilling or self-undermining prophecies, and how the Bayes-math wasn’t well-defined in such cases. But when I read those comments, I thought they were discussing an uninteresting edge case. The idioms by which we organize complex actions (within a person, and between people) are part of the bread and butter of how intelligence works; they are not an uninteresting edge case.
Likewise, people sometimes talked (on LW in the past) about how they were intentionally holding false beliefs about their start-ups’ success odds; they were advised not to be clever, and some commenters dissented from this advice. But IMO the “believing in” concept lets us distinguish:
All of which is sort of to say that I think this model of “believing in” has substance we can use for the normal human business of planning actions together, and isn’t merely propaganda to mislead people into thinking the bugs in human thinking are less buggy than they are. Also I think it’s as true to the normal English usage of “believing in” as the historical LW usage of “belief” is to the normal English usage of “belief”.
Elaborating Plex's idea: I imagine you might be able to buy into participation as an SFF speculation granter with $400k. Upsides:
(a) Can see a bunch of people who're applying to do things they claim will help with AI safety;
(b) Can talk to ones you're interested in, as a potential funder;
(c) Can see discussion among the (small dozens?) of people who can fund SFF speculation grants, see what people are saying they're funding and why, ask questions, etc.
So it might be a good way to get the lay of the land, find lots of people and groups, hear people's responses to some of your takes and see if their responses make sense on your inside view, etc.
I'm tempted to argue with / comment on some bits of the argument about "Instrumental goals are almost-equally as tractable as terminal goals." But when I click on the "comment" button, it removes the article from view and prompts me with "Discuss the wikitag on this page. Here is the place to ask questions and propose changes."
Is there a good way to comment on the article, rather than the tag?
I got to the suggestion by imagining: suppose you were about to quit the project and do nothing. And now suppose that instead of that, you were about to take a small number of relatively inexpensive-to-you actions, and then quit the project and do nothing. What're the "relatively inexpensive-to-you actions" that would most help?
Publishing the whole list, without precise addresses or allegations, seems plausible to me.
I guess my hope is: maybe someone else (a news story, a set of friends, something) would help some of those on the list to take it seriously and take protective action, maybe after a while, after others on the list were killed or something. And maybe it'd be more parsable to people if it had been hanging out on the internet for a long time, as a pre-declared list of what to worry about, with visibly no one being there to try to collect payouts or something.
Sorry, to amend my statement about "wasn't aimed at raising the sanity waterline of eg millions of people, only at teaching smaller sets":
Way back when Eliezer wrote that post, we really were thinking of trying to raise the rationality of millions, or at least of hundreds of thousands, via clubs and schools and things. It was in the initial mix of visions. Eliezer spent time trying to write a sunk costs unit that could be read aloud to a meetup by someone who didn't understand much rationality themselves, and could cause the meetup to learn skills. We imagined maybe finding the kinds of donors who donated to art museums and getting them to donate to us instead, so that we could eg nudge legislation they cared about by causing the citizenry to have better thinking skills.
However, by the time CFAR ran our first minicamps in 2012, or conducted our first fundraiser, our plans had mostly moved to "teach those who are unusually easy to teach via being willing and able to pay for workshops, practice, care, etc". I preferred this partly because I liked getting the money from the customers we were trying to teach, so that they'd be who we were responsible to (fewer principal-agent problems, compared to if someone with a political agenda wanted us to make other people think better; though I admit this is ironic given I now think there were some problems around us helping MIRI and being funded by AI risk donors while teaching some rationality hobbyists who weren't necessarily looking for that). I also preferred it because I thought we knew how to run minicamps that would be good, and I didn't have many good ideas for raising the sanity waterline more broadly.
We did make nonzero attempts at raising the sanity waterline more broadly: Julia's book, as mentioned elsewhere; but also, we collaborated a bit on a rationality class at UC Berkeley, tried to prioritize workshop applicants who seemed likely to teach others well (including giving them more financial aid), etc.