LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
Mmm, I'm thinking of before vaccines came out. I have more thoughts about that but maybe don't want to make this thread all about that.
I've heard similar comments from several people about the afterparty, and regret not spending a lot more time trying to make it a good part of the experience. In future years I'd maybe prefer the Saturday-night afterparty to be primarily "for Solstice attendees," and try to make a different night of the weekend the "everyone from all over the extended community comes over" night.
(You didn't mention the decompression zone, but I maybe also want to take the opportunity to apologize: I had announced the decompression zone around firepits, but then it turned out that all the firepits were full of people by the time I got there, and the whole area was so loud it felt hard to make announcements directing people into the room we found. What I realize now is that I should have put up more/bigger signs about that.)
Ah whoops. Fixed.
(Normally this wouldn't have been that bad a problem, since the form itself is private; I happened to make it public earlier today so it'd be easier to get feedback on the questions.)
(For people who read it already: I just added an Appendix of Director Commentary. I might add an Appendix B about why I made some of the choices in the event that did get included.)
fwiw I think there is a good thing about steelmanning and a different good thing about ITT passing. (Which seems plausibly consistent with Rob's post title, ITT-passing and civility are good; "charity" is bad; steelmanning is niche, and also with your post title here. I haven't reread either yet, but am responding since I was tagged.)
ITT passing is good for making sure you are having a conversation that changes people's minds, and for not getting confused/misled about what other people believe.
Steelmanning is good for identifying the strongest forms of arguments in a vacuum, which is useful for exploring the argument space, but it's also prone to spending time on something that nobody believes or cares about, which is sometimes worth it and sometimes not. (It is also often part of a process that misleads people about what a person or group believes.)
Which of those is more important most of the time? I dunno; AFAICT the answer is "each consideration is important enough that you should pay attention to it periodically." Attempts to pin this down further feel more like some kind of culture war that isn't primarily about the object-level fact of how often each is useful.
(apologies if I have missed a major point here, replying quickly at a busy time)
Minor reference that I agree wasn't worth spelling out in the post but seemed nice to include: A Little Echo is a song I wrote in 2012 as "a cryonics funeral song", about the various ways that echoes of people can survive.
It hasn't turned out to be a mainstay Solstice song. I was actually a bit sad that this Solstice turned out, last-minute-accidentally, to be the most cryonics-heavy Solstice I've led (as a recurring B Plot), but it didn't really make sense to do the song because other songs were filling its niche as a singalong.
I believe that we will win.
An echo of an old ad for the 2014 US men’s World Cup team. It did not win.
See: @AnnaSalamon's Believing In.
I've recently been meditating on Eliezer's:
Beliefs are for being true. Use them for nothing else.
If you need a good thing to happen, use a plan for that.
I think Anna Salamon is right that there are two separate things people call beliefs, one of which is about probabilities, and one is about what things you want to invest in.
In one early CFAR test session, we asked volunteers to each write down something they believed. My plan was that we would then think together about what we would see in a world where each belief was true, compared to a world where it was false.
I was a bit flummoxed when, instead of the beliefs-aka-predictions I had been expecting, they wrote down such “beliefs” as “the environment,” “kindness,” or “respecting people.” At the time, I thought this meant that the state of ambient rationality was so low that people didn’t know “beliefs” were supposed to be predictions, as opposed to group affiliations.
I’ve since changed my mind. My new view is that there is not one but two useful kinds of vaguely belief-like thingies – one to do with predictions and Bayes-math, and a different one I’ll call “believing in.” I believe both are lawlike, and neither is a flawed attempt to imitate/parasitize the other. I further believe both can be practiced at once – that they are distinct but compatible.
I’ll be aiming, in this post, to give a clear concept of “believing in,” and to get readers’ models of “how to ‘believe in’ well” disentangled from their models of “how to predict well.”
I think how to fully integrate Believing In is a dangling thread of rationality discourse. Fortunately, it's The Review Season, so it's a good time to go back to the Believing In post and review it.
One thing to note is that "short reviews" in the nomination phase are meant to be basically a different type of object than "effort reviews." Originally we actually had a whole different data type for them ("nominations"), but it didn't seem worth the complexity cost.
And then, separately: one of the points of the review is just to track "did anyone find this actually helpful?" and a short review that's like "yep, I did in fact use this concept and it helped me, here's a few details about it" is valuable signal.
(A drive-by "this seems false, because [citation]" is also good.)
It is nice to do more effortful reviews, but I definitely care about those types of short reviews.
Thanks!
The reason I asked you to write some-version-of-this is that I have in fact noticed myself veering towards a certain kind of melodrama about the whole x-risk thing, and I've found various flavors of your "have you considered just... not doing that?" to be helpful to me. "Oh, I can just choose to not be melodramatic about things."
(On net I am still fairly dramatic/narrative-shaped as rationalists go, but I've deliberately tuned the knob in the other direction periodically, and I think various little bits of your writing have helped me.)
I liked the framing you did at Solstice: a general prompt to treat it as a skill issue, without being about the exact recipe.
(fixed)