LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
I believe that we will win.
An echo of an old ad for the 2014 US men’s World Cup team. It did not win.
See: @AnnaSalamon's Believing In.
I've recently been meditating on Eliezer's:
Beliefs are for being true. Use them for nothing else.
If you need a good thing to happen, use a plan for that.
I think Anna Salamon is right that there are two separate things people call beliefs, one of which is about probabilities, and the other about what things you want to invest in.
In one early CFAR test session, we asked volunteers to each write down something they believed. My plan was that we would then think together about what we would see in a world where each belief was true, compared to a world where it was false.
I was a bit flummoxed when, instead of the beliefs-aka-predictions I had been expecting, they wrote down such “beliefs” as “the environment,” “kindness,” or “respecting people.” At the time, I thought this meant that the state of ambient rationality was so low that people didn’t know “beliefs” were supposed to be predictions, as opposed to group affiliations.
I’ve since changed my mind. My new view is that there is not one but two useful kinds of vaguely belief-like thingies – one to do with predictions and Bayes-math, and a different one I’ll call “believing in.” I believe both are lawlike, and neither is a flawed attempt to imitate/parasitize the other. I further believe both can be practiced at once – that they are distinct but compatible.
I’ll be aiming, in this post, to give a clear concept of “believing in,” and to get readers’ models of “how to ‘believe in’ well” disentangled from their models of “how to predict well.”
I think how to fully integrate Believing In is a dangling thread of rationality discourse. Fortunately, it's The Review Season, and it's a good time to go back to the Believing In post and review it.
One thing to note is that "short reviews" in the nomination phase are meant to be basically a different type of object than "effort reviews." Originally we actually had a whole different data-type for them ("nominations"), but it didn't seem worth the complexity cost.
And then, separately: one of the points of the review is just to track "did anyone find this actually helpful?" and a short review that's like "yep, I did in fact use this concept and it helped me, here's a few details about it" is valuable signal.
(Drive-by "this seems false, because [citation]" is also good.)
It is nice to do more effortful reviews, but I definitely care about those types of short reviews.
Thanks!
The reason I asked you to write some-version-of-this is, I have in fact noticed myself veering towards a certain kind of melodrama about the whole x-risk thing, and I've found various flavors of your "have you considered just... not doing that?" to be helpful to me. "Oh, I can just choose to not be melodramatic about things."
(on net I am still fairly dramatic/narrative-shaped as rationalists go, but, I've deliberately tuned the knob in the other direction periodically and think various little bits of writing of yours have helped me)
I liked the framing you did at Solstice: a general prompt to treat it as a skill issue, without it being about the exact recipe.
Yeah, I do not think it is good to format that in blockquotes without spelling out that it is a paraphrase (in my original I say "something like 'Anthropic wants...'").
One thing I want to remind people: if something looks like it's going to end up winning the review, and you disagree with it, and you write up a critical review that gets upvoted (10+ karma), it'll show up whenever we spotlight the review. This may not be fully satisfying if you were really hoping to change everyone's mind, but it does mean our infrastructure will at least make sure everyone knows about your disagreement.
(I recommend optimizing your first sentence to convey the most important argument of your disagreement, so the one-line version of the comment gets the core idea across)
For example, AI Control was one of the leading candidates from the last review, but, John's countertake is highlighted for people who are skimming through the /bestoflesswrong page.
Oh sad. I thought I fixed the timezone issues but I guess there were more. Looking into it.
Yeah I'll be working out kinks like this tomorrow
Part of the reason I'm rolling the dice on running Solstice the way I am, is, it doesn't really seem like we have the luxury of not engaging with the question. (But, there's a reason I wrote this post including option #1 – if I didn't think I had a decent chance of pulling it off I'd have done something different)
FYI I am also planning an aftercare / decompression / chat around a firepit thing for people who need that afterwards.
This didn't really do what I wanted. For starters, literally quoting Richard is self-defeating – either it's reasonable to make this sort of criticism, or it's not. If you think there is something different between your post and Richard's comment, I don't know what it is and why you're doing the reverse-quote except to be sorta cute.
I don't even know why you think Richard's comment is "primarily doing the social move of lower trust in what Mikhail says". Richard's comment gives examples of why he thinks that about your post; you don't explain what you think is uncharitable about his.
I think it is sometimes necessary to argue that people are being uncharitable, and that they are doing a status-lowering move more than earnest truthseeking.
I haven't actually looked at your writing and don't have an opinion I'd stand by, but from my passing glances at it I did think Richard's comment seemed to be pointing at an important thing.
Minor reference that I agree wasn't worth spelling out in the post but seemed nice to include: A Little Echo is a song I wrote in 2012 as "a cryonics funeral song", about the various ways that echoes of people can survive.
It hasn't turned out to be a mainstay Solstice song. I was actually a bit sad that this solstice turned out, last-minute-accidentally, to be the most cryonics-heavy Solstice I've led (as a recurring B Plot), but it didn't really make sense to do the song because other songs were filling its niche as a singalong.