The predicted winners for future years of the review are now visible on the Best of LessWrong page! Here are the top ten guesses for the currently ongoing 2024 review:
(I've already voted on several of these! I doctored the screenshot to hide my votes)
I think LessWrong's annual review is better than karma at finding the best and most enduring posts. Part of the dream for the review prediction markets is bringing some of that high-quality signal from the future into the present. That signal is currently surfaced as gold karma on the post item when the prediction market's probability is high enough.
Currently the markets are pretty thinly traded, but I think they already have decent signal. They could do a lot better, I think, with a little more smart trading. It would be a nice bonus if this UI attracted a bit more betting.
Hopefully coming soon: a tag on the markets indicating which year's review they'll be in, to make it a bit easier for consistency traders to make their bag.
Human intelligence amplification is very important. Though I've become a bit less excited about it lately, I still guess it's the best way for humanity to make it to a glorious destiny. Having a bunch of different methods collected in one place helped organise my thoughts and let me think more seriously about which approaches might work.
I appreciate that Tsvi included things as "hard" as brain emulation and as "soft" as rationality, tools for thought, and social epistemology.
I liked this post. I thought it was interesting to read about how Tobes's relationship to AI changed, and the anecdotes were helpfully concrete. I could imagine him in those moments and get a sense of how he was feeling.
I found this post helpful for relating to some of my friends and family as AI has been in the news more, and they connect it to my work and concerns.
A more concrete thing I took away: the author's description of looking out of his window and meditating on the end reaching him through that window. I find this a helpful practice, and sometimes I like to look out of a window and think about various endgames and how they might land in my apartment or workplace or grocery store.
I'm a big fan of this series. I think that puzzles and exercises are undersupplied on LessWrong, especially ones that are fun, a bit collaborative and a bit competitive. I've recently been trying my hand at some of the backlog, and it's been pretty cool. I can feel that I'm getting at least a bit better at compressing the dimensionality of the data as I investigate it.
In general, I'd guess that data science is a pretty important epistemological skill. I think LessWrongers aren't as strong in it as they ideally would be. This is in part because of a justified suspicion that people just pour in data and confusion, and get out more official-looking confusion. I'd say that a central point of this series is: how do you avoid confusing yourself with data by actually thinking about things?
I have the impression that I reach for this rule fairly frequently. I only ontologise it as a rule to look out for because of this post. (I normally can't remember the exact number, so I have to go via the compound-interest derivation.)
(My plus is conditional on me not being the adjudicator)
SFF is matching donations on some orgs through the end of 2025 (see the list), which signals which orgs they want more people to donate to.
As I work for an org that receives matching, I think it's important to note that this has nothing to do with which orgs SFF likes best.
When you apply for an SFF grant, you can opt into receiving some of your funds as matching pledges. That gives you more weight in the S-Process algorithm: the S-Process treats it as being able to give you more than $1 of value per dollar it spends.
So it's just about what orgs felt would be best for their fundraising, not endorsement from the SFF.
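As a toy illustration of the weighting (hypothetical numbers and a made-up linear multiplier; the actual S-Process evaluation is more involved than this):

```python
# Toy sketch of why opting into matching pledges gives an org
# more weight. All parameters here are illustrative, not the
# real S-Process formula.

def effective_value(grant: float, matched_fraction: float,
                    match_rate: float = 1.0) -> float:
    """Dollars the process would credit per grant if
    `matched_fraction` of the grant is expected to attract
    `match_rate`-for-1 outside matching."""
    return grant * (1 + matched_fraction * match_rate)

# A $100k grant where half is matched 1:1 counts like $150k,
# i.e. more than $1 of credited value per dollar spent.
```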
(See more here)
Fire alarms basically don't help with fire deaths at all
Is that true? I don't think there's amazing evidence, but my sense is that it's sufficient to expect that fire alarms help. I think the study designs look like:
*Lighthaven->Lightcone (at least in the case of SFF matching)
Thank you!
Would it help if the prompt read more like a menu?