Harry Potter and the Methods of Rationality

What if Harry was a scientist? What would you do if the universe had magic in it? A story that conveys many rationality concepts, helping to make them more visceral, intuitive, and emotionally compelling.

First Post: Chapter 1: A Day of Very Low Probability

Nina Panickssery · 1d
Mourning a life without AI
Your Substack subtitle is "I won't get to raise a family because of AGI". It should instead be "I don't want to raise a family because of AGI". I think it's >90% likely that if you want and try to, you can raise a family in a relatively normal way (i.e. your wife gives birth to your biological children and you both look after them until they are adults) in your lifetime.

Not wanting to do this because those children will live in a world dissimilar to today's is another matter, but note that your parents also raised you to live in a world very dissimilar from the one they grew up in, and were motivated to do it anyway! So far, over many generations, people have been motivated to build families not by confidence that their children will live in the same way they did, but rather by other drives (whether it's a drive towards reproduction, love, curiosity, norm-following, etc.).

I also think you're very overconfident about superintelligence appearing in our lifetimes and about X-risk being high, but I don't see why either of those things stops you from having a family.
foodforthought · 3h
One Shot Singalonging is an attitude, not a skill or a song-difficulty-level*
Completely agree with your observations, and I say this as someone who (a) grew up with family singing and campfire singing; (b) had pretty extensive choral training; (c) nevertheless later participated in and led community singing groups built on the idea that it's totally, absolutely fine to sing "badly"; (d) for many years hosted a successful wassailing party at which most people were not Christian and not familiar with caroling, yet happily and credibly belted out the Christmas carols they had just learned, on the porches of surprised neighbors; (e) is now a folkie very active in the pub singing tradition; and (f) has an armchair interest in ethnomusicology, oral tradition, and the neurobiosociology of community singing. SO... a few thoughts:

You are absolutely right that the most important factor is giving people permission to sing: everyone has the right to sing, your voice doesn't have to be good, you don't have to be in tune, and in fact it will be fine. If there are professional musicians present, it can be important to explain to them what is going on, why they should be happy to hear bad singers sing, and how they can help by singing the melody loudly and not wincing.

You may find helpful resources at https://singout.org/communitysings/; also look up Pete Seeger's Tone Deaf Choir (historical) and Matt Watroba's Community Sings (current, I think). They were brilliant at getting big crowds of non-singers to sing (and surprise themselves with how good they sound). I'd be happy to exchange notes on repertoire. I am not sure what themes exactly suit the Rationalist Solstice scene, but I know what worked well for my motley wassailing crew and my community singing group.

There are characteristics of songs from oral tradition that support and encourage everyone to sing, which are common in older traditional songs, religious/church songs, and children's/camp songs. Yes, they arose from contexts where the participants could not read, but this is irrelevant. You do not want people reading words off of a sheet of paper or their phone. You want them to be present to the room and just sing. You want it to be easy to sing in the pitch dark with a candle in one hand and a glass of grog in the other. So the same rules apply.

Predictability. The tune is repetitive (no modulations, bridges, etc.), and the lyrics have a formulaic pattern. A good example: "Where Have All the Flowers Gone?". If someone has never heard the song, they have to stop singing and listen to hear the one word that is new in each verse; but then they can predict how the entire next verse will go, and can sing along to the whole verse as well as the chorus. And that song is not silly, and not a bad candidate.

Repetition. Songs with a chorus, as noted: you can sing the chorus once at the outset, and then people can sing it every time it repeats. Pro tip: sing every verse, and repeat the chorus after every verse. Stage folk performers will skip verses and only sometimes sing the chorus to avoid boring the audience; but it's not boring when there is no audience and everyone is singing. But choruses aren't the only form of this. Some songs have call and response, where you repeat each line (or there's a formula for the response to the called line). Some have a refrain in which the last line or two of each verse is sung again. Look for songs with these features.

Familiarity. An obvious one. If a lot of people recognize a song it helps, even if they just hum the tune. A good example of this might be "Silent Night". In traditional music, including church hymnals and pub ballads, tunes are heavily re-used: the same tunes serve many different sets of lyrics, so you can leverage the fact that everyone already knows the tune. Parodies (writing new songs to well-known tunes) work well for this reason.

Physicality. Clapping, stomping, snapping, whatever lets people participate even if they don't know the words or tune -- and it surprisingly lowers inhibitions for singing.

For totally novel songs, I think the best you can do is have an optional pre-run for people who want to learn them, use the same ones year after year, and make it OK to just listen and enjoy the ones you don't know. People pick songs up with remarkable ease; they will accidentally find they are singing them next time. In this context, story songs are the easiest for people to remember; our brains are wired for stories. For example, "Good King Wenceslas" was the favorite, most belted-out song at my Wassail, even though it has no choruses or refrains.

Since you are taking a long-term view on this: wanna co-org a workshop on "singing for people who can't sing" at LessOnline next year?
Vladimir_Nesov · 1d
Comparing Payor & Löb
I would term □x→x "hope for x" rather than "reliability", because it's about willingness to enact x in response to belief in x, and if x is no good, you shouldn't do that. Indeed, for bad x, having the property □x→x is harmful fatalism, following along with destiny rather than choosing it. In those cases, you might want □x→¬x or something, though that only prevents x from being believed, so that you won't need to face □x in actuality; it doesn't prevent the actual x. So □x→x reflects a value judgement about x expressed in the agent's policy, something downstream of endorsement of x, a law of how the content of the world behaves according to an embedded agent's will.

Payor's Lemma then talks about belief in hope, □(□x→x); that is, hope itself is exogenous and needs to be judged (endorsed or not). Which is reasonable for games, since what the coalition might hope for is not anyone's individual choice: the details of this hope couldn't have been hardcoded in any agent a priori and need to be negotiated during the decision that forms the coalition. A functional coalition should be willing to act on its own hope (which is again something we need to check for a new coalition, though it might've already been the case for a singular agent); that is, we need to check that □(□x→x) is sufficient to motivate the coalition to actually x. This is again a value judgement about whether this coalition's tentative aspirations, being a vehicle for hope that x, are actually endorsed by it. Thus I'd term □(□x→x) "coordination" rather than "trust": the fact that this particular coalition would tentatively intend to coordinate on a hope for x. Hope □x→x is a value judgement about x, and in this case it's the coalition's hope rather than any one agent's hope, and the coalition is a temporary, nascent agency that doesn't necessarily know what it wants yet. The coalition asks: "If we find ourselves hoping for x together, will we act on it?" So we start with coordination about hope, seeing if this particular hope wants to settle as the coalition's actual values, and judging if it should by enacting x if at least coordination on this particular hope is reached, which should happen only if x is a good thing.

(One intuition pump, with some limitations outside the provability formalism, is treating □x as "probably x", perhaps according to what some prediction market tells you. If "probably x" is enough to prompt you to enact x, that's some kind of endorsement, and it's a push towards increasing the equilibrium-on-reflection value of the probability of x, pushing "probably x" closer to reality. But if x is terrible, then enacting it in response to its high probability is following along with self-fulfilling doom, rather than doing what you can to push the equilibrium away from it.)

Löb's Theorem then says that if we merely endorse a belief by enacting the believed outcome, this is sufficient for the outcome to actually happen, a priori and without that belief yet being in evidence. And Payor's Lemma says that if we merely endorse a coalition's coordinated hope by enacting the hoped-for outcome, this is sufficient for the outcome to actually happen, a priori and without the coordination around that hope yet being in evidence. The use of Löb's Theorem or Payor's Lemma is that the condition (belief in x, or coordination around hope for x) should help in making the endorsement; that is, it should be easier to decide to x if you already believe that x, or if you already believe that your coalition is hoping for x.

For coordination, this is important because every agent can only unilaterally enact its own part in the joint policy, so it does need some kind of premise about the coalition's nature (in this case, about the coalition's tentative hope for what it aims to achieve) in order to endorse playing its part in the coalition's joint policy. It's easier to decide to sign an assurance contract than to unconditionally donate to a project, and the role of Payor's Lemma is to say that if everyone does sign the assurance contract, then the project will in fact get funded sufficiently.
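For reference, here are the two results being compared, in the same provability-logic notation the comment uses; this is just a transcription of the standard statements, not anything beyond what the comment already relies on:

```latex
% Standard statements, with \Box x read as "x is provable"
% (or, per the intuition pump above, "probably x").
% L\"ob's Theorem: endorsing a *belief* in x suffices for x.
\[
\text{(L\"ob)}\qquad \vdash \Box x \to x \;\Longrightarrow\; \vdash x
\]
% Payor's Lemma: endorsing *coordination around hope* for x suffices.
\[
\text{(Payor)}\qquad \vdash \Box(\Box x \to x) \to x \;\Longrightarrow\; \vdash x
\]
```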
494 · Welcome to LessWrong! · Ruby, Raemon, RobertM, habryka · 6y · 76 comments
[Today]"If Anyone Builds it, Everyone Dies" : three AI futurists with disparate views respond - socializing, talks, and panel discussion at Microsoft NERD
[Tomorrow]11/10/25 Monday Social 7pm-9pm @ Segundo Coffee Lab
dynomight · 1d
Just had this totally non-dystopian conversation:

"...So for other users, I spent a few hours helping [LLM] understand why it was wrong about tariffs."
"Noooo! That does not work."
"Relax, it thanked me and stated it was changing its answer."
"It's lying!"
"No, it just confirmed that it's not lying."
Mo Putera · 20h
This MathOverflow thread initiated by Bill Thurston, on the varied ways mathematicians think about math, has always made me wonder how theoretical researchers in other fields think about their domains. I think of this as complementary to Mumford's tribes of mathematicians, and (much more tangentially) to Eliezer's remark on how sparse thinkers are at the intellectual frontiers.

Some of my favorites: Terry Tao on an "adversarial perspective" (which I'm guessing is the closest match to how alignment researchers think), on the "economic" mindset, on physical analogies, and on visualisation techniques; another take on visual thinking, by François G. Dorais; Benson Farb on Thurston's visual-geometric way of thinking about higher dimensions (Thurston was widely considered the best geometric thinker in the history of math); at a more elementary level, Phil Issett on geometric thinking; Qiaochu Yuan's way of thinking about determinants, which isn't one I've seen written up before; and Vivek Shende on subconscious thought processing, "masticating" tons of examples. Shende's mastication remark reminds me of Michael Nielsen's "exhaust, bad [Anki] cards that seem to be necessary to get to good cards"; Nielsen himself has interesting remarks on how he thinks about doing math in that essay, which is mainly about using Anki to deepen mathematical understanding.

Sometimes the ways of thinking seem too personal to be useful. Richard Feynman, in The Pleasure of Finding Things Out, explained how counting is a verbal process for him; Sam Derbyshire concurs, as does Mariano Suárez-Álvarez. I think this is too pessimistic, and not necessarily reflective of collaborative problem-solving; Tao suggests as much. But Terry Tao is an extremely social, collaborative mathematician; his option seems somewhat foreclosed to truly ground-up independent thinkers.
Daniel Paleka · 2d
Slow takeoff for AI R&D, fast takeoff for everything else.

Why is AI progress so much more apparent in coding than everywhere else? Among people who have "AGI timelines", most do not set their timelines based on data, but rather update them based on their own day-to-day experiences and social signals. As of 2025, my guess is that individual perception of AI progress correlates with how closely someone's daily activities resemble how an AI researcher spends their time.

The reason users of coding agents feel a higher rate of automation in their bones, whereas people in most other occupations don't, is that automating engineering has been the focus of the industry for a while now. Despite the expectation that 2025 would be the year of the AI agent, it turns out the industry is small and cannot have too many priorities; hence basically the only competent agents we got in 2025 so far are coding agents.

Everyone serious about winning the AI race is trying to automate one job: AI R&D. To a first approximation, there is no point yet in automating anything else, except to raise capital (human or investment) or to earn money. Until you are hitting diminishing returns on your rate of acceleration, unrelated capabilities are not a priority.

This means that a lot of pressure is being applied to AI research tasks at all times, and that all delays in the automation of AI R&D are, in a sense, real in a way that's not necessarily the case for tasks unrelated to AI R&D. It would be odd if there were easy gains to be made in accelerating the work of AI researchers on frontier models, in addition to what is already being done across the industry.

I don't know whether automating AI research is going to be smooth all the way there or not; my understanding is that slow vs. fast takeoff hinges significantly on how bottlenecked we become by non-R&D factors over time. Nonetheless, the above suggests a baseline expectation: AI research automation will advance more steadily than automation elsewhere.
Mo Putera · 1d
Something about the imagery in Tim Krabbe's April 2000 diary entry on ultra-long, computer-database-generated forced mates has stuck with me in the long years since I first came across it: it poetically expresses what superhuman intelligence in a constrained setting might look like. See also the essay it links, Stiller's Monsters - or perfection in chess. In 2014, Krabbe's diary announced an update to the forced-mate length record: 549 moves. Krabbe of course includes all the move sequences in his diary entries at the links above; I haven't reproduced them here.
GradientDissenter · 4d
Notes on living semi-frugally in the Bay Area.

I live in the Bay Area, but my cost of living is pretty low: roughly $30k/year. I think I live an extremely comfortable life. I try to be fairly frugal, both so I don't end up dependent on jobs with high salaries and so that I can donate a lot of my income, but it doesn't feel like much of a sacrifice. Often when I tell people how little I spend, they're shocked. I think people conceive of the Bay as exorbitantly expensive, and it can be, but it doesn't have to be.

Rent: I pay ~$850 a month for my room. It's a small room in a fairly large group house I live in with nine friends. It's a nice space with plenty of common areas and a big backyard. I know of a few other places like this (including in even pricier areas like Palo Alto). You just need to know where to look and to be willing to live with friends. On top of rent I pay ~$200/month (edit: I was missing one expense; it's more like $300) for things like utilities, repairs on the house, and keeping the house tidy.

Food: I pool the grocery bill with my housemates so we can optimize where we shop a little. We also often cook for each other (notably most of us, including myself, also get free meals on weekdays in the offices we work from, though I don't think my cost of living was much higher when I was cooking for myself each day not that long ago). It works out to ~$200/month.

Other: I don't buy that much stuff. I thrift most of my clothes, but I buy myself nice items when it matters (for example, comfy, somewhat-expensive socks really do make my day better when I wear them). I have a bunch of miscellaneous small expenses like my Claude subscription, toothpaste, etc., but they don't add up to much. I don't have a car, a child, or a pet (but my housemate has a cat, which is almost the same thing). I try to avoid meal delivery and Ubers, though I use them in a pinch. Public transportation costs aren't nothing, but they're quite manageable. I actually have a PA who helps me with
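As a rough sanity check on the figures above (my arithmetic, not the author's, using the edited ~$300 utilities number):

```latex
% Recurring line items listed above (assumed: rent $850/mo,
% utilities/upkeep ~$300/mo, food ~$200/mo):
\[
(\$850 + \$300 + \$200)/\text{mo} \times 12 \approx \$16{,}200/\text{yr}
\]
% which leaves roughly $14k/yr of the stated ~$30k/yr total for
% everything else (transit, clothing, subscriptions, one-offs).
```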
LWLW · 2d
I just can't wrap my head around people who work on AI capabilities or AI control. My worst fear is that AI control works, power inevitably concentrates, and then the people who have the power abuse it. What is outlandish about this chain of events? It just seems like we're trading X-risk for S-risks, which seems like an unbelievably stupid idea. Do people just not care? Are they genuinely fine with a world with S-risks as long as it's not happening to them? That's completely monstrous, and I can't wrap my head around it. The people who work at the top labs make me ashamed to be human. It's a shande (Yiddish: a disgrace).

This probably won't make a difference, but I'll write it anyway. If you're working on AI control: do you trust the people who end up in charge of the technology to wield it well? If you don't, why are you working on AI control?
Tomás B. · 16h
Suppose you're a billionaire and you want to get married. However, gold-digging people of the gender you prefer target you, and they are good enough at faking attraction that you cannot tell. How should you act? One idea I had: pick 10,000 random people and then select from there. You will, at the very least, likely remove most of the world-class dissemblers. Armstrong proposes a similar scheme in Siren worlds and the perils of over-optimised search.
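A minimal sketch of why this helps, under toy assumptions I'm introducing here (deception skill as an i.i.d. normal trait, and the unfiltered billionaire facing the single best dissembler who targets them; all names and numbers are illustrative):

```python
import math
import random

# Toy model (my illustration, not Armstrong's formalism): "skill at
# faking attraction undetectably" is an i.i.d. standard-normal trait.
# A billionaire visible to everyone faces the best faker in the whole
# pool of people willing to target them; picking from a pre-chosen
# random sample caps that optimization pressure.

random.seed(0)

TARGETING_POOL = 10_000_000  # assumed number of would-be gold-diggers
RANDOM_SAMPLE = 10_000       # the "pick 10,000 random people" scheme

def best_faker_skill(n: int) -> float:
    """Max faking skill among n independent standard-normal draws."""
    return max(random.gauss(0.0, 1.0) for _ in range(n))

# The expected maximum of n standard normals grows like sqrt(2 ln n),
# so shrinking the effective pool mostly removes the extreme tail.
print(f"theory, pool of 10M:   ~{math.sqrt(2 * math.log(TARGETING_POOL)):.2f} sigma")
print(f"theory, sample of 10k: ~{math.sqrt(2 * math.log(RANDOM_SAMPLE)):.2f} sigma")
print(f"simulated best faker, no filter:     {best_faker_skill(TARGETING_POOL):.2f}")
print(f"simulated best faker, random sample: {best_faker_skill(RANDOM_SAMPLE):.2f}")
```

The gap (roughly 5.7 sigma vs. 4.3 sigma in this toy setup) is the selection pressure the random sample removes: world-class dissemblers are overwhelmingly unlikely to land in a 10,000-person random draw.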
Berkeley Solstice Weekend
2025 NYC Secular Solstice & East Coast Rationalist Megameetup
36 · Solstice Season 2025: Ritual Roundup & Megameetups · Raemon · 3d · 4 comments
286 · I ate bear fat with honey and salt flakes, to prove a point · aggliu · 6d · 43 comments
137 · Unexpected Things that are People · Ben Goldhaber · 1d · 6 comments
254 · Ω · Legible vs. Illegible AI Safety Problems · Wei Dai · 6h · 78 comments
289 · Why I Transitioned: A Case Study · Fiora Sunshine · 8d · 49 comments
742 · The Company Man · Tomás B. · 2mo · 70 comments
695 · The Rise of Parasitic AI · Adele Lopez · 2mo · 178 comments
53 · Ω · Condensation · abramdemski · 8h · 6 comments
115 · Mourning a life without AI · Nikola Jurkovic · 2d · 29 comments
175 · The Unreasonable Effectiveness of Fiction · Raelifin · 4d · 20 comments
167 · Lack of Social Grace is a Lack of Skill · Screwtape · 7d · 23 comments
192 · You’re always stressed, your mind is always busy, you never have enough time · mingyuan · 8d · 6 comments
357 · Hospitalization: A Review · Logan Riggs · 1mo · 21 comments