LESSWRONG

Popular Comments

Harry Potter and The Methods of Rationality

What if Harry was a scientist? What would you do if the universe had magic in it? 
A story that conveys many rationality concepts, helping to make them more visceral, intuitive and emotionally compelling.

Vladimir_Nesov · 8h
Comparing Payor & Löb
I would term □x→x "hope for x" rather than "reliability", because it's about willingness to enact x in response to belief in x, but if x is no good, you shouldn't do that. Indeed, for bad x, having the property □x→x is harmful fatalism, following along with destiny rather than choosing it. In those cases, you might want □x→¬x or something, though that only prevents x from being believed, so that you won't need to face □x in actuality; it doesn't prevent the actual x. So □x→x reflects a value judgement about x expressed in the agent's policy, something downstream of endorsement of x, a law of how the content of the world behaves according to an embedded agent's will.

Payor's Lemma then talks about belief in hope, □(□x→x): hope itself is exogenous and needs to be judged (endorsed or not). Which is reasonable for games, since what the coalition might hope for is not anyone's individual choice; the details of this hope couldn't have been hardcoded in any agent a priori and need to be negotiated during a decision that forms the coalition. A functional coalition should be willing to act on its own hope (which is again something we need to check for a new coalition, though it might've already been the case for a singular agent), that is, we need to check that □(□x→x) is sufficient to motivate the coalition to actually x. This is again a value judgement about whether this coalition's tentative aspirations, being a vehicle for hope that x, are actually endorsed by it. Thus I'd term □(□x→x) "coordination" rather than "trust": the fact that this particular coalition would tentatively intend to coordinate on a hope for x.

Hope □x→x is a value judgement about x, and in this case it's the coalition's hope rather than any one agent's hope, and the coalition is a temporary nascent agency that doesn't necessarily know what it wants yet. The coalition asks: "If we find ourselves hoping for x together, will we act on it?" So we start with coordination about hope, seeing if this particular hope wants to settle as the coalition's actual values, and judging whether it should by enacting x if at least coordination on this particular hope is reached, which should happen only if x is a good thing.

(One intuition pump with some limitations outside the provability formalism is treating □x as "probably x", perhaps according to what some prediction market tells you. If "probably x" is enough to prompt you to enact x, that's some kind of endorsement, and it's a push towards increasing the equilibrium-on-reflection value of the probability of x, pushing "probably x" closer to reality. But if x is terrible, then enacting it in response to its high probability is following along with self-fulfilling doom, rather than doing what you can to push the equilibrium away from it.)

Löb's Theorem then says that if we merely endorse a belief by enacting the believed outcome, this is sufficient for the outcome to actually happen, a priori and without that belief yet being in evidence. And Payor's Lemma says that if we merely endorse a coalition's coordinated hope by enacting the hoped-for outcome, this is sufficient for the outcome to actually happen, a priori and without the coordination around that hope yet being in evidence. The use of Löb's Theorem or Payor's Lemma is that the condition (belief in x, or coordination around hope for x) should help in making the endorsement: it should be easier to decide to x if you already believe that x, or if you already believe that your coalition is hoping for x.
For coordination, this is important because every agent can only unilaterally enact its own part in the joint policy, so it does need some kind of premise about the coalition's nature (in this case, about the coalition's tentative hope for what it aims to achieve) in order to endorse playing its part in the coalition's joint policy. It's easier to decide to sign an assurance contract than to unconditionally donate to a project, and the role of Payor's Lemma is to say that if everyone does sign the assurance contract, then the project will in fact get funded sufficiently.
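For reference, a compact standard rendering of the two results being compared (not part of the original comment; □ is the provability/belief modality and ⊢ is provability in the theory):

\[
\text{Löb's Theorem:}\quad \text{if } \vdash \Box x \to x, \text{ then } \vdash x.
\]
\[
\text{Payor's Lemma:}\quad \text{if } \vdash \Box(\Box x \to x) \to x, \text{ then } \vdash x.
\]

In the comment's terms, the antecedent of Löb's Theorem is acting on "hope for x" and the antecedent of Payor's Lemma is acting on "coordination" around that hope; each result discharges its assumption and delivers x outright.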
Raemon · 4d
Heroic Responsibility
I think this part of Heroic Responsibility isn't too surprising/novel to people. Obviously the business owner has responsibility for the business. The part that's novel is more like: if I'm some guy working in legal, and I notice this hot potato going around, and it's explicitly not my job to deal with it, I might nonetheless say "ugh, the CEO is too busy to deal with this today and it's not anyone else's job. I will deal with it." Then you go to each department head, even if you're not a department head yourself but a lowly intern (say), and say "guys, I think we need to decide who's going to deal with this." And if their ego won't let them take advice from an intern, you might also take it as your responsibility to figure out how to navigate their ego – maybe by making them feel like it was their own idea, or by threatening to escalate to the CEO if they don't get to it themselves, or by appealing to their sense of duty.

A great example of this, staying with the realm of "random Bureaucracy", I got from @Elizabeth: E. D. Morel was a random bureaucrat at a shipping company in 1891. He noticed that his company was shipping guns and manacles into the Congo, and shipping rubber and other resources back out to Britain. It was not Morel's job to notice that this was a bit weird. It was not Morel's job to notice that that weirdness was a clue, and look into those clues. And then find out that what was happening was: weapons were being sent to the Congo to forcibly steal resources at gunpoint. It was not his job to make it his mission to raise awareness of the Congo abuses and stop them. But he did.

...

P.S. A failure mode of rationalists is to try to take Heroic Responsibility for everything, esp. in a sort of angsty way that is counterproductive and exhausting. It's also a failure mode to act as if only you can possibly take Heroic Responsibility, rather than trying to model the ecosystem around you and the other actors (some of whom might be Live Players who are also taking Heroic Responsibility, some of whom might be sort of local actors following normal incentives but are still, like, part of the solution). There is nuance to when and how to do Heroic Responsibility well.
niplav · 3d
People Seem Funny In The Head About Subtle Signals
Hm, I am unsure how much to believe this, even though my intuitions go the same way as yours. As a correlational datapoint, I tracked my success from cold approach and the time I've spent meditating (including a 2-month period of usually ~2 hours of meditation/day), and don't see any measurable improvement in my success rate from cold approach. (Note that the linked analysis also includes a linear regression with slope -6.35e-08, but with p=0.936, so it could be random.)

In cases where meditation does stuff to your vibe-reading of other people, I would guess that I'd approach women who are more open to being approached. I haven't dug deeper into my fairly rich data on this, and the data doesn't include many post-retreat approaches, but I still find the data I currently have instructive.

I wish more people tracked and analyzed this kind of data, but I seem to be alone in this so far. I do feel some annoyance at everyone (the, ah, "cool people"?) in this area making big claims (and sometimes money off of those claims) without even trying to track any data and analyze it, leaving it basically to me to scramble together some DataFrames and effect sizes next to my dayjob.[1]

> So start meditating for an hour a day for 3 months using the mind illuminated as an experiment (getting some of the cool skills mentioned in Kaj Sotala's sequence?) and see what happens?

Do you have any concrete measurable predictions for what would happen in that case?

----------------------------------------

1. I often wonder if empiricism is just incredibly unintuitive for humans in general, and experimentation and measurement even more so. Outside the laboratory very few people do it; see e.g. Aristotle's claims about the number of women's teeth, or his theory of ballistics, which went un(con)tested for almost 2000 years. What is going on here? Is empiricism really that hard? Is it about what people bother to look at? Is making shit up just so much easier that everyone stays in that mode, which is a stable equilibrium? ↩︎
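To make the kind of analysis described above concrete, here is a minimal sketch (not niplav's actual code; the file name and column names are hypothetical) of regressing approach success on accumulated meditation time:

import pandas as pd
from scipy.stats import linregress

# Hypothetical log of cold-approach attempts: one row per approach, with the
# meditation time accumulated up to that point and a 0/1 success flag.
df = pd.read_csv("approaches.csv")  # made-up file name

# Regress success on cumulative meditation minutes. A slope near zero with a
# large p-value (as in the numbers quoted above: slope ~ -6.35e-08, p = 0.936)
# would be consistent with no measurable effect of meditation on success rate.
result = linregress(df["cumulative_meditation_minutes"], df["success"])
print(f"slope = {result.slope:.3g}, p-value = {result.pvalue:.3f}")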
Berkeley Solstice Weekend
Fri Dec 5•Berkeley
2025 NYC Secular Solstice & East Coast Rationalist Megameetup
Fri Dec 19•New York
LW-Cologne meetup
Sat Nov 8•Köln
ACX Phoenix, November 2025 Meetup
Sat Nov 8•Phoenix
493 · Welcome to LessWrong! · Ruby, Raemon, RobertM, habryka · 6y · 76 comments
172 · The Unreasonable Effectiveness of Fiction · Raelifin · 3d · 18 comments
87 · LLM-generated text is not testimony · TsviBT · 6d · 81 comments
25 · Solstice Season 2025: Ritual Roundup & Megameetups · Raemon · 2d · 0 comments
275 · I ate bear fat with honey and salt flakes, to prove a point · aggliu · 5d · 39 comments
234 · Legible vs. Illegible AI Safety Problems (Ω) · Wei Dai · 4d · 68 comments
290 · Why I Transitioned: A Case Study · Fiora Sunshine · 7d · 49 comments
76 · Unexpected Things that are People · Ben Goldhaber · 9h · 0 comments
741 · The Company Man · Tomás B. · 2mo · 70 comments
690 · The Rise of Parasitic AI · Adele Lopez · 2mo · 178 comments
86 · Mourning a life without AI · Nikola Jurkovic · 22h · 5 comments
172 · The Unreasonable Effectiveness of Fiction · Raelifin · 3d · 18 comments
189 · You’re always stressed, your mind is always busy, you never have enough time · mingyuan · 7d · 6 comments
157 · Lack of Social Grace is a Lack of Skill · Screwtape · 6d · 23 comments
73 · A country of alien idiots in a datacenter: AI progress and public alarm · Seth Herd · 1d · 11 comments
357 · Hospitalization: A Review · Logan Riggs · 1mo · 21 comments
Quick Takes
dynomight · 9h
Just had this totally non-dystopian conversation: "...So for other users, I spent a few hours helping [LLM] understand why it was wrong about tariffs." "Noooo! That does not work." "Relax, it thanked me and stated it was changing its answer." "It's lying!" "No, it just confirmed that it's not lying."
Daniel Paleka · 1d
Slow takeoff for AI R&D, fast takeoff for everything else

Why is AI progress so much more apparent in coding than everywhere else? Among people who have "AGI timelines", most do not set their timelines based on data, but rather update them based on their own day-to-day experiences and social signals. As of 2025, my guess is that individual perception of AI progress correlates with how closely someone's daily activities resemble how an AI researcher spends their time.

The reason why users of coding agents feel a higher rate of automation in their bones, whereas people in most other occupations don't, is because automating engineering has been the focus of the industry for a while now. Despite the expectations for 2025 to be the year of the AI agent, it turns out the industry is small and cannot have too many priorities, hence basically the only competent agents we got in 2025 so far are coding agents. Everyone serious about winning the AI race is trying to automate one job: AI R&D.

To a first approximation, there is no point yet in automating anything else, except to raise capital (human or investment), or to earn money. Until you are hitting diminishing returns on your rate of acceleration, unrelated capabilities are not a priority. This means that a lot of pressure is being applied to AI research tasks at all times; and that all delays in automation of AI R&D are, in a sense, real in a way that's not necessarily the case for tasks unrelated to AI R&D. It would be odd if there were easy gains to be made in accelerating the work of AI researchers on frontier models in addition to what is already being done across the industry.

I don't know whether automating AI research is going to be smooth all the way there or not; my understanding is that slow vs fast takeoff hinges significantly on how bottlenecked we become by non-R&D factors over time. Nonetheless, the above suggests a baseline expectation: AI research automation will advance more steadily compared to auto
Mo Putera · 10h
Something about the imagery in Tim Krabbe's quote below from April 2000 on ultra-long computer database-generated forced mates has stuck with me, long years after I first came across it; something about poetically expressing what superhuman intelligence in a constrained setting might look like: And from that linked essay above, Stiller's Monsters - or perfection in chess: In 2014 Krabbe's diary entry announced an update to the forced mate length record at 549 moves: Krabbe of course includes all the move sequences in his diary entries at the links above, I haven't reproduced them here.
J Bostock · 8h
Spoilers (I guess?) for HPMOR
LWLW · 1d
I just can’t wrap my head around people who work on AI capabilities or AI control. My worst fear is that AI control works, power inevitably concentrates, and then the people who have the power abuse it. What is outlandish about this chain of events? It just seems like we’re trading X-risk for S-risks, which seems like an unbelievably stupid idea. Do people just not care? Are they genuinely fine with a world with S-risks as long as it’s not happening to them? That’s completely monstrous and I can’t wrap my head around it. The people who work at the top labs make me ashamed to be human. It’s a shandah.

This probably won’t make a difference, but I’ll write this anyways. If you’re working on AI control, do you trust the people who end up in charge of the technology to wield it well? If you don’t, why are you working on AI control?
GradientDissenter · 3d
Notes on living semi-frugally in the Bay Area.

I live in the Bay Area, but my cost of living is pretty low: roughly $30k/year. I think I live an extremely comfortable life. I try to be fairly frugal, both so I don't end up dependent on jobs with high salaries and so that I can donate a lot of my income, but it doesn't feel like much of a sacrifice. Often when I tell people how little I spend, they're shocked. I think people conceive of the Bay as exorbitantly expensive, and it can be, but it doesn't have to be.

Rent: I pay ~$850 a month for my room. It's a small room in a fairly large group house I live in with nine friends. It's a nice space with plenty of common areas and a big backyard. I know of a few other places like this (including in even pricier areas like Palo Alto). You just need to know where to look and to be willing to live with friends. On top of rent I pay ~$200/month (edit: I was missing one expense, it's more like $300) for things like utilities, repairs on the house, and keeping the house tidy.

I pool the grocery bill with my housemates so we can optimize where we shop a little. We also often cook for each other (notably most of us, including myself, also get free meals on weekdays in the offices we work from, though I don't think my cost of living was much higher when I was cooking for myself each day not that long ago). It works out to ~$200/month.

I don't buy that much stuff. I thrift most of my clothes, but I buy myself nice items when it matters (for example comfy, somewhat-expensive socks really do make my day better when I wear them). I have a bunch of miscellaneous small expenses like my Claude subscription, toothpaste, etc, but they don't add up to much. I don't have a car, a child, or a pet (but my housemate has a cat, which is almost the same thing). I try to avoid meal delivery and Ubers, though I use them in a pinch. Public transportation costs aren't nothing, but they're quite manageable. I actually have a PA who helps me with
Alexander Gietelink Oldenziel · 15h
Claude is smarter than you. Deal with it.

There is an incredible amount of cope about the current abilities of AI. AI isn't infallible. Of course. And yet... For 90% of queries a well-prompted AI has better responses than 99% of people. For some queries the number of people that could match the kind of deep, broad knowledge that AI has can be counted on two hands. Finally, obviously, there is no man alive on the face of the earth that comes even close to the breadth and depth of crystallized intelligence that AIs now have.

People have developed a keen apprehension and aversion for "AI slop". The truth of the matter is that LLMs are incredible writers, and if you had presented AI slop as human writing to somebody six years ago they would say it is anywhere from good if somewhat corporate writing all the way to inspired, eloquent, witty. Does AI sometimes make mistakes? Of course. So do humans. To be human is to err.

There is an incredible amount of cope about the current abilities of AI. Frankly, I find it embarrassing. Witness the absurd call to flag AI-assisted writing. The widespread disdain for "@grok is this true?". Witness how LLM psychosis has gone from perhaps a real phenomenon to a generic slur for slightly kooky people. The endless moving of goalposts. The hysteria for the slopapocalypse. The almost complete lack of interest in integrating AI in life conversations. The widespread shame that people evidently seem to still feel when they use AI.

Worst of all - it's not just academia in denial, or the unwashed masses. The most incredible thing to me is how much denial, cope and refusal there is in the AI safety space itself. I cannot escape the conclusion that inside each and every one of us is an insecure ape that cannot bear to see itself usurped from the throne of creation.
First Post: Chapter 1: A Day of Very Low Probability