Popular Comments

Jeremy Gillen · 3d* · 801
Resampling Conserves Redundancy (Approximately)
Alfred Harwood and I were working through this as part of a Dovetail project and unfortunately I think we've found a mistake. The Taylor expansion in Step 2 has the 3rd-order term o(δ³) = (1/6)·[2/(√P[X])³]·(−δ[X])³. This term should disappear as δ[X] goes to zero, but this is only true if √P[X] stays constant. The Γ transformation in Part 1 reduces (most terms of) P[X] and Q[X] at the same rate, so √P[X] decreases at the same rate as δ[X]. So the 2nd-order approximation isn't valid.

For example, we could consider two binary random variables with probability distributions P(X=0) = z·p, P(X=1) = 1 − z·p, Q(X=0) = z·q, and Q(X=1) = 1 − z·q. If δ[X] = √P(X) − √Q(X), then δ[X] → 0 as z → 0. But consider the third-order term for X=0, which is (1/3)·((√Q(0) − √P(0))/√P(0))³ = (1/3)·((√(z·q) − √(z·p))/√(z·p))³ = (1/3)·((√q − √p)/√p)³. This is a constant term which does not vanish as z → 0.

We found a counterexample to the whole theorem (which is what led to us finding this mistake), which has KL(X2→X1→Λ′) / max[KL(X1→X2→Λ), KL(X2→X1→Λ)] > 10, and it can be found in this colab. There are some stronger counterexamples at the bottom as well. We used sympy because we were getting occasional floating point errors with numpy. Sorry to bring bad news! We're going to keep working on this over the next 7 weeks, so hopefully we'll find a way to prove a looser bound. Please let us know if you find one before us!
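A quick numerical sketch of the point above (my own illustration, not from the comment; the constants p and q are arbitrary): as z shrinks, δ[X=0] goes to zero while the third-order term stays fixed at (1/3)·((√q − √p)/√p)³.

```python
# Illustration only (not from the comment): check that delta[X=0] -> 0 as z -> 0
# while the third-order Taylor term stays constant.
import math

p, q = 0.4, 0.9  # arbitrary constants; any distinct values in (0, 1) work

for z in [1e-1, 1e-3, 1e-6, 1e-9]:
    P0, Q0 = z * p, z * q                      # P(X=0) = z*p, Q(X=0) = z*q
    delta = math.sqrt(P0) - math.sqrt(Q0)      # shrinks like sqrt(z)
    third = (1 / 3) * ((math.sqrt(Q0) - math.sqrt(P0)) / math.sqrt(P0)) ** 3
    print(f"z={z:.0e}  delta={delta:+.3e}  third-order term={third:+.6f}")

# Every row prints the same third-order term, (1/3)*((sqrt(q)-sqrt(p))/sqrt(p))**3,
# so the term does not vanish even though delta does.
```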
David Gross · 1d · 356
Meditation is dangerous
Things I'd like to know:

1. What is the baseline things-going-tits-up-mentally rate for people similarly situated to those who take on meditation, and how does that compare to the rate for people who begin to meditate? There's a smell of "the plural of anecdote is data" about this post. People at high risk for mental illness can go around the bend while meditating? Well, they can go around the bend watching TV or chatting with Claude too. How much more dangerous is meditation than the typical range of alternative activities?

2. There's bad, and then there's Bad. Ask a reformed alcoholic whether there were any negative side effects of giving up alcohol, and they'll tell you it was a bit like an anal probe of the soul with the devil's pitchfork for a month or two at least. Is that a cautionary tale that should steer you away from sobriety, or just par for that otherwise worthy course? Some of the practitioners of meditation (esp. in the Buddhist tradition) think we're most of us addicts to the chimerical delights of the senses, and that of course it'll be a struggle to overcome that. FWIW.

3. Is this like psychedelics, where if you take them in the context of a long-standing ritual practice with lots of baked-in wisdom, things will probably go okay, or at least they'll know how to give you a soft pillow to land on if you get too far out there; but if you take them in some arbitrary context there's no telling how it'll turn out? How do outcomes look for people who meditate in an institutional context with feedback from a seasoned veteran vs. those who meditate based on e.g. enthusiastic blog posts?

Not saying you're wrong, but answers to things like this would help me know what to do with your observations.
Richard_Ngo · 10h · 235
Generalized Coming Out Of The Closet
I think that properly understanding the psychology of BDSM might provide the key to understanding psychology in general (in ways that are pretty continuous with the insights of early pioneers of psychology, e.g. Freud and particularly Jung). My current model is:

* The process of learning to be "good" typically involves renouncing and suppressing your "antisocial" desires, some of which are biologically ingrained (e.g. many aspects of male aggression) and some of which are learned idiosyncratically (e.g. having a traumatic childhood which teaches you that the world is zero-sum and you can only gain by hurting others). It also involves renouncing and suppressing parts of yourself which are "pathetic" or "weak" (e.g. the desire to not have to make any choices, the belief that you are bad and unworthy of existing).

* These desires/beliefs aren't removed from your psyche (since internal subagents have strong survival instincts, making it difficult to fully destroy them) but rather coagulate into a "shadow": a coalition of drives and desires which mostly remains hidden from your conscious thinking, but still influences your behavior in various ways. The influence of your shadow on your behavior is typically hard for you to detect yourself, but often easy for (emotionally intelligent) others to detect in you.

* People who have a very strong "will-to-Goodness" don't necessarily have very strong/extreme shadows, but often do, because they created the very strong will-to-Goodness by strongly suppressing their antisocial desires, which then strongly polarized those desires.

* Many types of BDSM are a fairly straightforward manifestation of the desires in your shadow. Participating in BDSM can be good for one's psyche in the sense that it represents a partial reconciliation with one's shadow, reducing internal conflict. I.e. rather than having a shadow that's fully repressed, you can have a "bargain" between your ego and your shadow that's something like "the ego is (mostly) in charge almost all the time, while the shadow is (mostly) in charge during kinky sex". It feels really somatically nice for parts of your psyche which are almost always repressed and shamed to be allowed to act for once.

* However, BDSM can also be bad for one's psyche in the sense that positive reinforcement during BDSM causes your shadow to grow, thereby increasing internal conflict longer-term. Also, doing BDSM with others can cause their shadow to grow too. "Healthy" BDSM probably looks more like an outlet which gradually helps you to accept and integrate your shadow then move on, rather than a lifestyle or a part of your long-term identity. My guess is that BDSM communities end up instantiating similar "crab in a bucket" dynamics as incel communities, i.e. holding people back from developing healthier psychologies.

* Young children are rightly horrified by BDSM when they stumble upon it, because it's an indication that there's something twisted/perverse going on in the world. However, I suspect that almost all adults who feel horrified by BDSM are in part reacting to their own shadow. My guess is that the few people who have actually integrated their shadows in a healthy way are neither very interested in nor very horrified by BDSM, but rather mostly sad about it (like they're sad about suffering more generally). When I say that they've "integrated" their shadows, I mean that their BDSM-like desires are cooperating strongly enough with their other desires that they're a little bit present most of the time, rather than driving them to create simulacra of highly transgressive behavior. This might sound scary, but I expect that fully experiencing the ways in which we all have power over each other in normal life provides enough fodder to satisfy the BDSM-like desires in almost all of us. (For example, if you really allowed yourself to internalize how much power being a westerner gives you over people in developing countries, or the power dynamics in friendships where one person is more successful than the other, I expect that thought process to feel kinda like BDSM.)

* Trying to evoke and deal with your shadow is a difficult and fraught process, since (by definition) it involves grappling with the parts of yourself that you're most ashamed about and most scared of giving control to. I recommend doing so gradually and carefully. My most direct engagement with shadow work was regrettably intense (analogous to a bad psychedelic trip) and came very close to having very bad effects on my life (though I've now wrestled those effects into a positive direction, and find shadow work very valuable on an ongoing basis).

* As you can probably infer, most of the points above are informed by my own past and ongoing experiences.
Science Isn't Enough
Book 4 of the Sequences Highlights

While far better than what came before, "science" and the "scientific method" are still crude, inefficient, and inadequate to prevent you from wasting years of effort on doomed research directions.

488 · Welcome to LessWrong!
Ruby, Raemon, RobertM, habryka
6y
76
October Meetup - One Week Late
AI Safety Law-a-thon: We need more technical AI Safety researchers to join!
[Today] Göttingen – ACX Meetups Everywhere Fall 2025
[Today] Warsaw – ACX Meetups Everywhere Fall 2025
First Post: When Science Can't Help
149
The "Length" of "Horizons"
Adam Scholl
2d
22
262
Towards a Typology of Strange LLM Chains-of-Thought
1a3orn
6d
21
lc · 4h · 2317
Vladimir_Nesov
1
Bad people underestimate how nice some people are and nice people underestimate how bad some people are.
faul_sname · 19h · 368
Drake Thomas, ryan_greenblatt, and 5 more
9
Why is it worse for x-risk for China to win the AI race? My understanding of the standard threat model is that, at some point, governments will need to step in and shut down or take control of profitable and popular projects for the good of all society. I look at China, and I look at the US, and I can't say "the US is the country I would bet on to hit the big red button here". There's got to be something I'm missing here.
Fabien Roger · 18h · Ω20310
Garrett Baker, Buck, and 1 more
4
I listened to the books Original Sin: President Biden's Decline, Its Cover-up, and His Disastrous Choice to Run Again and The Divider: Trump in the White House, 2017–2021. Both clearly have an axe to grind and I don't have enough US politics knowledge to know which claims are fair, and which ones are exaggerations and/or are missing important context, but these two books are sufficiently anti-correlated that it seems reasonable to update based on the intersection of the two books. Here are some AGI-relevant things I learned:

* It seems rough to avoid sycophancy dynamics as president:
* There are often people around you who want to sabotage you (e.g. to give more power to another faction), so you need to look out for saboteurs and discourage disloyalty.
* You had a big streak of victories to become president, which probably required a lot of luck but also required you to double down on the strength you previously showed and be confident in your abilities to succeed again in higher-stakes positions.
* Of course you can still try to be open to new factual information, but being calibrated about how seriously to take dissenting points of view sounds rough when the facts are not extremely crisp and legible.
* This makes me somewhat more pessimistic about how useful AGI advisors could be.
* Unconditional truthfulness (aka never lying) seems very inconvenient in politics. It looks quite common to e.g. have to choose between truthfulness and loyalty in an environment that strongly encourages loyalty. Lying seems especially convenient when it is about some internal state ("did you know that Biden was too old?", "do you think the current policy of the admin you work for is good?"). But even regular cover-ups about factual information seem quite common.
* I think this makes designing good model specs for AGI advisors potentially quite hard, especially if the AGI advisors also have to answer questions from journalists and other adversarial entities.
* I wonder
Daniel Kokotajlo · 1d · 411
Bogdan Ionut Cirstea, RussellThor, and 3 more
9
Suppose AGI happens in 2035 or 2045. Will takeoff be faster, or slower, than if it happens in 2027?

Intuition for slower: In the models of takeoff that I've seen, longer timelines are correlated with slower takeoff, because they share a common cause: the inherent difficulty of training AGI. Or to put it more precisely, there are all these capability milestones we are interested in, such as superhuman coders, full AI R&D automation, AGI, ASI, etc., and there's this underlying question of how much compute, data, tinkering, etc. will be needed to get from milestone 1 to 2 to 3 to 4, and these things are probably all correlated (at least in our current epistemic state). Moreover, in the 2030s the rate of growth of inputs such as data, compute, etc. will have slowed, so all else equal the pace of takeoff should be slower.

Intuition for faster: That was all about correlation. Causally, it seems clear that longer timelines cause faster takeoff, because there's more compute lying around, more data available, more of everything. If you have (for example) just reached the full automation of AI R&D, and you are trying to do the next big paradigm shift that'll take you to ASI, you'll have orders of magnitude more compute and data to experiment with (and your automated AI researchers will be both more numerous and serially faster!) if it's 2035 instead of 2027.

"So what?" the reply goes. "Correlation is what matters for predicting how fast takeoff will be in 2035 or 2045. Yes, you'll have +3 OOMs more resources with which to do the research, but (in expectation) the research will require (let's say) +6 OOMs more resources." But I'm not fully satisfied with this reply.

Apparent counterexample: Consider the paradigm of brainlike AGI, in which the tech tree is (1) figure out how the human brain works, (2) use those principles to build an AI that has similar properties, i.e. similar data-efficient online learning blah blah blah, and (3) train that AI in some simulation environment si
Cleo Nardo · 4d* · 781
bodry, cosmobobak, and 8 more
22
What's the Elo rating of optimal chess? I present four methods to estimate the Elo rating of optimal play: (1) comparing optimal play to random play, (2) comparing optimal play to sensible play, (3) extrapolating Elo rating vs draw rates, (4) extrapolating Elo rating vs search depth.

1. Optimal vs Random

Random plays completely random legal moves. Optimal plays perfectly. Let ΔR denote the Elo gap between Random and Optimal. Random's expected score is given by E_Random = P(Random wins) + 0.5 × P(Random draws). This is related to the Elo gap via the formula E_Random = 1/(1 + 10^(ΔR/400)).

First, suppose that chess is a theoretical draw, i.e. neither player can force a win when their opponent plays optimally. From Shannon's analysis of chess, there are ~35 legal moves per position and ~40 moves per game. At each position, assume only 1 move among the 35 legal moves maintains the draw. This gives a lower bound on Random's expected score (and thus an upper bound on the Elo gap): P(Random accidentally plays an optimal drawing line) ≥ (1/35)^40, and therefore E_Random ≥ 0.5 × (1/35)^40. If instead chess is a forced win for White or Black, the same calculation applies: Random scores (1/35)^40 when playing the winning side and 0 when playing the losing side, giving E_Random ≥ 0.5 × (1/35)^40.

Rearranging the Elo formula: ΔR = 400 × log₁₀((1/E_Random) − 1). Since E_Random ≥ 0.5 × (1/35)^40 ≈ 9 × 10^(-63), the Elo gap between random play and perfect play is at most about 24,800 points. Random has an Elo rating of 477 points[1]. Therefore, the Elo rating of Optimal is no more than about 25,300 points.

2. Optimal vs Sensible

We can improve the upper bound by comparing Optimal to Sensible, a player who avoids ridiculous moves such as sacrificing a queen without compensation. Assume that there are three sensible moves in each position, and that Sensible plays randomly among sensible moves. Optimal still plays perfectly. Following the same analysis, E_Sensible ≥ 0.5 × (1/3)^40
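For concreteness, here is a small sketch (mine, not from the post) that simply evaluates the Elo-gap formula above for both bounds; the exact outputs depend on the assumed branching factor (~35 legal or 3 sensible moves) and game length (~40 moves).

```python
# Sketch of the Elo-gap arithmetic above (illustration, not from the post).
import math

def elo_gap(expected_score: float) -> float:
    # Rearranged Elo formula: gap implied by the weaker player's expected score.
    return 400 * math.log10(1 / expected_score - 1)

# Method 1: Random vs Optimal (~35 legal moves, ~40 moves, a draw scores 0.5).
e_random = 0.5 * (1 / 35) ** 40
print(f"E_Random   >= {e_random:.1e}, Elo gap <= {elo_gap(e_random):,.0f}")

# Method 2: Sensible vs Optimal (3 sensible moves per position).
e_sensible = 0.5 * (1 / 3) ** 40
print(f"E_Sensible >= {e_sensible:.1e}, Elo gap <= {elo_gap(e_sensible):,.0f}")

# Adding Random's measured Elo (~477) gives the upper bound on Optimal's rating.
print(f"Optimal Elo <= {477 + elo_gap(e_random):,.0f}")
```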
Daniel Jacobson · 8h · 50
Mitchell_Porter
1
Questions: When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society's reaction be? What will be the ramifications of this reaction? I'm asking this here because I can't find articles addressing these questions and because I want a diverse array of perspectives.   
eggsyntax · 8h · 50
0
Ezra Klein's interview with Eliezer Yudkowsky (YouTube, unlocked NYT transcript) is pretty much the ideal Yudkowsky interview for an audience of people outside the rationalsphere, at least those who are open to hearing Ezra Klein's take on things (which I think is roughly liberals, centrists, and people on the not-that-hard left). Klein is smart, and a talented interviewer. He's skeptical but sympathetic. He's clearly familiar enough with Yudkowsky's strengths and weaknesses in interviews to draw out his more normie-appealing side. He covers all the important points rather than letting the discussion get too stuck on any one point. If it reaches as many people as most of Klein's interviews, I think it may even have a significant impact above counterfactual. I'll be sharing it with a number of AI-risk-skeptical people in my life, and insofar as you think it's good for more people to really get the basic arguments — even if you don't fully agree with Eliezer's take on it — you may want to do the same.
719 · The Company Man
Tomás B.
26d
65
658 · The Rise of Parasitic AI
Adele Lopez
1mo
175
332 · Hospitalization: A Review
Logan Riggs
10d
18
262 · Towards a Typology of Strange LLM Chains-of-Thought
1a3orn
6d
21
196 · If Anyone Builds It Everyone Dies, a semi-outsider review
dvd
5d
49
103 · Meditation is dangerous
Algon
1d
17
227 · I take antidepressants. You’re welcome
Elizabeth
9d
8
149The "Length" of "Horizons"
Adam Scholl
2d
22
185 · The Most Common Bad Argument In These Parts
J Bostock
8d
38
106 · Cheap Labour Everywhere
Morpheus
3d
26
336 · Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
Charbel-Raphaël
1mo
27
315 · Why you should eat meat - even if you hate factory farming
KatWoods
24d
90
125 · That Mad Olympiad
Tomás B.
4d
11