
LESSWRONG

Slack and the Sabbath

Some things are fundamentally "out to get you," seeking your time, money and attention. Slack allows margin of error. You can relax. You can explore and pursue opportunities. You can plan for the long term. You can stick to principles and do the right thing.

First Post: Out to Get You

Quick Takes
Raemon · 14h
I feel so happy that "what's your crux?" / "is that cruxy" is common parlance on LW now, it is a meaningful improvement over the prior discourse. Thank you CFAR and whoever was part of the generation story of that.
GradientDissenter · 1d
When I was first trying to learn ML for AI safety research, people told me to learn linear algebra. And today lots of people I talk to who are trying to learn ML[1] seem under the impression they need to master linear algebra before they start fiddling with transformers.

I find in practice I almost never use 90% of the linear algebra I've learned. I use other kinds of math much more, and overall being good at empiricism and implementation seems more valuable than knowing most math beyond the level of AP calculus.

The one part of linear algebra you do absolutely need is a really, really good intuition for what a dot product is, the fact that you can do them in batches, and the fact that matrix multiplication is associative. Someone smart who can't so much as multiply matrices can learn the basics in an hour or two with a good tutor (I've taken people through it in that amount of time). The introductory linear algebra courses I've seen[2] wouldn't drill this intuition nearly as well as the tutor even if you took them.

In my experience it's not that useful to have good intuitions for things like eigenvectors/eigenvalues or determinants (unless you're doing something like SLT). Understanding bases and change-of-basis is somewhat useful for improving your intuitions, and especially useful for some kinds of interp, I guess? Matrix decompositions are useful if you want to improve cuBLAS. Sparsity sometimes comes up, especially in interp (it's also a very very simple concept).

The same goes for much of vector calculus. (You need to know you can take your derivatives in batches and that this means you write your d/dx as ∂/∂x or an upside-down triangle. You don't need curl or divergence.) I find it's pretty easy to pick things like this up on the fly if you ever happen to need them.

Inasmuch as I do use math, I find I most often use basic statistics (so I can understand my empirical results!), basic probability theory (variance, expectations, estimators), having good int
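[Editor's illustration, not part of the quick take: a minimal NumPy sketch of the two facts named above, that a batch of dot products is just a matrix multiply, and that matrix multiplication is associative. The shapes and variable names are made up for the example.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: a batch of 32 vectors, projected onto 64 directions.
X = rng.standard_normal((32, 512))
W = rng.standard_normal((512, 64))

# A matrix multiply is a batch of dot products: out[i, j] = X[i] . W[:, j].
out = X @ W                                   # shape (32, 64)
assert np.isclose(out[0, 0], X[0] @ W[:, 0])  # one dot product, same number

# Matrix multiplication is associative: stacked linear maps (ignoring
# nonlinearities) collapse into a single linear map.
W1 = rng.standard_normal((512, 256))
W2 = rng.standard_normal((256, 64))
assert np.allclose((X @ W1) @ W2, X @ (W1 @ W2))
```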
J Bostock · 1m
Coefficient Giving is one of the worst name changes I've ever heard:

* Coefficient Giving sounds bad while OpenPhil sounded cool and snappy.
* Coefficient doesn't really mean anything in this context; clearly it's a pun on "co" and "efficient", but that is also confusing. They say "A coefficient multiplies the value of whatever it's paired with", but that's just true of any number?
* They're a grantmaker who don't really advise normal individuals about where to give their money, so why "Giving" when their main thing is soliciting large philanthropic efforts and then auditing them?
* Coefficient Giving doesn't tell you what the company does at the start! "Good Ventures" and "GiveWell" tell you roughly what the company is doing.
* "Coefficient" is a really weird word, so you're burning weirdness points with the literal first thing anyone will ever hear you say. This seems like a name you would only think is good if you're already deep into rat/EA spaces.
* It sounds bad. Open Philanthropy rolls off the tongue, as does OpenPhil. OH-puhn fi-LAN-thruh-pee. Two sets of three. CO-uh-fish-unt GI-ving is an awkward four-two with a half-emphasis on the "fish" of coefficient. Sounds bad.

I'm coming back to this point, but there is no possible shortening other than "Coefficient", which is bad because it's just an abstract noun and not very identifiable, whereas "OpenPhil" was a unique identifier. CoGive maybe, but then you have two stressed syllables, which is awkward. It mildly offends my tongue to have to even utter their name.

Clearly OP wanted to shed their existing reputation, but man, this is a really bad name choice.
Simon Lermen · 2d
What's going on with MATS recruitment?

MATS scholars have gotten much better over time according to statistics like mentor feedback, CodeSignal scores and acceptance rate. However, some people don't think this is true and believe MATS scholars have actually gotten worse. So where are they coming from?

I might have a special view on MATS applications since I did MATS 4.0 and 8.0. I think in both cohorts, the heavily x-risk AGI-pilled participants were more of an exception than the rule.

"at the end of a MATS program half of the people couldn't really tell you why AI might be an existential risk at all." - Oliver Habryka

I think this is sadly somewhat true; I talked with some people in 8.0 who didn't seem to have any particular concern with AI existential risk or seemingly had never really thought about it. However, I think most people were in fact very concerned about AI existential risk. I ran a poll at some point during MATS 8.0 about Eliezer's new book, and a significant minority of students seemed to have pre-ordered it, which I guess is a pretty good proxy for whether someone is seriously engaging with AI x-risk.

I think I met some excellent people at MATS 8.0 but would not say they are stronger than 4.0; my guess is that quality went down slightly. I remember a few people in 4.0 who impressed me quite a lot, which I saw less of in 8.0. (4.0 had more very incompetent people, though.)

Suggestions for recruitment

This might also apply to other safety fellowships.

Better metrics: My guess is that the recruitment process might need another variable to measure beyond academics/coding/ML experience: the kind of thing that Tim Hua (an 8.0 scholar who created an AI psychosis bench) has. Maybe something like LessWrong karma, but harder to Goodhart.

More explicit messaging: It also seems to me that if you build an organization that tries to fight against the end of the world from AI, somebody should say that. Might put off some people and perhaps that sh
Daniel Kokotajlo · 17h
I feel like this should be a top-level linkpost: https://www.beren.io/2025-08-02-Do-We-Want-Obedience-Or-Alignment/
Wei Dai · 11h
I was curious what Habryka meant when he said this. Don't non-profits usually have some kind of board oversight? It turns out (from documents filed with the State of California) that Lightcone Infrastructure, which operates LW, is what's known as a sole-member nonprofit, with a 1-3 person board of directors determined by a single person (member), namely Oliver Habryka. (Edit: It looks like this is correct after all, but was unintentional. See Habryka's clarification.)

However, it also looks like the LW domain is owned by MIRI, and MIRI holds the content license (legally the copyright is owned by each contributor and licensed to MIRI for use on LW). So if there was a big enough dispute, MIRI could conceivably find another team to run LW. I'm not sure who owns the current code for LW, but I would guess it's Lightcone, so MIRI would have to also recreate a codebase for it (or license GreaterWrong's, I guess).

I was initially confused why Lightcone was set up that way (i.e., why was LW handed over to an organization controlled by a single person), but the structure probably makes it more nimble, and the risk of Lightcone "going rogue" is mitigated to a large extent by MIRI retaining the option to swap out the team. Anyway, it took me a while to figure all this out, and I thought I'd share it so others would be informed while participating on LW.
Buck · 1d
I think it's worth drilling your halfish-power-of-ten times tables, by which I mean memorizing the products of numbers like 1, 3, 10, 30, 100, 300, etc, while pretending that 3x3=10. For example, 30*30=1k, 10k times 300k is 3B, etc. I spent an hour drilling these on a plane a few years ago and am glad I did.
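[Editor's illustration, not part of the quick take: a minimal sketch of the trick under one reading of it. Round each factor to the nearest half power of ten, treat 3 as exactly 10^0.5 (so 3x3=10), and multiply by adding half-power exponents. Function names are made up for the example.]

```python
import math

def to_half_powers(x):
    """Round x to the nearest half power of ten, counted in half-powers:
    1 -> 0, 3 -> 1, 10 -> 2, 30 -> 3, ..."""
    return round(2 * math.log10(x))

def from_half_powers(k):
    """Convert back, pretending 3 * 3 = 10 (i.e. 3 is exactly 10^0.5)."""
    mantissa = 3 if k % 2 else 1
    return mantissa * 10 ** (k // 2)

def approx_mul(a, b):
    """Multiply on the 1, 3, 10, 30, 100, 300, ... lattice."""
    return from_half_powers(to_half_powers(a) + to_half_powers(b))

print(approx_mul(30, 30))           # 1000        ("30 * 30 = 1k")
print(approx_mul(10_000, 300_000))  # 3000000000  ("10k * 300k = 3B")
```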
Finding Balance & Opportunity in the Holiday Flux [free public workshop] · Sun Nov 23 · Online
Berkeley Solstice Weekend · Fri Dec 5 · Berkeley
OxRat November Pub Social · Wed Nov 19 · Oxfordshire
What the Luddites Can Teach Us About Societal Response to AI · Wed Nov 19 · Toronto
Welcome to LessWrong! · Ruby, Raemon, RobertM, habryka · 6y · 497 karma · 76 comments
Paranoia: A Beginner's Guide · habryka · 4d · 299 karma · 62 comments
Human Values ≠ Goodness · johnswentworth · 7d · 44 karma · 70 comments
Solstice Season 2025: Ritual Roundup & Megameetups · Raemon · 12d · 44 karma · 8 comments
New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence · peterbarnett, Aaron_Scher, David Abecassis, Brian Abeyta · 19h · 163 karma · 13 comments
Where is the Capital? An Overview · johnswentworth · 3d · 159 karma · 17 comments
How Colds Spread · RobertM · 1d · 116 karma · 8 comments
The Company Man · Tomás B. · 2mo · 755 karma · 70 comments
7 Vicious Vices of Rationalists · Ben Pace · 3d · 166 karma · 24 comments
Legible vs. Illegible AI Safety Problems · Wei Dai · 10d · 352 karma · 93 comments
ARC progress update: Competing with sampling · Eric Neyman · 21h · 99 karma · 2 comments
The Rise of Parasitic AI · Adele Lopez · 2mo · 696 karma · 179 comments
I ate bear fat with honey and salt flakes, to prove a point · aggliu · 16d · 311 karma · 51 comments
Varieties Of Doom · jdp · 2d · 110 karma · 23 comments
Status Is The Game Of The Losers' Bracket · johnswentworth · 21h · 72 karma · 29 comments