I just donated $1,000. This is not a minor amount for me, and I almost just donated $10 as suggested in Shoshannah's comment, but I knew I could donate that much without thought or effort, and I wanted to really put at least some effort into this, after seeing how much obvious effort Oliver and others at LessWrong have been putting in.
My decision process was as follows:
First, I dealt with my risk aversion/loss aversion/flinch response to giving large sums of money away. This took a couple minutes, much faster than it used to be thanks to things like my Season of Wealth a couple years ago, but felt like a mildly sharp object jiggling around in my chest until I smoothed it out with reminders of how much money I make these days compared to the relatively poor upbringing I had and the not-particularly-high salary I made for the first ~decade of my adult life.
Second, I thought of how much I value Lesswrong and Lighthaven existing in the world as a vague thing. Impersonally, not in the ways they have affected me, just like... worlds-with-these-people-doing-this-thing-in-it vs worlds-without. This got me up to a feeling of more than double what I wanted to give, somewher...
So it benefits me and conflict of interest and all that, but I think this is a pretty great comment in terms of broadcasting how one might go about figuring out how much to donate. This is often a pretty messy process. There are some people out there who do more actual math here, but, I think for most people this sort of thing is more useful. (Integrating this-sort-of-thing into some back-of-envelope calculations would be cool too if someone good at that did it and could articulate what went on inside them)
To somewhat account for my conflict-of-interest, I'd add: "a thing potentially missing here is what other things might fill a similar role as Lightcone in your world?". If you have ~$1000ish you can give without hardship, you might want to reflect more on the alternatives.
It gets sort of overwhelming to think about all the alternatives, so I think my recommendation to people is to come up with ~3 things they might consider giving money to, and then use a process like the one described here to figure out which one is best, or how to split money if you want to for whatever reason.
My wife and I just donated $10k, and will probably donate substantially more once we have more funds available.
LW 1.0 was how I heard about and became interested in AGI, x-risk, effective altruism, and a bunch of other important ideas. LW 2.0 was the place where I learned to think seriously about those topics & got feedback from others on my thoughts. (I tried to discuss all this stuff with grad students and professors at UNC, where I was studying philosophy, with only limited success). Importantly, LW 2.0 was a place where I could write up my ideas in blog post or comment form, and then get fast feedback on them (by contrast with academic philosophy where I did manage to write on these topics but it took 10x longer per paper to write and then years to get published and then additional years to get replies from people I didn't already know). More generally the rationalist community that Lightcone has kept alive, and then built, is... well, it's hard to quantify how much I'd pay now to retroactively cause all that stuff to happen, but it's way more than $10k, even if we just focus on the small slice of it that benefitted me personally.
Looking forward, I expect a diminished role, ...
I've gotten enormous value out of LW and its derived communities during my life, at least some of which is attributable to the LW2.0 revival and its effects on those communities. More recently, since moving to the Bay, I've been very excited by a lot of the in-person events that Lighthaven has helped facilitate. Also, LessWrong is doing so many things right as a website and source-of-content that no one else does (karma-gated RSS feeds! separate upvote and agree-vote! built-in LaTeX support!) and even if I had no connection to the other parts of its mission I'd want to support the existence of excellently-done products. (Of course there's also the altruistic case for impact on how-well-the-future-goes, which I find compelling on its own merits.) Have donated $5k for now, but I might increase that when thinking more seriously about end-of-year donations.
(Conflict of interest notice: two of my housemates work at Lightcone Infrastructure and I would be personally sad and slightly logistically inconvenienced if they lost their jobs. I don't think this is a big contributor to my donation.)
I'm considering donating. Can you give us a little more information on the breakdown of the costs? What are typical large expenses that the $1.6 million upkeep of Lighthaven consists of? Is this a usual cost for a similarly sized event space, or is it something about the location or the specialness of the place that makes it more expensive?
How much money does running LW cost? The post says it's >$1M, which somewhat surprised me, but I have no idea what the usual cost of running such a site is. Is the cost mostly server hosting, or salaries for content moderation, or salaries for software development, or something I haven't thought of?
Very reasonable question! Here is a breakdown of our projected budget:
| Type | Cost |
|---|---|
| Core Staff Salaries, Payroll, etc. (6 people) | $1.4M |
| **Lighthaven (Upkeep)** | |
| Operations & Sales | $240k |
| Repairs & Maintenance Staff | $200k |
| Porterage & Cleaning Staff | $320k |
| Property Tax | $300k |
| Utilities & Internet | $180k |
| Additional Rental Property | $180k |
| Supplies (Food + Maintenance) | $180k |
| **Lighthaven Upkeep Total** | **$1.6M** |
| Lighthaven Mortgage | $1M |
| LW Hosting + Software Subscriptions | $120k |
| Dedicated Software + Accounting Staff | $330k |
| **Total Costs** | **$4.45M** |
| Expected Lighthaven Income | ($2.55M) |
| **Annual Shortfall** | **$1.9M** |
And then, as explained in the post, in the coming year, we will have an additional mortgage payment of $1M due in March.
The core staff consists of generalists who work on a very wide range of different projects. My best guess is about 65% of the generalist labor in the coming year will go into LW, but that might drastically change depending on what projects we take on.
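For anyone who wants to sanity-check the arithmetic, here is a minimal sketch (in Python, with the figures simply copied from the table above) confirming that the line items reconcile:

```python
# Line items copied from the budget table above (all figures in USD).
lighthaven_upkeep = {
    "Operations & Sales": 240_000,
    "Repairs & Maintenance Staff": 200_000,
    "Porterage & Cleaning Staff": 320_000,
    "Property Tax": 300_000,
    "Utilities & Internet": 180_000,
    "Additional Rental Property": 180_000,
    "Supplies (Food + Maintenance)": 180_000,
}
core_staff = 1_400_000
lighthaven_mortgage = 1_000_000
lw_hosting = 120_000
software_accounting_staff = 330_000
expected_lighthaven_income = 2_550_000

upkeep_total = sum(lighthaven_upkeep.values())
total_costs = (core_staff + upkeep_total + lighthaven_mortgage
               + lw_hosting + software_accounting_staff)
shortfall = total_costs - expected_lighthaven_income

print(f"Lighthaven upkeep total: ${upkeep_total:,}")   # $1,600,000
print(f"Total costs:             ${total_costs:,}")    # $4,450,000
print(f"Annual shortfall:        ${shortfall:,}")      # $1,900,000
```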
Is this a usual cost for a similarly sized event space, or is it something about the location or the specialness of the place that makes it more expensive?
The costs of event venues and hotels differs enormously across th...
I am slightly worried about the rate at which LW is shipping new features. I'm not convinced they are net positive. I see LessWrong as a clear success, but an unclear use of the marginal dollar; I see Lighthaven as a moderate success and very likely positive to expand at the margin.
The interface has been getting busier[1] whereas I think the modal reader would benefit from having as few distractions as possible while reading. I don't think an LLM-enhanced editor would be useful, nor am I excited about additional tutoring functionality.
I am glad to see that people are donating, but I would have preferred this post to carefully distinguish the status-quo value of LW (immense) from the marginal value of paying for more features for LW (possibly negative), and from your other enterprises. Probably not worth the trouble, but is it possible to unbundle these for the purposes of donations?
Separately, thank you to the team! My research experience over the past years has benefitted from LW on a daily basis.
EDIT: thanks to Habryka for more details. After comparing to previous site versions I'm more optimistic about the prospects for active work on LW.
(edit) in some places,
Yeah, I think this concern makes a bunch of sense.
My current model is that LW would probably die a slow death within 3-4 years if we started developing at a much slower pace than the one at which we have historically been developing. One reason is that this is exactly what happened with LW 1.0. There were mods, and the servers were kept on and bugs were being fixed, but without substantial technical development the site fell into disrepair and abandonment surprisingly quickly.
The feature development here is important in the eternal race against spammers and trolls, but the internet is also constantly shifting, and with new modalities of how to read and interact with ideas, it does matter to have an active development team, even just for basic traffic and readability reasons. LW 1.0 missed a bunch of the transition to mobile and this was a large component of its decline. I think AI chat systems are likely a coming transition where you really want a team to actively iterate on how to best handle that shift (my current guess is 15-20% of new users are already referred to the site because ChatGPT or Claude told them to read things here), but it might also end up somet...
Thanks for these details. These have updated me to be significantly more optimistic about the value of spending on LW infra.
I donated $3,000. I've gained and will continue to gain a huge amount of value from LW and other activities of Lightcone Infrastructure, so it seemed like a cooperative and virtuous move to donate.[1]
I tried to donate at a level such that if all people using LW followed a similar policy to me, Lightcone would likely be reasonably funded, at least for the LW component.
I think marginal funding to Lightcone Infrastructure beyond the ~$3 million needed to avoid substantial downsizing is probably worse than some other funding opportunities. So, while I typically donate larger amounts to a smaller number of things, I'm not sure if I will donate a large amount to Lightcone yet. You should interpret my $3,000 donation as indicating "this is a pretty good donation opportunity and I think there are general cooperativeness reasons to donate" rather than something stronger. ↩︎
What happens to the general Lightcone portfolio if you don't meet a fundraising target, either this year or a future year?
For concreteness, say you miss the $1M target by $200K.
Well argued. I’m in. I’ve received ample surplus value over the years from LW. Less Online was a blast this year. Thank you for all the work you and your team do!
I just made my initial donation, with the intention of donating more over time.
The last year of my life was the hardest I've ever been through. In the spring, with a new job and a week's notice, I moved across the country to Berkeley with only my suitcase. I was at the end of my rope trying to keep it all together, and Lighthaven was there to catch me. I rented a basement room and was able to stay for a month or so until I could figure out a permanent place to live.
It's hard to convey how much it meant to me. Having the logistics of a place to sleep handled was great, of course, but more than that, when everything had fallen apart, every friendly face and hello, every coworking session, every late night fireside discussion showed me that I wasn't by myself.
I think this is what Lighthaven means to many people - a place where we can go and see that we're not alone.
I like a lot of what you are doing, and I might donate to your cause, but I feel there are some questions that need to be asked. (I feel uncomfortable about the questions, which is why I use a pseudonym.)
Have you considered cutting salaries in half? According to the table you share in the comments, you spend $1.4 million on salaries for the 6 of you, which is about $230k per person. If the org were in better shape, I would consider this a reasonable salary, but I feel that if I were in the situation you guys are in, I would request my salary to be at least halved.
Relatedly, I don't know if it's possible for you to run with fewer employees than you currently have. I can imagine that 6 people is the minimum that is necessary to run this org, but I had the impression that at least one of you is working on creating new rationality and cognitive trainings, which might be nice in the long-term (though I'm pretty skeptical of the project altogether), but I would guess you don't have the slack for this kind of thing now if you are struggling for survival.
On the other side of the coin, can you extract more money out of your customers? The negotiation strategy you describe in the post (50-50ing ...
Have you considered cutting salaries in half? According to the table you share in the comments, you spend $1.4 million on salaries for the 6 of you, which is about $230k per person. If the org were in better shape, I would consider this a reasonable salary, but I feel that if I were in the situation you guys are in, I would request my salary to be at least halved.
We have! Indeed, we have considered it so hard that we did in fact do it. For roughly the last 6-8 months our salaries have on average been halved (and I have completely forfeited my salary, and donated ~$300k to Lightcone at the end of last year myself to keep us afloat).
I don't think this is a sustainable situation and I expect that in the long run I would end up losing staff over this, or I would actively encourage people to make 3x[1] their salary somewhere else (and maybe donating it, or not) since I don't think donating 70% of your counterfactual salary is a particularly healthy default for people working on these kinds of projects. I currently think I wouldn't feel comfortable running Lightcone at salaries that low in the long run, or would at least want to very seriously rearchitect how Lightcone operat...
Thanks for the answers. I appreciate the team's sacrifices and will probably donate some good money to Lightcone.
I could have saved a bit of money with better tax planning, but not as much as one might think.
The money I was able to donate came from appreciated crypto, and was mostly unrelated to my employment at Lightcone (and also as an appreciated asset was therefore particularly tax-advantageous to donate).
I have generally taken relatively low salaries for most of my time working at Lightcone. My rough guess is that my average salary has been around $70k/yr[1]. Lightcone only started paying more competitive salaries in 2022, when we expanded beyond some of our initial founding staff and I felt like it didn't really make cultural or institutional sense to have extremely low salaries. The only year in which I got paid closer to a competitive Bay Area salary was 2023, and in that year I also got to deduct most of that since I donated in the same year.
(My salary has always been among the lowest in the organization, mostly as a costly signal to employees and donors that I am serious about doing this for impact reasons)
I don't have convenient tax records for years before 2019, but my income post-federal-tax (but before state tax) for the last 6 years was $59,800 (2019), $71,473 (2
I was going to email but I assume others will want to know also so I'll just ask here. What is the best way to donate an amount big enough that it's stupid to pay a Stripe fee, e.g. $10k? Do you accept donations of appreciated assets like stock or cryptocurrency?
Yes, we have a brokerage account and a Coinbase account and can accept basically whatever crazy asset you want to give to us, including hard to value ones (and honestly, it sounds fun to go on an adventure to figure out how much a first edition MtG Black Lotus costs, how to sell it, and how to make sure you get an appropriate tax return, if that's the kind of asset you want to donate).
We of course also accept bank transfers to avoid the Stripe fees.
What do you think are the biggest mistakes you/Lightcone have made in the last ~2 years?
And what do you think a 90th percentile outcome looks like for you/Lightcone in 2025? What would success look like?
(Asking out of pure curiosity– I'd have these questions even if LC wasn't fundraising. I figured this was a relevant place to ask, but feel free to ignore it if it's not in the spirit of the post.)
I worry that cos this hasn't received a reply in a bit, people might think it's not in the spirit of the post. I'm even more worried people might think that critical comments aren't in the spirit of the post.
Both critical comments and high-effort-demanding questions are in the spirit of the post, IMO! But the latter might take a while to get a response.
I expect Lightcone to be my primary or maybe only x-risk-related donation this year—see my manifund comment here for my endorsement:
As a full-time AGI safety / alignment researcher (see my research output), I can say with confidence that I wouldn’t have been able to get into the field in the first place, and certainly wouldn’t have made a fraction as much progress, without lesswrong / alignment forum (LW/AF). I continue to be extremely reliant on it for my research progress. … [much more here]
Wish I had more to give, but I’ll send something in the mid four figures at the beginning of January (for tax reasons).
Minor point, but I'd be happy if LessWrong/Lightcone had various (popular) subscriptions for perks, like Patreon.
Some potential perks:
I realize these can be a pain to set up though.
(I'd want this if it helped with total profit, to Lightcone)
Yeah, I agree, and I've been thinking through things like this. I want to be very careful to keep the site from feeling like it's out to get you or trying to sell you anything, so I have been hesitant about things in this space that come with prominent UI implications, but I also think there are positive externalities. I expect we will do at least some things in this space.
I've run two workshops at Lighthaven and it's pretty unthinkable to run a workshop anywhere else in the Bay Area. Lightcone has really made it easy to run overnight events without setup.
I donated $100, roughly equivalent to my yearly spending on Twitter/X Premium, because I believe LessWrong offers similar value. I would encourage most readers to do the same.
Appreciate the post. I've previously donated $600 through the EA Manifund thing and will consider donating again late this year / early next year when thinking through donations more broadly.
I've derived lots of value with regards to thinking through AI futures from LW/AIAF content (some non-exhaustive standouts: 2021 MIRI conversations, List of Lethalities and Paul response, t-AGI framework, Without specific countermeasures..., Hero Licensing). It's unclear to me how much of the value would have been retained if LW didn't exist, but plausibly LW is responsible for a large fraction.
In a few ways I feel not fully/spiritually aligned with the LW team and the rationalist community: my alignment difficulty/p(doom)[1] is farther from Eliezer's[2] than my perception of the median of the LW team[3] (though closer to Eliezer than most EAs), I haven't felt sucked in by most of Eliezer's writing, and I feel gut-level cynical about people's ability to deliberatively improve their rationality (edit: with large effect size) (I haven't spent a long time examining evidence to decide whether I really believe this).
But still LW has probably made a large positive difference in my lif...
I donated $500. I get a lot of value from the website and think it's important for both the rationalist and AI safety communities. Two related things prevented me from donating more:
Though it's the website which I find important, as I understand it, the majority of this money will go towards supporting Lighthaven.
I think this is backwards! As you can see in the budget I posted here (see also the "Economics of Lighthaven" section), Lighthaven itself is actually surprisingly close to breaking even financially. If you ignore our deferred 2024 interest payment, my guess is we will overall either lose or gain some relatively small amount on net (like $100k).
Most of the cost in that budget comes from LessWrong and our other generalist activities. At least right now, I think you should be more worried about the future of Lighthaven being endangered by the financial burden of LessWrong (and in the long run, I think it's reasonably likely that LessWrong will end up in part funded by revenue from Lighthaven).
I see much more value in Lighthaven than in the rest of the activity of Lightcone.
I wish Lightcone would split into two (or even three) organizations, as I would unequivocally endorse donating to Lighthaven and recommend it to others, vs. LessWrong, where I'm not at all confident it's net positive over blogs and Substacks, and the grantmaking infrastructure and other meta, which is highly uncertain and probably highly replaceable.
All of the analysis of the impact of the new LessWrong is misleading at best; it assumes that volume on LessWrong is good in itself, which I do not believe to be the case. If similar volume is being stolen from other places, e.g. people dropping away from blogs on the SSC blogroll and failing to create their own Substacks, which I think is very likely to be true, this is of minimal benefit to the community and likely of negative benefit to the world, as LW is less visible and influential than strong existing blogs or well-written new Substacks.
That's on top of my long-standing objections to the structure of LW, which is bad for community epistemics by encouraging groupthink, in a way that standard blogs are not. If you agree with my contention there, then even a large ...
Hm, I was going to say I'd like LW distinguished from Lighthaven so I could give more to LW.
The things you note about encouraging groupthink are good points. They should be addressed.
But the average quality of discussion here cannot be matched anywhere else. Non-voting comment systems like X and Slate Star Codex are too disorganized to consistently find the real in-depth discussions. Subreddits do not have the quality of community to make the comment voting work well. (They either have too few experts to sustain a conversation, or too many novices voting on vibes).
So while the risk of groupthink is pretty high, I don't know where else I can go that might advance the discussion fast enough to stay ahead of AI advances.
Groupthink would be super bad, but so would just remaining confused and conflicted when there are better answers available through collaborative analysis of important issues.
I'm curious what alternatives you suggest.
In the meantime, I'm donating to support LW.
I wanted a datapoint for Czynski's hypothesis that LW 2.0 killed the comment sections, so I checked how many comments your blogposts were getting in the first 3 months of 2017 (before LW 2.0 rebooted). There were 13 posts, and the comment counts were 0, 0, 2, 6, 9, 36, 0, 5, 0, 2, 0, 0, 2. (The 36 was a political post in response to the US election, discussion of which I generally count as neutral or negative on LW, so I'd discount this.)
I'll try the same for Zvi. 13, 8, 3, 1, 3, 18, 2, 19, 2, 2, 2, 5, 3, 7, 7, 12, 4, 2, 61, 31, 79. That's more active (the end was his excellent sequence Against Facebook, and the last one was a call for people to share links to their blogs).
So that's not zero; there was something to kill. How do those numbers compare during LessWrong 2.0? My sense is that there are two Zvi eras: the timeless content (e.g. Mazes, Sabbaths, Simulacra) and the timeful content (e.g. Covid, AI, other news). The latter is a newer, more frequent, less deep writing style, so it's less apples-to-apples; instead let's take the Moral Mazes sequence from 2020 (when LW 2.0 would've had a lot of time to kill Zvi's comments). I'm taking the 17 posts in this main sequenc...
Rumors are that 2025 lighthaven is jam packed. If this is the case, and you need money, rudimentary economics suggests only the obvious: raise prices. I know many clients are mission aligned, and there's a reasonable ideological reason to run the joint at or below cost, but I think it's aligned with that spirit if profits from the campus fund the website.
I also want to say in print what I said in person a year ago: you can ask me to do chores on campus to save money; it'd be within my hufflepuff budget. There are good reasons not to go totally "by and for the community" DIY like, say, many community libraries or soup kitchens, but nudging a little in that direction seems right.
EDIT: I did a mostly symbolic $200 right now, may or may not do more as I do some more calculations and find out my salary at my new job
Realised that my donation did not reflect how much I value LessWrong, the Alignment Forum and the wider rationalist infrastructure. Have donated $100 more, although that still only reflects my stinginess rather than the value I receive from your work.
The donation site said I should leave a comment here if I donate, so I'm doing that. Gave $200 for now.
I was in Lighthaven for the ILIAD conference. It was an excellent space. The LessWrong forum feels like what some people in the 90s used to hope the internet would be.
Edit 03.12.2024: $100 more donated by me since the original message.
I'm a broke student but I donated what I could muster right now, intending to donate more in the future.
LessWrong is without a doubt worth more to me and to the world than what I can currently pay!
(Commenting because it might marginally increase probability of other people donating as well!)
I don't have a lot to give right now but I chipped in what I can. Lighthaven is definitely a worthy cause!
I am excited about this on the grounds of "we deserve to have nice things," though for boring financial planning reasons I am not sure whether I will donate additional funds prior to calendar year end or in calendar year 2025.
(Note that I made a similar statement in the past and then donated $100 to Lighthaven very shortly thereafter, so, like, don't attempt to reverse-engineer my financial status from this or whatever.)
Also, I would generally volunteer to help with selling Lighthaven as an event venue to boring consultant things that will give you piles of money, and IIRC Patrick Ward is interested in this as well, so please let us know how we can help.
Donated $300 now, intend to donate more (after more thinking).
My impression is that if you read LessWrong regularly, it could easily be worth $10-$30/month for you. If you've attended Lighthaven, there's an extra benefit there, which could be much more. So I think it's very reasonable for people in our community to donate $100 (a ~$10/month Substack subscription) to $10k (a fancy club membership) per person or so, depending on the person, just from the standpoint of thinking of it as communal/local infrastructure.
One potential point of contention is the fact that I believe some of the team is working on future, more experimental projects than just the LessWrong/Lighthaven basics. But I feel pretty good about this in general; it's just more high-risk and more difficult to recommend.
I also think it's just good practice for community-focused projects to get donations from the community. This helps keep incentives more aligned. And I think the Lighthaven team is about as relevant as things get on this front right now.
I just donated $400. This is not a minor amount for me but after thinking about it carefully this is an amount that feels substantial while remaining in my budget. I think it's important to support things, people and institutions that bring great value to oneself and the world. LessWrong is certainly one of those.
Haven't finished reading this, but I just want to say how glad I am that LW 2.0 and everything related to it (Lightcone, etc.) happened. I came across LW at a time when it seemed "the diaspora" was just going to get more and more dispersed; that "the scene" had ended. I feel disappointed/guilty with how little I did to help this resurgence, like watching from the sidelines as a good thing almost died but then saved itself.
How I felt at the time of seemingly peak "diaspora" actually somewhat reminds me of how I feel about CFAR now (but to a much lesser extent than LW); I think there is still some activity but it seems mostly dead; a valiant attempt at a worthwhile problem. But there are many Problems and many Good Things in the world, and limited time, and am I really going to invest time figuring out if this particular Thing is truly dead? Or start up my own rationality-training-adjacent effort? Or some other high leverage Good Thing? Generic EA? A giving pledge? The result is I carry on trying to do what I thought was most valuable, perversely hoping some weird mix of "that Good Thing was actually dead or close to it; it's good you didn't jump in as you'd be swimming against the...
Donated $100.
It was mostly due to LW2 that I decided to work on AI safety, actually, so thanks!
I've had the pleasure of interacting w/ the LW team quite a bit and they definitely embody the spirit of actually trying. Best of luck to y'all's endeavors!
I remember that Lightcone was interested in working on human intelligence amplification and/or pausing AI (I can't find the LW comment, I'm afraid). Is that still part of the plan?
Donated like 20 CAD; felt like it was the least I could do and didn't want to let hesitancy about it being enough stop me.
and have clearly been read a non-trivial amount by Elon Musk
Nit: He heard this idea in conversation with an employee AFAICT.
"We'll probably display this until the New Year"
I'd guess plenty are planning to donate after Jan 1st for tax reasons, so perhaps best to keep highlighting the donation drive through the first week of Jan.
Also I donated $1,000. Lightcone's works have brought me a lot of direction and personal value over the years, so I'm happy I'm able to lend some support now
I think the expenses for the website look high in this post because so much of it goes into invisible work like mod tools. Could you say more about that invisible work?
Due to an apparently ravenous hunger among you all for having benches with plaques dedicated to them, and us not actually having that many benches, I increased the threshold for getting a bench (or equivalent) with a plaque to $2,000. Everyone who donated more than $1,000 but less than $2,000 before Dec 2nd will still get their plaque.
What is your tax ID for people wanting to donate from a Donor Advised Fund (DAF) to avoid taxes on capital gains?
...There currently doesn't really exist any good way for people who want to contribute to AI existential risk reduction to give money in a way that meaningfully gives them assistance in figuring out what things are good to fund. This is particularly sad since I think there is now a huge amount of interest from funders and philanthropists who want to somehow help with AI x-risk stuff, as progress in capabilities has made work in the space a lot more urgent, but the ecosystem is currently at a particular low-point in terms of trust and ability to direct that fu
I think a lot of projects in the space are very high variance, and some of them are actively deceptive, and I think that really means you want a bunch of people with context to do due diligence and think hard about the details. This includes some projects that Zvi recommends here, though I do think Zvi's post is overall great and provides a lot of value.
Another big component is doing fair splitting. I think many paths to impact require getting 4-5 pieces in place, and substantial capital investment, and any single donor might feel that there isn't really any chance for them to fund things in a way that gets the whole engine going, and before they feel good giving they want to know that other people will actually put in the other funds necessary to make things work. That's a lot of what our work on the S-Process and Lightspeed Grants was solving.
In general, the philanthropy space is dominated by very hard principal-agent problems. If you have a lot of money, you will have tons of people trying to get your money, most of them for bad reasons. Creating infrastructure to connect high-net-worth people with others who are actually trustworthy and want to put in a real effort to help them is quite hard (especially in a way that results in the high-net-worth people then actually building justified trust in those people).
As someone who isn't really in a position to donate much at all, and who feels rather silly about the small amount I could possibly give, and what a tiny drop that is compared to the bucket this post is sketching...
I uh ... sat down and did some simple math. If everyone who ever votes (>12M) donates $10 then you'd have >$120 million covered. If we follow bullshit statistics of internet activity, where it's said 99% of all content is generated by 1% of all people, then this heuristic would get us $1.2M from people paying this one time "subscription" fee....
everyone who ever votes (>12M)
I . . . don't think that's a correct reading of the stats presented? Unless I'm missing something, "votes" counts each individual [up|down]vote each individual user makes, so there are many more total votes than total people.
'Everyone' paying a one-time $10 subscription fee would solve the problem.
A better (though still imperfect) measure of 'everyone' is the number of active users. The graph says that was ~4000 this month. $40,000 would not solve the problem.
I donated $100. I'm fairly income-constrained at the moment so I'd be nervous about donating more.
Leaving a comment because it apparently helps. I've been occasionally involved with the Berkeley area rationality community since 2010, enjoyed re-reading the sequences last year, and continue to find interesting and valuable posts today. I hope to be more involved with the community again in the coming years. Thank you, Lightcone.
✨ I just donated 71.12 USD (100 CAD 🇨🇦) ✨
I'd like to donate a more relevant amount but I'm finishing my undergrad and have no income stream... in fact, I'm looking to become a Mech Interp researcher (& later focus on agent foundations) but I'm not going to be able to do that if misaligned optimizers eat the world, so I support lightcone's direction as I understand it (policy that promotes AI not killing everyone).
If anyone knows of good ways to fund myself as a MI researcher, ideally focusing on this research direction I've been developing, please let me know : )
LessWrong has been critical to my intellectual development. Just donated $1000. Thank you for all you do!
I donated $1000. Originally I was worried that this is a bottomless money-pit, but looking at the cost breakdown, it's actually very reasonable. If Oliver is right that Lighthaven funds itself apart from the labor cost, then the real costs are $500k for the hosting, software and accounting cost of LessWrong (this is probably an unavoidable cost and seems obviously worthy of being philanthropically funded), plus paying 4 people (equivalent to 65% of 6 people) to work on LW moderation and upkeep (it's an unavoidable cost to have some people working on LW, 4 ...
Regarding donor recognition, I think online recognition makes a lot of sense, e.g. colored nickname on the forum, or a badge of some sorts (like the one GWWC encourages).
Thank you for clearly laying out the arguments for donating to the Lightcone. I will!
Here's $1649 from me. Lighthaven is one of the most incredible places in the world, largely because of its people. I hope to see you all there next year at LessOnline and Manifest!
Although it's not much, I have donated $10. I hope you will find some generous sponsors that are able to financially support Lightcone!
I just went to try to give you $40 (because there's an event that I expect to be hosted at Lighthaven, and I want to go to it, and would be happy to pay for a ticket at something in that ballpark of a price, but kind of expect to be offered free entry, so I might as well "pay for my ticket" now to make sure the place is there to have the event in).
But the form requires a phone number and will not accept all zeroes or all nines and you can have forty dollars but you cannot have a real phone number.
Donated $10. If I start earning substantially more, I think I'd be willing to donate $100. As it stands, I don't have that slack.
I'd love to donate to Lightcone ~5K€ next year, but as long as it's not tax-deductible in France I'll keep to French AI safety orgs as the French non-profit donation tax break is stupidly good: it can basically triple the donation amount and reduce income tax to 0.
I know that Mieux Donner, a new French effective giving org, is acting as French tax-deductible front for a number of EA organisations. I'll contact them to check whether they could forward a donation to Lightcone and give an update under this comment.
My employer's matching program currently doesn't accept Lightcone Infrastructure as a registered cause for donation matching, even though my employer is on your list of employers with matching. We use Benevity, and the portal says that "a registration email has been sent to the cause", and that the cause should register through https://causes.benevity.org/ .
Is Benevity registration on your list? I'd much rather donate with matching than without.
(From the UK, if that matters)
Chief among them is having built-in UI for "base-model Claude 3.5 Sonnet" and Llama 405b-base continuing whatever comment or post I am in the middle of writing
I was extremely surprised to read that Anthropic is giving out access to base models to outside parties. Especially as a single throwaway sentence in a giant post. What were the terms of your agreement with them? Do they do this with other people? Do they also give certain people access to the helpful-only (i.e. not necessarily harmless or honest) post-trained models, or just the base pretrained ones?
I'm pretty poor right now so didn't donate, but I do generally believe that the Lightcone team has done a good job, overall, and is worth donating to.
Doesn't EAIF give to other EVF orgs? Seems weird that you would be a conflict of interest but that isn't.
I gave $290. Partly because of the personal value I get out of LW, partly because I think it's a solidly cost-effective donation.
I think you could get a lot out of adding a temporary golden dollar sign with the amount donated next to our LW names! Upon proof of donation receipt or whatever.
Seems like the lowest-hanging fruit for monetizing vanity, benches usually being somewhat of a last resort!
(The benches still seem underpriced to me, given the expected amount raised and average donation size in the foreseeable future.)
Why aren't the LessWrong books available on Amazon anymore, even as print on demand? Wouldn't that be additional revenue?
TLDR: LessWrong + Lighthaven need about $3M for the next 12 months. Donate here, or send me an email, DM, or Signal message (+1 510 944 3235), or leave a comment, if you want to support what we do. We are a registered 501(c)3, have big plans for the next year, and due to a shifting funding landscape need support from a broader community more than in any previous year. [1]
I've been running LessWrong/Lightcone Infrastructure for the last 7 years. During that time we have grown into the primary infrastructure provider for the rationality and AI safety communities. "Infrastructure" is a big fuzzy word, but in our case, it concretely means:
In general, Lightcone considers itself responsible for the end-to-end effectiveness of the extended rationality and AI safety community. If there is some kind of coordination failure, or part of the engine of impact that is missing, I aim for Lightcone to be an organization that can jump in and fix that, whatever it is.
Doing that requires a non-trivial amount of financial capital. For the next 12 months, we expect to spend around $3M, and in subsequent years around $2M (though we have lots of opportunities to scale up if we can get more funding for it). We currently have around $200k in the bank.[3]
Lightcone is, as far as I can tell, considered cost-effective by the large majority of people who have thought seriously about how to reduce existential risk and have considered Lightcone as a donation target, including all of our historical funders. Those funders can largely no longer fund us, or expect to fund us less, for reasons mostly orthogonal to cost-effectiveness (see the section below on "Lightcone and the funding ecosystem" for details on why). Additionally, many individuals benefit from our work, and I think it makes sense for those people to support the institutions that provide them value.
This, I think, creates a uniquely strong case for people reading this to donate to us.[4]
I personally think there exists no organization that has been more cost-effective at reducing AI existential risk in the last 5 years, and I think that's likely to continue to be the case in the coming 5 years. Our actions seem to me responsible for a substantial fraction of the positive effects of the field of AI safety, and have also substantially alleviated the negative effects of our extended social cluster (which I think are unfortunately in-expectation of comparable magnitude to the positive effects, with unclear overall sign).
Of course, claiming to be the single most cost-effective intervention out there is a big claim, and one I definitely cannot make with great confidence. But the overall balance of evidence seems to me to lean this way, and I hope in this post to show you enough data and arguments that you feel comfortable coming to your own assessment.
This post is a marathon, so strap in and get comfortable. Feel free to skip to any section of your choice (the ToC on the left, or in the hamburger menu is your friend). Also, ask me questions in the comments (or in DMs), even if you didn't read the whole post.
Now let's zoom out a bit and look at some of the big picture trends and data of the projects we've been working on in the last few years and see what they tell us about Lightcone's impact:
LessWrong
Here are our site metrics from 2017 to 2024:
On almost all metrics, we've grown the activity levels of LessWrong by around 4-5x since 2017 (and ~2x since the peak of LW 1.0). In more concrete terms, this has meant something like the following:
You will also quickly notice that many metrics peaked in 2023, not 2024. This is largely downstream of the launch of ChatGPT, Eliezer's "List of Lethalities" and Eliezer's TIME article, which caused a pretty huge spike in traffic and activity on the site. That spike is now over and we will see where things settle in terms of growth and activity. The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent, and I expect we are also experiencing some of that (though much less than more centrally EA-associated platforms like 80,000 Hours and the EA Forum, as far as I can tell).
While I think these kinds of traffic statistics are a very useful "sign of life" and sanity-check that what we are doing is having any effect at all in the grand scheme of things, I don't think they are remotely sufficient for establishing that we are having a large positive impact.
One way to get closer to an answer to that question is to decompose it into two questions: "Do the writings and ideas from LessWrong influence important decision-makers?" and "Does LessWrong make its readers & writers more sane?".
I expect the impact of LessWrong to end up extremely heavy-tailed, with a large fraction of the impact coming from a very small number of crucial decision-makers having learned something of great importance on a highly leveraged issue (e.g. someone like Geoffrey Hinton becoming concerned about AI existential risk, or an essay on LW opening the Overton window at AI capability companies to include AI killing everyone, or someone working on an AI control strategy learning about some crucial component of how AIs think that makes things work better).
Does LessWrong influence important decisions?
It's tricky to establish whether reading LessWrong causes people to become more sane and better informed on key issues. It is however relatively easy to judge whether LessWrong is being read by some of the most important decision-makers of the 21st century, or whether it is indirectly causing content to be written that is being read by the most important decision-makers of the 21st century.
I think the extent of our memetic reach was unclear for a few years, but there is now less uncertainty. Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.[6] While the effect outside of Silicon Valley tech and AI is less clear, things look promising to me there too:
Matt Clifford, CEO of Entrepreneur First and Chair of the UK’s ARIA recently said on a podcast (emphasis mine):
Patrick Collison talks on the Dwarkesh podcast about Gwern’s writing on LW and his website:
Lina Khan (head of the FTC) answering a question about her “p(doom)”, a concept that originated in LessWrong comments.
Does LessWrong make its readers/writers more sane?
I think this is a harder question to answer. I think online forums and online discussion tend to have a pretty high-variance effect on people's sanity and quality of decision-making. Many people's decision-making seems to have gotten substantially worse by becoming very involved with Twitter, and many subreddits seem to me to have similarly well-documented cases of smart people becoming markedly less sane.
We have tried a lot of things to make LessWrong have fewer of these sorts of effects, though it is hard to tell how much we have succeeded. We definitely have our own share of frustrating flame wars and tribal dynamics that make reasoning hard.
One proxy that seems useful to look at is something like, "did the things that LessWrong paid attention to before everyone else turn out to be important?". This isn't an amazing proxy for sanity, but it does tell you whether you are sharing valuable information. In market terms, it tells you how much alpha there is in reading LessWrong.
I think on information alpha terms, LessWrong has been knocking it out of the park over the past few years. Its very early interest in AI, early interest in deep learning, early interest in crypto, early understanding of the replication crisis, early interest in the COVID pandemic and early interest in prediction markets all have paid off handsomely, and indeed many LessWrong readers have gotten rich off investing in the beliefs they learned from the site (buying crypto and Nvidia early, and going long volatility before the pandemic, sure gives you high returns).[7]
On a more inside-view-y dimension, I have enormously benefitted from my engagement with LessWrong, and many of the people who seem to me to be doing the best work on reducing existential risk from AI and improving societal decision-making seem to report the same. I use many cognitive tools I learned on LessWrong on a daily level, and rarely regret reading things written on the site.
Some quotes and endorsements to this effect:
LessWrong and intellectual progress
While I think ultimately things on LessWrong have to bottom out in people making better decisions of some kind, I often find it useful to look at a proxy variable of something like "intellectual progress". When I think of intellectual progress, I mostly think about either discovering independently verifiable short descriptions of phenomena that previously lacked good explanations, or distilling ideas in ways that are clearer and more approachable than any previous explanation.
LessWrong hosts discussion about a very wide variety of interesting subjects (genetic engineering, obesity, US shipping law, Algorithmic Bayesian Epistemology, anti-aging, homemade vaccines, game theory, and of course the development of the art of rationality), but the single biggest topic on LessWrong is artificial intelligence and its effects on humanity's long term future. LessWrong is the central discussion and publication platform for a large ecosystem of people who discover, read, and write research about the problems facing us in the development of AI.
I think the ideas developed here push the frontier of human civilization's understanding of AI, how it will work, and how to navigate its development.
This next section primarily consists of the latter sort of evidence, which is the only one I can really give you in a short amount of space.
Public Accessibility of the Field of AI Alignment
In 2017, trying to understand and contribute to the nascent field of AI alignment using the public written materials was basically not possible (or took 200+ hrs). Our goal with the AI Alignment Forum was to move the field of AI alignment from primarily depending on people's direct personal conversations with a few core researchers (at the time focused around MIRI and Paul Christiano) to being a field whose core ideas could be learned via engaging with the well-written explanations and discussions online.
I think we largely achieved this basic goal. By 2020, many people had a viable route into the field via spending 20-30 hours engaging with the best LessWrong content. DeepMind's Rohin Shah agreed, writing in 2020 that "the AI Alignment Forum improved our pedagogic materials from 0.1 to 3 out of 10."
To show this, below I've collected some key posts, along with testimonials about them from researchers and LW contributors.
Paul Christiano's Research Agenda FAQ was published in 2018 by Alex Zhu (independent).
An overview of 11 proposals for building safe advanced AI by Evan Hubinger (Anthropic) in May 2020
It Looks Like You're Trying To Take Over The World by Gwern (Gwern.net) in March 2022
Counterarguments to the basic AI x-risk case by Katja Grace in October 2022
If you want to read more examples of this sort of thing, click to expand the collapsible section below.
10 more LW posts with testimonials
Embedded Agency is a mathematical cartoon series published in 2018 by MIRI researchers Scott Garrabrant and Abram Demski.
Risks from Learned Optimization is the canonical explanation of the concept of inner optimizers, by Hubinger et al in 2019.
Inner Alignment: Explain like I'm 12 Edition by Rafael Harth (Independent) in August 2020
The Solomonoff Prior is Malign by Mark Xu (Alignment Research Center) in October 2020
Fun with +12 OOMs of Compute by Daniel Kokotajlo (of AI Futures) in March 2021
Another (outer) alignment failure story by Paul Christiano (US AISI) in April 2021
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) by Andrew Critch (Center for Human-Compatible AI) in April 2021
Selection Theorems: A Program For Understanding Agents by John Wentworth (Independent) in September 2021
MIRI announces new "Death With Dignity" strategy by Eliezer Yudkowsky (MIRI) in April 2022
Let’s think about slowing down AI by Katja Grace (AI Impacts) in December 2022.
LessWrong's influence on research
I think one of the main things LessWrong gives writers and researchers is an intelligent and philosophically mature audience who want to read great posts. This pulls writing out of authors that they wouldn't write if this audience wasn't here. A majority of high-quality alignment research on LessWrong is solely written for LessWrong, and not published elsewhere.
As an example, one of Paul Christiano’s most influential essays is What Failure Looks Like, and while Christiano does have his own AI alignment blog, this essay was only written on the AI Alignment Forum.
As further evidence on this point, here is a quote from Rob Bensinger (from the MIRI staff) in 2021:
So I think that the vast majority of this work wouldn't have been published if not for the Forum, and would've been done to a lower quality had the Forum not existed. For example, with the 2018 FAQ above on Christiano's research, even though Alex Zhu may well have spent the same time understanding Paul Christiano's worldview, Eliezer Yudkowsky would not have been able to get the benefit of reading Zhu's write-up, and the broader research community would have seen neither Zhu's understanding nor Yudkowsky's response.
Lighthaven
Since mid-2021 the other big thread in our efforts has been building in-person infrastructure. After successfully reviving LessWrong, we noticed that in more and more of our user interviews "finding collaborators" and "getting high-quality high-bandwidth feedback" were highlighted as substantially more important bottlenecks to intellectual progress than the kinds of things we could really help with by adding marginal features to our website. After just having had a year of pandemic lockdown with very little of that going on, we saw an opportunity to leverage the end of the pandemic into substantially better in-person infrastructure for people working on stuff we care about than existed before.
After a year or two of exploring by running a downtown Berkeley office space, we purchased a $16.5M hotel property, renovated it for approximately $6M and opened it up to events, fellowships, research collaborators and occasional open bookings under the name Lighthaven.
I am intensely proud of what we have built with Lighthaven and think of it as a great validation of Lightcone's organizational principles. A key part of Lightcone's philosophy is that I believe most cognitive skills are general in nature. IMO the key requirement for building great things is not to hire the best people for the specific job you are trying to get done, but to cultivate general cognitive skills and hire the best generalists you can find, who can then bring their general intelligence to bear on whatever problem you decide to focus on. Seeing the same people who built LessWrong, the world's best discussion platform, pivot to managing a year-long $6M construction project, and seeing it succeed in quality beyond anything else I've seen in the space, fills me with pride about the flexibility and robustness of our ability to handle whatever challenges stand between us and our goals (which I expect will be myriad and similarly varied).
Others seem to think the same:
And a quick collage of events we've hosted here (not comprehensive):
At conferences where we managed to sneak in a question about the venue quality, we've received a median rating of 10/10, with an average of 9.4. All annual conferences organized here wanted to come back the following year, and as far as I know we've never had a client who was not hoping to run more events at Lighthaven in the future (in Lighthaven's admittedly short life so far).
Lighthaven is a very capital-intensive project, and in contrast to our ambitions with LessWrong, is a project where we expect to recoup a substantial chunk of our costs by people just paying us. So a first lens to analyze Lighthaven through is to look at how we are doing in economic terms.
The economics of Lighthaven
We started Lighthaven when funding for work on rationality community building, existential risk, and AI safety was substantially more available. While FTX never gave us money directly for Lighthaven, they encouraged us to expand aggressively, and so I never intended it to be in a position to break even on purely financial grounds.
Luckily, despite hospitality and conferencing not generally being known as an industry with amazing margins, we made it work. I originally projected a shortfall of about $1M per year, which we would need to make up with philanthropic donations. However, demand has been substantially higher than I planned for, and correspondingly our revenue has been much higher than I was projecting.
Last year, while fundraising, I projected that we would spend about $800k on the upkeep, utilities and property taxes associated with Lighthaven in 2024 and 2025, as well as $1M on our annual interest payment. I expected we would make about $1M in revenue, resulting in a net loss of ~$500k - $800k.
Since demand was substantially higher, we instead spent ~$1.6M on improvements, upkeep, staffing and taxes, plus an additional $1M in interest payments, against a total of around $1.8M in revenue, in a year in which the campus wasn't operational for a substantial fraction of the time, overall producing revenue much above my expectations.
My best projections for 2025 are that we will spend the same amount[9], but this time make ~$2.6M in revenue—breaking even—and if we project that growth out a bit more, we will be in a position to subsidize and fund other Lightcone activities in subsequent years. At this level of expenditure we are also making substantial ongoing capital investments into the venue, making more of our space usable and adding new features every month[10].
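As a rough back-of-envelope of that trajectory (using my rounded figures from the paragraphs above, so treat it as approximate rather than exact accounting):

```python
# Rounded figures from the paragraphs above (USD); a back-of-envelope check, not exact accounting.
costs_2024 = 1_600_000 + 1_000_000   # upkeep/staffing/taxes + interest payment
revenue_2024 = 1_800_000
net_2024 = revenue_2024 - costs_2024
print(f"2024 net: {net_2024:+,} USD")   # -800,000: roughly the shortfall described above

costs_2025 = costs_2024               # projected to spend about the same amount in 2025
revenue_2025 = 2_600_000
net_2025 = revenue_2025 - costs_2025
print(f"2025 net: {net_2025:+,} USD")   # +0: roughly breaking even
```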
Here is a graph of our 2024 + 2025 monthly income with conservative projections:
How does Lighthaven improve the world?
The basic plan for Lighthaven to make the world better is roughly:
I think the impact of in-person collaborative spaces on culture and effective information exchange can be very large. The exact models of how Lightcone hopes to do that are hard to communicate and are something I could write many posts about, but we can do a quick case study of how Lightcone differs from other event venues:
Nooks nooks nooks nooks nooks
One of the central design principles of Lighthaven is that we try to facilitate small 2-6 person conversations in a relaxed environment, with relative privacy from each other, while making it as easy as possible to still find anyone you might be looking for. One of the central ways Lighthaven achieves that is by having a huge number of conversational nooks both on the inside and outside of the space. These nooks tend to max out at being comfortable for around 8 people, naturally causing conversations to break up into smaller chunks.
Conferences at Lighthaven therefore get people talking to each other much more than standard conference spaces do. In a standard venue the primary context for conversation is often the hallway, which usually forces people to stand and lets conversations balloon to 20+ people, since hallways provide no natural cap on conversation size.
More broadly, my design choices for Lighthaven have been heavily influenced by Christopher Alexander's writing on architecture and the design of communal spaces. If you are interested in how Lighthaven was designed, I recommend skimming through A Pattern Language and reading the sections that spark your interest (I do not recommend trying to read the book front to back; it gets boring quickly).
Lighthaven "permanent" residents and the "river and shore" metaphor
In the long run, I want Lightcone to become a thriving campus with occupants at many different timescales:
The goal is for each of these to naturally feed into the following ones, creating a mixture of new people and lasting relationships across the campus. Metaphorically the flow of new people forms a fast-moving and ever-changing "river", with the "shore" being the aggregated sediment of the people who stuck around as a result of that flow.
Since we are just getting started, we have been focusing on the first and second of these, with only a small handful of permanently supported people on our campus (at present John Wentworth, David Lorell, Adam Scholl, Aysja Johnson, Gene Smith and Ben Korpan).
On the more permanent organizational side, I hope that the campus will eventually house an organization worthy of an informal title like "FHI of the West", either directly run by Lightcone, or heavily supported by us, but I expect to grow such an organization slowly and incrementally, instead of in one big push (which I initially considered, and might still do in the future, but for now decided against).
Does Lighthaven improve the events we run here?
I've run a lot of conferences and events over the years (I was in charge of the first EA Global conference, and led the team that made EA Global into a global annual conference series with thousands of attendees). I designed Lighthaven to really leverage the lessons I learned from doing that, and I am pretty confident I succeeded, based on my own experiences of running events here, and the many conversations I've had with event organizers here.
The data also seems to back this up (see also my later section on estimating the value of Lighthaven's surplus based on what people have told us they would be willing to pay to run events here):
I expect a number of people who have run events at Lighthaven will be in the comments and will be happy to answer questions about what it's been like.[11]
The relationship between Lighthaven and LessWrong
The most popular LessWrong posts, SSC posts or books like HPMoR are usually people's first exposure to core rationality ideas and concerns about AI existential risk. LessWrong is also the place where many people who have spent years thinking about these topics write and share their ideas, which then attracts more people, which in some sense forms the central growth loop of the rationalist ecosystem. Lighthaven and the in-person programs it supports are among the many components of what happens between someone reading LessWrong for the first time and someone becoming an active intellectual contributor to the site, a transition that, when it happens, I think usually takes about 3-4 years of lots of in-person engagement and orienting and talking to friends and getting a grip on these ideas.
This means that, in some sense, the impact of Lighthaven should in substantial part be measured by its effects on producing better research and writing on LessWrong and other parts of public discourse.
Of course, the intellectual outputs of the extended rationality and AI safety communities are far from centralized on LessWrong, and much of the good being done does not route through writing blog posts or research papers. This makes the above a quite bad approximation of our total impact, but I would say that if I saw no positive effects of Lighthaven on what happens on LessWrong and the AI Alignment Forum, something would have gone quite wrong.
On this matter, I think it's quite early to tell whether Lighthaven is working. I currently feel optimistic that we are seeing a bunch of early signs of a rich intellectual community sprouting up around Lighthaven, but I think we won't know for another 2-3 years whether LessWrong and other places for public intellectual progress have gotten better as a result of our efforts here.
Lightcone and the funding ecosystem
Having gone through some of our historical impact, and big projects, let's talk about funding.
Despite what I, and basically all of our historical funders in the ecosystem, consider to be a quite strong track record, practically all of the mechanisms by which we have historically received funding are unable to fund us going forward, or can only give us substantially reduced funding.
Here is a breakdown of who we received funding from over the last few years:
You might notice the three big items in this graph, FTX Future Fund[12], Open Philanthropy, and the Survival and Flourishing Fund.
FTX Future Fund is no more. Indeed, we ended up returning around half of the funding we received from them[13], spent another 15% of the amount they gave us on legal fees, and I spent most of my energy last year figuring out our legal defense and handling the difficulties of being sued by one of the most successful litigators of the 21st century, so that was not very helpful. And of course the Future Fund is even less likely to be helpful going forward.
Good Ventures will not accept future Open Philanthropy recommendations to fund us, and Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz. Importantly, Open Phil cannot make grants through Good Ventures to projects involved in almost any amount of "rationality community building", even if that work is only a fraction of the organization's efforts and even if there is still a strong case for funding on grounds unrelated to rationality community building. The exact lines here seem somewhat confusing and unclear, and my sense is they are still being figured out, but Lightcone seems solidly out.
This means we aren't getting any Open Phil/Good Ventures money anymore, even though, as far as I know, most Open Phil staff working on AI safety and existential risk think LessWrong is very much worth funding, and our other efforts at least promising (and many Open Phil grantees report being substantially helped by our work).
This leaves the Survival and Flourishing Fund, who have continued to be a great funder to us. And 2/3 of our biggest funders disappearing would already be enough to force us to seriously change how we go about funding our operations, but there are additional reasons why it's hard for us to rely on SFF funding:
Speaking extremely roughly, this means that compared to 2022, two thirds of our funders have completely dropped out of funding us, and another sixth is going to be used to pay for work that we had originally done under an FTX Future Fund grant, leaving us with one sixth of the funding, which is really not very much.
This all, importantly, is against a backdrop where none of the people or institutions that have historically funded us have updated against the cost-effectiveness of our operations. To the contrary, my sense is the people at Open Philanthropy, SFF and Future Fund have positively updated on the importance of our work, while mostly non-epistemic factors have caused the people involved to be unable to recommend funding to us.
This I think is a uniquely important argument for funding us. I think Lightcone is in the rare position of being considered funding-worthy by many of the key people that tend to try to pick up the most cost-effective interventions, while being de-facto unable to be funded by them.
I do want to express extreme gratitude to the individuals who helped us survive throughout 2023, when most of these changes in the funding landscape started happening and Lightcone transitioned from being an $8M/yr organization to a $3M/yr organization. In particular, I want to thank Vitalik Buterin and Jed McCaleb, who each contributed $1,000,000 in 2023, Scott Alexander, who graciously donated $100,000, Patrick LaVictoire, who donated $50,000, and many others who contributed substantial amounts.
Our work on funding infrastructure
Now that I've established some context on the funding ecosystem, I also want to go a bit into the work that Lightcone has done on funding around existential risk reduction, civilizational sanity and rationality development.
The third big branch of historical Lightcone efforts has been to build the S-Process, a funding allocation mechanism used by SFF, FLI and Lightspeed Grants.
Together with the SFF, we built an app and a set of algorithms that allow a large number of independent grant evaluators and funders to coordinate much more efficiently than anything I've seen before, and they have successfully been used to distribute over $100M in donations over the last 5 years. Internally I feel confident that we substantially increased the cost-effectiveness of how that funding was allocated: my best guess is on the order of doubling it, but more confidently by at least 20-30%[17], which I think alone is a huge amount of good done.[18]
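To give a flavor of the coordination problem this solves, here is a toy sketch (emphatically not the actual S-Process) of allocating a budget by repeatedly funding whichever organization a set of evaluators currently assigns the highest average marginal value; the orgs, numbers, and diminishing-returns curve are all made up for illustration:

```python
# Toy sketch (not the actual S-Process): allocate a budget in small chunks,
# always funding whichever org the evaluators currently think has the highest
# average marginal value per dollar. Marginal value declines as an org gets funded.

def toy_allocate(budget, orgs, evaluators, chunk=10_000):
    """orgs: dict of org -> diminishing-returns scale (dollars at which value halves).
    evaluators: dict of evaluator -> dict of org -> base value per dollar."""
    allocation = {org: 0 for org in orgs}
    while budget >= chunk:
        def avg_marginal_value(org):
            # Simple diminishing-returns curve: value falls as funding accumulates.
            discount = orgs[org] / (orgs[org] + allocation[org])
            return discount * sum(e[org] for e in evaluators.values()) / len(evaluators)
        best = max(allocation, key=avg_marginal_value)
        allocation[best] += chunk
        budget -= chunk
    return allocation

# Example: two evaluators, three hypothetical orgs.
orgs = {"org_a": 200_000, "org_b": 500_000, "org_c": 100_000}
evaluators = {
    "eval_1": {"org_a": 3.0, "org_b": 1.5, "org_c": 2.0},
    "eval_2": {"org_a": 2.0, "org_b": 2.5, "org_c": 0.5},
}
print(toy_allocate(1_000_000, orgs, evaluators))
```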
Earlier this year, we also ran our own funding round owned end-to-end under the banner of "Lightspeed Grants":
Somewhat ironically, the biggest bottleneck to us working on funding infrastructure has been funding for ourselves. Working on infrastructure that funds ourselves seems rife with potential concerns about corruption and bad incentives, and so I have not felt comfortable applying for funding from a program like Lightspeed Grants ourselves. Our non-SFF funders historically were also less enthusiastic about us working on funding infrastructure for the broader ecosystem than about our other projects.
This means that in many ways, working on funding infrastructure reduces the amount of funding we receive, by reducing the pots of money that could potentially go to us. As another instance of this, I have been spending around 10%-20% of my time over the past 5 years working as a fund manager on the Long Term Future Fund. As a result, Lightcone has never applied to the LTFF, or the EA Infrastructure Fund, as my involvement with EA Funds would pose too tricky of a COI in evaluating our application. But I am confident that both the LTFF and the EAIF would evaluate an application by Lightcone quite favorably, if we had never been involved in it.
(The LTFF and the EAIF are therefore two more examples of funders that usually pick up the high cost-effectiveness fruit, but for independent reasons are unable to give to Lightcone Infrastructure, leaving us underfunded relative to our perceived cost-effectiveness.)
If it's worth doing it's worth doing with made-up statistics
Ok, so I've waffled about with a bunch of high-level gobbledygook, but as spreadsheet altruists the only arguments we are legally allowed to act on must involve the multiplication of at least 3 quantities and at least two Google spreadsheets.
So here is the section where I make some terrible quantitative estimates which will fail to model 95% of the complexity of the consequences of any of our actions, but which I have found useful in thinking about our impact, and which you will maybe find useful too, and which you can use to defend your innocence when the local cost-effectiveness police demands your receipts.
The OP GCR capacity building team survey
Open Philanthropy has run two surveys in the last few years in which they asked people they thought were now doing good work in OP priority areas like AI safety which interventions, organizations and individuals were particularly important in getting them involved, or helped them become more productive and effective.
Using that survey, and weighting respondents by how impactful Open Phil thought their work was going to be, they arrived at cost-effectiveness estimates for various organizations (to be clear, this is only one of many inputs into OP's grantmaking).
In their first 2020 survey, here is the table they produced[19]:
(values are approximate; lower is better)
As you can see, LessWrong 2.0's estimated cost-effectiveness was second only to SPARC's (which is a mostly volunteer-driven program, and this estimate does not take into account the opportunity cost of labor).
In their more recent 2023 survey, Lightcone's work performed similarly well. While the data they shared didn't include any specific cost-effectiveness estimates, they did include absolute estimates on the number of times that various interventions showed up in their data:
To get some extremely rough cost-effectiveness numbers out of this, we can divide the numbers here by the budget for the associated organizations, though to be clear, this is definitely an abuse of numbers.
Starting from the top, during the time the survey covered (2020 - early 2023) the annual budget of 80,000 Hours averaged ~$6M. Lightcone's spending (excluding Lighthaven construction, which can't have been relevant by then) averaged around $2.3M. University groups seem to have been funded at around $5M/yr[20], and my best guess is that EAG events cost around $6M a year during that time. I am going to skip Open Philanthropy because that seems like an artifact of the survey, and Eliezer, because I don't know how to estimate a reasonable number for him.
This produces this table (which I will again reiterate is a weird thing to do):
As you can see, my totally objective table says that we are the most cost-effective intervention that you can fund out there. (To be clear, I think the central takeaway here is more "by this very narrow methodology, Lightcone is competitive with the best interventions"; the case for it being the very best is kind of unstable.)
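For transparency, the arithmetic behind that table is nothing fancier than dividing mention counts by budgets. Here is a sketch of it, with the mention counts left as placeholders since I'm not reproducing OP's numbers inline:

```python
# Minimal sketch of the table's arithmetic: survey mentions divided by annual budget.
# Budgets are the rough figures from the text; mention counts are placeholders
# (substitute the actual counts from OP's published survey data).

annual_budget_usd = {
    "80,000 Hours": 6_000_000,
    "Lightcone (LW + other)": 2_300_000,
    "University groups": 5_000_000,
    "EA Global": 6_000_000,
}

survey_mentions = {  # placeholder values, NOT the real survey counts
    "80,000 Hours": 100,
    "Lightcone (LW + other)": 100,
    "University groups": 100,
    "EA Global": 100,
}

for org in annual_budget_usd:
    mentions_per_million = survey_mentions[org] / (annual_budget_usd[org] / 1_000_000)
    print(f"{org}: {mentions_per_million:.1f} mentions per $1M/yr")
```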
Lightcone/LessWrong cannot be funded by just running ads
An IMO reasonable question to ask is "could we fund LessWrong by just running ads?". It's not fully clear how the answer relates to our cost-effectiveness, but I still find it a useful number to look at as a kind of lower bound on the value LessWrong could capture with a small change.
LessWrong gets around 20 million views a year, from around 3 million unique users, for about 12 million engagement minutes. For our audience (mostly American and English-speaking), Google AdSense would pay roughly $2 per 1,000 views, resulting in total ad revenue of around $40,000, a far cry from the >$1,000,000 that LessWrong spends a year.
Using YouTube as another benchmark: YouTubers are paid about $15 per 1,000 U.S.-based ad impressions, and my best guess is that ads show up on YouTube about once every 6 minutes, which would give us 2 million ad impressions and therefore about $30,000 in ad revenue. (This ignores sponsorship revenue for YouTube videos, which differs widely between channels, but my sense is it tends to roughly double or triple the default YouTube ad revenue, so a somewhat more realistic number here is $60,000 or $90,000.)
Interestingly, this does imply that if you were willing to buy advertisements that just consisted of getting people in the LessWrong demographic to read LessWrong content, that would easily cover LessWrong's budget. A common cost per click for U.S.-based ads is around $2, and it costs around $0.30 to get someone to watch a 30-second ad on YouTube, resulting in estimates of around $4,000,000 to $40,000,000 to get people to read/watch LessWrong content by just advertising for it.
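For those who want to check my arithmetic, here's the back-of-envelope version of the estimates above (all numbers as rough as stated):

```python
# Back-of-envelope ad-revenue estimates from the figures above (all very rough).

views_per_year = 20_000_000
engagement_minutes = 12_000_000

# Google AdSense-style display ads: ~$2 per 1,000 views.
adsense_revenue = views_per_year / 1000 * 2          # ~$40k

# YouTube-style ads: ~$15 per 1,000 U.S. impressions, one ad per ~6 minutes watched.
ad_impressions = engagement_minutes / 6               # ~2M impressions
youtube_revenue = ad_impressions / 1000 * 15          # ~$30k

# Flipping it around: paying for equivalent attention via ads.
# ~$2 per click implies ~$40M to buy 20M content views.
cost_to_buy_clicks = views_per_year * 2               # ~$40M

print(adsense_revenue, youtube_revenue, cost_to_buy_clicks)
```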
Comparing LessWrong to other websites and apps
Another (bad) way of putting an extremely rough number on the value LessWrong provides to the people who use it is to compare it against revenue-per-active-user numbers for other websites and social networks.
I think by the standards of usual ARPU numbers, LessWrong has between 3,000 and 30,000 active users. So if we use Reddit as a benchmark this would suggest something like $75,000 - $750,000 per year in revenue, and if we use Facebook as a benchmark, this would suggest something like $600,000 - $6,000,000.
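Spelled out, the per-user revenue assumptions implied by those ranges are roughly $25/user/yr for the Reddit-like case and $200/user/yr for the Facebook-like case:

```python
# Implied revenue ranges from per-user benchmarks (rough ARPU assumptions:
# ~$25/user/yr Reddit-like, ~$200/user/yr Facebook-like for a US-heavy audience).

active_users = (3_000, 30_000)
for name, arpu in [("Reddit-like", 25), ("Facebook-like", 200)]:
    low, high = (u * arpu for u in active_users)
    print(f"{name}: ${low:,} - ${high:,} per year")
```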
Again, it's not enormously clear what exactly these numbers mean, but I still find them useful as very basic sanity-checks on whether we are just burning money in highly ineffectual ways.
Lighthaven event surplus
Over the last year, we negotiated pricing with many organizations that we have pre-existing relationships with using the following algorithm:
This allows a natural estimate of the total surplus generated by Lighthaven, measured in donations to the organizations that have hosted events here.
On average, event organizers estimated total value generated at around 2x our marginal cost.
Assuming this ratio also holds for all events organized at Lighthaven, which seems roughly right to me, we can estimate the total surplus generated by Lighthaven. Also, many organizers adjusted the value-add from Lighthaven upwards after the event, suggesting this is an underestimate of the value we created (and we expect to raise prices in future years to account for that).
This suggests that our total value generated this way is ~1.33x our revenue from Lighthaven, which is likely to be around $2.8M in the next 12 months. This suggests that as long as Lighthaven costs less than ~$3.72M, it should be worth funding if you thought it was worth funding the organizations that have hosted events and programs here (and that in some sense historical donations to Lighthaven operate at least at a ~1.33x multiplier compared to the average donation to organizations that host events here).
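One way to see where the ~1.33x comes from, as a sketch under the assumption that prices ended up roughly midway between our marginal cost and the value organizers reported:

```python
# Sketch of the ~1.33x multiplier, assuming prices land roughly midway between
# our marginal cost and the value organizers reported (an assumption for illustration).

marginal_cost = 1.0
value_to_organizer = 2.0 * marginal_cost          # organizers estimated ~2x marginal cost
price = (marginal_cost + value_to_organizer) / 2  # assumed: halfway between cost and value

value_per_revenue = value_to_organizer / price     # ~1.33
projected_revenue = 2_800_000
print(value_per_revenue, value_per_revenue * projected_revenue)  # ~1.33, ~$3.7M
```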
To help get a sense of what kind of organizations do host events here, here is an annotated calendar of all the events hosted here in 2024, and our (charitable) bookings for 2025:
The future of (the) Lightcone
Now that I have talked extensively about all the things we have done in the past, and about how you should regret not giving to us last year, comes the part where I actually describe what we might do in the future. In past fundraising documents, both to funders and to the public, I have found this part the hardest. I value flexibility and adaptability very highly, and with charities, even more so than with investors in for-profit companies, I have a feeling that people who give to us often get anchored on the exact plans and projects that we were working on when they did.
I think to predict what we will work on in the future, it is helpful to think about Lightcone at two different levels: What are the principles behind how Lightcone operates, and what are the concrete projects that we are considering working on?
Lightcone culture and principles
Lightcone has grown consistently but extremely slowly over the last 7 years. There are some organizations I have had a glimpse into that have seen less net growth, but I can't think of an organization that has added as few hires (including people who later left) to its roster to end up with the team that still works there now. I've consistently hired ~1 person per year to our core team for the six years Lightcone has existed (resulting in a total team size of 7 core team members).
This is the result of the organization being quite deeply committed to changing strategies when we see the underlying territory shift. Having a smaller team, and having long-lasting relationships, makes it much easier for us to pivot, and allows important strategic and conceptual updates to propagate through the organization more easily.[21]
Another result of the same commitment is that we basically don’t specialize into narrow roles, but instead are aiming to have a team of generalists where, if possible, everyone in the organization can take on almost any other role in the organization. This enables us to shift resources between different parts of Lightcone depending on which part of the organization is under the most stress, and to feel comfortable considering major pivots that would involve doing a very different kind of work, without this requiring major staff changes every time. I don't think we have achieved full universal generality among our staff, but it is something we prioritize and have succeeded at much more than basically any other organization I can think of.
Another procedural commitment is that we try to automate as much of our work as possible. We aim to use software wherever we can to keep our total staff count low, and to create processes that handle commitments and maintain systems, instead of having individuals perform routine tasks on an ongoing basis (or, at the very least, we try our best to augment the individuals doing routine tasks with software and custom tools).
There is of course lots more to our team culture. For a glimpse into one facet of it, see our booklet "Adventures of the Lightcone Team".
Things I wish I had time and funding for
AGI sure looks to me like it's coming, and it's coming uncomfortably fast. While I expect the overall choice to build machine gods beyond our comprehension and control will be quite bad for the world, the hope that remains routes in substantial part through leveraging the nascent AGI systems that we have access to today and will see in the coming years.
Concretely, one of the top projects I want to work on is building AI-driven tools for research, reasoning and communication, integrated into LessWrong and the AI Alignment Forum. If we build something here, it will immediately be available to, and can easily be experimented with by, people working on reducing AI existential risk, and I think it has a much larger chance than usual of differentially accelerating good things.
We've already spent a few weeks building things in the space, but our efforts here are definitely still at a very early stage. Here is a quick list of things I am interested in exploring, though I expect most of these to not be viable, and the right solutions and products to probably end up being none of these:
Building an LLM-based editor.
LessWrong admins currently have access to a few special features in our editor that I have found invaluable. Chief among them is having built-in UI for "base-model Claude 3.5 Sonnet"[22] and Llama 405b-base continuing whatever comment or post I am in the middle of writing, using my best LessWrong comments and posts as a style and content reference (as well as some selected posts and comments by other top LW authors). I have found this to be among the best tools against writer's block, where every time I solidly get stuck, I generate 5-10 completions of what the rest of my post could look like, use it as inspiration of all kinds of different directions my post could go, then delete them and keep writing.
Using base models has at least so far been essential for getting any useful writing work out of LLMs, with the instruction-tuned models reliably producing obtuse corpo-speak when asked to engage in writing tasks.
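For the curious, here is a minimal sketch of the general trick (not our actual editor code), using Anthropic's assistant-prefill feature mentioned in the footnote; the model name, prompt wording, and style-example plumbing are illustrative assumptions:

```python
# A minimal sketch of the completion trick (not our actual editor code): use
# Anthropic's assistant-message prefill so the model continues your draft
# directly, rather than replying to it. Style examples and model name are
# illustrative assumptions.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

STYLE_EXAMPLES = "..."  # e.g. a handful of your best LessWrong posts/comments

def continue_draft(draft: str, n: int = 5) -> list[str]:
    """Generate n possible continuations of an in-progress post or comment."""
    completions = []
    for _ in range(n):
        response = client.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=400,
            temperature=1.0,
            system=(
                "Continue the user's draft in the same voice. Here are writing "
                "samples to match:\n" + STYLE_EXAMPLES
            ),
            messages=[
                {"role": "user", "content": "Continue this draft where it leaves off."},
                # Prefilling the assistant turn makes the model continue the draft text.
                {"role": "assistant", "content": draft},
            ],
        )
        completions.append(response.content[0].text)
    return completions
```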
Similarly, LLMs are now at a point where they can provide useful high-level feedback on your drafts, notice sections where your explanations are unclear, fix typos, shorten and clean up extremely long left-branching sentences, and make various other straightforward improvements to the quality of your writing.
AI prompts and tutors as a content type on LW
LLM systems are really good tutors. They are not as good as human instructors (yet), but they are (approximately) free, eternally patient, and have a breadth of knowledge vastly beyond that of any human alive. With knowledge and skill transfer being one of the key goals for LessWrong, I think we should try to leverage that.
I would like to start by iterating on getting AI systems to teach the core ideas on LW, and then, after doing that successfully, experiment with opening up the ability to create such tutors to authors on LessWrong who would like AI assistance in explaining and teaching the concepts they want to communicate.
Authors and the LessWrong team can read the chats people had with our AI tutors[23], giving authors the ability to correct anything wrong that the AI systems said, and then use those corrections as part of the prompt that shapes how the tutor behaves in the future. I feel like this unlocks a huge amount of cool pedagogical content knowledge that has previously been inaccessible to people writing on LessWrong, and gives you a glimpse into how people fail to understand (or successfully apply) your concepts that previously could only have been gotten by teaching people one on one.
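Here is a rough sketch of what that correction loop could look like (an assumed design, not something we have shipped); the prompt structure and helper names are illustrative:

```python
# Sketch of the correction loop described above (assumed design, not shipped code):
# author corrections get appended to the tutor's system prompt for future sessions.

import anthropic

client = anthropic.Anthropic()

def build_tutor_system_prompt(post_text: str, corrections: list[str]) -> str:
    prompt = (
        "You are a patient tutor teaching the ideas in the following post. "
        "Check understanding with questions rather than lecturing.\n\n"
        f"POST:\n{post_text}\n"
    )
    if corrections:
        prompt += "\nAuthor corrections from past tutoring sessions:\n"
        prompt += "\n".join(f"- {c}" for c in corrections)
    return prompt

def tutor_reply(post_text, corrections, chat_history):
    """chat_history: list of {'role': 'user'|'assistant', 'content': str}."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=600,
        system=build_tutor_system_prompt(post_text, corrections),
        messages=chat_history,
    )
    return response.content[0].text
```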
Building something like an FHI of the West
But AI things are not the only things I want to work on. In a post a few months ago I said:
Since then, we had the fun and privilege of being sued by FTX, which made the umbrella of Lightcone a particularly bad fit for making things happen in the space, but now that that is over, I am hoping to pick this project back up again.
As I said earlier in this post, I expect that if we do this, I would want to go about it in a pretty incremental and low-key way, but I do think it continues to be one of the best things someone could do, and with our work on LessWrong and our ownership of a world-class 20,000 sq. ft. campus in the most important geographical region of the world, I think we are among the best-placed people to do this.
Building funding infrastructure for AI x-risk reduction
There currently doesn't really exist any good way for people who want to contribute to AI existential risk reduction to give money in a way that meaningfully gives them assistance in figuring out what things are good to fund. This is particularly sad since I think there is now a huge amount of interest from funders and philanthropists who want to somehow help with AI x-risk stuff, as progress in capabilities has made work in the space a lot more urgent, but the ecosystem is currently at a particular low-point in terms of trust and ability to direct that funding towards productive ends.
I think our work on the S-Process and SFF has been among the best work in the space. Similarly, our work on Lightspeed Grants helped, and I think could grow into a systemic solution for distributing hundreds of millions of dollars a year, at substantially increased cost-effectiveness.
Something something policy
Figuring out how to sanely govern the development of powerful AI systems seems like a top candidate for the most important thing going on right now. I do think we have quite a lot of positive effect on that already, via informing people who work in the space and causing a bunch of good people to start working in the space, but it is plausible that we want to work on something that is substantially more directed towards that.
This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We've never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).
I really don't know what doing more direct work in the space would look like. The obvious thing to do is to produce content that is more aimed at decision-makers in government, and to just talk to various policy people directly, but it might also involve doing things like designing websites for organizations that work more directly on influencing policy makers (like our recently-started collaborations with Daniel Kokotajlo's research team AI Futures and Zach Stein-Perlman's AI Lab Watch to help them with their website designs and needs).
A better review system for AI Alignment research
I do not believe in pre-publication private anonymous peer-review. I think it's dumb to gate access to articles behind submissions to journals, and I think in almost all circumstances it's not worth it for reviewers to be anonymous, both because I think great reviewers should be socially rewarded for their efforts, and bad reviewers should be able to be weeded out.
But I do think there is a kind of work that is often undersupplied that consists of engaging critically with research, suggesting improvements, helping the author and the reader discover related work, and successfully replicating, or failing to replicate key results. Right now, the AI Alignment field has very little incentive for that kind of work, which I think is sad.
I would like to work on making more of that kind of review happen. I have various schemes and ideas in mind for how to facilitate it, and think we are well-placed to do it.
Again, our operating philosophy values pivoting to whatever we end up thinking is best and I think it's quite likely we will not make any of the above a substantial focus of the next 1-2 years, but it still seemed useful to list.
What do you get from donating to Lightcone?
I think the best reason to donate to us is that you think doing so will cause good things to happen in the world (like it becoming less likely that you and all your friends will die from a rogue AI). That said, credit allocation is important, and I think over the past few years there has been too little credit given to people donating to keep our community institutions intact, and I personally have been too blinded by my scope-sensitivity[24] and so ended up under-investing in my relationships with anyone but the very largest donors.
I think many things would be better if projects like LessWrong and Lighthaven were supported more by the people who are benefitting from them instead of large philanthropists giving through long chains of deference with only thin channels of evidence about our work. This includes people who benefitted many years ago when their financial means were much less, and now are in a position to help the institutions that allowed them to grow.
That means that if you've really had your thinking or life-path changed by the ideas on LessWrong or by events and conversations at Lighthaven, then I'd make a small request for you to chip in to keep this infrastructure alive for you and for others.
If you donate to us, I will try to ensure you get appropriate credit (if you desire). I am still thinking through the best ways to achieve that, but some things I feel comfortable committing to (and more to come):
If you donate more than $2,000[25], you can get a plaque on a bench (or a similar object) on campus; at higher amounts you can get a whole hall or area of the campus named after you.[26] As the first instance of this, I'd like to give enormous thanks to @drethelin for opening our fundraiser with a $150,000 donation, in thanks for which we have renamed our northwest gardens to "The Drethelin Gardens" for at least the next 2 years.
If you can come up with any ways that you think would be cool to celebrate others who have given to Lightcone, or have any ideas for how you want your own donation to be recognized, please reach out! I wasn't really considering naming campus sections after people until drethelin reached out, and I am glad we ended up going ahead with that.
Goals for the fundraiser
We have three fundraising milestones for this fundraiser, one for each million dollars raised:
We'll track our progress through each goal with a fundraising thermometer on the front page[27]. Not all of Lightcone's resources will come from this fundraiser of course. Whenever we receive donations (from any source), we'll add the funds to the "Raised" total on the frontpage.
Logistics of donating to Lightcone
We are a registered 501(c)(3) in the US, and if there is enough interest, we can probably set up equivalence determinations in most other countries that have a similar concept of tax-deductibility, making donations tax-deductible there as well (so far we've had interest from the UK and Switzerland).
We can also accept donations of any appreciated asset that you might want to donate. We are set up to receive crypto, stocks, stock options, and if you want to donate your appreciated Magic the Gathering collection, we can figure out some way of giving you a good donation receipt for that. Just reach out (via email, DM, or text/signal at +1 510 944 3235) and I will get back to you ASAP with the logistics.
Also, please check if your employer has a donation matching program! Many big companies double the donations made by their employees to nonprofits (for example, if you work at Google and donate to us, Google will match your donation up to $10k). Here is a quick list of organizations with matching programs I found, but I am sure there are many more.
If you want to donate less than $5k in cash, I recommend our Stripe donation link. We lose about 2-3% of that in fees if you use a credit card, and 1% if you use bank transfer, so if you want to donate more and want us to lose less to fees, you can reach out and I'll send you our wire transfer details.
If you want to send us BTC, we have a wallet! The address is
37bvhXnjRz4hipURrq2EMAXN2w6xproa9T
Tying everything together
Whew, that was a marathon of a post. I had to leave out a huge number of things that we've done, and a huge number of hopes and aims and plans I have for the future. Feel free to ask me in the comments about any details.
I hope this all helps explain what Lightcone's deal is and gives you the evidence you need to evaluate my bold claims of cost-effectiveness.
So thank you all. I think with help from the community and the recently reinvigorated interest in AI x-risk, we can pull together the funds to continue Lightcone's positive legacy.
If you can and want to be a part of that, donate to us here. We need to raise $3M to survive the next 12 months, and can productively use a lot of funding beyond that.
Donations are tax-deductible in the US. Reach out for other countries, we can likely figure something out.
Our technical efforts here also contribute to the EA Forum, which started using our code in 2019.
Why more money this year than next year? The reason is that we have an annual interest payment of $1M on our Lighthaven mortgage that was due in early November, which we negotiated to be deferred to March. This means this twelve month period will have double our usual mortgage payments.
We happen to also own a ~$1M building adjacent to Lighthaven in full, so we have a bit of slack. We are looking into taking out a loan on that property, but we are a non-standard corporate entity from the perspective of banks so it has not been easy. If for some reason you want to arrange a real-estate insured loan for us, instead of donating to us, that would also be quite valuable.
I am also hoping to create more ways of directing appreciation and recognition to people whose financial contributions allow us to have good things (see the section below on "What do you get from donating to Lightcone?").
What does "additional" mean here? That's of course quite tricky, since it's really hard to establish what would have happened if we hadn't worked on LessWrong. I am not trying to answer that tricky question here, I just mean "more content was posted to LW".
As a quick rundown: Shane Legg is a DeepMind cofounder and early LessWrong poster who directly credits Eliezer for his decision to work on AGI. Demis has also frequently referenced LW ideas and presented at both FHI and the Singularity Summit. OpenAI's founding team and early employees were heavily influenced by LW ideas (and Ilya was at my CFAR workshop in 2015). Elon Musk has clearly read a bunch of LessWrong, and was strongly influenced by Superintelligence, which itself was heavily influenced by LW. A substantial fraction of Anthropic's leadership team actively read and/or write on LessWrong.
For a year or two I maintained a simulated investment portfolio at investopedia.com/simulator/ with the primary investment thesis "whenever a LessWrong comment with investment advice gets over 40 karma, act on it". I made 80% returns over the first year (half of which was buying early shorts in the company "Nikola" after a user posted a critique of them on the site).
After loading up half of my portfolio on some option calls with expiration dates a few months into the future, I then forgot about it, only to come back to see all my options contracts expired and value-less, despite the sell-price at the expiration date being up 60%, wiping out most of my portfolio. This has taught me both that LW is amazing alpha for financial investment, and that I am not competent enough to invest on it (luckily other people have done reasonable things based on things said on LW and do now have a lot of money, so that's nice, and maybe they could even donate some back to us!)
This example is especially counterfactual on Lightcone's work. Gwern wrote the essay at a retreat hosted by Lightcone, partly in response to people at the retreat saying they had a hard time visualizing a hard AI takeoff; and Garrett Baker is a MATS fellow who worked out of office space run by Lightcone and provided (at the time freely) to MATS.
It might be a bit surprising to read that I expect the upkeep costs to stay the same, despite revenue increasing ~35%. The reason I expect this is that I see a large number of inefficiencies in our upkeep, and also we had a number of fixed-costs that we had to pay this year, that I don't expect to need to pay next year.
Yes, I know that you for some reason aren't supposed to use the word "feature" to describe improvements to anything but software, but it's clearly the right word.
"We shipped the recording feature in Eigen Hall and Ground Floor Bayes, you can now record your talks by pressing the appropriate button on the wall iPad"
Austin Chen from Manifold, Manifund and Manifest says:
For FTX, the graph above subtracts the amount we gave them in our settlement ($1.7M) from the total amount we received from them.
Returning isn't really the right word, it's more like "ended up giving them". See below on how we settled with FTX using SFC's and Jaan's help.
SFF has a funding structure where grants get evaluated by a rotating set of "recommenders", which are usually people that Jaan Tallinn, the primary funder of SFF, respects. Those recommenders make funding recommendations 1-2 times a year via some cool mechanism design process that we helped build.
The parent organization of SFF
This exact number is lower than the amount Jaan & SFC contributed as a result of complicated dynamics in the settlement negotiations, and the conversations we had around them, which ultimately settled with Jaan thinking this lower amount is fairer to garnish from future recommendations.
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
Going into the details of our work is I think beyond the scope of this post, but if you are interested in the things we've built, I recommend checking out Zvi's recent post about his experiences in the latest SFF round, and this (somewhat outdated) video by Andrew Critch talking about the S-Process.
This table is not exhaustive, and Open Phil told us they chose organizations for inclusion partly based on which ones it was easy to get budget data for. Also, we've removed one organization at their request (which also ranked worse than LessWrong 2.0).
The linked grant is ~$6M over a bit more than 2 years, and there are a bunch of other grants that also seem to be for university groups, making my best guess around $5M/yr, but I might be off here.
Though the last 2 years have been worse than par on that front, for reasons that are now behind us, like our fun lawsuit with FTX and a lot of post-FTX soul searching.
This is in quotes because we don't have access to Claude 3.5 Sonnet base model. However, you can get a model that behaves surprisingly close to it by using Anthropic's assistant completion prefix feature. H/t to Janus for pointing this out.
Unless they opt out or something, maybe requiring some amount of payment, since running LLMs isn't free.
Relatedly, I really benefitted from reading Scott Garrabrant's "Geometric Rationality" sequence, which critiques various forms of scope-sensitivity that had led me astray, and argues for something more geometric in credit and resource allocations.
Due to an apparently ravenous hunger among our donor base for having benches with plaques dedicated to them, and us not actually having that many benches, the threshold for this is increased to $2,000. Everyone who donated more than $1,000 but less than $2,000 before Dec 2nd will still get their plaque.
I can't guarantee the benches/plaques/objects will stay around forever, so I think it makes sense to limit our promise of the plaque being visible to 2 years, though I expect the majority of them to stay for a lot longer.
We'll probably display this until the New Year.