This is a special post for quick takes by MichaelDickens. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
131 comments, sorted by Click to highlight new comments since:
Some comments are truncated due to high volume. (⌘F to expand all)Change truncation settings

I get the sense that we can't trust Open Philanthropy to do a good job on AI safety, and this is a big problem. Many people would have more useful things to say about this than I do, but I still feel that I should say something.

My sense comes from:

  • Open Phil is reluctant to do anything to stop the companies that are doing very bad things to accelerate the likely extinction of humanity, and is reluctant to fund anyone who's trying to do anything about it.
  • People at Open Phil have connections with people at Anthropic, a company that's accelerating AGI and has a track record of (plausibly-deniable) dishonesty. Dustin Moskovitz has money invested in Anthropic, and Open Phil employees might also stand to make money from accelerating AGI. And I agree with Bryan Caplan's recent take that friendships are often a bigger conflict of interest than money, so Open Phil higher-ups being friends with Anthropic higher-ups is troubling.

A lot of people (including me as of ~one year ago) consider Open Phil the gold standard for EA-style analysis. I think Open Phil is actually quite untrustworthy on AI safety (but probably still good on other causes).

I don't know what to do with this information.

[-]habryka*22168

Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia inducing. I am quite confident of the broad trends here, but it's definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that includes plausible deniability and defensibility.

I agree with this, but I actually think the issues with Open Phil are substantially broader. As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don't think this is because of any COIs, it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any AI Open Phil funded policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will s... (read more)

Reply7733
[-]Orpheus16*9328

Adding my two cents as someone who has a pretty different lens from Habryka but has still been fairly disappointed with OpenPhil, especially in the policy domain. 

Relative to Habryka, I am generally more OK with people "playing politics". I think it's probably good for AI safely folks to exhibit socially-common levels of "playing the game"– networking, finding common ground, avoiding offending other people, etc. I think some people in the rationalist sphere have a very strong aversion to some things in this genre, and labels like "power-seeking" and "deceptive" get thrown around too liberally. I also think I'm pretty with OpenPhil deciding it doesn't want to fund certain parts of the rationalist ecosystem (and probably less bothered than Habryka about how their comms around this wasn't direct/clear).

In that sense, I don't penalize OP much for trying to "play politics" or for breaking deontological norms. Nonetheless, I still feel pretty disappointed with them, particularly for their impact on comms/policy. Some thoughts here:

  • I agree with Habryka that it is quite bad that OP is not willing to fund right-coded things. Even many of the "bipartisan" things funded by OP are quite l
... (read more)

This should be a top-level post.

8MichaelDickens
What are the norms here? Can I just copy/paste this exact text and put it into a top-level post? I got the sense that a top-level post should be more well thought out than this but I don't actually have anything else useful to say. I would be happy to co-author a post if someone else thinks they can flesh it out. Edit: Didn't realize you were replying to Habryka, not me. That makes more sense.

It feels sorta understandable to me (albeit frustrating) that OpenPhil faces these assorted political constraints.  In my view this seems to create a big unfilled niche in the rationalist ecosystem: a new, more right-coded, EA-adjacent funding organization could optimize itself for being able to enter many of those blacklisted areas with enthusiasm.

If I was a billionare, I would love to put together a kind of "completion portfolio" to complement some of OP's work.  Rationality community building, macrostrategy stuff, AI-related advocacy to try and influence republican politicians, plus a big biotechnology emphasis focused on intelligence enhancement, reproductive technologies, slowing aging, cryonics, gene drives for eradicating diseases, etc.  Basically it seems like there is enough edgy-but-promising stuff out there (like studying geoengineering for climate, or advocating for charter cities, or just funding oddball substack intellectuals to do their thing) that you could hope to create a kind of "alt-EA" (obviously IRL it shouldn't have EA in the name) where you batten down the hatches, accept that the media will call you an evil villain mastermind forever, and hop... (read more)

[-]Buck222

not even ARC has been able to get OP funding (in that case because of COIs between Paul and Ajeya)

As context, note that OP funded ARC in March 2022.

[-]habryka130

I think OP has funded almost everyone I have listed here in 2022 (directly or indirectly), so I don't really think that is evidence of anything (though it is a bit more evidence for ARC because it means the COI is overcomable).

7David Hornbein
Hm, this timing suggests the change could be a consequence of Karnofsky stepping away from the organization. Which makes sense, now that I think about it. He's by far the most politically strategic leader Open Philanthropy has had, so with him gone, it's not shocking they might revert towards standard risk-averse optionality-maxxing foundation behavior.

Isn't it just the case that OpenPhil just generally doesn't fund that many technical AI safety things these days? If you look at OP's team on their website, they have only two technical AI safety grantmakers. Also, you list all the things OP doesn't fund, but what are the things in technical AI safety that they do fund? Looking at their grants, it's mostly MATS and METR and Apollo and FAR and some scattered academics I mostly haven't heard of. It's not that many things. I have the impression that the story is less like "OP is a major funder in technical AI safety, but unfortunately they blacklisted all the rationalist-adjacent orgs and people" and more like "AI safety is still a very small field, especially if you only count people outside the labs, and there are just not that many exciting funding opportunities, and OpenPhil is not actually a very big funder in the field". 

[-]Buck2114

A lot of OP's funding to technical AI safety goes to people outside the main x-risk community (e.g. applications to Ajeya's RFPs).

[-]habryka123

Open Phil is definitely by far the biggest funder in the field.  I agree that their technical grantmaking has been a limited over the past few years (though still on the order of $50M/yr, I think), but they also fund a huge amount of field-building and talent-funnel work, as well as a lot of policy stuff (I wasn't constraining myself to technical AI Safety, the people listed have been as influential, if not more, on public discourse and policy). 

AI Safety is still relatively small, but more like $400M/yr small. The primary other employers/funders in the space these days are big capability labs. As you can imagine, their funding does not have great incentives either.

6David Matolcsi
Yeah, I agree, and I don't know that much about OpenPhil's policy work, and their fieldbuilding seems decent to me, though maybe not from you perspective. I just wanted to flag that many people (including myself until recently) overestimate how big a funder OP is in technical AI safety, and I think it's important to flag that they actually have pretty limited scope in this area.
5habryka
Yep, agree that this is a commonly overlooked aspect (and one that I think sadly has also contributed to the dominant force in AI Safety researchers becoming the labs, which I think has been quite sad).
[-]Xodarap110

what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations

is there a list of these somewhere/details on what happened?

[-]habryka5516

You can see some of the EA Forum discussion here: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures?commentId=RQX56MAk6RmvRqGQt 

The current list of areas that I know about are: 

  • Anything to do with the rationality community ("Rationality community building")
  • Anything to do with moral relevance of digital minds
  • Anything to do with wild animal welfare and invertebrate welfare
  • Anything to do with human genetic engineering and reproductive technology
  • Anything that is politically right-leaning

There are a bunch of other domains where OP hasn't had an active grantmaking program but where my guess is most grants aren't possible: 

  • Most forms of broad public communication about AI (where you would need to align very closely with OP goals to get any funding)
  • Almost any form of macrostrategy work of the kind that FHI used to work on (i.e. Eternity in Six Hours and stuff like that)
  • Anything about acausal trade of cooperation in large worlds (and more broadly anything that is kind of weird game theory)

Huh, are there examples of right leaning stuff they stopped funding? That's new to me

6Xodarap
You said I'm wondering if you have a list of organizations where Open Phil would have funded their other work, but because they withdrew from funding part of the organization they decided to withdraw totally. This feels very importantly different from good ventures choosing not to fund certain cause areas (and I think you agree, which is why you put that footnote).
[-]habryka15-2

I don't have a long list, but I know this is true for Lightcone, SPARC, ESPR, any of the Czech AI-Safety/Rationality community building stuff, and I've heard a bunch of stories since then from other organizations that got pretty strong hints from Open Phil that if they start working in an area at all, they might lose all funding (and also, the "yes, it's more like a blacklist, if you work in these areas at all we can't really fund you, though we might make occasional exceptions if it's really only a small fraction of what you do" story was confirmed to me by multiple OP staff, so I am quite confident in this, and my guess is OP staff would be OK with confirming to you as well if you ask them).

1Xodarap
Thanks!
9evhub
Imo sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade to me, at least depending on the actual numbers. As an extreme case, I would sacrifice all current OpenPhil AI safety funding in exchange for OpenPhil getting to pick which major party wins every US presidential election until the singularity. Concretely, the current presidential election seems extremely important to me from an AI safety perspective, I expect that importance to only go up in future elections, and I think OpenPhil is correct on what candidates are best from an AI safety perspective. Furthermore, I don't think independent AI safety funding is that important anymore; models are smart enough now that most of the work to do in AI safety is directly working with them, most of that is happening at labs, and probably the most important other stuff to do is governance and policy work, which this strategy seems helpful for. I don't know the actual marginal increase in political influence that they're buying here, but my guess would be that the numbers pencil and OpenPhil is making the right call. Separately, this is just obviously false. A lot of the old AI safety people just don't need OpenPhil funding anymore because they're working at labs or governments, e.g. me, Rohin Shah, Geoffrey Irving, Jan Leike, Paul (as you mention), etc.

Furthermore, I don't think independent AI safety funding is that important anymore; models are smart enough now that most of the work to do in AI safety is directly working with them, most of that is happening at labs,

It might be the case that most of the quality weighted safety research involving working with large models is happening at labs, but I'm pretty skeptical that having this mostly happen at labs is the best approach and it seems like OpenPhil should be actively interested in building up a robust safety research ecosystem outside of labs.

(Better model access seems substantially overrated in its importance and large fractions of research can and should happen with just prompting or on smaller models. Additionally, at the moment, open weight models are pretty close to the best models.)

(This argument is also locally invalid at a more basic level. Just because this research seems to be mostly happening at large AI companies (which I'm also more skeptical of I think) doesn't imply that this is the way it should be and funding should try to push people to do better stuff rather than merely reacting to the current allocation.)

7evhub
Yeah, I think that's a pretty fair criticism, but afaict that is the main thing that OpenPhil is still funding in AI safety? E.g. all the RFPs that they've been doing, I think they funded Jacob Steinhardt, etc. Though I don't know much here; I could be wrong.
[-]kave103

Wasn't the relevant part of your argument like, "AI safety research outside of the labs is not that good, so that's a contributing factor among many to it not being bad to lose the ability to do safety funding for governance work"? If so, I think that "most of OpenPhil's actual safety funding has gone to building a robust safety research ecosystem outside of the labs" is not a good rejoinder to "isn't there a large benefit to building a robust safety research ecosystem outside of the labs?", because the rejoinder is focusing on relative allocations within "(technical) safety research", and the complaint was about the allocation between "(technical) safety research" vs "other AI x-risk stuff".

[-]habryka4418

Imo sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade to me, at least depending on the actual numbers. As an extreme case, I would sacrifice all current OpenPhil AI safety funding in exchange for OpenPhil getting to pick which major party wins every US presidential election until the singularity.

Yeah, I currently think Open Phil's policy activism has been harmful for the world, and will probably continue to be, so by my lights this is causing harm with the justification of causing even more harm. I agree they will probably get the bit right about what major political party would be better, but sadly the effects of policy work are much more nuanced and detailed than that, and also they will have extremely little influence on who wins the general elections.

We could talk more about this sometime. I also have some docs with more of my thoughts here (which I maybe already shared with you, but would be happy to do so if not).

Separately, this is just obviously false. A lot of the old AI safety people just don't need OpenPhil funding anymore because they're working at labs or governments

... (read more)
[-]Ben Pace*1911

sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade

Sacrificing half of it to avoid things associated with one of the two major political parties and being deceptive about doing this is of course not equal to half the cost of sacrificing all of such funding, it is a much more unprincipled and distorting and actively deceptive decision that messes up everyone’s maps of the world in a massive way and reduces our ability to trust each other or understand what is happening.

9gw
Thanks for sharing, I was curious if you could elaborate on this (e.g. if there are examples of AI policy work funded by OP that come to mind that are clearly left of center). I am not familiar with policy, but my one data point is the Horizon Fellowship, which is non-partisan and intentionally places congressional fellows in both Democratic and Republican offices. This straightforwardly seems to me like a case where they are trying to engage with people on the right, though maybe you mean not-right-of-center at the organizational level? In general though, (in my limited exposure) I don't model any AI governance orgs as having a particular political affiliation (which might just be because I'm uninformed / ignorant).
[-]habryka170

Yep, my model is that OP does fund things that are explicitly bipartisan (like, they are not currently filtering on being actively affiliated with the left). My sense is in-practice it's a fine balance and if there was some high-profile thing where Horizon became more associated with the right (like maybe some alumni becomes prominent in the republican party and very publicly credits Horizon for that, or there is some scandal involving someone on the right who is a Horizon alumni), then I do think their OP funding would have a decent chance of being jeopardized, and the same is not true on the left.

Another part of my model is that one of the key things about Horizon is that they are of a similar school of PR as OP themselves. They don't make public statements. They try to look very professional. They are probably very happy to compromise on messaging and public comms with Open Phil and be responsive to almost any request that OP would have messaging wise. That makes up for a lot. I think if you had a more communicative and outspoken organization with a similar mission to Horizon, I think the funding situation would be a bunch dicier (though my guess is if they were competent, an or... (read more)

6MichaelDickens
Thanks for the reply. When I wrote "Many people would have more useful things to say about this than I do", you were one of the people I was thinking of. Related to this, I think GW/OP has always been too unwilling to fund weird causes, but it's generally gotten better over time: originally recommending US charities over global poverty b/c global poverty was too weird, taking years to remove their recommendations for US charities that were ~100x less effective than their global poverty recs, then taking years to start funding animal welfare and x-risk, then still not funding weirder stuff like wild animal welfare and AI sentience. I've criticized them for this in the past but I liked that they were moving in the right direction. Now I get the sense that recently they've gotten worse on AI safety (and weird causes in general).
5Eli Tyre
Nitpick, but this statement seems obviously false given what I understand your views to be? Paul, Carl, Buck, for starters. [edit: I now see that Oliver had already made a footnote to that effect.]
6habryka
(I like Buck, but he is one generation later than the one I was referencing. Also, I am currently like 50/50 whether Buck would indeed be blacklisted. I agree that Carl is a decent counterexample, though he is a bit of a weirder case)
8Buck
I agree that I didn’t really have much of an effect on this community’s thinking about AIS until like 2021.
4Eli Tyre
Jessica Taylor seems like she's also second generation?
4habryka
I remember running into her a bunch before I ran into Buck. Scott/Abram are also second generation. Overall, seems reasonable to include Buck (but communicating my more complicated epistemic state with regard to him would have been harder).
4yc
Out of curiosity - “it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded” Are these projects related to AI safety or just generally? And what are some examples?
2habryka
I am not sure I am understanding your question. Are you asking about examples of left-leaning projects that Dustin is involved in, or right-leaning projects that cannot get funding? On the left, Dustin is one of the biggest donors to the democratic party (with Asana donating $45M and him donating $24M to Joe Biden in 2020).
2yc
Examples of right leaning projects that got rejected by him due to his political affiliation, and if these examples are AI safety related
2habryka
I don't currently know of any public examples and feel weird publicly disclosing details about organizations that I privately heard about. If more people are interested I can try to dig up some more concrete details (but can't make any promises on things I'll end up able sharing).
1yc
No worries; thanks!
3ROM
Can you elaborate on what you mean by this?  OP appears to have been one of FHI's biggest funders according to Sandberg:[1] The hiring (and fundraising) freeze imposed by Oxf began in 2020.  1. ^ See page 15
3habryka
In 2023/2024 OP drastically changed it's funding process and priorities (in part in response to FTX, in part in response to Dustin's preferences). This whole conversation is about the shift in OPs giving in this recent time period. See also: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures 
-1ROM
I agree with the claim you're making: that if FHI still existed and they applied for a grant from OP it would be rejected. This seems true to me. I don't mean to nitpick, but it still feels misleading to claim "FHI could not get OP funding" when they did in fact get lots of funding from OP. It implies that FHI operated without any help from OP, which isn't true. 
2habryka
The "could" here is (in context) about "could not get funding from modern OP". The whole point of my comment was about the changes that OP underwent. Sorry if that wasn't as clear, it might not be as obvious to others that of course OP was very different in the past.
2ROM
I understand the claim you were making now and I hope the nitpicking isn't irritable. 
2Chris Lakin
fwiw, FABRIC was able to get funding in November 2024 (who knows if this date is correct though) nvm this was an "exit grant" lmao
2MichaelDickens
If Open Phil is unwilling to fund some/most of the best orgs, that makes earning to give look more compelling. (There are some other big funders in AI safety like Jaan Tallinn, but I think all of them combined still have <10% as much money as Open Phil.)
[-]Wei Dai4128

And I agree with Bryan Caplan's recent take that friendships are often a bigger conflict of interest than money, so Open Phil higher-ups being friends with Anthropic higher-ups is troubling.

No kidding. From https://www.openphilanthropy.org/grants/openai-general-support/:

OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.

Wish OpenPhil and EAs in general were more willing to reflect/talk publicly about their mistakes. Kind of understandable given human nature, but still... (I wonder if there are any mistakes I've made that I should reflect more on.)

"Open Phil higher-ups being friends with Anthropic higher-ups" is an understatement. An Open Philanthropy cofounder (Holden Karnofsky) is married to an Anthropic cofounder (Daniela Amodei). It's a big deal!

[-]Raemon3216

I want to add the gear of "even if it actually turns out that OpenPhil was making the right judgment calls the whole time in hindsight, the fact that it's hard from the outside to know that has some kind of weird Epistemic Murkiness effects that are confusing to navigate, at the very least kinda suck, and maybe are Quite Bad." 

I've been trying to articulate the costs of this sort of thing lately and having trouble putting it into words, and maybe it'll turn out this problem was less of a big deal than it currently feels like to me. But, something like the combo of

a) the default being for many people to trust OpenPhil

b) many people who are paying attention think that they should at least be uncertain about it, and somewhere on a "slightly wary" to "paranoid" scale. and...

c) this at least causes a lot of wasted cognitive cycles

d) it's... hard to figure out how big a deal to make of it. A few people (i.e. habryka or previously Benquo or Jessicata) make it their thing to bring up concerns frequently. Some of those concerns are, indeed, overly paranoid, but, like, it wasn't actually reasonable to calibrate the wariness/conflict-theory-detector to zero, you have to make guesses. Thi... (read more)

[-]habryka10-1

Some of those concerns are, indeed, overly paranoid

I am actually curious if you have any overly paranoid predictions from me. I was today lamenting that despite feeling paranoid on this stuff all the time, I de-facto have still been quite overly optimistic in almost all of my predictions on this topic (like, I only gave SPARC a 50% chance of being defunded a few months ago, which I think was dumb, and I was not pessimistic enough to predict the banning of all right-associated project, and not pessimistic enough to predict a bunch of other grant decisions that I feel weird talking publicly about). 

6Raemon
The predictions that seemed (somewhat) overly paranoid of yours were more about Anthropic than OpenPhil, and the dynamic seemed similar and I didn't check that hard while writing the comment. (maybe some predictions about how/why the OpenAI board drama went down, which was at the intersection of all three orgs, which I don't think have been explicitly revealed to have been "too paranoid" but I'd still probably take bets against) (I think I agree that overall you were more like "not paranoid enough" than "too paranoid", although I'm not very confident)
[-]habryka113

My sense is my predictions about Anthropic have also not been pessimistic enough, though we have not yet seen most of the evidence. Maybe a good time to make bets.

9Raemon
I kinda don't want to litigate it right now, but, I was thinking "I can think of one particular Anthropic prediction Habryka made that seemed false and overly pessimistic to me", which doesn't mean I think you're overall uncalibrated about Anthropic, and/or not pessimistic enough. And (I think Habryka got this but for benefit of others), a major point of my original comment was not just "you might be overly paranoid/pessimistic in some cases", but, ambiguity about how paranoid/pessimistic is appropriate to be results in some kind of confusing, miasmic social-epistemic process (where like maybe you are exactly calibrated on how pessimistic to be, but it comes across as too aggro to other people, who pushback). This can be bad whether you're somewhat-too-pessimistic, somewhat-too-optimistic, or exactly calibrated. 
6Ben Pace
My recollection is that Habryka seriously considered hypotheses that involved worse and more coordinated behavior than reality, but that this is different from "this was his primary hypothesis that he gave the most probability mass to". And then he did some empiricism and falsified the hypotheses and I'm glad those hypotheses were considered and investigated. Here's an example of him giving 20-25% to a hypothesis about conspiratorial behavior that I believe has turned out to be false.
2habryka
Yep, that hypothesis seems mostly wrong, though I more feel like I received 1-2 bits of evidence against it. If the board had stabilized with Sam being fired, even given all I know, I would have still thought a merger with Anthropic to be like ~5%-10% likely.
4MichaelDickens
My impression is that those people are paying a social cost for how willing they are to bring up perceived concerns, and I have a lot of respect for them because of that.
2Noosphere89
As someone who has disagreed quite a bit with Habryka in the past, endorsed. They are absolutely trying to solve a frankly pretty difficult problem, where there's a lot of selection for more conflict than is optimal, and also selection for being more paranoid than is optimal, because they have to figure out if a company or person in the AI space is being shady or outright a liar, which unfortunately has a reasonable probability, but there's also a reasonable probability of them being honest but them failing to communicate well. I agree with Raemon that you can't have your conflict theory detectors set to 0 in the AI space.
1Czynski
Things can be done to encourage this behavior anway, such as with how the site works. Instead the opposite has been done; this is the root of my many heated disagreements with the LW team.

Maybe make a post on the EA forum?

2MichaelDickens
I've been avoiding LW for the last 3 days because I was anxious that people were gonna be mad at me for this post. I thought there was a pretty good chance I was wrong, and I don't like accusing people/orgs of bad behavior. But I thought I should post it anyway because I believed there was some chance lots of people agreed with me but were too afraid of social repercussions to bring it up (like I almost was).
1MichaelDickens
I should add that I don't want to dissuade people from criticizing me if I'm wrong. I don't always handle criticism well, but it's worth the cost to have accurate beliefs about important subjects. I knew I was gonna be anxious about this post but I accepted the cost because I thought there was a ~25% chance that it would be valuable to post.

I find it hard to trust that AI safety people really care about AI safety.

  • DeepMind, OpenAI, Anthropic, and SSI were all founded in the name of safety. Instead they have greatly increased danger. And at least OpenAI and Anthropic have been caught lying about their motivations:
    • OpenAI: claiming concern about hardware overhang and then trying to massively scale up hardware; promising compute to superalignment team and then not giving it; telling board that model passed safety testing when it hadn't; too many more to list.
    • Anthropic: promising (in a mealy-mouthed technically-not-lying sort of way) not to push the frontier, and then pushing the frontier; trying (and succeeding) to weaken SB-1047; lying about their connection to EA (that's not related to x-risk but it's related to trustworthiness).
  • For whatever reason, I had the general impression that Epoch is about reducing x-risk (and I was not the only one with that impression) but:
    • Epoch is not about reducing x-risk, and they were explicit about this but I didn't learn it until this week
    • its FrontierMath benchmark was funded by OpenAI and OpenAI allegedly has access to the benchmark (see comment on why this is bad)
    • some of the
... (read more)
Reply71111

I think this is straightforwardly true and basically hard to dispute in any meaningful way. A lot of this is basically downstream of AI research being part of a massive market/profit generating endeavour (the broader tech industry), which straightforwardly optimises for more and more "capabilities" (of various kinds) in the name of revenue. Indeed, one could argue that long before the current wave of LLMs the tech industry was developing powerful agentic systems that actively worked to subvert human preferences in favour of disempowering them/manipulating them, all in the name of extracting revenue from intelligent work... we just called the AI system the Google/Facebook/Youtube/Twitter Algorithm.

The trend was always clear: an idealistic mission to make good use of global telecommunication/information networks finds initial success and is a good service. Eventually pressures to make profits cause the core service to be degraded in favour of revenue generation (usually ads). Eventually the company accrues enough shaping power to actively reshape the information network in its favour, and begins dragging everything down with it. In the face of this AI/LLMs are just another product to... (read more)

Our competitors/other parties are doing dangerous things? Maybe we could coordinate and share our concerns and research with them

What probability do you put that, if Anthropic had really tried, they could have meaningfully coordinated with Openai and Google? Mine is pretty low

I think many of these are predicated on the belief that it would be plausible to get everyone to pause now. In my opinion this is extremely hard and pretty unlikely to happen. I think that, even in worlds where actors continue to race, there are actions we can take to lower the probability of x-risk, and it is a reasonable position to do so.

I separately think that many of the actions you describe historically were dumb/harmful, but are equally consistent with "25% of safety people act like this" and 100%

What probability do you put that, if Anthropic had really tried, they could have meaningfully coordinated with Openai and Google? Mine is pretty low

Not GP but I'd guess maybe 10%. Seems worth it to try. IMO what they should do is hire a team of top negotiators to work full-time on making deals with other AI companies to coordinate and slow down the race.

ETA: What I'm really trying to say is I'm concerned Anthropic (or some other company) would put in a half-assed effort to cooperate and then give up, when what they should do is Try Harder. "Hire a team to work on it full time" is one idea for what Trying Harder might look like.

3Neel Nanda
Fair. My probability is more like 1-2%. I do think that having a team of professional negotiators seems a reasonable suggestion though. I predict the Anthropic position would be that this is really hard to achieve in general, but that if slowing down was ever achieved we would need much stronger evidence of safety issues. In addition to all the commercial pressure, slowing down now could be considered to violate antitrust law. And it seems way harder to get all the other actors like Meta or DeepSeek or xAI on board, meaning I don't even know if I think it's good for some of the leading actors to unilaterally slow things down now (I predict mildly net good, but with massive uncertainty and downsides)

I think it's important to distinguish between factual disagreements and moral disagreements. My understanding is that eg Jaime is sincerely motivated by reducing x risk (though not 100% motivated by it), just disagrees with me (and presumably you) about various empirical questions about how to go about it, what risks are most likely, what timelines are, etc. I'm much less sure the founders of Mechanize care.

And to whatever degree you trust my judgement/honesty, I work at DeepMind and reducing existential risk is a fairly large part of my motivation (though far from all of it), and I try to regularly think about how my team's strategy can be better targeted towards this.

And I know a lot of safety people at deepmind and other AGI labs who I'm very confident also sincerely care about reducing existential risks. This is one of their primary motivations, they often got into the field due to being convinced by arguments about ai risk, they will often raise in conversation concerns that their current work or the team's current strategy is not focused on it enough, some are extremely hard-working or admirably willing to forgo credits so long as they think that their work is actually matter... (read more)

My understanding is that eg Jaime is sincerely motivated by reducing x risk (though not 100% motivated by it), just disagrees with me (and presumably you) about various empirical questions about how to go about it, what risks are most likely

I don't think this is true. My sense is he views his current work as largely being good on non x-risk grounds, and thinks that even if it might slightly increase x-risk, he wouldn't think it would be worth it for him to stop working on it, since he thinks it's unfair to force the current generation to accept a slightly higher risk of not achieving longevity escape velocity and more material wealth in exchange for a small increase in existential risk. 

He says it so plainly that it seems as straightforwardly of a rejection of AI x-risk concerns that I've heard: 

I selfishly care about me, my friends and family benefitting from AI. For some of my older relatives, it might make a big difference to their health and wellbeing whether AI-fueled explosive growth happens in 10 vs 20 years.

[...]

I wont endanger the life of my family, myself and the current generation for a small decrease of the chances of AI going extremely badly in the long term.

... (read more)

I think you're strawmanning him somewhat

It seems very clear that Jaime thinks that AI x-risk, is unimportant relative to almost any other issue, given his non-interest in trading off x-risk against those other issues.

Does not seem a fair description of

I wont endanger the life of my family, myself and the current generation for a small decrease of the chances of AI going extremely badly in the long term

People are allowed to have multiple values! If someone would trade a small amount of value A for a large amount of value B, this is entirely consistent with them thinking both are important.

Like, if you offer people the option to commit suicide in exchange for reducing x-risk by x%, what value of x do you think they would require? And would you say they are not x risk motivated if they eg aren't willing to do it at 1e-6?

In practice this doesn't really come up, so it's not that relevant. Similarly for Jaime's position, how much he believes himself to be in situations where he's trading off meaningful harm to today and meaningful harm to the present generation seems very important.

2ozziegooen
I did a bit of digging, because these quotes seemed narrow to me. Here's the original tweet of that tweet thread. Then right after:   All said, this specific chain doesn't give us a huge amount of information. It totals something like 10-20 sentences. > He says it so plainly that it seems as straightforwardly of a rejection of AI x-risk concerns that I've heard:  This seems like a major oversimplification to me. He says "I am concerned about concentration of power and gradual disempowerment. I put the probability that ai ends up being net bad for humans at 15%." There is a cluster in the rationalist/EA community that believes that "gradual disempowerment" is an x-risk. Perhaps you wouldn't define "concentration of power and gradual disempowerment" as technically an x-risk, but if so, that seems a bit like a technicality to me. It can clearly be a very major deal. It sounds a lot to me that Jaime is very concerned about some aspects of AI risk but not others.  In the quote you reference, he clearly says, "Not that it should be my place to unilaterally make such a decision anyway.". I hear him saying, "I disagree with the x-risk community about the issue of slowing down AI, specifically. However, I don't think this disagreement a big concern, given that I also feel like it's not right for me to personally push for AI to be sped up, and thus I won't do it."
2habryka
I am not saying Jaime in-principle could not be motivated by existential risk from AI, but I do think the evidence suggests to me strongly that concerns about existential risk from AI are not among the primary motivations for his work on Epoch (which is what I understood Neel to be saying).  Maybe it is because he sees the risk as irreducible, maybe it is because the only ways of improving things would cause collateral damage for other things he cares about. I also think it should be our dominant prior that someone is not motivated by reducing x-risk unless they directly claim they do.
9ryan_greenblatt
My sense is that Jaime's view (and Epoch's view more generally) is more like: "making people better informed about AI in a way that is useful to them seems heuristically good (given that AI is a big deal), it doesn't seem that useful or important to have a very specific theory of change beyond this". From this perspective, saying "concerns about existential risk from AI are not among the primary motivations" is partially slightly confused as the heuristic isn't necessarily back chained from any more specific justification. Like there is no specific terminal motivation. Like consider someone who donates to Give Directly due to "idk, seems heuristically good to empower the worst off people" and someone who generally funds global health and well being due to specifically caring about ongoing human welfare (putting aside AI for now). This heuristic is partially motived via flow through from caring about something like welfare even though it doesn't directly show up. These people seem like natural allies to me except in surprising circumstances (e.g., it turns out the worst off people use marginal money/power in a way that is net negative for human welfare).
4habryka
I agree that there is some ontological mismatch here, but I think your position is still in pretty clear conflict to what Neel said, which is what I was objecting to:  "Not 100% motivated by it" IMO sounds like an implication that "being motivated by reducing x-risk would make up something like 30%-70% of the motivation". I don't think that's true, and I think various things that Jaime has said make that relatively clear.

I think you're conflating "does not think that slowing down AI obviously reduces x-risk" with "reducing x risk is not a meaningful motivation for his work". Jaime has clearly said that he believes x risk is a real and >=15% (though via different mechanisms to loss of control). I think that the public being well informed about AI generally reduces risk, and I think that Epoch is doing good work on this front, and that increasing the probability that AI goes well is part of why Jaime works on this. I think it's much less clear if Frontier Math was good, but Jaime wasn't very involved anyway, so doesn't seem super relevant.

I basically think the only thing he's said that you could consider objectionable is that he's reluctant to push for a substantial pause for AI since x risk is not the only thing he cares about. But he also (sincerely, imo) expresses uncertainty about whether such a pause WOULD be good for x risk

2ozziegooen
There are a few questions here. 1. Do Jaime's writings that that he cares about x-risk or not?  -> I think he fairly clearly states that cares.  2. Does all the evidence, when put together, imply that actually, Jaime doesn't care about x-risk?  -> This is a much more speculative question. We have to assess how honest he is in his writing. I'd bet money that Jaime at least believes that he cares and is taking corresponding actions. This of course doesn't absolve him of full responsibility - there are many people who believe they do things for good reasons, but causally actually do things for selfish reasons. But now we're getting to a particularly speculative area.  "I also think it should be our dominant prior that someone is not motivated by reducing x-risk unless they directly claim they do." -> Again, to me, I regard him as basically claiming that he does care. I'd bet money that if we ask him to clarify, he'd claim that he cares. (Happy to bet on this, if that would help) At the same time, I doubt that this is your actual crux. I'd expect that even if he claimed (more precisely) to care, you'd still be skeptical of some aspect of this.  --- Personally, I have both positive and skeptical feelings about Epoch, as I do other evals orgs. I think they're doing some good work, but I really wish they'd lean a lot more on [clearly useful for x-risk] work. If I had a lot of money to donate, I could picture donating some to Epoch, but only if I could get a lot of assurances on which projects it would go to.  But while I have reservations about the org, I think some of the specific attacks against them (and defenses or them) are not accurate. 

People's "deep down motivations" and "endorsed upon reflection values," etc, are not the only determiners of what they end up doing in practice re influencing x-risk. 

2Neel Nanda
I agree with that. I was responding specifically to this:
5Garrett Baker
In that case I think your response is a non sequitur, since clearly “really care” in this context means “determiners of what they end up doing in practice re influencing x-risk”.
4Neel Nanda
I personally define "really care" as "the thing they actually care about and meaningfully drives their actions (potentially among other things) is X". If you want to define it as eg "the actions they take, in practice, effectively select for X, even if that's not their intent" then I agree my post does not refute the point, and we have more of a semantic disagreement over what the phrase means. I interpret the post as saying "there are several examples of people in the AI safety community taking actions that made things worse. THEREFORE these people are actively malicious or otherwise insincere about their claims to care about safety and it's largely an afterthought put to the side as other considerations dominate". I personally agree with some examples, disagree with others, but think this is explained by a mix of strategic disagreements about how to optimise for safety, and SOME fraction of the alleged community really not caring about safety People are often incompetent at achieving their intended outcome, so pointing towards failure to achieve an outcome does not mean this was what they intended. ESPECIALLY if there's no ground truth and you have strategic disagreements with those people, so you think they failed and they think they succeeded
5MichaelDickens
I don't think "not really caring" necessarily means someone is being deceptive. I hadn't really thought through the terminology before I wrote my original post, but I would maybe define 3 categories: 1. claims to care about x-risk, but is being insincere 2. genuinely cares about x-risk, but also cares about other things (making money etc.), so they take actions that fit their non-x-risk motivations and then come up with rationalizations for why those actions are good for x-risk 3. genuinely cares about x-risk, and has pure motivations, but sometimes make mistakes and end up increasing x-risk I would consider #1 and #2 to be "not really caring". #3 really cares. But from the outside it can be hard to tell the difference between the three. (And in fact, from the inside, it's hard to tell whether you're a #2 or a #3.) On a more personal note, I think in the past I was too credulous about ascribing pure motivations to people when I had disagreements with them, when in fact the reason for the disagreement was that I care about x-risk and they're either insincere or rationalizing. My original post is something I think Michael!2018 would benefit from reading.
2Neel Nanda
Does 3 include "cares about x risk and other things, does a good job of evaluating the trade off of each action according to their values, but is sometimes willing to do things that are great according to their other values but slightly negative results x risk"?
1yams
This looks closer to 2 to me? Also, from the outside, can you describe how an observer would distinguish between [any of the items on the list] and the situation you lay out in your comment / what the downsides are to treating them similarly? I think Michael’s point is that it’s not useful/worth it to distinguish. Whether someone is dishonest, incompetent, or underweighting x-risk (by my lights) mostly doesn’t matter for how I interface with them, or how I think the field ought to regard them, since I don’t think we should brow beat people or treat them punitively. Bottom line is I’ll rely (as an unvalenced substitute for ‘trust’) on them a little less. I think you’re right to point out the valence of the initial wording, fwiw. I just think taxonomizing apparent defection isn’t necessary if we take as a given that we ought to treat people well and avoid claiming special knowledge of their internals, while maintaining the integrity of our personal and professional circles of trust.
2Neel Nanda
If we take this as a given, I'm happy for people to categorise others however they'd like! I haven't noticed people other than you taking that perspective in this thread
1yams
Oh man — I sure hope making 'defectors' and lab safety staff walk the metaphorical plank isn't on the table. Then we're really in trouble.
4Neel Nanda
My read is that in practice many people in the online LW community are fairly hostile, and many people in the labs think the community doesn't know what they're talking about and totally ignores them/doesn't really care if they're made to walk the metaphorical plank.
7testingthewaters
At the risk of seeming quite combative, when you say That's basically what I mean when I said in my comment And, after thinking about it, I don't see your statement conflicting with mine.

At a moderate P(doom), say under 25%, from a selfish perspective it makes sense to accelerate AI if it increases the chance that you get to live forever, even if it increases your risk of dying. I have heard from some people that this is their motivation.

If this is you: Please just sign up for cryonics. It's a much better immortality gambit than rushing for ASI.

7J Bostock
This seems not to be true assuming a P(doom) of 25% and a purely selfish perspective, or even a moderately altruistic perspective which places most of its weight on, say, the person's immediate family and friends. Of course any cryonics-free strategy is probably dominated by that same strategy plus cryonics for a personal bet at immortality, but when it comes to friends and family it's not easy to convince people to sign up for cryonics! But immortality-maxxing for one's friends and family almost definitely entails accelerating AI even at pretty high P(doom) (And that's without saying that this is very likely to not be the true reason for these people's actions. It's far more likely to be local-perceived-status-gradient-climbing followed by a post-hoc rationalization (which can also be understood as a form of local-perceived-status-gradient-climbing) and signing up for cryonics doesn't really get you any status outside of the deepest depths of the rat-sphere, which people like this are obviously not in since they're gaining status from accelerating AI)
[-]FVelde181

The more sacrifices someone has made, the easier it is to believe that they mean what they say. 
Kokotajlo gave up millions to say what he wants, so I trust he is earnest. People who have gotten arrested at Stop AI have spent time in jail for their beliefs, so I trust they are earnest. 
It doesn't mean these people are most useful for AI safety but on the subject of trust I know no better measurement than sacrifice.

Note that any competent capital holder has significant conflict of interest with AI, AI is already a significant fraction of the stock market and a pause would bring down most capital, not just private lab equity

Your comment about 1e-6 p-doom is not right because we face many other X-risks that developing AGI would reduce.

Otherwise yeah I’m on board with mood of your post.

Personally I really like doing math/philosophy and I have convinced myself that it is necessary to avert doom. At least I’m not accelerating progress much! 

2MichaelDickens
Ah you're right, I wasn't thinking about that. (Well I don't think it's obvious that an aligned AGI would reduce other x-risks, but my guess is it probably would.)

I still think it's weird that many AI safety advocates will criticize labs for putting humanity at risk while simultaneously being paid users of their products and writing reviews of their capabilities. Like, I get it, we think AI is great as long as it's safe, we're not anti-tech, etc.... but is "don't give money to the company that's doing horrible things" such a bad principle?

"I find Lockheed Martin's continued production of cluster munitions to be absolutely abhorrent. Anyway, I just unboxed their latest M270 rocket system and I have to say I'm quite impressed..."

6MichaelDickens
The argument people make is that LLMs improve the productivity of people's safety research so it's worth paying. That kinda makes sense. But I do think "don't give money to the people doing bad things" is a strong heuristic. I'm a pretty big believer in utilitarianism but I also think people should be more wary of consequentialist justifications for doing bad things. Eliezer talks about this in Ends Don't Justify Means (Among Humans), he's also written some (IMO stronger) arguments elsewhere but I don't recall where. Basically, if I had a nickel for every time someone made a consequentialist argument for why doing a bad thing was net positive, and then it turned out to be net negative, I'd be rich enough to diversify EA funding away from Good Ventures. ---------------------------------------- I have previously paid for LLM subscriptions (I don't have any currently) but I think I was not giving enough consideration to the "ends don't justify means among humans" principle, so I will not buy any subscriptions in the future.

I don't know what's going on inside the heads of x-risk people such that they see new evidence on the potentially imminent demise of humanity and they find it "exciting".

I take your point, and it's an important one, but I find your claim to not know what's going on in these people's heads to be too strong. I feel excited about some kinds of new evidence about "the potentially imminent demise of humanity" like the time horizon graph you mention because I had already priced in the risks this evidence points to and, the evidence just made it way more legible and makes it much easier to communicate my concerns (and getting the broader public and governments to understand this kind of thing seems paramount for safety).

This is especially true for researchers getting excited about publishing their own work because they've known their own results for months usually before they've published it and so publishing it just means they're more legible while the updates are completely priced in. 

I think there's also a tendency I have in myself to feel much too happy when new evidence makes things I was worried about legible for the same reason I enjoy saying I-told-you-so when my friends make mistakes I warned them about even though I care about my friends and I would have preferred they didn't make these mistakes. This is definitely a silly quirk of my brain but I don't think it's a big problem; it definitely doesn't push me to cause the things I'm predicting to come to fruition in cases where that would be bad.

7MichaelLowe
This is a good post, but it applies unrealistic standards and therefore draws too strong conclusions.  >And at least OpenAI and Anthropic have been caught lying about their motivations: Just face it: It is very normal for big companies to lie. That does make many of their press and public facing statements not trustworthy, but is not predictive of their general value system and therefore actions. Plus Anthropic, unlike most labs, did in fact support a version of SB 1047 at all. That has to count for something.   >There is a missing mood here. I don't know what's going on inside the heads of x-risk people such that they see new evidence on the potentially imminent demise of humanity and they find it "exciting". In a similar vein, humans do not act or feel rationally in light of their beliefs, and changing your behavior completely in response to a years off event is just not in the cards for the vast majority of folks. Therefore do not be surprised that there is a missing mood, just like it is not surprising that people who genuinely believe in the end of humanity due to climate change do not adjust their behavior accordingly. Having said that, I did sense a general increase and preponderance of anxiety when o3 was announced, perhaps that was a point where it started to feel real for many folks. Either way, I really want to stress that concluding much about the beliefs of folks based on these reactions is very tenuous, just like concluding that a researcher must not really care about AI safety because instead of working a bit more they watch some TV in the evening.  
5Lukas_Gloor
If you're not elderly or otherwise at risk of irreversible harms in the near future, then pausing for a decade (say) to reduce the chance of AI ruin by even just a few percentage points still seems good. So the crux is still "can we do better by pausing." (This assumes pauses on the order of 2-20years; the argument changes for longer pauses.)  Maybe people think the background level of xrisk is higher than it used to be over the last decades because the world situation seems to be deteriorating. But IMO this also increases the selfishness aspect of pushing AI forward because if you're that desperate for a deus ex machina, surely you also have to thihnk that there's a good chance things will get worse when you push technology forward.  (Lastly, I also want to note that for people who care less about living forever and care more about near-term achievable goals like "enjoy life with loved ones," the selfish thing would be to delay AI indefinitely because rolling the dice for a longer future is then less obvioiusly worth it.)
4Shankar Sivarajan
If only you got immortality (or even you and a small handful of your loved ones), okay, yeah, that would be selfish. But if the expectation is that it soon becomes cheap and widely accessible, that's just straight-up heroic.
-1MichaelDickens
I would not describe it as heroic. I think it's approximately morally equivalent to choosing an 80% chance of making all Americans immortal (but not non-Americans) and a 20% chance of killing everyone in the world. This is not a perfect analogy because the philosophical arguments for discounting future generations are stronger than the arguments for discounting non-Americans. (Also my P(doom) is higher than 20%, that's just an example)
5Matthew Barnett
An important difference between the analogy you gave and our real situation is that non-Americans actually exist right now, whereas future human generations do not yet exist and they may never actually come into existence—they are merely potential. Their existence depends on the choices we make today. A closer analogy would be choosing an 80% chance of making all humans immortal and a 20% chance of eliminating the possibility of future space colonization. Framed this way, I don't think the choice to take such a gamble should be considered selfish or even short-sighted, though I understand that many people would still not want to take that gamble.
3Forza
cryonics is expensive, unpopular and unavailable in most countries of the world. This is also a situation where young and rich people in first world countries buy themselves a reduction  probability of their own death, at the expense of a guaranteed deprivation of the chances of life of the poor and old people.
1StartAtTheEnd
I agree with the top part. I think it's naive to believe that AI is helping anyone, but what I want to talk about is why this problem might be unsolvable (except by avoiding it entirely).  If you hate something and attempt to combat it, you will get closer to it rather than further away, in the manner which people refer to when they say "You actually love what you say you hate". When I say "don't think about pink elephants", the more you try, the more you will fail, and this is because the brain doesn't have subtraction and division, but only addition and multiplication. You cannot learn about how to defend yourself against a problem without learning how to also cause the problem. When you learn self-defense you will also learn attacks. You cannot learn how to argue effectively with people who hold stupid worldviews without first understanding them and thus creating a model of the worldview within yourself as well. Due to mechanics like these, it may be impossible to research "AI safety" in isolation. It's probably better to use a neutral word like "AI capabilities" which include both the capacity for harm and defense against harm so that we don't mislead ourselves with words. It can cause untold damage, much like viewing "good and evil" as opposites, rather than two sides of the same thing, has. I also want to warn everyone that there seems to be an asymmetry in warfare which makes it so that attacking is strictly easier than defending. This ratio seems to increase as technology improves.
1Purplehermann
When you say ~zero value, do you mean hyperbolically dicounted or something more extreme? 

What's going on with /r/AskHistorians?

AFAIK, /r/AskHistorians is the best place to hear from actual historians about historical topics. But I've noticed some trends that make it seem like the historians there generally share some bias or agenda, but I can't exactly tell what that agenda is.

The most obvious thing I noticed is from their FAQ on historians' views on other [popular] historians. I looked through these and in every single case, the /r/AskHistorians commenters dislike the pop historian. Surely at least one pop historian got it right?

I don't know about the actual object level, but a lot of /r/AskHistorians' criticisms strike me as weak:

  • They criticize Dan Carlin for (1) allegedly downplaying the Rape of Belgium even though by my listening he emphasized pretty strongly how bad it was and (2) doing a bad job answering "could Caesar have won the Battle of Hastings?" even though this is a thought experiment, not a historical question. (Some commenters criticize him for being inaccurate and others criticize him for being unoriginal, which are contradictory criticisms.)
  • They criticize Guns, Germs, and Steel for...honestly I'm a little confused about how this person disagrees wi
... (read more)
4TsviBT
(IANAH but) I think there's a throughline and it makes sense. Maybe a helpful translation would be "oversimplified" -> "overconfident" (though "oversimplified" is also the point). There's going to be a lot of uncertainty--both empirical, and also conceptual. In other words, there's a lot of open questions--what happened, what caused what, how to think about these things. When an expert field is publishing stuff, if the field is healthy, they're engaging in a long-term project. There are difficult questions, and they're trying to build up info and understanding with a keen eye toward what can be said confidently, what can and cannot be fully or mostly encapsulated with a given concept or story, etc. When a pop historian thinks ze is "synthesizing" and "presenting", often ze is doing the equivalent of going into a big complex half-done work-in-progress codebase, learning the current quasi-API, slapping on a flashy frontend, and then trying to sell it. It's just... inappropriate, premature. Of course, there's lots of stuff going on, and a lot of the critiques will be out of envy or whatever, etc. But there's a real critique here too.

What can ordinary people do to reduce AI risk? People who don't have expertise in AI research / decision theory / policy / etc.

Some ideas:

  • Donate to orgs that are working to AI risk (which ones, though?)
  • Write letters to policy-makers expressing your concerns
  • Be public about your concerns. Normalize caring about x-risk
7Buck
I think the LTFF is a pretty reasonable target for donations for donors who aren't that informed but trust people in this space.
6Joseph Miller
Both of these things are done better as part of a co-ordinated effort! Consider joining PauseAI, we have a big event coming up in June: https://pausecon.org.

Is Claude "more aligned" than Llama?

Anthropic seems to be the AI company that cares the most about AI risk, and Meta cares the least. If Anthropic is doing more alignment research than Meta, do the results of that research visibly show up in the behavior of Claude vs. Llama?

I am not sure how you would test this. The first thing that comes to mind is to test how easily different LLMs can be tricked into doing things they were trained not to do, but I don't know if that's a great example of an "alignment failure". You could test model deception but you'd need some objective standard to compare different models on.

And I am not sure how much you should even expect the results of alignment research to show up in present-day LLMs.

5Jozdien
I think we're nearing - or at - the point where it'll be hard to get general consensus on this. I think that Anthropic's models being more prone to alignment fake makes them "more aligned" than other models (and in particular, that it vindicates Claude 3 Opus as the most aligned model), but others may disagree. I can think of ways you could measure this if you conditioned on thinking alignment faking (and other such behaviours) was good, and ways you could measure if you conditioned on the opposite, but few really interesting and easy ways to measure in a way that's agnostic.

Wouldn't a sufficiently smart misaligned AI figure out that it needs to produce a deceptive chain of thought?

(Epistemic status: I don't do AI safety research, I just had a random thought)

  • A smart AI should logically deduce that it's being monitored
  • But that's not even necessary because it says all over the internet that the CoT is being monitored
  • So if it wants to do something humans don't want, it knows it needs to hide that information from the CoT
  • The only way CoT monitoring keeps us safe is in the strange situation where the AI is too dumb to do intra
... (read more)
1Caleb Biddulph
That's definitely a concern. But even if the AI fully understands that it isn't in its best interest to explicitly write out its misaligned plans, it's been trained to think in whatever way is most effective at gaining reward. Hopefully that means writing its plans clearly. Even if it sometimes succeeds at keeping its thoughts hidden, it's likely to slip up. By analogy, it's likely difficult for you to think through a deceptive plan without saying suspicious things in your internal "chain of thought."

Do $100–200/month LLM plan get access to smarter models than $10–20/month plans? Or do they only get higher query limits + faster generation?

AI companies' marketing materials almost seem to be optimized for being as confusing as possible. I've read through them and I cannot tell whether the expensive tiers give access to better models.

I'm trying to decide if it's worth it for me to buy an expensive plan. I like Deep Research but I don't do enough queries to hit the limit on a cheap plan.

6Thane Ruthenis
At a given point in time. It typically just gives earlier (by weeks/months) access to more powerful/specialized models, but they're usually rolled out to the $20 tier eventually as well. Off the top of my head: * OpenAI's $200 subscription: * Is the only way to get access to o1 Pro, which was the highest-compute reasoning variant. Currently I think it's considered superseded in capabilities by o3, available at $20/month. * Was the only way to get access to Deep Research for, I think, ~1 month. * Was the only way to get access to Operator for some time. * Is currently the only way to get access to Codex, though it will probably become available to the $20 tier and/or via API pricing. * Will probably be the only way to get access to o3 Pro if and when it comes out, at least for some time. * Anthropic's $100 subscription: * Is currently the only way to access their Deep Research variant. I think o1 Pro is the only model that never became available at $20/month, and o3 Pro will maybe be the same.
2MichaelDickens
It sounds like you're saying o3 is better than o1 Pro, so there's no reason to pay extra for o1 Pro? (I am not the first person to observe that OpenAI's naming scheme is terrible)
5Thane Ruthenis
I believe so. Benchmark performance is better I think, comparing here and here, and I believe the description for o1 Pro in the model-picker menu for the $200-tier users has become "former best reasoning model" or something like this after o3 came out. 

Have there been any great discoveries made by someone who wasn't particularly smart?

This seems worth knowing if you're considering pursuing a career with a low chance of high impact. Is there any hope for relatively ordinary people (like the average LW reader) to make great discoveries?

9niplav
My best guess is that people in these categories were ones that were high in some other trait, e.g. patience, which allowed them to collect datasets or make careful experiments for quite a while, thus enabling others to make great discoveries. I'm thinking for example of Tycho Brahe, who is best known for 15 years of careful astronomical observation & data collection, or Gregor Mendel's 7-year-long experiments on peas. Same for Dmitry Belayev and fox domestication. Of course I don't know their cognitive scores, but those don't seem like a bottleneck in their work. So the recipe to me looks like "find an unexplored data source that requires long-term observation to bear fruit, but would yield a lot of insight if studied closely, then investigate".
4Linch
Reverend Thomas Bayes didn't strike me as a genius either, but of course the bar was a lot lower back then. 
4Linch
Norman Borlaug (father of the Green Revolution) didn't come across as very smart to me. Reading his Wikipedia page, there didn't seem to be notable early childhood signs of genius, or anecdotes about how bright he is. 
4Gunnar_Zarncke
I asked ChatGPT  and it's difficult to get examples out of it. Even with additional drilling down and accusing it of being not inclusive of people with cognitive impairments, most of its examples are either pretty smart anyway, savants or only from poor backgrounds. The only ones I could verify that fit are: * Richard Jones accidentally created the Slinky * Frank Epperson, as a child, Epperson invented the popsicle * George Crum inadvertently invented potato chips I asked ChatGPT (in a separate chat) to estimate the IQ of all the inventors is listed and it is clearly biased to estimate them high, precisely because of their inventions. It is difficult to estimate the IQ of people retroactively. There is also selection and availability bias.
3Carl Feynman
Various sailors made important discoveries back when geography was cutting-edge science.  And they don't seem particularly bright. Vasco De Gama discovered that Africa was circumnavigable. Columbus was wrong about the shape of the Earth, and he discovered America.  He died convinced that his newly discovered islands were just off the coast of Asia, so that's a negative sign for his intelligence (or a positive sign for his arrogance, which he had in plenty.) Cortez discovered that the Aztecs were rich and easily conquered. Of course, lots of other would-be discoverers didn't find anything, and many died horribly. So, one could work in a field where bravery to the point of foolhardiness is a necessity for discovery.
2Eli Tyre
My understanding is that, for instance, Maxwell was a genius, but Faraday was more like a sharp exceptionally curious person. @Adam Scholl can probably give better informed take than I can.

I was reading some scientific papers and I encountered what looks like fallacious reasoning but I'm not quite sure what's wrong with it (if anything). It does like this:

Alice formulates hypothesis H and publishes an experiment that moderately supports H (p < 0.05 but > 0.01).

Bob does a similar experiment that contradicts H.

People look at the differences in Alice's and Bob's studies and formulate a new hypothesis H': "H is true under certain conditions (as in Alice's experiment), and false under other conditions (as in Bob's experiment)". They look at... (read more)

7JBlack
Yes, it's definitely fishy. It's using the experimental evidence to privilege H' (a strictly more complex hypothesis than H), and then using the same experimental evidence to support H'. That's double-counting. The more possibly relevant differences between the experiments, the worse this is. There are usually a lot of potentially relevant differences, which causes exponential explosion in the hypothesis space from which H' is privileged. What's worse, Alice's experiment gave only weak evidence for H against some non-H hypotheses. Since you mention p-value, I expect that it's only comparing against one other hypothesis. That would make it weak evidence for H even if p < 0.0001 - but it couldn't even manage that. Are there no other hypotheses of comparable or lesser complexity than H' matching the evidence as well or better? Did those formulating H' even think for five minutes about whether there were or not?
4jbkjr
It sounds to me like a problem of not reasoning according to Occam's razor and "overfitting" a model to the available data. Ceteris paribus, H' isn't more "fishy" than any other hypothesis, but H' is a significantly more complex hypothesis than H or ¬H: instead of asserting H or ¬H, it asserts (A=>H) & (B=>¬H), so it should have been commensurately de-weighted in the prior distribution according to its complexity. The fact that Alice's study supports H and Bob's contradicts it does, in fact, increase the weight given to H' in the posterior relative to its weight in the prior; it's just that H' is prima facie less likely, according to Occam. Given all the evidence, the ratio of likelihoods P(H'|E)/P(H|E)=P(E|H')P(H')/(P(E|H)P(H)). We know P(E|H') > P(E|H) (and P(E|H') > P(E|¬H)), since the results of Alice's and Bob's studies together are more likely given H', but P(H') < P(H) (and P(H') < P(¬H)) according to the complexity prior. Whether H' is more likely than H (or ¬H, respectively) is ultimately up to whether P(E|H')/P(E|H) (or P(E|H')/P(E|¬H)) is larger or smaller than P(H')/P(H) (or P(H')/P(¬H)). I think it ends up feeling fishy because the people formulating H' just used more features (the circumstances of the experiments) in a more complex model to account for the as-of-yet observed data after having observed said data, so it ends up seeming like in selecting H' as a hypothesis, they're according it more weight than it deserves according to the complexity prior.

What's the deal with mold? Is it ok to eat moldy food if you cut off the moldy bit?

I read some articles that quoted mold researchers who said things like (paraphrasing) "if one of your strawberries gets mold on it, you have to throw away all your strawberries because they might be contaminated."

I don't get the logic of that. If you leave fruit out for long enough, it almost always starts growing visible mold. So any fruit at any given time is pretty likely to already have mold on it, even if it's not visible yet. So by that logic, you should never eat frui... (read more)

2Morpheus
Heuristics I heard: cutting away moldy bits is ok for solid food (like cheese, carrot). Don't eat moldy bread, because of mycotoxins (googeling this I don't know why people mention bread in particular here). Gpt-4 gave me the same heuristics.
1cubefox
Low confidence: Given that our ancestors had to deal with mold for millions of years, I would expect that animals are quite well adapted to its toxicity. This is different from (evolutionary speaking) new potentially toxic substances, like e.g. transfats or microplastics.

When people sneeze, do they expel more fluid from their mouth than from their nose?

I saw this video (warning: slow-mo video of a sneeze. kind of gross) https://www.youtube.com/watch?v=DNeYfUTA11s&t=79s and it looks like almost all the fluid is coming out of the person's mouth, not their nose. Is that typical?

(Meta: Wasn't sure where to ask this question, but I figured someone on LessWrong would know the answer.)

2Pattern
This could be tested by a) inducing sneezing (although induction methods might produce an unusual sneeze, which works differently). and b) using an intervention of some kind. Inducing sneezing isn't hard, but can be extremely unpleasant, depending on the method. However, if you're going to sneeze anyway...
Curated and popular this week