This is a special post for quick takes by MichaelDickens. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I get the sense that we can't trust Open Philanthropy to do a good job on AI safety, and this is a big problem. Many people would have more useful things to say about this than I do, but I still feel that I should say something.

My sense comes from:

  • Open Phil is reluctant to do anything to stop the companies that are doing very bad things to accelerate the likely extinction of humanity, and is reluctant to fund anyone who's trying to do anything about it.
  • People at Open Phil have connections with people at Anthropic, a company that's accelerating AGI and has a track record of (plausibly-deniable) dishonesty. Dustin Moskovitz has money invested in Anthropic, and Open Phil employees might also stand to make money from accelerating AGI. And I agree with Bryan Caplan's recent take that friendships are often a bigger conflict of interest than money, so Open Phil higher-ups being friends with Anthropic higher-ups is troubling.

A lot of people (including me as of ~one year ago) consider Open Phil the gold standard for EA-style analysis. I think Open Phil is actually quite untrustworthy on AI safety (but probably still good on other causes).

I don't know what to do with this information.

[-]habryka20571

Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia inducing. I am quite confident of the broad trends here, but it's definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that includes plausible deniability and defensibility.

I agree with this, but I actually think the issues with Open Phil are substantially broader. As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don't think this is because of any COIs; it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any Open Phil-funded AI policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will s... (read more)

[-]Akash9028

Adding my two cents as someone who has a pretty different lens from Habryka but has still been fairly disappointed with OpenPhil, especially in the policy domain. 

Relative to Habryka, I am generally more OK with people "playing politics". I think it's probably good for AI safety folks to exhibit socially-common levels of "playing the game"– networking, finding common ground, avoiding offending other people, etc. I think some people in the rationalist sphere have a very strong aversion to some things in this genre, and labels like "power-seeking" and "deceptive" get thrown around too liberally. I also think I'm pretty OK with OpenPhil deciding it doesn't want to fund certain parts of the rationalist ecosystem (and probably less bothered than Habryka about how their comms around this weren't direct/clear).

In that sense, I don't penalize OP much for trying to "play politics" or for breaking deontological norms. Nonetheless, I still feel pretty disappointed with them, particularly for their impact on comms/policy. Some thoughts here:

  • I agree with Habryka that it is quite bad that OP is not willing to fund right-coded things. Even many of the "bipartisan" things funded by OP are quite l
... (read more)

This should be a top-level post.

8MichaelDickens
What are the norms here? Can I just copy/paste this exact text and put it into a top-level post? I got the sense that a top-level post should be more well thought out than this but I don't actually have anything else useful to say. I would be happy to co-author a post if someone else thinks they can flesh it out. Edit: Didn't realize you were replying to Habryka, not me. That makes more sense.

It feels sorta understandable to me (albeit frustrating) that OpenPhil faces these assorted political constraints.  In my view this seems to create a big unfilled niche in the rationalist ecosystem: a new, more right-coded, EA-adjacent funding organization could optimize itself for being able to enter many of those blacklisted areas with enthusiasm.

If I were a billionaire, I would love to put together a kind of "completion portfolio" to complement some of OP's work.  Rationality community building, macrostrategy stuff, AI-related advocacy to try and influence republican politicians, plus a big biotechnology emphasis focused on intelligence enhancement, reproductive technologies, slowing aging, cryonics, gene drives for eradicating diseases, etc.  Basically it seems like there is enough edgy-but-promising stuff out there (like studying geoengineering for climate, or advocating for charter cities, or just funding oddball substack intellectuals to do their thing) that you could hope to create a kind of "alt-EA" (obviously IRL it shouldn't have EA in the name) where you batten down the hatches, accept that the media will call you an evil villain mastermind forever, and hop... (read more)

[-]Buck212

not even ARC has been able to get OP funding (in that case because of COIs between Paul and Ajeya)

As context, note that OP funded ARC in March 2022.

I think OP has funded almost everyone I have listed here in 2022 (directly or indirectly), so I don't really think that is evidence of anything (though it is a bit more evidence for ARC because it means the COI is overcomable).

6David Hornbein
Hm, this timing suggests the change could be a consequence of Karnofsky stepping away from the organization. Which makes sense, now that I think about it. He's by far the most politically strategic leader Open Philanthropy has had, so with him gone, it's not shocking they might revert towards standard risk-averse optionality-maxxing foundation behavior.

Isn't it just the case that OpenPhil just generally doesn't fund that many technical AI safety things these days? If you look at OP's team on their website, they have only two technical AI safety grantmakers. Also, you list all the things OP doesn't fund, but what are the things in technical AI safety that they do fund? Looking at their grants, it's mostly MATS and METR and Apollo and FAR and some scattered academics I mostly haven't heard of. It's not that many things. I have the impression that the story is less like "OP is a major funder in technical AI safety, but unfortunately they blacklisted all the rationalist-adjacent orgs and people" and more like "AI safety is still a very small field, especially if you only count people outside the labs, and there are just not that many exciting funding opportunities, and OpenPhil is not actually a very big funder in the field". 

[-]Buck2114

A lot of OP's funding to technical AI safety goes to people outside the main x-risk community (e.g. applications to Ajeya's RFPs).

Open Phil is definitely by far the biggest funder in the field.  I agree that their technical grantmaking has been limited over the past few years (though still on the order of $50M/yr, I think), but they also fund a huge amount of field-building and talent-funnel work, as well as a lot of policy stuff (I wasn't constraining myself to technical AI Safety; the people listed have been as influential, if not more so, on public discourse and policy).

AI Safety is still relatively small, but more like $400M/yr small. The primary other employers/funders in the space these days are big capability labs. As you can imagine, their funding does not have great incentives either.

6David Matolcsi
Yeah, I agree, and I don't know that much about OpenPhil's policy work, and their fieldbuilding seems decent to me, though maybe not from your perspective. I just wanted to flag that many people (including myself until recently) overestimate how big a funder OP is in technical AI safety, and I think it's important to flag that they actually have pretty limited scope in this area.
5habryka
Yep, agree that this is a commonly overlooked aspect (and one that I think has also contributed to the labs becoming the dominant force in AI Safety research, which has been quite sad).

what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations

is there a list of these somewhere/details on what happened?

[-]habryka5516

You can see some of the EA Forum discussion here: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures?commentId=RQX56MAk6RmvRqGQt 

The current list of areas that I know about is: 

  • Anything to do with the rationality community ("Rationality community building")
  • Anything to do with moral relevance of digital minds
  • Anything to do with wild animal welfare and invertebrate welfare
  • Anything to do with human genetic engineering and reproductive technology
  • Anything that is politically right-leaning

There are a bunch of other domains where OP hasn't had an active grantmaking program but where my guess is most grants aren't possible: 

  • Most forms of broad public communication about AI (where you would need to align very closely with OP goals to get any funding)
  • Almost any form of macrostrategy work of the kind that FHI used to work on (e.g. Eternity in Six Hours and stuff like that)
  • Anything about acausal trade or cooperation in large worlds (and more broadly anything that is kind of weird game theory)

Huh, are there examples of right leaning stuff they stopped funding? That's new to me

6Xodarap
You said: […] I'm wondering if you have a list of organizations where Open Phil would have funded their other work, but because they withdrew from funding part of the organization they decided to withdraw totally. This feels very importantly different from Good Ventures choosing not to fund certain cause areas (and I think you agree, which is why you put that footnote).
[-]habryka15-1

I don't have a long list, but I know this is true for Lightcone, SPARC, ESPR, any of the Czech AI-Safety/Rationality community building stuff, and I've heard a bunch of stories since then from other organizations that got pretty strong hints from Open Phil that if they start working in an area at all, they might lose all funding (and also, the "yes, it's more like a blacklist, if you work in these areas at all we can't really fund you, though we might make occasional exceptions if it's really only a small fraction of what you do" story was confirmed to me by multiple OP staff, so I am quite confident in this, and my guess is OP staff would be OK with confirming to you as well if you ask them).

1Xodarap
Thanks!
[-]evhub10-54

Imo sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade to me, at least depending on the actual numbers. As an extreme case, I would sacrifice all current OpenPhil AI safety funding in exchange for OpenPhil getting to pick which major party wins every US presidential election until the singularity.

Concretely, the current presidential election seems extremely important to me from an AI safety perspective; I expect that importance to only go up in future elections, and I think OpenPhil is correct about which candidates are best from an AI safety perspective. Furthermore, I don't think independent AI safety funding is that important anymore; models are smart enough now that most of the work to do in AI safety is directly working with them, most of that is happening at labs, and probably the most important other stuff to do is governance and policy work, which this strategy seems helpful for.

I don't know the actual marginal increase in political influence that they're buying here, but my guess would be that the numbers pencil and OpenPhil is making the right call.

I cannot think of anyone

... (read more)

Furthermore, I don't think independent AI safety funding is that important anymore; models are smart enough now that most of the work to do in AI safety is directly working with them, most of that is happening at labs,

It might be the case that most of the quality weighted safety research involving working with large models is happening at labs, but I'm pretty skeptical that having this mostly happen at labs is the best approach and it seems like OpenPhil should be actively interested in building up a robust safety research ecosystem outside of labs.

(Better model access seems substantially overrated in its importance and large fractions of research can and should happen with just prompting or on smaller models. Additionally, at the moment, open weight models are pretty close to the best models.)

(This argument is also locally invalid at a more basic level. Just because this research seems to be mostly happening at large AI companies (which I'm also more skeptical of I think) doesn't imply that this is the way it should be and funding should try to push people to do better stuff rather than merely reacting to the current allocation.)

7evhub
Yeah, I think that's a pretty fair criticism, but afaict that is the main thing that OpenPhil is still funding in AI safety? E.g. all the RFPs that they've been doing, I think they funded Jacob Steinhardt, etc. Though I don't know much here; I could be wrong.
[-]kave103

Wasn't the relevant part of your argument like, "AI safety research outside of the labs is not that good, so that's a contributing factor among many to it not being bad to lose the ability to do safety funding for governance work"? If so, I think that "most of OpenPhil's actual safety funding has gone to building a robust safety research ecosystem outside of the labs" is not a good rejoinder to "isn't there a large benefit to building a robust safety research ecosystem outside of the labs?", because the rejoinder is focusing on relative allocations within "(technical) safety research", and the complaint was about the allocation between "(technical) safety research" vs "other AI x-risk stuff".

[-]habryka4318

Imo sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade to me, at least depending on the actual numbers. As an extreme case, I would sacrifice all current OpenPhil AI safety funding in exchange for OpenPhil getting to pick which major party wins every US presidential election until the singularity.

Yeah, I currently think Open Phil's policy activism has been harmful for the world, and will probably continue to be, so by my lights this is causing harm with the justification of causing even more harm. I agree they will probably get the bit right about what major political party would be better, but sadly the effects of policy work are much more nuanced and detailed than that, and also they will have extremely little influence on who wins the general elections.

We could talk more about this sometime. I also have some docs with more of my thoughts here (which I maybe already shared with you, but would be happy to do so if not).

Separately, this is just obviously false. A lot of the old AI safety people just don't need OpenPhil funding anymore because they're working at labs or governments

... (read more)

sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade

Sacrificing half of it to avoid things associated with one of the two major political parties, and being deceptive about doing this, is of course not equal to half the cost of sacrificing all of such funding; it is a much more unprincipled and distorting and actively deceptive decision that messes up everyone’s maps of the world in a massive way and reduces our ability to trust each other or understand what is happening.

8gw
Thanks for sharing, I was curious if you could elaborate on this (e.g. if there are examples of AI policy work funded by OP that come to mind that are clearly left of center). I am not familiar with policy, but my one data point is the Horizon Fellowship, which is non-partisan and intentionally places congressional fellows in both Democratic and Republican offices. This straightforwardly seems to me like a case where they are trying to engage with people on the right, though maybe you mean not-right-of-center at the organizational level? In general though, (in my limited exposure) I don't model any AI governance orgs as having a particular political affiliation (which might just be because I'm uninformed / ignorant).

Yep, my model is that OP does fund things that are explicitly bipartisan (like, they are not currently filtering on being actively affiliated with the left). My sense is that in practice it's a fine balance, and if there were some high-profile thing where Horizon became more associated with the right (like maybe some alumnus becomes prominent in the republican party and very publicly credits Horizon for that, or there is some scandal involving someone on the right who is a Horizon alumnus), then I do think their OP funding would have a decent chance of being jeopardized, and the same is not true on the left.

Another part of my model is that one of the key things about Horizon is that they are of a similar school of PR as OP themselves. They don't make public statements. They try to look very professional. They are probably very happy to compromise on messaging and public comms with Open Phil and be responsive to almost any request that OP would have messaging-wise. That makes up for a lot. I think if you had a more communicative and outspoken organization with a similar mission to Horizon, the funding situation would be a bunch dicier (though my guess is if they were competent, an or... (read more)

6MichaelDickens
Thanks for the reply. When I wrote "Many people would have more useful things to say about this than I do", you were one of the people I was thinking of. Related to this, I think GW/OP has always been too unwilling to fund weird causes, but it's generally gotten better over time: originally recommending US charities over global poverty b/c global poverty was too weird, taking years to remove their recommendations for US charities that were ~100x less effective than their global poverty recs, then taking years to start funding animal welfare and x-risk, then still not funding weirder stuff like wild animal welfare and AI sentience. I've criticized them for this in the past but I liked that they were moving in the right direction. Now I get the sense that recently they've gotten worse on AI safety (and weird causes in general).
5Eli Tyre
Nitpick, but this statement seems obviously false given what I understand your views to be? Paul, Carl, Buck, for starters. [edit: I now see that Oliver had already made a footnote to that effect.]
6habryka
(I like Buck, but he is one generation later than the one I was referencing. Also, I am currently like 50/50 whether Buck would indeed be blacklisted. I agree that Carl is a decent counterexample, though he is a bit of a weirder case)
8Buck
I agree that I didn’t really have much of an effect on this community’s thinking about AIS until like 2021.
4Eli Tyre
Jessica Taylor seems like she's also second generation?
4habryka
I remember running into her a bunch before I ran into Buck. Scott/Abram are also second generation. Overall, seems reasonable to include Buck (but communicating my more complicated epistemic state with regard to him would have been harder).
4yc
Out of curiosity - “it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded” Are these projects related to AI safety or just generally? And what are some examples?
2habryka
I am not sure I am understanding your question. Are you asking about examples of left-leaning projects that Dustin is involved in, or right-leaning projects that cannot get funding? On the left, Dustin is one of the biggest donors to the democratic party (with Asana donating $45M and him donating $24M to Joe Biden in 2020).
2yc
Examples of right leaning projects that got rejected by him due to his political affiliation, and if these examples are AI safety related
2habryka
I don't currently know of any public examples and feel weird publicly disclosing details about organizations that I privately heard about. If more people are interested I can try to dig up some more concrete details (but can't make any promises about what I'll end up being able to share).
1yc
No worries; thanks!
3ROM
Can you elaborate on what you mean by this? OP appears to have been one of FHI's biggest funders, according to Sandberg (see page 15). The hiring (and fundraising) freeze imposed by Oxford began in 2020.
3habryka
In 2023/2024 OP drastically changed its funding process and priorities (in part in response to FTX, in part in response to Dustin's preferences). This whole conversation is about the shift in OP's giving in this recent time period. See also: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures
-1ROM
I agree with the claim you're making: that if FHI still existed and they applied for a grant from OP it would be rejected. This seems true to me. I don't mean to nitpick, but it still feels misleading to claim "FHI could not get OP funding" when they did in fact get lots of funding from OP. It implies that FHI operated without any help from OP, which isn't true. 
2habryka
The "could" here is (in context) about "could not get funding from modern OP". The whole point of my comment was about the changes that OP underwent. Sorry if that wasn't as clear, it might not be as obvious to others that of course OP was very different in the past.
0ROM
I understand the claim you were making now, and I hope the nitpicking isn't irritating.
2MichaelDickens
If Open Phil is unwilling to fund some/most of the best orgs, that makes earning to give look more compelling. (There are some other big funders in AI safety like Jaan Tallinn, but I think all of them combined still have <10% as much money as Open Phil.)
[-]Wei Dai4126

And I agree with Bryan Caplan's recent take that friendships are often a bigger conflict of interest than money, so Open Phil higher-ups being friends with Anthropic higher-ups is troubling.

No kidding. From https://www.openphilanthropy.org/grants/openai-general-support/:

OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.

Wish OpenPhil and EAs in general were more willing to reflect/talk publicly about their mistakes. Kind of understandable given human nature, but still... (I wonder if there are any mistakes I've made that I should reflect more on.)

"Open Phil higher-ups being friends with Anthropic higher-ups" is an understatement. An Open Philanthropy cofounder (Holden Karnofsky) is married to an Anthropic cofounder (Daniela Amodei). It's a big deal!

[-]Raemon3215

I want to add the gear of "even if it actually turns out that OpenPhil was making the right judgment calls the whole time in hindsight, the fact that it's hard from the outside to know that has some kind of weird Epistemic Murkiness effects that are confusing to navigate, at the very least kinda suck, and maybe are Quite Bad." 

I've been trying to articulate the costs of this sort of thing lately and having trouble putting it into words, and maybe it'll turn out this problem was less of a big deal than it currently feels like to me. But, something like the combo of

a) the default being for many people to trust OpenPhil

b) many people who are paying attention think that they should at least be uncertain about it, and somewhere on a "slightly wary" to "paranoid" scale, and...

c) this at least causes a lot of wasted cognitive cycles

d) it's... hard to figure out how big a deal to make of it. A few people (e.g. habryka, or previously Benquo or Jessicata) make it their thing to bring up concerns frequently. Some of those concerns are, indeed, overly paranoid, but, like, it wasn't actually reasonable to calibrate the wariness/conflict-theory-detector to zero; you have to make guesses. Thi... (read more)

9habryka
I am actually curious if you have any overly paranoid predictions from me. I was lamenting today that despite feeling paranoid on this stuff all the time, I have de facto still been quite overly optimistic in almost all of my predictions on this topic (like, I only gave SPARC a 50% chance of being defunded a few months ago, which I think was dumb, and I was not pessimistic enough to predict the banning of all right-associated projects, and not pessimistic enough to predict a bunch of other grant decisions that I feel weird talking publicly about).
6Raemon
The predictions that seemed (somewhat) overly paranoid of yours were more about Anthropic than OpenPhil, and the dynamic seemed similar and I didn't check that hard while writing the comment. (maybe some predictions about how/why the OpenAI board drama went down, which was at the intersection of all three orgs, which I don't think have been explicitly revealed to have been "too paranoid" but I'd still probably take bets against) (I think I agree that overall you were more like "not paranoid enough" than "too paranoid", although I'm not very confident)

My sense is my predictions about Anthropic have also not been pessimistic enough, though we have not yet seen most of the evidence. Maybe a good time to make bets.

7Raemon
I kinda don't want to litigate it right now, but, I was thinking "I can think of one particular Anthropic prediction Habryka made that seemed false and overly pessimistic to me", which doesn't mean I think you're overall uncalibrated about Anthropic, and/or not pessimistic enough. And (I think Habryka got this but for benefit of others), a major point of my original comment was not just "you might be overly paranoid/pessimistic in some cases", but, ambiguity about how paranoid/pessimistic is appropriate to be results in some kind of confusing, miasmic social-epistemic process (where like maybe you are exactly calibrated on how pessimistic to be, but it comes across as too aggro to other people, who pushback). This can be bad whether you're somewhat-too-pessimistic, somewhat-too-optimistic, or exactly calibrated. 
6Ben Pace
My recollection is that Habryka seriously considered hypotheses that involved worse and more coordinated behavior than reality, but that this is different from "this was his primary hypothesis that he gave the most probability mass to". And then he did some empiricism and falsified the hypotheses and I'm glad those hypotheses were considered and investigated. Here's an example of him giving 20-25% to a hypothesis about conspiratorial behavior that I believe has turned out to be false.
2habryka
Yep, that hypothesis seems mostly wrong, though I more feel like I received 1-2 bits of evidence against it. If the board had stabilized with Sam being fired, even given all I know, I would have still thought a merger with Anthropic to be like ~5%-10% likely.
4MichaelDickens
My impression is that those people are paying a social cost for how willing they are to bring up perceived concerns, and I have a lot of respect for them because of that.
2Noosphere89
As someone who has disagreed quite a bit with Habryka in the past: endorsed. They are absolutely trying to solve a frankly pretty difficult problem, where there's a lot of selection for more conflict than is optimal, and also selection for being more paranoid than is optimal, because they have to figure out whether a company or person in the AI space is being shady or an outright liar (which unfortunately has a reasonable probability), but there's also a reasonable probability of them being honest and just failing to communicate well. I agree with Raemon that you can't have your conflict theory detectors set to 0 in the AI space.

Maybe make a post on the EA forum?

2MichaelDickens
I've been avoiding LW for the last 3 days because I was anxious that people were gonna be mad at me for this post. I thought there was a pretty good chance I was wrong, and I don't like accusing people/orgs of bad behavior. But I thought I should post it anyway because I believed there was some chance lots of people agreed with me but were too afraid of social repercussions to bring it up (like I almost was).
1MichaelDickens
I should add that I don't want to dissuade people from criticizing me if I'm wrong. I don't always handle criticism well, but it's worth the cost to have accurate beliefs about important subjects. I knew I was gonna be anxious about this post but I accepted the cost because I thought there was a ~25% chance that it would be valuable to post.

What's going on with /r/AskHistorians?

AFAIK, /r/AskHistorians is the best place to hear from actual historians about historical topics. But I've noticed some trends that make it seem like the historians there generally share some bias or agenda, though I can't exactly tell what that agenda is.

The most obvious thing I noticed is from their FAQ on historians' views on other [popular] historians. I looked through these and in every single case, the /r/AskHistorians commenters dislike the pop historian. Surely at least one pop historian got it right?

I don't know about the actual object level, but a lot of /r/AskHistorians' criticisms strike me as weak:

  • They criticize Dan Carlin for (1) allegedly downplaying the Rape of Belgium even though by my listening he emphasized pretty strongly how bad it was and (2) doing a bad job answering "could Caesar have won the Battle of Hastings?" even though this is a thought experiment, not a historical question. (Some commenters criticize him for being inaccurate and others criticize him for being unoriginal, which are contradictory criticisms.)
  • They criticize Guns, Germs, and Steel for...honestly I'm a little confused about how this person disagrees wi
... (read more)
4TsviBT
(IANAH but) I think there's a throughline and it makes sense. Maybe a helpful translation would be "oversimplified" -> "overconfident" (though "oversimplified" is also the point). There's going to be a lot of uncertainty--both empirical, and also conceptual. In other words, there's a lot of open questions--what happened, what caused what, how to think about these things. When an expert field is publishing stuff, if the field is healthy, they're engaging in a long-term project. There are difficult questions, and they're trying to build up info and understanding with a keen eye toward what can be said confidently, what can and cannot be fully or mostly encapsulated with a given concept or story, etc. When a pop historian thinks ze is "synthesizing" and "presenting", often ze is doing the equivalent of going into a big complex half-done work-in-progress codebase, learning the current quasi-API, slapping on a flashy frontend, and then trying to sell it. It's just... inappropriate, premature. Of course, there's lots of stuff going on, and a lot of the critiques will be out of envy or whatever, etc. But there's a real critique here too.

I was reading some scientific papers and I encountered what looks like fallacious reasoning, but I'm not quite sure what's wrong with it (if anything). It goes like this:

Alice formulates hypothesis H and publishes an experiment that moderately supports H (p < 0.05 but > 0.01).

Bob does a similar experiment that contradicts H.

People look at the differences in Alice's and Bob's studies and formulate a new hypothesis H': "H is true under certain conditions (as in Alice's experiment), and false under other conditions (as in Bob's experiment)". They look at... (read more)

7JBlack
Yes, it's definitely fishy. It's using the experimental evidence to privilege H' (a strictly more complex hypothesis than H), and then using the same experimental evidence to support H'. That's double-counting. The more possibly relevant differences between the experiments, the worse this is. There are usually a lot of potentially relevant differences, which causes exponential explosion in the hypothesis space from which H' is privileged. What's worse, Alice's experiment gave only weak evidence for H against some non-H hypotheses. Since you mention p-value, I expect that it's only comparing against one other hypothesis. That would make it weak evidence for H even if p < 0.0001 - but it couldn't even manage that. Are there no other hypotheses of comparable or lesser complexity than H' matching the evidence as well or better? Did those formulating H' even think for five minutes about whether there were or not?
4jbkjr
It sounds to me like a problem of not reasoning according to Occam's razor and "overfitting" a model to the available data. Ceteris paribus, H' isn't more "fishy" than any other hypothesis, but H' is a significantly more complex hypothesis than H or ¬H: instead of asserting H or ¬H, it asserts (A=>H) & (B=>¬H), so it should have been commensurately de-weighted in the prior distribution according to its complexity. The fact that Alice's study supports H and Bob's contradicts it does, in fact, increase the weight given to H' in the posterior relative to its weight in the prior; it's just that H' is prima facie less likely, according to Occam. Given all the evidence, the ratio of posteriors P(H'|E)/P(H|E) = P(E|H')P(H')/(P(E|H)P(H)). We know P(E|H') > P(E|H) (and P(E|H') > P(E|¬H)), since the results of Alice's and Bob's studies together are more likely given H', but P(H') < P(H) (and P(H') < P(¬H)) according to the complexity prior. Whether H' ends up more likely than H (or ¬H, respectively) comes down to whether the likelihood ratio P(E|H')/P(E|H) (or P(E|H')/P(E|¬H)) is larger or smaller than the inverse prior ratio P(H)/P(H') (or P(¬H)/P(H')). I think it ends up feeling fishy because the people formulating H' just used more features (the circumstances of the experiments) in a more complex model to account for the already-observed data after having observed said data, so it ends up seeming like, in selecting H' as a hypothesis, they're according it more weight than it deserves according to the complexity prior.
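To make the complexity-penalty point concrete, here is a minimal numerical sketch. All hypothesis labels, priors, and likelihoods below are made-up illustrative numbers (not drawn from any real study): H' gets the best fit to the combined evidence but a penalized prior, and we check whether it actually overtakes H.

```python
# Toy Bayesian comparison of H, not-H, and the conjunctive hypothesis
# H' = "H under Alice's conditions, not-H under Bob's".
# All priors and likelihoods are illustrative guesses, not real estimates.

priors = {"H": 0.45, "notH": 0.45, "Hprime": 0.10}  # complexity prior penalizes H'

# P(evidence | hypothesis), where the evidence is
# "Alice's study supported H and Bob's study contradicted it".
likelihoods = {
    "H": 0.8 * 0.2,       # Alice's result expected, Bob's a fluke
    "notH": 0.2 * 0.8,    # Alice's result a fluke, Bob's expected
    "Hprime": 0.8 * 0.8,  # H' predicts both results
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: round(p / total, 3) for h, p in unnormalized.items()}
print(posteriors)  # {'H': 0.346, 'notH': 0.346, 'Hprime': 0.308}
```

With these made-up numbers, H' gains a lot of ground relative to its prior but still doesn't beat H or ¬H; it only wins if its likelihood advantage exceeds the complexity penalty, which is exactly the P(E|H')/P(E|H) vs. P(H)/P(H') comparison above. The double-counting worry is that people who coin H' after seeing both studies tend to skip that prior penalty entirely.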

Have there been any great discoveries made by someone who wasn't particularly smart?

This seems worth knowing if you're considering pursuing a career with a low chance of high impact. Is there any hope for relatively ordinary people (like the average LW reader) to make great discoveries?

8niplav
My best guess is that people in these categories were ones that were high in some other trait, e.g. patience, which allowed them to collect datasets or make careful experiments for quite a while, thus enabling others to make great discoveries. I'm thinking for example of Tycho Brahe, who is best known for 15 years of careful astronomical observation & data collection, or Gregor Mendel's 7-year-long experiments on peas. Same for Dmitri Belyaev and fox domestication. Of course I don't know their cognitive scores, but those don't seem like a bottleneck in their work. So the recipe to me looks like "find an unexplored data source that requires long-term observation to bear fruit, but would yield a lot of insight if studied closely, then investigate".
4Linch
Reverend Thomas Bayes didn't strike me as a genius either, but of course the bar was a lot lower back then. 
4Linch
Norman Borlaug (father of the Green Revolution) didn't come across as very smart to me. Reading his Wikipedia page, there didn't seem to be notable early childhood signs of genius, or anecdotes about how bright he is. 
4Gunnar_Zarncke
I asked ChatGPT, and it's difficult to get examples out of it. Even with additional drilling down and accusing it of not being inclusive of people with cognitive impairments, most of its examples are either pretty smart anyway, savants, or only from poor backgrounds. The only ones I could verify that fit are: Richard Jones accidentally created the Slinky; Frank Epperson, as a child, invented the popsicle; George Crum inadvertently invented potato chips. I asked ChatGPT (in a separate chat) to estimate the IQ of all the inventors it listed, and it is clearly biased to estimate them high, precisely because of their inventions. It is difficult to estimate the IQ of people retroactively. There is also selection and availability bias.
3Carl Feynman
Various sailors made important discoveries back when geography was cutting-edge science.  And they don't seem particularly bright. Vasco da Gama discovered that Africa was circumnavigable. Columbus was wrong about the size of the Earth, and he discovered America.  He died convinced that his newly discovered islands were just off the coast of Asia, so that's a negative sign for his intelligence (or a positive sign for his arrogance, which he had in plenty.) Cortez discovered that the Aztecs were rich and easily conquered. Of course, lots of other would-be discoverers didn't find anything, and many died horribly. So, one could work in a field where bravery to the point of foolhardiness is a necessity for discovery.
2Eli Tyre
My understanding is that, for instance, Maxwell was a genius, but Faraday was more like a sharp, exceptionally curious person. @Adam Scholl can probably give a better-informed take than I can.

What's the deal with mold? Is it ok to eat moldy food if you cut off the moldy bit?

I read some articles that quoted mold researchers who said things like (paraphrasing) "if one of your strawberries gets mold on it, you have to throw away all your strawberries because they might be contaminated."

I don't get the logic of that. If you leave fruit out for long enough, it almost always starts growing visible mold. So any fruit at any given time is pretty likely to already have mold on it, even if it's not visible yet. So by that logic, you should never eat frui... (read more)

2Morpheus
Heuristics I heard: cutting away moldy bits is ok for solid food (like cheese or carrots). Don't eat moldy bread, because of mycotoxins (googling this, I don't know why people mention bread in particular here). GPT-4 gave me the same heuristics.
1cubefox
Low confidence: Given that our ancestors had to deal with mold for millions of years, I would expect that animals are quite well adapted to its toxicity. This is different from (evolutionarily speaking) new potentially toxic substances, like e.g. trans fats or microplastics.

When people sneeze, do they expel more fluid from their mouth than from their nose?

I saw this video (warning: slow-mo video of a sneeze. kind of gross) https://www.youtube.com/watch?v=DNeYfUTA11s&t=79s and it looks like almost all the fluid is coming out of the person's mouth, not their nose. Is that typical?

(Meta: Wasn't sure where to ask this question, but I figured someone on LessWrong would know the answer.)

2Pattern
This could be tested by a) inducing sneezing (although induction methods might produce an unusual sneeze, which works differently), and b) using an intervention of some kind. Inducing sneezing isn't hard, but can be extremely unpleasant, depending on the method. However, if you're going to sneeze anyway...