Austin Chen

Hey there~ I'm Austin, currently building https://manifold.markets. Always happy to meet LessWrong people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold!

I mean, it's obviously very dependent on your personal finance situation, but I'm using $100k as an order-of-magnitude proxy for "about a year's salary". I think it's very coherent to give up a year of marginal salary in exchange for finding the love of your life, rather than like $10k or ~1 month's salary.

Of course, the world is full of mispricings, and currently you can save a life for something like $5k. I think these are both good trades to make, and most people should have a portfolio that consists of both "life partners" and "impact from lives saved" and crucially not put all their investment into just one or the other.

Mm, I think it's hard to get optimal credit allocation, but easy to get a half-baked allocation, or to just see that it's directionally way too low? Like sure, maybe it's unclear whether Hinge deserves 1% or 10% or ~100% of the credit, but at a $100k valuation of a marriage, one should be excited to pay $1k to a dating app.
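
To make that arithmetic concrete, here's a tiny illustrative sketch (the $100k valuation and the credit shares are just the hypothetical numbers from above, not real data):

```python
# Back-of-envelope: what a "fair" success payment to a dating app looks
# like under different credit-allocation assumptions. Purely illustrative.

MATCH_VALUE = 100_000  # hypothetical valuation of finding a life partner ($)

for credit_share in (0.01, 0.10, 1.00):
    bounty = MATCH_VALUE * credit_share
    print(f"{credit_share:.0%} credit -> ${bounty:,.0f} success payment")
```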

Like, I think matchmaking is very similarly shaped to the problem of recruiting employees, but there, corporations are more locally rational about spending money than individuals are, and can do things like pay $10k referral bonuses, or offer external recruiters 20% of the referred hire's first-year salary.

Basically: I don't blame founders or companies for following their incentive gradients, I blame individuals/society for being unwilling to assign reasonable prices to important goods.

I think the badness of dating apps is downstream of poor norms around impact attribution for matches made. Even though relationships and marriages are extremely valuable, individual people are not in the habit of paying anything like that value to anyone.

Like, $100k or a year's salary seems like a very cheap value to assign to your life partner. If dating apps could rely on a payment of that size when they succeed, then I think there could be enough funding for at least a good small business. But I've never heard of anyone actually paying anywhere near that. (Myself included - though I did make a retroactive $1k payment to the person who organized the conference where I met my wife.)

I think keeper.ai tries to solve this with large bounties on dates/marriages; it's one of the things I wish we had pushed for more on Manifold Love. It seems possible to build one for the niche of the EA/rationalist community; Manifold Love, the checkboxes thing, and dating docs all got pretty good adoption for not that much execution.

(Also: be the change! I think building out an OKC clone is one of the easiest "hello world" software projects one could imagine; Claude could definitely make a passable version in a day. Then you'll discover a bunch of hard stuff around getting users, but it sure could be a good exercise. See the sketch below for a sense of scale.)
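
Here's a minimal sketch of what the core of such an MVP might look like (all names and fields here are hypothetical; a real app would add persistence, auth, and a frontend):

```python
# Core of an OKCupid-style MVP: profiles, likes, and mutual-match
# detection, kept in memory. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    bio: str
    likes: set[str] = field(default_factory=set)  # user_ids this person liked

class MatchService:
    def __init__(self) -> None:
        self.profiles: dict[str, Profile] = {}

    def add_profile(self, user_id: str, bio: str) -> None:
        self.profiles[user_id] = Profile(user_id, bio)

    def like(self, liker: str, liked: str) -> bool:
        """Record a like; return True if it completes a mutual match."""
        self.profiles[liker].likes.add(liked)
        return liker in self.profiles[liked].likes

svc = MatchService()
svc.add_profile("alice", "Enjoys prediction markets")
svc.add_profile("bob", "Enjoys forecasting")
svc.like("alice", "bob")           # one-sided so far -> False
print(svc.like("bob", "alice"))    # mutual match -> True
```

The hard part, as noted, isn't this code - it's everything around it: users, trust, and match quality.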


Thanks for forwarding my thoughts!

I'm glad your team is equipped to do small, quick grants - from where I am on the outside, it's easy to accidentally think of OpenPhil as a single funding monolith, so I'm always grateful for directional updates that help the community understand how to better orient to y'all.

I agree that 3 months seems reasonable when $500k+ is at stake! (I think, just skimming the application, I mentally rounded off "3 months or less" to "about 3 months", as a kind of learned heuristic about how orgs relate to the timelines they publish.)

As another data point, from the Survival and Flourishing Fund our turnaround (from application to decision) was about 5 months this year, for an ultimately $90k grant (we applied for up to $1.2m). I think they were unusually slow this year due to changing over their processes; in past years it's been closer to 2-3 months.

Our own philosophy at Manifund does emphasize "moving money quickly", almost to a sacred level. This comes from watching programs like Fast Grants and the Future Fund, and also from our own lived experience as grantees. For grantees, knowing 1 month sooner that money is coming often means being able to start hiring and executing 1 month sooner - and the impact of executing even 1 day sooner can sometimes be immense (see: https://www.1daysooner.org/about/).

@Matt Putz thanks for supporting Gavin's work and letting us know; I'm very happy to hear that my post helped you find this!

I also encourage others to check out OP's RFPs. I don't know about Gavin, but I was peripherally aware of this RFP, and it wasn't obvious to me that Gavin should have considered applying, for these reasons:

  1. Gavin's work seems aimed internally, towards existing EA folks, while this RFP's media/comms examples (at a glance) seem to be aimed externally, at public-facing outreach
  2. I'm not sure what typical grant size the OP RFP is targeting, but my cached heuristic is that OP tends to fund projects looking for $100k+, and that smaller projects should look elsewhere (eg through EAIF or LTFF), due to grantmaker capacity constraints on OP's side
  3. Relatedly, filling out an OP RFP application seems somewhat time-consuming and burdensome (eg somewhere between 3 hours and 2 days), so I think many grantees might not consider doing so unless asking for large amounts
  4. Also, the RFP form seems to indicate a turnaround time of 3 months, which might have seemed too slow for a project like Gavin's

I'm evidently wrong on all these points, given that OP is going to fund Gavin's project - which is great! So I'm listing them in the spirit of feedback. Some easy wins to encourage smaller projects to apply might be to update the RFP page to (1) list some example grants and grant sizes that were sourced through it, and (2) describe how much time you expect an applicant to take to fill out the form (something EA Funds does, which I appreciate, even if I invariably take much more time than they state).

> Do you not know who the living Pope is, while still believing he's the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?

I understand that the current pope is Pope Francis, but I know much, much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky than about the pope's. I don't feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can't name my senator or representative and barely know what Biden stands for, but I think I'm reasonably American.

> All the people I know who worked on those trips (either as an organiser or as a volunteer) don't think it helped their epistemics at all, compared to e.g. reading the literature on development economics.

I went on one of those trips as a middle schooler (to Mexico, not Africa). I don't know that it helped my epistemics much, but I did get, like, a visceral experience of what life is like for someone in a third-world country - one that I wouldn't have gotten otherwise, and that no amount of reading research literature would replicate.

I don't literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!) I do think there's an overreliance on consuming research and data, and an underreliance on just doing things and having reality give you feedback.

> Insofar as you're thinking I said bad people, please don't let yourself make that mistake, I said bad values.

I appreciate you drawing the distinction! The bit about "bad people" was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.

> There's a lot of massively impactful difference in culture and values

Mm, I think if the question is "what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, and achievements?", I would assign credit in a ratio of ~1:3 between differences in values held by individuals and differences in systems - where systems are, roughly: how the organizations are set up, and how funding and information flow through the ecosystem.

(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievements in the first place is an impact-based, EA-style frame, and the thing that Ben cares about is more like "what accounts for the differences in their philosophies, or in the gestalt of what it feels like to be in each movement"; I feel like I'm lowkey failing an ITT here...)

Mm, I basically agree that:

  • there are real value differences between EA folks and rationalists
  • good intentions do not substitute for good outcomes

However:

  • I don't think differences in values explain much of the differences in results - sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value-aligned
  • I'm pushing back against Tsvi's claims that "some people don't care" or that "EA recruiters would consciously choose 2 zombies over 1 agent" - I think ascribing bad intentions to individuals ends up pretty mindkilly

Basically, insofar as EA is screwed up, it's mostly caused by bad systems, not bad people, as far as I can tell.

Mm, I'm extremely skeptical that the inner experience of an EA college organizer, or of the CEA Groups team, is usefully modeled as "I want recruits at all costs". I predict that if you talked to one and asked them about it, you'd find the same.

I do think that it's easy to accidentally Goodhart or be unreflective about the outcomes of pursuing a particular policy -- but I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.

Some notes from the transcript:

> I believe there are ways to recruit college students responsibly. I don't believe the way EA is doing it really has a chance to be responsible. I would say, the way EA is doing it can't filter and inform the way healthy recruiting needs to. And they're funneling people into something that naivete hurts you in. I think aggressive recruiting is bad for both the students and for EA itself.

Enjoyed this point -- I would guess that the feedback loop from EA college recruiting is super long and weakly aligned. Those in charge of setting recruiting strategy (eg the CEA Groups team, and then university organizers) don't see the downstream impacts of their choices, unlike in a startup, where you work directly with your hires and quickly see whether your choices were good or bad.

Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.

> Seattle EA watched a couple of the animal farming suffering documentaries. And everyone was of course horrified. But not everyone was ready to just jump on "let's give this up entirely forever". So we started doing more research, and I posted about a farm a couple hours away that did live tours, and that seemed like a reasonable thing to learn - like, a limited but useful thing.

Definitely think that on the margin, more "directly verifying base reality with your own eyes" would be good in EA circles. Eg at one point, I was very critical of those mission trips to Africa where high schoolers spend a week digging a well; "obviously you should just send cash!" But now I'm much more sympathetic.

This also stings a bit for Manifund; like 80% of what we fund is AI safety, but I don't really have much ability to personally verify that the stuff we funded is any good.

> The natural life cycle of movements and institutions is to get captured and be pretty undifferentiated from other movements in their larger cultural context. They just get normal, because normal is there for a reason and normal is easiest. And if you want to do better than that, if you want to keep high epistemics - because normal does not prioritize epistemics - you need to be actively fighting for it, and bringing a high amount of skill to it. I can't tell you if EA is degrading at like 5 percent a year or 25 percent a year; I can tell you that it is not self-correcting enough to escape this trap.

I think not enforcing an "in or out" boundary is a big contributor to this degradation -- like, majorly successful religions required all kinds of sacrifice and

> What I think is more likely than EA pivoting is a handful of people launch a lifeboat and recreate a high integrity version of EA.

It feels like AI safety is the best current candidate for this, though it is also much less cohesive, and not a direct successor in a bunch of ways. I too have been wondering lately what "post-EA" looks like.

> I hear that as: every true wizard must test the integrity of their teacher or of their school - Hogwarts, whatever the thing is. The reason you don't get to graduate until you actually test the integrity of the school is because if you're just taking it on its own word, then you could become a villain.
>
> You have to respect your own moral compass to be able to be trusted.

Really liked this analogy!

> Which EA leaders do you most resonate with?
>
> I would suggest that if you don't care about the movement leaders who have any steering power, you're not in that movement.

I like this as a useful question to keep in mind, though I don't think it's totally explanatory. I think I'm reasonably Catholic, even though I don't know anything about the living Catholic leaders.

> Timothy: Give me a vision of a different world where EA would be better served by having leadership that actually was willing to own their power more.
>
> Elizabeth: Which you'll notice even Holden won't do.
>
> Timothy: Yeah, he literally doesn't want the power.
>
> Elizabeth: Yeah, none of them do. CEA doesn't want it.

I've been thinking that EA should try to elect a president: someone who is empowered but also accountable to the general membership of the movement, a Schelling person to be the face of EA. (Plus, of course, we'd get to debate stuff like optimal voting systems and enfranchisement -- my kind of catnip. A toy sketch of one option is below.)
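
To illustrate the kind of thing that debate would be about, here's a toy sketch of one classic option, instant-runoff voting (purely illustrative; the candidates and ballots are made up, and I'm not claiming this is the right system for EA):

```python
# Instant-runoff voting: each ballot ranks candidates, most-preferred
# first. Repeatedly eliminate the candidate with the fewest first-choice
# votes until someone holds a strict majority.

from collections import Counter

def instant_runoff(ballots: list[list[str]]) -> str:
    ballots = [list(b) for b in ballots]
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        loser = min(tally, key=tally.get)  # fewest first-choice votes
        ballots = [[c for c in b if c != loser] for b in ballots]

print(instant_runoff([
    ["alice", "bob"], ["alice", "carol"],
    ["bob", "carol"], ["carol", "bob"], ["carol", "bob"],
]))  # bob is eliminated first; carol then wins 3-2
```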
