All of Austin Chen's Comments + Replies

I think credit allocation is extremely important to study and get right, because it tells you who to trust, who to grant resources to. For example, I think much of the wealth of modern society is downstream of sensible credit allocation between laborers, funders, and corporations in the form of equity and debt, allowing successful entrepreneurs and investors to have more funding to reinvest into good ideas. Another (non-monetary) example is authorship in scientific papers; there, correct credit allocation helps people in the field understand which research... (read more)

Makes sense, thanks.

FWIW, I really appreciated that y'all posted this writeup about mentor selection -- choosing folks for impactful, visible, prestigious positions is a whole can of worms, and I'm glad to have more public posts explaining your process & reasoning.

Curious, is the list of advisors public?

4Ryan Kidd
Not currently. We thought that we would elicit more honest ratings of prospective mentors from advisors, without fear of public pressure or backlash, if we kept the list of advisors internal to our team, similar to anonymous peer review.

Thanks for writing this, David! Your sequence of notes on virtues is one of my favorites on this site; I often find myself coming back to them, to better understand what it might mean to eg Love. As someone who's spent a lot of time now in EA, I appreciated that this piece was especially detailed, going down all kinds of great rabbitholes. I hope to leave more substantive thoughts at some future time, but for now: thank you again for your work on this.

How does Lightcone think about credit allocation to donors vs credit to the core team? For example, from a frame of impact certs or startup equity, would you say that eg $3m raised now should translate to 10% of the credit/certs of Lightcone's total impact (for a "postmoney impact valuation" of $30m)? Or 5%, or 50%? Or how else would you frame this (eg $3m = 30% of credit for 2025)?

I worry this ask feels like an isolated demand for rigor; almost all other charities elide this question today. To be clear, I like Lightcone a lot, think they are very impactfu... (read more)

3gcghhb
IMO counterfactuals in history are not really worth studying, unless you have some special insight into them and really want to study them. Both funders and employees are ultimately influenced by memes. Human behaviour is hard to predict. It's difficult to work out, in practice, the outcome of an alternate history where a meme was less or more influential. (And yes, IMO you may want to be thinking in terms of "which memes should I fund" rather than "which projects should I fund". Actually persuading people of an idea is often scarcer than funding: you can't pay someone to actually care about an idea, let alone to care enough that they work full time on it by choice. Memes can self-replicate once you provide the initial fuel for them to get off the ground.)

I would think to approach this by figuring something like the Shapley value of the involved parties, by answering the questions "for a given amount of funding, how many people would have been willing to provide this funding if necessary" and "given an amount of funding, how many people would have been willing and able to do the work of the Lightcone crew to produce similar output."

I don't know much about how Lightcone operates, but my instinct is that the people are difficult to replace, because I don't see many other very similar projects to Lighthaven an... (read more)
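
A minimal sketch of the Shapley framing suggested above, in Python; the characteristic function and dollar figures are entirely made-up placeholders, since the real work is in estimating how replaceable the funding and the team each are:

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Exact Shapley values for a characteristic function v: frozenset -> value."""
    n = len(players)
    out = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for c in combinations(others, k):
                s = frozenset(c)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(s | {p}) - v(s))
        out[p] = total
    return out

# Made-up characteristic function: value (in arbitrary units) produced by each
# subset of {funder, team}. Money alone does little; the team alone scrapes by;
# together they produce much more.
impact = {
    frozenset(): 0,
    frozenset({"funder"}): 2,
    frozenset({"team"}): 10,
    frozenset({"funder", "team"}): 30,
}
v = lambda s: impact[frozenset(s)]

print(shapley(["funder", "team"], v))
# -> {'funder': 11.0, 'team': 19.0}: ~37% of the credit to the funder under these
#    made-up numbers; the answer is entirely driven by the counterfactual estimates.
```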

I mean, it's obviously very dependent on your personal finance situation but I'm using $100k as an order-of-magnitude proxy for "about a year's salary". I think it's very coherent to give up a year of marginal salary in exchange for finding the love of your life, rather than like $10k or ~1mo salary.

Of course, the world is full of mispricings, and currently you can save a life for something like $5k. I think these are both good trades to make, and most people should have a portfolio that consists of both "life partners" and "impact from lives saved" and crucially not put all their investment into just one or the other.

Mm I think it's hard to get optimal credit allocation, but easy to get half-baked allocation, or just see that it's directionally way too low? Like sure, maybe it's unclear whether Hinge deserves 1% or 10% or ~100% of the credit but like, at a $100k valuation of a marriage, one should be excited to pay $1k to a dating app.

Like, I think matchmaking is very similarly shaped to the problem of recruiting employees, but there, corporations are more locally rational about spending money than individuals, and can do things like pay $10k referral bonuses, or offer external recruiters 20% of the placed candidate's first-year salary.
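
Extending that analogy with a back-of-the-envelope comparison (the figures are the hypothetical ones from the comments above, not data):

```python
# Hypothetical figures from the discussion above.
marriage_value = 100_000    # "~a year's salary" valuation of finding a life partner
dating_app_payment = 1_000  # the ~1% credit people hesitate to pay a dating app
referral_bonus = 10_000     # typical corporate referral bonus for a single hire
recruiter_fee_rate = 0.20   # external recruiters: ~20% of first-year salary

# If matchmaking were priced like recruiting, the "fee" for a $100k-valued match would be:
recruiting_style_fee = recruiter_fee_rate * marriage_value
print(recruiting_style_fee)                       # 20000.0
print(recruiting_style_fee / dating_app_payment)  # 20.0 -- ~20x what people balk at paying apps
print(referral_bonus / marriage_value)            # 0.1  -- referral bonuses already imply ~10% credit
```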

1ProgramCrafter
I've started writing a small research paper on this, using a mathematical framework, and understood that I had long conflated Shapley values with ROSE values. Here's what I found, having corrected that error.

ROSE bargaining satisfies Efficiency, Pareto Optimality, Symmetry*, Maximin Dominance and Linearity - a bunch of important desiderata. Shapley values, on the other hand, don't satisfy Maximin Dominance, so someone might unilaterally reject cooperation; I'll explore the ROSE equilibrium below.

1. Subjects: people and services for finding partners.
2. By Proposition 8.2, the ROSE value remains the same if moves transferring money within the game are discarded. Thus, we can assume no money transfers.
3. By Proposition 11.3, the ROSE value for a dating service is equal to or greater than its maximin.
4. By Proposition 12.2, the ROSE value for a dating service is equal to or less than its maximum attainable value.
5. There's generally one move for a person to maximize their utility: use the dating service with the highest probability of success (or expected relationship quality) available.
6. There are generally two moves for a service: to launch or not to launch. The first involves some intrinsic motivation and feeling of goodness minus running costs; the second has a value of exactly zero.
7. For a large service, running costs (including moderation) exceed much realistic motivation. Therefore, its maximum and maximin values are both zero.
8. From (7), (3) and (4), the ROSE value for a large dating service is zero.
9. Therefore, total money transfers to a large dating service equal its total costs.

So, why yes or why no?

---

By the way, Shapley values suggest paying a significant sum! Given a relationship value of $10K (can be scaled), and four options for finding partners (0: p0=0.03 -- self-search, α: pα=0.09 -- a friend's help, β: pβ=0.10 -- dating sites, γ: pγ=0.70 -- the specialized project suggested up the comments), the Shapley-fair price per success would
-3ProgramCrafter
I don't think one can coherently value a marriage 20 times as much as a saved life ($5k, as GiveWell says)? Indeed there is more emotional attachment to a person who's your partner (i.e. who you are emotionally attached to) than to a random human in the world, but surely not that much? And if a marriage is valued at $10k, then a credit assignment of 1%/10% would make the allocation $100/$1000 - and it seems that people really want to round the former towards zero.
7Alexander Gietelink Oldenziel
(Expensive) Matchmaking services already exist - what's your reading on why they're not more popular?

Basically: I don't blame founders or companies for following their incentive gradients, I blame individuals/society for being unwilling to assign reasonable prices to important goods.

I think the bad-ness of dating apps is downstream of poor norms around impact attribution for matches made. Even though relationships and marriages are extremely valuable, individual people are not in the habit of paying that to anyone.

Like, $100k or a year's salary seems like a very cheap value to assign to your life partner. If dating apps could rely on that size of payment ... (read more)

9kave
I think the credit assignment is legit hard, rather than just being a case of bad norms. Do you disagree?

Thanks for forwarding my thoughts!

I'm glad your team is equipped to do small, quick grants - from where I am on the outside, it's easy to accidentally think of OpenPhil as a single funding monolith, so I'm always grateful for directional updates that help the community understand how to better orient to y'all.

I agree that 3 months seems reasonable when $500k+ is at stake! (I think, just skimming the application, I mentally rounded off "3 months or less" to "about 3 months", as kind of a learned heuristic on how orgs relate to timelines they publish.)

As anoth... (read more)

@Matt Putz thanks for supporting Gavin's work and letting us know; I'm very happy to hear that my post helped you find this!

I also encourage others to check out OP's RFPs. I don't know about Gavin, but I was peripherally aware of this RFP, and it wasn't obvious to me that Gavin should have considered applying, for these reasons:

  1. Gavin's work seems aimed internally towards existing EA folks, while this RFP's media/comms examples (at a glance) seem to be aimed externally for public-facing outreach
  2. I'm not sure what the typical grant size that the OP RFP is ta
... (read more)
7Matt Putz
Thanks for the feedback! I'll forward it to our team. I think I basically agree with you that from reading the RFP page, this project doesn't seem like a central example of the projects we're describing (and indeed, many of the projects we do fund through this RFP are more like the examples given on the RFP page). Some quick reactions:

* FWIW, our team generally makes a lot of grants that are <$100k (much more so than other Open Phil teams).
* I agree the application would probably take most people longer than the description that Gavin gave on Manifund. That said, I think it's still relatively lean considering the distribution of projects we fund, though I agree it's slightly long for projects as small as this one (but I think Gavin could have filled it out in <<2 days). For reference, this is our form.
* Regarding turnaround time, my guess is for this project, we would have taken significantly less than 3 months, especially if they had indicated that receiving a decision was time-sensitive. For reference, the form currently says:
* For $500k+ projects, I think a 3-month turnaround time is more defensible, though I do personally wish we generally had faster response times.

Do you not know who the living Pope is, while still believing he's the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?

I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope. I don't feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can't name my senator or representative and barely know what Biden... (read more)

3Lorenzo
That makes sense, thanks. I would say that compared to Catholicism, in EA you have much less reason to care about the movement leaders, as them having authority to rule over EA is not part of its beliefs.

For what it's worth, I've talked with several people I've met through EA who regularly "break" into factory farms[1] or who regularly work in developing countries. It's definitely possible that it should be more, but I would claim that the percentage of people doing this is much higher than baseline among people who know about EA, and I think it can have downsides for the reasons mentioned in 'Against Empathy.'

1. ^ They claim that they enter them without any breaking; I can't verify that claim, but I can verify that they have videos of themselves inside factory farms.

Insofar as you're thinking I said bad people, please don't let yourself make that mistake, I said bad values. 

I appreciate you drawing the distinction! The bit about "bad people" was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.

There's a lot of massively impactful difference in culture and values

Mm, I think if the question is "what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements" I would assign credit in the ratio of ~1:3 to di... (read more)

4Ben Pace
I can think about that question if it seems relevant, but the initial claim of Elizabeth's was "I believe there are ways to recruit college students responsibly. I don't believe the way EA is doing it really has a chance to be responsible". So I was trying to give an account of the root cause there.

Also — and I recognize that I'm saying something relatively trivial here — the root cause of a problem in a system can of course be any seemingly minor part of it. Just because I'm saying one part of the system is causing problems (the culture's values) doesn't mean I'm saying that's what's primarily responsible for the output. The current cause of a software company's current problems might be the slow speed with which PR reviews are happening, but this shouldn't be mistaken for the claim that the credit allocation for the company's success is primarily that it can do PR reviews fast.

So to repeat, I'm saying that IMO the root cause of irresponsible movement growth and ponzi-scheme-like recruitment strategies was a lack of IMO very important values like dialogue and candor and respecting other people's sense-making and courage and so on, rather than an explanation more like 'those doing recruitment had poor feedback loops so had a hard time knowing what tradeoffs to make' (my paraphrase of your suggestion).

I would have to think harder about which specific values I believe caused this particular issue, but that's my broad point.

Mm I basically agree that:

  • there are real value differences between EA folks and rationalists
  • good intentions do not substitute for good outcomes

However:

  • I don't think differences in values explain much of the differences in results - sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
  • I'm pushing back against Tsvi's claims that "some people don't care" or "EA recruiters would consciously choose 2 zombies over 1 agent" - I think ascribing bad int
... (read more)
4ChristianKl
The problem is that even small differences in values can lead to massive differences in outcomes when the difference is caring about truth while keeping the other values similar. As Elizabeth wrote, "Truthseeking is the ground in which other principles grow."
9TsviBT
Ben's responses largely cover what I would have wanted to say. But on a meta note: I wrote specifically I do also think the hypothesis is true (and it's reasonable for this thread to discuss that claim, of course). But the reason I said it that way, is that it's a relatively hard hypothesis to evaluate. You'd probably have to have several long conversations with several different people, in which you successfully listen intensely to who they are / what they're thinking / how they're processing what you say. Probably only then could you even have a chance at reasonably concluding something like "they actually don't care about X", as distinct from "they know something that implies X isn't so important here" or "they just don't get that I'm talking about X" or "they do care about X but I wasn't hearing how" or "they're defensive in this moment, but will update later" or "they just hadn't heard why X is important (but would be open to learning that)", etc.

I agree that it's a potentially mindkilly hypothesis. And because it's hard to evaluate, the implicature of assertions about it is awkward--I wanted to acknowledge that it would be difficult to find a consensus belief state, and I wanted to avoid implying that the assertion is something we ought to be able to come to consensus about right now. And, more simply, it would take substantial work to explain the evidence for the hypothesis being true (in large part because I'd have to sort out my thoughts).

For these reasons, my implied request is less like "let's evaluate this hypothesis right now", and more like "would you please file this hypothesis away in your head, and then if you're in a long conversation, on the relevant topic with someone in the relevant category, maybe try holding up the hypothesis next to your observations and seeing if it explains things or not". In other words, it's a request for more data and a request for someone to think through the hypothesis more. It's far from perfectly neutral--if so

Basically insofar as EA is screwed up, it's mostly caused by bad systems not bad people, as far as I can tell.

Insofar as you're thinking I said bad people, please don't let yourself make that mistake, I said bad values. 

There are occasional bad people like SBF but that's not what I'm talking about here. I'm talking about a lot of perfectly kind people who don't hold the values of integrity and truth-seeking as part of who they are, and who couldn't give a good account for why many rationalists value those things so much (and might well call rationalist... (read more)

Mm I'm extremely skeptical that the inner experience of an EA college organizer or CEA groups team is usefully modeled as "I want recruits at all costs". I predict that if you talked to one and asked them about it, you'd find the same.

I do think that it's easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy -- but I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.

I haven't grokked the notion of "an addiction to steam" yet, so I'm not sure whether I agree with that account, but I have a feeling that when you write "I'd encourage y'all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned" you are papering over real values differences.

Tons of EAs will tell you that honesty and integrity and truth-seeking are of course 'important', but if you observe their behavior they'll trade them off pretty harshly with PR concerns or QALYs bought or plan-changes. I think th... (read more)

Some notes from the transcript:

I believe there are ways to recruit college students responsibly. I don't believe the way EA is doing it really has a chance to be responsible. I would say, the way EA is doing it can't filter and inform the way healthy recruiting needs to. And they're funneling people into something that naivete hurts you in. I think aggressive recruiting is bad for both the students and for EA itself.

Enjoyed this point -- I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned.  T... (read more)

3Lorenzo
This might be a bit off-topic, but I'm very confused by this. I was raised Catholic, and the Wikipedia description matches my understanding of Catholicism (compared to other Christian denominations). Do you not know who the living Pope is, while still believing he's the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church? Or do you disagree with the Wikipedia and the Catholic Church definitions of the core beliefs of Catholicism?

I'm confused by this as well. All the people I know who worked on those trips (either as an organiser or as a volunteer) don't think it helped their epistemics at all, compared to e.g. reading the literature on development economics. I definitely think on-the-ground experience is extremely valuable (see this recent comment and this classic post) but I think watching vegan documentaries, visiting farms, and doing voluntourism are all bad ways to improve the accuracy of your map of actual reality.
2Screwtape
Counterargument: I think there are enough different streams of EA that this would not be especially helpful. There exists a president of GiveWell. There exists a president of 80k Hours. There exists a president of Open Philanthropy. Those three organizations seem pretty close to each other, and there's a lot of others further afield. I think there would be a lot of debating, some of it acrimonious, about who counted as 'in the movement' enough to vote on a president of EA, and it would be easy to wind up with a president that nobody with a big mailing list or a pile of money actually had to listen to.

Was there ever a time when CEA was focused on truth-alignment?

It doesn't seem to me like "they used to be truth-aligned and then they did recruiting in a way that caused a value shift" is a good explanation of what happened. They always optimized for PR instead of optimizing for truth-alignment.

It's been quite a while since they edited Leverage Research out of the photos they published on their website, but the kind of organization where people consider it reasonable to edit photos that way is far from truth-aligned.

Edit:

Julia Wise messa... (read more)

don't see the downstream impacts of their choices,

This could be part of it... but I think a hypothesis that does have to be kept in mind is that some people don't care. They aren't trying to follow action-policies that lead to good outcomes, they're doing something else. Primarily, acting on an addiction to Steam. If a recruitment strategy works, that's a justification in and of itself, full stop. EA is good because it has power, more people in EA means more power to EA, therefore more people in EA is good. Given a choice between recruiting 2 agents and... (read more)

I think not enforcing an "in or out" boundary is a big contributor to this degradation -- like, majorly successful religions required all kinds of sacrifice.

I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA's move towards big-tentism degraded it significantly. On the other hand I think having sharp inclusion functions is bad for people in a movement[1], cuts the movement off from useful work done outside itself, selects for people searching for validation and belonging, and selects against thoughtful people with... (read more)


Hm, I expect the advantage of far UV is that many places where people want to spend time indoors are not already well-ventilated, or that it'd be much more expensive to modify existing HVAC setups vs just sticking a lamp on a wall.

I'm not at all familiar with the literature on safety; my understanding (based on this) is that no, we're not sure and more studies would be great, but there's a vicious cycle/chicken-and-egg problem where the lamps are expensive, so studies are expensive, so there aren't enough studies, so nobody buys lamps, so lamp companies don't stay in business, so lamps are expensive.

Another similar company I want someone to start is one that produces inexpensive, self-installable far UV lamps. My understanding is that far UV is safe to shine directly on humans (as opposed to standard UV), meaning that you don't need high ceilings or special technicians to install the lamp. However, it's a much newer technology with not very much adoption or testing, I think because of a combination of principal/agent problems and price; see this post on blockers to Far UV adoption.

Beacon does produce these $800 lamps, which are consumer friendly-ish. ... (read more)

I'm not convinced that far-UVC is safe enough around humans to be a good idea. It's strongly absorbed by proteins so it doesn't penetrate much, but:

  • It can make reactive compounds from organic compounds in air.
  • It can produce ozone, depending on the light. (That's why mercury vapor lamps block the 185nm emission.)
  • It could potentially make toxic compounds when it's absorbed by proteins in skin or eyes.
  • It definitely causes degradation of plastics.

And really, what's the point? Why not just have fans sending air to (cheap) mercury vapor lamps in a contained area where they won't hit people or plastics?

(maybe the part that seems unrealistic is the difficulty of eliciting values for the power set of possible coalitions, as generating a value for any one coalition feels like an expensive process, and the size of a power set grows exponentially with the number of players)

3James Stephen Brown
Thanks Austin, yes—the weeks I've spent trying to really understand why Shapley uses such a complicated method to calculate the possible coalitions have left me feeling that it is actually prohibitively cumbersome for most applications. It has been popular in machine learning algorithms, but faces the problem that it is computationally expensive.

I created a comparison calculator to show Shapley next to my own method, which simply weights by dividing all the explicit marginal values by the total of all the explicit marginal values and multiplying that by the grand coalition value. I found that, for realistic values being entered, it yields very similar results to Shapley, and yet is easy to calculate on a spare napkin. It also satisfies Shapley's 4 axioms, and seems more intuitive, to me at least. There might be an issue with mine in that you need the total of all marginal values (which is involved in the weighting) to find any one weighted value, whereas Shapley can be used to calculate each weighted marginal value in isolation.

Regardless... who am I to argue with a Nobel Prize winning economist? But I can't be accused of not trying to get on board :) I like the look of Quadratic Funding, perhaps for a future post.
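
A rough sketch comparing the two methods, assuming "explicit marginal value" means each player's marginal contribution to the grand coalition (the calculator linked above may define it differently); the game values here are made up for illustration:

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Exact Shapley values: each player's marginal contribution averaged over all
    possible join orders."""
    n = len(players)
    out = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for c in combinations(others, k):
                s = frozenset(c)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(s | {p}) - v(s))
        out[p] = total
    return out

def normalized_marginals(players, v):
    """The napkin method as described above (one possible reading): take each player's
    marginal contribution to the grand coalition, then rescale so the shares sum to
    the grand coalition's value."""
    grand = frozenset(players)
    marg = {p: v(grand) - v(grand - {p}) for p in players}
    total = sum(marg.values())
    return {p: v(grand) * m / total for p, m in marg.items()}

# A hypothetical 3-player game.
vals = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 0,
    frozenset("AB"): 40, frozenset("AC"): 30, frozenset("BC"): 30, frozenset("ABC"): 60,
}
v = lambda s: vals[frozenset(s)]
print(shapley(list("ABC"), v))               # approx {'A': 21.67, 'B': 26.67, 'C': 11.67}
print(normalized_marginals(list("ABC"), v))  # {'A': 22.5, 'B': 22.5, 'C': 15.0} -- similar, not identical
```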

This is extremely well produced, I think it's the best introduction to Shapley values I've ever seen. Kudos for the simple explanation and approachable designs!

(Not an indictment of this site, but with this as with other explainers, I still struggle to see how to apply Shapley values to any real world problems haha - unlike something like quadratic funding, which also sports fancy mechanism math but is much more obvious how to use)

3Austin Chen
(maybe the part that seems unrealistic is the difficulty of eliciting values for the power set of possible coalitions, as generating a value for any one coalition feels like an expensive process, and the size of a power set grows exponentially with the number of players)

Thanks for the correction! My own interaction with Lighthaven is event space foremost, then housing, then coworking; for the purposes of EA Community Choice we're not super fussed about drawing clean categories, and we'd be happy to support a space like Lighthaven for any (or all) of those categories.

For now I've just added your existing project into EA Community Choice; if you'd prefer to create a subproject with a different ask that's fine too, I can remove the old one. I think adding the existing one is a bit less work for everyone involved -- especially since your initial proposal has a lot more room for funding. (We'll figure out how to do the quadratic match correctly on our side.)
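
For reference, a minimal sketch of the textbook quadratic funding match (the standard CLR formula); Manifund's actual implementation for EA Community Choice may differ in how it caps and scales the match:

```python
from math import sqrt

def qf_match(contributions, matching_pool):
    """Textbook quadratic funding: a project's ideal total is (sum of sqrt of each
    individual contribution)^2; its match is that minus the raw donations, scaled
    down proportionally if the matching pool can't cover every project."""
    raw = {p: sum(cs) for p, cs in contributions.items()}
    ideal = {p: sum(sqrt(c) for c in cs) ** 2 for p, cs in contributions.items()}
    needed = {p: ideal[p] - raw[p] for p in contributions}
    total_needed = sum(needed.values())
    scale = min(1.0, matching_pool / total_needed) if total_needed > 0 else 0.0
    return {p: needed[p] * scale for p in contributions}

# 100 donors giving $1 each vs one donor giving $100: same raw total, very different match.
donations = {"many_small": [1.0] * 100, "one_large": [100.0]}
print(qf_match(donations, matching_pool=10_000))
# -> {'many_small': 9900.0, 'one_large': 0.0}
```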

I recommend adding "EA Community Choice" to existing applications. I've done so for you now, so the project will be visible to people browsing projects in this round, and future donations made will count for the quadratic funding match. Thanks for participating!

Welcome to the US; excited for your time at LessOnline (and maybe Manifest too?)

And re: 19., we're working on it![1]

  1. ^

    (Sorry, that was a lie too.)

One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time).


I think you're talking about me? I may have miscommunicated; I was ~zero anxious, instead trying to signal that I'd looked over the doc as requested, and poking some fun at the TODOs.

FWIW I appreciated your process for running criticism ahead of time (and especially enjoyed the back-and-forth comments on the doc; I'm noticing that those kinds of conversations on a private GDoc seem somehow more vibrant/nicer than the ones on LW or on a blog's comments.)

2Elizabeth
Well in that case I was the one who was unnecessarily anxious, so it still feels like a cost, although one well worth paying to get the information faster.

most catastrophes through both recent and long-ago history have been caused by governments

 

Interesting lens! Though I'm not sure if this is fair -- the largest things that are done tend to get done through governments, whether those things are good or bad. If you blame catastrophes like Mao's famine or Hitler's genocide on governments, you should also credit things like slavery abolition and vaccination and general decline of violence in civilized society to governments too.

I'd be interested to hear how Austin has updated regarding Sam's trustworthine

... (read more)
2Garrett Baker
I do mostly[1] credit such things to governments, but the argument is about whether companies or governments are more liable to take on very large tail risks. Not about whether governments are generally good or bad. It may be that governments just like starting larger projects than corporations. But in that case, I think the claim that a greater percentage of those end in catastrophe than similarly large projects started by corporations still looks good.

---

1. I definitely don't credit slavery abolition to governments, at least in America, since that industry was largely made possible in the first place by governments subsidizing the cost of chasing down runaway slaves. I'd guess general decline of violence is more attributable to generally increasing affluence, which has a range of factors associated with it, than government intervention so directly. But I'm largely ignorant on that particular subject. The "mostly" here means "I acknowledge governments do some good things". ↩︎
3habryka
Wait, to be clear, are you saying that you think it would be to Sam's credit to learn that he forced employees to sign NDAs by straightforwardly lying to them about their legal obligations, using extremely adversarial time pressure tactics and making very intense but vague threats?  This behavior seems really obviously indefensible. I don't have a strong take on the ScarJo thing. I don't really see how it would be to his credit, my guess is he straightforwardly lied about his intention to make the voice sound like ScarJo, but that's of course very hard to verify, and it wouldn't be a big deal either way IMO.

Ah interesting, thanks for the tips.

I use filler a lot, so I thought the um/ah removal was helpful (it actually cut down the recording by something like 10 minutes overall). It's especially good for making the transcript readable, though perhaps I could just edit the transcript without changing the audio/video.

3DanielFilan
I think I care about the video being easier to watch more than I care about missing the ums and ahs? But maybe I'm not appreciating how much umming you do.

Thanks for the feedback! I wasn't sure how much effort to put into producing this transcript (this entire podcast thing is pretty experimental); good to know you were trying to read along.

It was machine transcribed via Descript but then I did put in another ~90min cleaning it up a bit, removing filler words and correcting egregious mistranscriptions. I could have spent another hour or so to really clean it up, and perhaps will do so next time (or find some scalable way to handle it, eg outsourcing or an LLM). I think that put it in an uncanny valley of "almost readable, but quite a bad experience".

6DanielFilan
Yeah, sadly AFAICT it just takes hours of human time to produce good transcripts.

Yeah I meant her second post, the one that showed off the emails around the NDAs.

Hm, I disagree and would love to operationalize a bet/market on this somehow; one approach is something like "Will we endorse Jacob's comment as 'correct' 2 years from now?", resolved by a majority of Jacob + Austin + <neutral 3rd party>, after deliberating for ~30m.

4jacobjacob
Sure that works! Maybe use a term like "importantly misguided" instead of "correct"? (Seems easier for me to evaluate)

Starting new technical AI safety orgs/projects seems quite difficult in the current funding ecosystem. I know of many alumni who have founded or are trying to found projects who express substantial difficulties with securing sufficient funding.

Interesting - what's like the minimum funding ask to get a new org off the ground? I think something like $300k would be enough to cover ~9 mo of salary and compute for a team of ~3, and that seems quite reasonable to raise in the current ecosystem for pre-seeding an org.

2Ryan Kidd
Yeah, that amount seems reasonable, if on the low side, for founding a small org. What makes you think $300k is reasonably easy to raise in this current ecosystem? Also, I'll note that larger orgs need significantly more.

I very much appreciate @habryka taking the time to lay out your thoughts; posting like this is also a great example of modeling out your principles. I've spent copious amounts of time shaping the Manifold community's discourse and norms, and this comment has a mix of patterns I've found true in my own experience (eg the bits about case law and avoiding echo chambers), and good learnings for me (eg young/non-English speakers improve more easily).

So, I love Scott, consider CM's original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott's last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.[1]

I feel like at this point in the era of the internet, doxxing (at least, in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos's m... (read more)

7SebastianG
I for one am definitely worse off.

1. I now have to read Scott on Substack instead of SSC.
2. Scott doesn't write sweet things that could attract nasty flies anymore.

Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.

Didn't Scott quit his job as a result of this? I don't have high confidence on how bad things would have been if Scott hadn't taken costly actions to reduce the costs, but it seems that the evidence is mostly screened off by Scott doing a lot of stuff to make the consequences less bad and/or eating some of the costs in anticipation.

+1, I agree with all of this, and generally consider the SSC/NYT incident to be an example of the rationalist community being highly tribalist.

(more on this in a twitter thread, which I've copied over to LW here)

There were two issues: what is the cost of doxxing, and what is the benefit of doxxing. I think ~~the main crux~~ an equally important crux of disagreement is the latter, not the former. IMO the benefit was zero: it's not newsworthy, it brings no relevant insight, publishing it does not advance the public interest, it's totally irrelevant to the story. Here CM doesn't directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it (but of cours... (read more)

My friend Eric once proposed something similar, except where two charitable individuals just create the security directly. Say Alice and Bob both want to donate $7500 to GiveWell; instead of doing so directly, they could create a security which is "flip a coin, winner gets $15000". They do so, Alice wins, waits a year, and donates the $15000 as appreciated long-term gains and gets a tax deduction, while Bob deducts the $7500 loss.

This seems to me like it ought to work, but I've never actually tried this myself...
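
To make the intended arithmetic concrete, a sketch under naive assumptions (a 37% marginal rate, and both the loss deduction and the fair-market-value charitable deduction being allowed, which is exactly the untested part):

```python
# Naive arithmetic of the coin-flip scheme vs donating directly. Illustrative only:
# whether the loss is actually deductible, and whether the winner's position counts
# as long-term appreciated property, is the open question above.
marginal_income_rate = 0.37   # assumed marginal tax rate for both donors
stake = 7_500                 # each person's contribution
pot = 2 * stake               # $15,000 reaches GiveWell either way

# Direct route: each donates $7,500 and deducts $7,500.
direct_deductions = 2 * stake                   # $15,000

# Coin-flip route (as described above): the winner donates the appreciated $15,000
# position and deducts its full value; the loser deducts a $7,500 loss.
scheme_deductions = pot + stake                 # $22,500

extra = scheme_deductions - direct_deductions
print("extra deductions:", extra)                               # 7500
print("extra tax saved at 37%:", extra * marginal_income_rate)  # 2775.0
```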

Warning: Dialogues seem like such a cool idea that we might steal them for Manifold (I wrote a quick draft proposal).

On that note, I'd love to have a dialogue on "How do the Manifold and Lightcone teams think about their respective lanes?"

2jacobjacob
lol, sure, invited you to a dialogue on that :)

Haha, this actually seems normal and fine. We who work on prediction markets understand the nuances and implementation of these markets (what it means in mathematical terms when a market says 25%). And Kevin and Casey haven't quite gotten it yet, based on a couple of days of talking to prediction market enthusiasts.

But that's okay! Ideas are actually super hard to understand by explanation, and much easier to understand by experience (aka trial and error). My sense is that if Kevin follows up and bets on a few other markets, he'd start to wonder "h... (read more)

7cata
Never mind bettors -- part of my project for improving the world is, I want people like Casey to look at a prediction market and be like, "Oh, a prediction market. I take this probability seriously, because if it was obviously wrong, someone could come in and make money by fixing it, and then it would be right." If he doesn't understand that line of argument, then indeed, why is Casey ever going to take the probability any more seriously than a Twitter poll? I feel like right now he might have the vibe of that argument, even if he doesn't actually understand it? But I think you have to really comprehend the argument before you will take the prediction market more seriously than your own uninformed feeling about the topic, or your colleague's opinion, or one research paper you skimmed.

Yeah, I guess that's fair -- you have much more insight into the number of and viewpoints of Wave's departing employees than I do. Maybe "would be a bit surprised" would have cashed out to "<40% Lincoln ever spent 5+ min thinking about this, before this week", which I'd update a bit upwards to 50/50 based on your comment.

For context, I don't think I pushed back on (or even substantively noticed) the NDA in my own severance agreement, whereas I did push back quite heavily on the standard "assignment of inventions" thing they asked me to sign when I joined. That said, I was pretty happy with my time and trusted my boss enough to not expect for the NDA terms to matter.

4jefftk
Below you can see Elizabeth writing about how she successfully pushed back and got it removed from her agreement, so it does seem like my guess was correct! [EDIT: except nothing in her post mentions Lincoln, so probably not] (I didn't know about Elizabeth's situation before her post)

I definitely feel like "intentionally lying" is still a much much stronger norm violation than what happened here. There's like a million decisions that you have to make as a CEO and you don't typically want to spend your decisionmaking time/innovation budget on random minutiae like "what terms are included inside our severance agreements?" I would be a bit surprised if "should we include an NDA & non-disclosure" had even risen to the level of a conscious decision of Lincoln's at any point throughout Wave's history, as opposed to eg getting boilerplate legal contracts from their lawyers/an online form and then copying that for each severance agreement thereafter.

There's like a million decisions that you have to make as a CEO and you don't typically want to spend your decisionmaking time/innovation budget on random minutiae like "what terms are included inside our severance agreements?"

Technically true, but also somewhat reminds me of this.

3Adam Zerner
Epistemic status: Thinking out loud. Overall I'm rather confused about what to think here.

Yeah. And there is a Chesterton's Fence element here too. Like as CEO, if you really want to go with a non-standard legal thing, you probably would want to make sure you understand why the standard thing is what it is. Which, well, I guess you can just pay someone a few hundred dollars to tell you. Which I'd expect someone with the right kind of moral integrity to do. And I'd expect the answer to be something along the lines of:

Although, perhaps it'd take a special lawyer to actually be frank with you and acknowledge all of that. And you'd probably want to get a second and third and fourth opinion too. But still, seeking that out seems like a somewhat obvious thing to do for someone with moral integrity. And if you do in fact get the response I described above, ditching the non-disparagement seems like a somewhat obvious way to respond.

I would be a bit surprised if "should we include an NDA & non-disclosure" had even risen to the level of a conscious decision of Lincoln's at any point throughout Wave's history

I think it's pretty likely that at least one departing employee would have pushed back on it some, so I wouldn't be surprised?

Yeah fwiw I wanted to echo that Oli's statement seems like an overreaction? My sense is that such NDAs are standard issue in tech (I've signed one before myself), and that having one at Wave is not evidence of a lapse in integrity; it's the kind of thing that's very easy to just defer to legal counsel on. Though the opposite (dropping the NDA) would be evidence of high integrity, imo!

5Ben Pace
Most people in the world lie from time to time, and are aware that their friends lie. Nonetheless I don't think that Lincoln would lie to me. As a result, I trust his word.

Most CEOs get people who work for them to sign contracts agreeing that they won't share negative/critical information about the company. Nonetheless I didn't think that Lincoln would get people he works with to sign contracts not to share negative/critical information about Wave. As a result, I trusted the general perception I had of Wave.

I currently feel a bit tricked, not dissimilar to if I found out Lincoln had intentionally lied to me on some minor matter. While it is common for people to lie, it's not the relationship I thought I had here.

On the Manifund regranting program: we've received 60 requests for funding in the last month, and have committed $670k to date (or about 1/3rd of our initial budget of $1.9m). My rough guess is we could productively distribute another $1m immediately, or $10m total by the end of the year.

I'm not sure if the other tallies are as useful for us -- in contrast to an open call, a regranting program scales up pretty easily; we have a backlog of both new regrantors to onboard and existing regrantors whose budgets we could increase, and regrantors tend to generate opportunitie... (read more)

1porby
Thank you!

Thanks for the feedback! We're still trying to figure out what time period for our newsletter makes the most sense, haha.

The $400k regrantors were chosen by the donor; the $50k ones were chosen by the Manifund team.

I can't speak for other regrantors, but I'm personally very sympathetic to retroactive grants for impactful work that got less funding than was warranted; we have one example for Vipul Naik's Donations List Website and hope to publish more examples soon!

I'm generally interested in having a diverse range of regrantors; if you'd like to suggest names/make intros (either here, or privately) please let me know!

Thanks! We're likewise excited by Lightspeed Grants, and by ways we can work together (or compete!) to make the funding landscape good.

A similar calibration game I like to play with my girlfriend: one of us gives our 80% confidence interval for some quantity (eg "how long will it take us to get to the front of this line?") and the other offers to bet on the inside or the outside, at 4:1 odds.

I've learned that my 80% intervals are right like 50% of the time, almost always erring in the direction of being too optimistic...
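
A quick sketch of why 4:1 is the natural price for an 80% interval: if the stated intervals really do contain the answer 80% of the time, neither side of the bet has an edge (stakes here are arbitrary units):

```python
def ev_of_outside_bet(true_coverage, stake=1.0, odds=4.0):
    """EV for the bettor who takes 'outside' at 4:1: wins `odds * stake` when the
    quantity falls outside the stated interval, loses `stake` when it falls inside."""
    p_outside = 1.0 - true_coverage
    return p_outside * odds * stake - true_coverage * stake

for coverage in (0.5, 0.8, 0.9):
    print(f"true coverage {coverage:.0%}: outside-bettor EV = {ev_of_outside_bet(coverage):+.2f}")
# true coverage 50%: +1.50  (overconfident interval-giver, like the 50% hit rate above)
# true coverage 80%: +0.00  (perfectly calibrated: the bet is fair)
# true coverage 90%: -0.50  (underconfident: better to bet inside instead)
```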

With my wife, I do it a little differently. Once a week or so, when the kids have fallen asleep, we’ll lie in separate beds—Johanna next to the baby, and me next to the 5-year-old. We’ll both be staring at our screens. Unlike the notes I keep with Torbjörn, these notes are shared. They are a bunch of Google docs.

 

This reminds me of the note-taking culture we have at Manifold, on Notion (which I would highly recommend as an alternative to Google docs -- much more structured, easier to navigate and link between things, prettier!)

For example, while we do... (read more)

1Henrik Karlsson
That's nice! And +1 on Google docs not being ideal. (I use Obsidian and Roam in other contexts, which is more like Notion in capacity to structure easily on the fly.)

Thanks for writing this up! I've just added AI Impacts to Manifold's charity list, so you can now donate your mana there too :)

I find the move from "website" to "wiki" very interesting. We've been exploring something similar for Manifold's Help & About pages. Right now, they're backed by an internal Notion wiki and proxied via super.so, but our pages are kind of clunky; plus we'd like to open it up to allow our power users to contribute. We've been exploring existing wiki solutions (looks like AI Impacts is on DokuWiki?) but it feels like most public w... (read more)

Definitely agreed that the bottleneck is mostly having good questions! One way I often think about this is, a prediction market question conveys many bits of information about the world, while the answer tends to convey very few.

Part of the goal with Manifold is to encourage as many questions as possible, lowering the barrier to question creation by making it fast and easy and (basically) free. But sometimes this does lead to people asking questions that have wide appeal but are less useful (like the ones you identified above), whereas generating really go... (read more)

8johnswentworth
Yeah, I definitely think Manifold made the right tradeoffs (at least at current margins) in making question-creating as easy as possible. My actual hope for this post was that a few other people would read it, write down a list of questions like "how will I rank the importance of X in 3 years?", precommit to giving their own best-guess answers to the questions in a few years, and then set up a market on each question. My guess is that a relatively new person who expects to do alignment research for the next few years would be the perfect person for this, or better yet a few such people, and it would save me the effort.

Re your second point (scoring rather than ranking basketball players), Neel Nanda has the same advice, which I've found fairly helpful for all kinds of assessment tasks: https://www.neelnanda.io/blog/48-rating

It makes me much more excited for eg 5-star voting instead of approval or especially ranked choice voting.

Big fan of the concept! Unfortunately, Manifold seems too dynamic for this extension (using the extension seems to break our site very quickly) but I really like the idea of temporarily hiding our market % so you can form an opinion before placing a bet:

1Stephen Bennett
Using javascript it'd be pretty easy to dynamically hide/show numbers depending on whether or not the text is a member of a certain CSS class. In this case, the class of interest is "mb-0.5 mr-0.5 text-2xl" (which doesn't show up anywhere else on the main page).

More generally, the extension could allow users to mark numbers as "I want to see numbers that show up in places like this" or "I don't want to see numbers that show up in places like this" and then create a rule to show/hide all numbers in that class. If you create a central database of these votes, you can then extend this across users, so that when someone comes across a new website, numbers that they want to see are shown automatically.

Of course, sometimes classes aren't enough, but in the case of Manifold it seems like it'd be sufficient (users could vote on whether or not to see text with the class "text-base text-red-500", and the green equivalent for recent shifts in market probability). One downside to this approach is that it would break if the page updates its CSS classes, but if it's crowdsourced it would probably get fixed pretty quickly and only impact a minority of users.