I'm sorry you feel that way. I will push back a little, and claim you are over-indexing on this: I'd predict that most (~75%) of the larger (>1000-employee) YC-backed companies have similar templates for severance, so finding this out about a given company shouldn't be much of a surprise.

I did a bit of research to check my intuitions, and it does seem like non-disparagement is at least widely advised (for severance specifically, not general employment). E.g., I found two separate posts on the YC internal forums regarding non-disparagement within severance agreements:

"For the major silicon valley law firms (Cooley, Fenwick, OMM, etc) non disparagement is not in the confidentiality and invention assignment agreement [employment agreement], and usually is in the separation and release [severance] template."

(^ this person also noted that it would be a red flag to find non-disparagement in the employment agreement.)

"One thing I’ve learned - even when someone has been terminated with cause, a separation agreement [which includes non-disparagement] w a severance can go a long way."

Jeff is talking about Wave. We use a standard form of non-disclosure and non-disparagement clauses in our severance agreements: when we fire or lay someone off, getting severance money is gated on not saying bad things about the company. We tend to be fairly generous with our severance, so people in this situation usually prefer to sign and agree. I think this has successfully prevented (unfair) bad things from being said about us in a few cases, but I am reading this thread and it does make me think about whether some changes should be made.

I also would re-emphasize something Jeff said - that these things are quite common - if you just google for severance package standard terms, you'll find non-disparagement clauses in them. As far as I am aware, we don't ask current employees or employees who are quitting without severance to not talk about their experience at Wave.

In my view you have two plausible routes to overcoming the product problem, neither of which is solved (primarily) by writing code.

Route A would be social proof: find a trusted influencer who wants to do a project with DACs. Start by brainstorming the types of projects that would most benefit from DACs, aiming to find an idea that a (ideally narrow) group of people would be really excited about, that demonstrates the value of such contracts, and that is led by a person with a lot of 'star power'. Most likely this would be someone who could raise quite a lot of money through a traditional donation/Kickstarter-type drive, but who instead decides to demo the DAC (and in doing so makes a good case for it).

Route B is to focus on comms. Iterate on the message. Start by explaining it to non-economist friends, then graduate to focus groups. It's crucial to try to figure out how to most simply explain the idea in a sentence or two, such that people understand and don't get confused by it.

I'm guessing you'll need to follow both these routes, but you can follow them simultaneously and hopefully learn cross-useful things while doing so.

I like the idea of getting more people to contribute to such contracts. Not thrilled about the execution. I think there is a massive product problem with the idea -- people don't understand it, think it is a scam, etc. If your efforts were more directed at the problem of getting people to understand and be excited about crowdfunding contracts like this, I would be a lot more excited.

Mildly disagree: I do think x-risk is a major concern, but people around DC tend to put 0.5-10% probability mass on extinction rather than the 30%+ that I see around LW. This lower probability causes them to put a lot more weight on actions that have good outcomes in the non-extinction case. The EY+LW frame carries a lot more stated and implied assumptions about the uselessness of various types of action, precisely because of that high probability on extinction.

Your question is coming from within a frame (I'll call it the "EY+LW frame") that I believe most of the DC people do not heavily share, so it is hard to answer directly. But yes, to attempt an answer: I've seen quite a lot of interest (and direct policy successes) in reducing AI chips' availability and production in China (e.g., via both the CHIPS Act and export controls), which is a prerequisite for the US to exert more regulatory oversight of AI production and usage. The DC folks also seem fairly well positioned to give useful input into further AI regulation.

I've been in DC for roughly the last 1.5 years, and I would say that DC AI policy has a good amount of momentum. I doubt it's particularly visible on Twitter, but it also doesn't seem like there are any hidden/secret missions or powerful coordination groups (if there are, I don't know about them yet). I know ~10-20 people decently well here who work on AI policy full time, or whose work is motivated primarily by wanting better AI policy, and maybe ~100 whom I have met once or twice but don't see regularly. Most such folks have been working on this stuff since before 2022, and they all have fairly normal-seeming thinktank- or government-type jobs.

Most of them don't spend time on LW (although certainly a few do). Many do spend time on Twitter, where they read lots of AI-related takes from LW-influenced folks. They have meetup groups related to AI policy. Overall it looks pretty much as I was expecting before I came here. Happy to answer further questions that don't identify specific people, just because I don't know how many of them want to be pointed at on LW.

If you have energy for this, I think it would be insanely helpful!

Thanks for writing this. I think it's all correct and appropriately nuanced, and as always I like your writing style. (To me this shouldn't be hard to talk about, although I guess I'm a fairly recent vegan convert and haven't been sucked into whatever bubble you're responding to!)
