I think I am, all things considered, sad about this. I think libel suits are really bad tools for limiting speech, and I declined to be involved when some of the plaintiffs asked me to participate on behalf of LW and Lightcone.
Appreciate you saying this. It raises my esteem for LW/Lightcone to hear that this is the route you all chose. Perhaps that doesn't mean much since I largely agree with the view you express about defamation suits, but even for those who disagree, I think there is something to admire here in terms of sticking to principles even when it's people you strongly disagree with who are benefiting from those principles in a particular case.
I know I've responded to a lot of your comments, and I get the sense you don't want to keep engaging with me, so I'll try to keep it brief.
We both agree that details matter, and I think the details of what the actual problem is matter. If, at bottom, the thing that Epoch/these individuals have done wrong is recklessly accelerating AI, I think you should have just said that up top. Why all the "burn the commons", "sharing information freely", "damaging to trust" stuff? It seems like you're saying that, at the end of the day, those things aren't really what you have a problem with. On the other hand, I think invoking that stuff is leading you to consider approaches that won't necessarily help with avoiding reckless acceleration, as I hope my OpenAI example demonstrates.
I think it's not all that uncommon for people who are highly competent in their current role to be passed over for promotion to leadership. LeBron James isn't guaranteed a job as the NBA commissioner just because he balls hard. Things like "avoid[ing] negative-EV projects" would be prime candidates for something like this. If you're amazing at executing technical work on your assigned projects but aren't as good at prioritizing projects or coming up with good ideas for projects, then I could definitely see that blocking a move to leadership even if you're considered insanely competent technically.
I largely agree with the underlying point here, but I don't think it's quite correct that something like this only applies in specific professions. For example, I think every major company is going to expect employees to be careful about revealing internal info, and there are norms that apply more broadly (trade secrets, insider trading, etc.).
As far as I can tell, though, those are all highly dissimilar to this scenario because they involve an existing widespread expectation of not using information in a certain way. It's not even clear to me in this case what information was used in what way that is allegedly bad.
I just think it's really bad if people feel that they can't speak relatively freely with the forecasting organisations because they'll misuse the information.
To "misuse" to me implies taking a bad action. Can you explain what misuse occurred here? If we assume that people at OpenAI now feel less able to speak freely after things that ex-OpenAI employees have said/done would you likewise characterize those people as having "misused" information or experience they gained at OpenAI? I understand you don't have fully formed solutions and that's completely understandable, but I think my questions go to a much more fundamentally issue about what the underlying problem actually is. I agree it is worth discussing, but I think it would clarify the discussion to understand what the intent of such a norm would (and if achieving that intent would in fact be desirable).
(This is distinct from my separate point about it being a mistake to hire folk who do things like this. It is a mistake to have hired folks who act strongly against your interests even if they don't break any ethical injunctions.)
If Coca-Cola hires someone who later leaves and goes to work for Pepsi because Pepsi offered them higher compensation, I'm not sure it would make sense for Coca-Cola to conclude that they should make big changes to their hiring process, other than perhaps increasing their own compensation if they determine that is a systematic issue. Coca-Cola probably needs to accept that "it's not personal" is sometimes going to be the nature of the situation. Obviously details matter, so maybe this case is different, but I think working in an environment where you need to cooperate with other people/institutions means you also have to sometimes accept that people you work with will make decisions based on their own judgements and interests, and therefore may do things you don't necessarily agree with.
(You could say "disempowerment which is gradual" for clarity.)
I feel like there is a risk of this leading to a never-ending sequence of meta-communication concerns. For instance, what if a reader interprets "gradual" to mean taking more than 10 years, but the writer thought 5 would be sufficient for "gradual" (see the timelines discussions around things like continuity for how this keeps going)? Or what if the reader assumes "disempowerment" means complete disempowerment, but the writer only meant some unspecified "significant amount" of disempowerment? It's definitely worthwhile to try to be clear initially, but I think we also have to accept that clarification may need to happen "on the backend" sometimes. This seems like a case where one could simply clarify that they have a different understanding than the paper. In fact, it's not at all clear to me that people won't implicitly translate "disempowerment which is gradual" to "gradual disempowerment". It could be that the paper stands in just as much for the concept as for the literal words in people's minds.
But this only works if those less worried about AI risks who join such a collaboration don't use the knowledge they gain to cash in on the AI boom in an acceleratory way.
Can you state more specifically what the alleged bad actions are here? Based on some of the discussions under your post about professional norms surrounding information disclosure, I think it is worth distinguishing two cases.
First, consider a norm that limits the disclosure of some relatively specific and circumscribed pieces of information, such as a doctor not being allowed to reveal personal health information of patients outside of what is needed to provide care.
Second, a general norm that if you cooperate with someone and they provide you some info, you won't use that info contrary to their interests. It's not 100% clear to me, but your post sounds a lot like this second one.
I think the second scenario raises a lot of issues. It seems challenging to enforce, hard to understand and navigate, costly for people to attempt to conform to, and potentially counterproductive for what seems to be your goal. You are considering a specific case at a specific point in time, but I don't think that gives the full picture of the impact of such a norm. For example, consider ex-OpenAI employees who left due to concerns about AI safety. Should the expectation be that they only use information and experience they gained at OpenAI in a way that OpenAI would approve of?
Now, if Epoch and/or specific individuals made commitments that they violated, that might be more like the first case, but it's not clear that is what happened here. If it is, more explanation of how this is the case would be helpful, I think.
11 former OpenAI employees filed an amicus brief in the Musk vs. Altman lawsuit
If I'm reading the docket correctly, first amendment expert Eugene Volokh has entered an appearance on behalf of the ex-OpenAI amici. I don't want to read too much into that, but it is interesting to me in light of the information about OpenAI employees and NDAs that a first amendment expert is working with them.
A group of former OpenAI employees filed a proposed amicus brief in support of Musk’s lawsuit on the future of OpenAI’s for-profit transition. Meanwhile, OpenAI countersued Elon Musk.
I think this is the first time that the charter has been significantly highlighted in this case. My own personal view is that the charter is one of the worst documents for OpenAI (and therefore good for Musk), and having their own employees state that it was emphasized a lot and treated as binding is a very bad fact for OpenAI and the associated defendants. The timeline for all this stuff isn't 100% clear to me, so I can imagine there being issues with whether the charter was timed such that it is relevant to Musk's own reliance, but the vibes of this for OpenAI are horrendous. It also raises the interesting possibility of whether the "merge-and-assist" part of the charter might be enforceable.
The docket seems to indicate that Eugene Volokh is representing the ex-OpenAI amici (in addition to Lawrence Lessig). To my understanding, Volokh is a first amendment expert and has also done work on transparency in courts. The motion for leave to file also indicates that OpenAI isn't necessarily on board with the brief being filed. I wonder if they are possibly going to argue that their ex-employees shouldn't be allowed to do what they're doing (perhaps trying to enforce NDAs?), and whether Volokh is planning to weigh in on that issue.
In the context of AI safety views that are less correlated/more independent, I would personally bump the GDM work related to causality. I think GDM is the only major AI-related organization I can think of that seems to have a critical mass of interest in this line of research. It's a bit different since it's not a full-on framework for addressing AGI, but I think it is a different (and in my view under-appreciated) line of work that has a different perspective and draws on different concepts/fields than a lot of other approaches.