orthonormal
Soon the two are lost in a maze of words defined in other words, the problem that Stevan Harnad once described as trying to learn Chinese from a Chinese/Chinese dictionary.

Of course, it turned out that LLMs do this just fine, thank you.

intensional terms

Should probably link to Extensions and Intensions; not everyone reads these posts in order.

Mati described himself as a TPM since September 2023 (after being PM support since April 2022), and Andrei described himself as a Research Engineer from April 2023 to March 2024. Why do you believe either was not a FTE at the time?

And while failure to sign isn't proof of lack of desire to sign, the two are heavily correlated—otherwise it would be incredibly unlikely for the small Superalignment team to have so many members who signed late or not at all.


With the sudden simultaneous exits of Mira Murati, Barret Zoph, and Bob McGrew, I thought I'd update my tally of the departures from OpenAI, collated with how quickly the ex-employee had signed the loyalty letter to Sam Altman last November. 

The letter was leaked at 505 signatures, 667 signatures, and finally 702 signatures; in the end, it was reported that 737 of 770 employees signed. Since then, I've been able to verify 56 departures of people who were full-time employees (as far as I can tell, contractors were not allowed to sign, but all FTEs were).

I still think I'm missing some, so these are lower bounds (modulo any mistakes I've made).

Headline numbers:

  • Attrition for the 505 OpenAI employees who signed before the letter was first leaked: at least 24/505 = 4.8%
  • Attrition for the next 197 to sign (it was leaked again at 667 signatures, and one last time at 702): at least 13/197 = 6.6%
  • Attrition for the (reported) 68 who had not signed by the last leak: at least 19/68 = 27.9%.
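The cohort percentages above are straightforward to reproduce. A minimal sketch (the departure counts are the post's lower bounds; the cohort sizes come from the leaked signature counts of 505, 667, and 702, out of 770 employees):

```python
# Attrition by signing cohort, using the tallies from the post.
# Departures are lower bounds; totals come from the leaked letter.
cohorts = {
    "signed before the first leak (505)": (24, 505),
    "signed between the first and last leaks (197)": (13, 197),
    "had not signed by the last leak (68)": (19, 68),
}

for label, (departed, total) in cohorts.items():
    print(f"{label}: {departed}/{total} = {departed / total:.1%}")
```

This prints 4.8%, 6.6%, and 27.9% respectively, matching the headline numbers.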

Reportedly, 737 out of the 770 signed in the end, and many of the Superalignment team chose not to sign at all.

Below are my current tallies of some notable subsets. Please comment with any corrections!

People from the Superalignment team who never signed as of the 702 leak (including some policy/governance people who seem to have been closely connected) and are now gone:

  • Carroll Wainwright
  • Collin Burns
  • Cullen O'Keefe
  • Daniel Kokotajlo
  • Jan Leike (though he did separately Tweet that the board should resign)
  • Jeffrey Wu
  • Jonathan Uesato
  • Leopold Aschenbrenner
  • Mati Roy
  • William Saunders
  • Yuri Burda

People from the Superalignment team (and close collaborators) who did sign before the final leak but are now gone:

  • Jan Hendrik Kirchner (signed between 668 and 702)
  • Steven Bills (signed between 668 and 702)
  • John Schulman (signed between 506 and 667)
  • Sherry Lachman (signed between 506 and 667)
  • Ilya Sutskever (signed by 505)
  • Pavel Izmailov (signed by 505)
  • Ryan Lowe (signed by 505)
  • Todor Markov (signed by 505)

Others who didn't sign as of the 702 leak (some of whom may have just been AFK for the wrong weekend, though I doubt that was true of Karpathy) and are now gone:

  • Andrei Alexandru (Research Engineer)
  • Andrej Karpathy (Co-Founder)
  • Austin Wiseman (Finance/Accounting)
  • Girish Sastry (Policy)
  • Jay Joshi (Recruiting)
  • Katarina Slama (Member of Technical Staff)
  • Lucas Negritto (Member of Technical Staff, then Developer Community Ambassador)
  • Zarina Stanik (Marketing)

Notable other ex-employees:

  • Barret Zoph (VP of Research, Post-Training; signed by 505)
  • Bob McGrew (Chief Research Officer; signed by 505)
  • Chris Clark (Head of Nonprofit and Strategic Initiatives; signed by 505)
  • Diane Yoon (VP of People; signed by 505)
  • Gretchen Krueger (Policy; signed by 505; posted a significant Twitter thread at the time she left)
  • Mira Murati (CTO; signed by 505)

EDIT: On reflection, I made this a full Shortform post.

CDT agents respond well to threats

Might want to rephrase this as "CDT agents give in to threats"

If families are worried about the cost of groceries, they should welcome this price discrimination. The AI will realize you are worried about costs. It will offer you prime discounts to win your business. It will know you are willing to switch brands to get discounts, and use this to balance inventory.

Then it will go out and charge other people more, because they can afford to pay. Indeed, this is highly progressive policy. The wealthier you are, the more you will pay for groceries. What’s not to love?

A problem is that this is not only a tax on indifference, but also a tax on innumeracy and on lack of leisure time. Those who don't know how to properly comparison shop are likely to be less wealthy, not more; same with those who don't have the spare time to go to more than one store.

Re: experience machine, Past Me would have refused it and Present Me would take it. The difference is due to a major (and seemingly irreversible) deterioration in my wellbeing several years ago, but not only because that makes the real world less enjoyable. 

Agency is another big reason to refuse the experience machine; if I think I can make a difference in the base-level world, I feel a moral responsibility towards it. But I experience significantly less agency now (and project less agency in the future), so that factor is diminished for me.

The main factor that's still operative is epistemics: I would much rather my beliefs be accurate than be deceived about the world. But it's hard for that to outweigh the unhappiness at this point.

So if a lot of people would choose the Experience Machine, that suggests they are some combination of unhappy, not confident in their agency, and not obsessed with their epistemics. (Which does, I think, operationalize your "something is very wrong".)

  1. Thanks—I didn't recall the content of Yglesias' tweet, and I'd noped out of sorting through his long feed. I suspect Yglesias didn't understand why the numbers were weird, though, and people who read his tweet were even less likely to get it. And most significantly, he tries to draw a conclusion from a spurious fact!
  2. Allowing explicitly conditional markets with a different fee structure (ideally, all fees refunded on the counterfactual markets) could be an interesting public service on Manifold's part.
  3. The only part of my tone that worries me in retrospect is that I should have done more to indicate that you personally were trying to do a good thing, and I'm criticizing the deference to conditional markets rather than criticizing your actions. I'll see if I can edit the post to improve on that axis.
  4. I think we still differ on that. Even though the numbers for the main contenders were just a few points apart, there was massive jockeying to put certain candidates at the top end of that range, because relative position is what viewers noticed.