Based on the sentiment expressed by OpenAI employees on Twitter, the ones who are (potentially) leaving are not doing so because of a disagreement with the AI safety approach, but rather because of how the entire situation was handled by the board (e.g. the lack of reasons provided for firing Sam Altman).
If this move was made for the sake of AI safety, wouldn't the board risk alienating employees who would otherwise be aligned with OpenAI's original mission?
Can anybody here think of potential reasons why the board has not disclosed further details about its decision?
Looks like Sam Altman might return as CEO.
OpenAI board in discussions with Sam Altman to return as CEO - The Verge
It seems the sources are supporters of Sam Altman; I have not seen any indication of this from the board's side.
Humans alive today not being a random sample can be a valid objection to the Doomsday argument, but not for the reasons you mention.
You seem to be suggesting something along the lines of "Given that I am at the beginning, I cannot possibly be somewhere else. Everyone who finds themselves in the position of the first humans has a 100% chance of being in that position". However, for the Doomsday argument, your relative ranking among all humans is not the given variable but the unknown variable. Just because your ranking is fixed (you co...
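To make explicit why the ranking is the unknown variable, here is a minimal sketch of the Bayesian update behind the Doomsday argument, assuming the Self-Sampling Assumption (your birth rank $r$ is uniformly distributed over the $N$ humans who will ever live):

$$P(N \mid r) \;\propto\; P(r \mid N)\,P(N) \;=\; \frac{1}{N}\,P(N) \quad \text{for } N \ge r.$$

The $1/N$ likelihood penalizes large totals: with equal priors on, say, $N_1 = 2 \times 10^{11}$ and $N_2 = 2 \times 10^{14}$ (numbers made up purely for illustration), observing any particular rank $r \le N_1$ shifts the odds by a factor of $N_2 / N_1 = 1000$ toward the smaller total.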
It seems that your understanding of the Doomsday argument is not entirely correct - at least your village example doesn't really capture the essence of the argument.
Here is a different analogy: Let's imagine a marathon with an unknown number of participants. For the sake of argument, let's assume it could be a small local event or a massive international competition with billions of runners. You're trying to estimate the size of this marathon, and to assist you, the organizer picks a random runner and tells you how many participants are trailing behind the...
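Here is a quick Monte Carlo sketch of that intuition (the marathon sizes and the reported threshold are made up for illustration):

```python
import random

# Hypothetical marathon sizes, purely for illustration.
N_SMALL, N_LARGE = 1_000, 1_000_000
TRIALS = 100_000

def runners_behind(n_total):
    """The organizer picks a uniformly random runner and reports
    how many participants are trailing behind them."""
    position = random.randint(1, n_total)  # 1 = race leader
    return n_total - position

# Suppose the reported count is small, e.g. fewer than 500 runners behind.
small_hits = sum(runners_behind(N_SMALL) < 500 for _ in range(TRIALS))
large_hits = sum(runners_behind(N_LARGE) < 500 for _ in range(TRIALS))

# With equal priors, the posterior odds equal the likelihood ratio.
print(f"P(fewer than 500 behind | small race) ~ {small_hits / TRIALS:.4f}")  # ~0.5
print(f"P(fewer than 500 behind | large race) ~ {large_hits / TRIALS:.4f}")  # ~0.0005
print(f"Odds in favor of the small race ~ {small_hits / max(large_hits, 1):.0f}:1")
```

The same likelihood ratio is what the Doomsday argument applies to birth ranks: your own position is treated as a random draw from all humans who will ever exist, and that sampling assumption is the part an objection needs to attack.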
I think this post makes some interesting points... However, I have reservations about whether an article like this can actually influence AI behavior: it relies on many assumptions that could be challenged by similarly (im)plausible counter-assumptions, potentially rendering the post harmful or inconsistent with other online discourse.
For instance, if we assume that the AI is susceptible to the influence of posts like this, then the inclusion of the paragraph suggesting that the AI may be in a simulation could actually increase the risk of deceptiv...
This recent tweet claims that your current p(doom) is 50%.
In another post, you mentioned:
If the tweet is credible, I am curious whether this difference in p(doom) is due to day-to-day fluctuations in your belief, or whether you have encountered new evidence that makes your earlier belief that p(doom) < 20% outdated.