To first order, this might have a good effect on safety.
To second order, it might have negative effects, because it makes hiring people who openly worry about AI x-risk riskier for such companies, and therefore lowers the rate at which they are hired.
Someone serious about alignment who sees real dangers should do what is safe and not be swayed by a non-disparagement agreement. Speaking out might cost them some job prospects, money, and possibly a lawsuit, but what is that if the history of Earth is on the line? Especially since such a well-known AI genius would find plenty of support from people who back such an open move.
So I hope he simply judges that talking right NOW is not strategically worth it. E.g. he might want to increase his chance of being hired by a semi-safety-serious company (more serious than OpenAI, but not serious enough to hire a proven whistleblower), where he can put his position to better use.
I agree with the premise of your last point, but not the conclusion. Any open-source development that significantly lowers resource requirements can also be used by closed labs to simply increase their model/training size at the same cost, thus keeping the gap.
The first graph is supposed to show "BMI at age 50 for white, high-school educated American men born in various years", but it goes up to 1986. People born in 1986 are only 38 right now, so we cannot know their BMI at age 50. Something is wrong.
The former board's only powers were to approve new board members and to fire CEOs.
Pretty sure they only let Altman back as CEO under the condition of having a strong influence over the new board.
Just in, 5h ago:
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters.
Could be related to this Q* for deep-learning heuristics:
https://x.com/McaleerStephen/status/1727524295377596645?s=20
Jan Leike (Nov 20):
"I have been working all weekend with the OpenAI leadership team to help with this crisis"
"I think the OpenAI board should resign"
https://x.com/janleike/status/1726600432750125146?s=20
Actually a great example of people using the voting system right. It does not contribute anything substantial to the conversation, but it expresses something most of us obviously feel.
I had to sort the two votes into these four prototypes to make sure I voted sensibly:
High Karma - Agree: A well-expressed opinion I deeply share
High Karma - Disagree: A well-argued counterpoint that I would never make myself / that did not convince me
Low Karma - Agree: Something obvious/trivial/repeated that I agree with, but not worth saying here
Low Karma - Disagree: Low-quality catch-all bucket
Also, purely factual contributions (helpful links, context, etc.) should get karma votes only, as they express no opinion to disagree with.