I agree that a skilled murderer could try to make the death look like a suicide, but each place where the murderer would need to stage the scene adds a failure point, and with it a greater chance of producing some inconsistency.
On Suchir being drunk: according to his parents, he came back from a birthday trip to LA with his friends on the Friday, which might explain why he was drunk. We don't know exactly when he got back, though, or whether he was drunk when he got back or got drunk afterwards.
Originally, the parents visited the apartment with George Webb, an independent journalist and a bit of a conspiracy theorist. On that visit, George Webb said there was hair under the doorframe, positioned in an unusual way. Suchir's mother also discusses it in the Tucker Carlson interview (starting at 26:00).
There was no mention of a wig in the police report, though.
The Suchir Balaji Autopsy Report came out recently.
Overall, I think it’s quite compelling evidence that it was a suicide.
The points against it being suicide are:
The points for it are:
I'm generally in favor of seeing more of these published, mainly because I think they are going to end up being the basis for an industry audit standard and then a law.
The more similar they are, the easier it will be for them to converge.
I worried this was a loophole: "the Trust Agreement also authorizes the Trust to be enforced by the company and by groups of the company’s stockholders who have held a sufficient percentage of the company’s equity for a sufficient period of time." An independent person told me it's a normal Delaware law thing and it's only relevant if the Trust breaks the rules. Yay! This is good news, but I haven't verified it and I'm still somewhat suspicious (but this is mostly on me, not Anthropic, to figure out).
The Trust Agreement bit sounds like it makes sense to me.
Other thoughts:
So, my take is that the Long-Term Benefit Trust probably appoints 3/5 of the board members now, since Anthropic has raised over $6 billion.
Here is the definition of the "Final Phase-In Date":
(VIII)(6)(iv) "Final Phase-In Date" means the earlier to occur of (I) the close of business on May 24, 2027 or (II) eight months following the date on which the Board of Directors determines that the Corporation has achieved $6 billion in Total Funds Raised;
From Google:
As of November 2024, Anthropic, an artificial intelligence (AI) startup, has raised a total of $13.7 billion in venture capital.
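The "earlier of two dates" rule in that definition can be sketched in a few lines of Python. This is just an illustration of the clause's logic, not anything from the charter itself: the function name is mine, and I'm approximating "eight months" as 8 × 30 days, whereas the actual agreement presumably uses calendar months.

```python
from datetime import date, timedelta

HARD_DEADLINE = date(2027, 5, 24)  # clause (I): close of business May 24, 2027

def final_phase_in_date(board_determination):
    """Return the Final Phase-In Date given the date (or None) on which the
    board determined that $6B in Total Funds Raised was achieved.
    Hypothetical helper; 'eight months' approximated as 240 days."""
    if board_determination is None:
        # No determination yet: only the hard deadline applies.
        return HARD_DEADLINE
    phase_in = board_determination + timedelta(days=8 * 30)  # clause (II)
    return min(HARD_DEADLINE, phase_in)
```

On these assumptions, a board determination in November 2024 would put the Final Phase-In Date in mid-2025, well before the May 2027 backstop — which is why the funding figures above matter.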
Just a collection of other thoughts:
I feel like there are two things going on here:
But what they propose in return just seems to be at odds with their stated purpose and view of the future. If AGI is 2-3 years away, then various governmental bodies need to start building the administrative capacity for AI safety now rather than in 2-3 years' time, since it will take another 2-3 years to create the administrative organizations.
The idea that Anthropic or OpenAI...
I'm 90% sure that the issue here was an inexperienced board, together with a Chief Scientist, that didn't understand the human dimension of leadership.
Most independent board members have a lot of management experience and so understand that their power on paper is greater than their actual power. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of the relationships between employees. So, they normally look to management to tell them what to do.
Here, two of the board members lacked the organizational experience to know that this was the case; any normal board would have tried to take the temperature of the employees before removing the CEO. I think this shows that creating a board for OAI to oversee the development of AGI is an incredibly hard task, because the members need to understand both AGI and the organizational level.
I think the main issue is that inquiry generally follows two directions:
Pretrained LLMs seem to have been somewhat unexpected as the probable path to AGI, so there isn't a large historical cultural discussion around more advanced variants of them or their systematic interaction.
And there are not yet systems of interacting LLM agents, so they cannot easily be studied through a plethora of available examples.
I think that's basically why you don't see more about this. But in a year or so, when such systems start to emerge, you'll see the conversation shift to them, because they will be easier to study.