I have dealt with both greedy founders/execs and excessively cooperative decision-making. I think the cooperative model is slightly better overall for almost everyone inside and outside the organization. It is less fit though; co-ops can't as easily merge or raise money etc. There is also a ratchet issue, where it is harder to cooperatize what is private than to privatize what is coopt. Curious if you have any ideas how coop-like stuff could be more stable / low-energy.
You could make something like Blind (the big tech employee anon social net) for unionizing/coordinating AI lab employees. "I'll delay my project if you delay yours." Or anon danger polls of employees at labs. I'm not sure of the details but I suspect there is fruit. ("Your weekly update: 25% of your coworkers expect to see 2035.")
Makes sense. Those were real questions, to be clear.
I'm guessing you live in a country with a US military base? Are you more free than the average Chinese citizen?
It deliberately doesn’t assume anything especially different or weird happens, only that trend lines keep going.
Of course they are fitting an exponential curve, and only one thing happens when you do that. (Newborn on track to swallow the sun by 2040.) You can get a hyperbolic curve to fit about equally as well [citation needed] and predict negative infinity resources on Jan 2 2028. I wish they had defended this choice a bit more clearly. Like plot binomial and sigmoid best fit for comparison, to show it really does look like an exponential. (Y axis can be something arbitrary, like the price of land measured in gold.) An exponential makes sense, when an output is an input, so I would agree with it, but you can say the same thing about a puppy's cells & organs.
Almost every time I use Claude Code (3.7 I think) it ends up cheating at the goal. Optimizing performance by replacing the API function with a constant, deleting test cases, ignoring runtime errors with silent try catch, etc. It never mentions these actions in the summary. In this narrow sense, 3.7 is the most misaligned model I have ever used.
I think Alibaba has not made any crazy developments yet. So let's consider DeepSeek. I think almost nobody had heard of DeepSeek before v3. Before v3, predicting strong AI progress in China would probably sound like "some AI lab in China will appear from nowhere and do something great. I don't know who or what or when or where, but it will happen soon." That was roughly my opinion, at least in my memory. Maybe making that kind of prediction does not match the tastes of people who are good at predicting things? Awfully vague claim to make I guess.
There was time between v3 and r1 where folks could have more loudly commented DeepSeek was ascendant. What would this have accomplished? I suppose it would have shown some commitment to truth and awareness of reality. I am guessing people who are against the international AI race are a bit lazy to point out stuff that would accelerate the race. I guess at some point the facts can't be avoided.
Don't expect LW to be neutral or anything. Think of it as the town bar, not the town square.