All of thedudeabides's Comments + Replies

It’s a very simple question and speaks to the heart of the post, which he decided to comment on.

@Connor Leahy have you updated or not?


Saying you don’t like my tone is an ad hominem.  It’s not rational. 

2garrison
Sorry, lot on my plate. You're basically asking how we'd operationalize the claim that either the USG or PRC is "racing toward AGI"? It would probably involve some dramatic action like consolidating large amounts of compute into projects that are either nationalized or contracted to the govt (like this part of AI-2027: "A Centralized Development Zone (CDZ) is created at the Tianwan Power Plant (the largest nuclear power plant in the world) to house a new mega-datacenter for DeepCent, along with highly secure living and office spaces to which researchers will eventually relocate. Almost 50% of China's AI-relevant compute is now working for the DeepCent-led collective, and over 80% of new chips are directed to the CDZ. At this point, the CDZ has the power capacity in place for what would be the largest centralized cluster in the world."). Do you want to suggest specific thresholds or modifications?
1dirk
His lack of reply probably means he doesn't want to engage with you, likely due to what he described as "your combative and sensationalistic attitude."

FYI this was used yesterday, in this post.

https://www.aipanic.news/p/the-doomers-dilemma

2Bryce Robertson
That person appears to be quite a fan of the map :)

maybe we can ask @gwern

Gwern, at what point would you say you were 'wrong' and how would that make you 'update'?

What line would you agree on today, @garrison? At what point would you actually 'update'?

So far, I am seeing a lot of people contesting the 'object' and not a lot of people updating. Which is kinda my point. Concordance with the group consensus seems to have become a higher priority than rationalism on this forum.

1thedudeabides
@garrison ping

There are many math and coding benchmarks where models from DeepSeek, Alibaba, and Tencent are now leading, and definitely leading what was SOTA a year ago. If you don't want to take my word for it, I can dig them up.

2Jonas Hallgren
No, we're good. I was just operating under the assumption that DeepSeek was just distilling OpenAI's models, but it doesn't seem to be the only good ML company in China. There are also a bunch of really good ML researchers from China, so I agree at this point.

The fact that their models are on par with OpenAI's and Anthropic's, but they're open source.


There are people from the safety community arguing for jail for folks who download open source models. 

You can't have it both ways. Either open source is risky and an acceleration and should be limited/punished, or there is no appreciable change to timelines from open-source AI and hence it doesn't need to be regulated.

Does that make sense? 

3Mateusz Bagiński
This is perfectly consistent with my earlier point: you can totally want to have fancy LLMs while not believing in AGI/ASI/singularity. Who? What proportion of the community are they? Also, all open-source models? Jail for downloading GPT-2? It seems to me like you're making a move from "there are people in the AI safety community who hold one view and some who hold the other view" to "the AI safety community holds both of these views"?

OK, so what criteria would you use to suggest that your statements/gwern's statements were falsified?


What line can we agree on today, while things feel uncertain, so that later we're not still fighting over terminology and are instead working off the same ground truth?

1thedudeabides
What line would you agree on today, @garrison? At what point would you actually 'update'? So far, I am seeing a lot of people contesting the 'object' and not a lot of people updating. Which is kinda my point. Concordance with the group consensus seems to have become a higher priority than rationalism on this forum.

Sorry for my tone.  Yours reads as very defensive. 

So you admit you were wrong?


How have you updated your views on China or what we should do as a result? 

-17thedudeabides

Do you disagree that entities in China are now pushing the state of the art in an open source way?

If you disagree, then sure, you don't have to update.  But I'd argue you aren't paying attention.

If you agree, then how did you update?

If your point is that using 'us vs. them' framing makes things worse, that may or may not be correct, but from the perspective of existential risk the object-level determination re China is irrelevant, versus what "they" represent: a repeated game where defection by any one of N players leads to ruin (from the doomer perspective), and where folks in China just represent one of a very large set (sketched below).
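To make the N-player framing concrete, here is a minimal sketch (my own illustration, not anything from the thread; the per-round defection probability is a made-up number) of how the chance of at least one defection compounds with the number of independent actors:

```python
# Minimal sketch: ruin dynamics in an N-player repeated game where a single
# defection by any player is assumed (per the doomer framing above) to end
# the game in ruin. The per-round defection probability is a made-up number
# chosen purely for illustration.

def ruin_probability(n_players: int, p_defect: float, n_rounds: int) -> float:
    """P(at least one defection by any player in any round),
    assuming each player defects independently each round."""
    p_round_safe = (1.0 - p_defect) ** n_players   # everyone cooperates this round
    return 1.0 - p_round_safe ** n_rounds          # at least one defection overall

# Even a small per-round defection chance compounds quickly as N grows:
for n in (1, 5, 20):
    print(n, round(ruin_probability(n, p_defect=0.01, n_rounds=50), 3))
# prints: 1 0.395 / 5 0.919 / 20 1.0
```

Under these assumptions, the identity of any single actor matters far less than the size of N, which is the narrow point about China being just one of a very large set.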

Does that make sense?

1Jonas Hallgren
So I guess the point then becomes more about general open-source development by other countries, where China is part of it, and that people did not correctly predict this as something that would happen. Something like distillation techniques for LLMs being used by other countries and then proliferated, and the rationality community as a whole not taking this into account? I'll agree with you that Bayes points should be lost on predicting the theory of mind of nation states; it is quite clear that they would be interested in this from a macro-analysis perspective (I say in hindsight, of course). I'm not sure that DeepSeek is SOTA in terms of inherent development; it seems to me that they're using some of the existing work from OpenAI, DeepMind & Anthropic, but I might be wrong here. Is there anything else that you're pointing at?

Are you saying they accept your frame? Because it appears they do not.

OK, I will moderate my tone. I was a competitive debater and irrationality makes me upset. I thought this was a safe space for high standards wrt logic, but I can modulate. Thank you for the feedback.

There is a narrow point: people were wrong about this narrow prediction, that "the CCP is scared of AI".

The broader point is that I perceive, and could be wrong, epistemic rot: a community dedicated to rationalism seems incapable of updating. The comments I've seen so far are by and large consistent with that intuition. Folks s... (read more)

3Knight Lee
:) thank you for saying you'll moderate your tone. It's rare that I manage to criticize someone and they reply with "ok" and actually change what they do. My first post on LessWrong was A better "Statement on AI Risk?". I felt it was a very good argument for the government to fund AI alignment, and I tried really hard to convince people to turn it into an open letter. Some people told me the problem with my idea is that asking for more AI alignment funding is the wrong strategy; the right strategy is to slow down and pause AI. I tried to explain that when politicians reject pausing AI, they just need the easy belief of "China must not win," or "if we don't do it someone else will." But for politicians to reject my open letter, they need the difficult belief of being 99.999% sure of no AI catastrophe and 99.95% sure most experts are wrong. But I felt my argument fell on deaf ears because the community was so dead set on pausing AI that they didn't even want to waste their time on anything other than pausing AI. It was very frustrating. (I was talking to people in private messages and emails.) I feel you and I are on the same team haha. Maybe we might even work together sometime. Thanks

"Various things" - ugh

OK, how about this ONE:

"No...There is no appreciable risk from non-Western countries whatsover" - @Connor Leahy 

At the time of that recording, the race to AGI was being stoked exclusively by actors in the USA (Anthropic, OpenAI, etc).

It is now 2 years later. They got what they asked for, and now other actors have joined the race as well.

idk what to tell you man, you picked one line in one podcast from two years ago. I go on a lot of podcasts, I think out loud a lot, and I update a lot. Do you want people to have to qualify every comment they make on a podcast with "In my current best estimate, at this specific point in time, which may be invalidated or updated withi... (read more)

And the response to 'shut it down' has always been "what about China, or India, or the UAE, or Europe?", to which the response was... "they want to pause bc XYZ."

Well, you now have proof, not speculation, that they are not pausing. They don't find your arguments persuasive. What to do?!?

Which is why the original post was about updating. Something you don't seem very interested in doing. Which is irrational. So is this forum about rationality or about AI risk? I would think the latter flows from the former, but I don't see much evidence of the former.

Cope. Leadership in AI has been an explicit policy goal since "Made in China 2025". The predictions were that "the CCP prioritizes stability", "the CCP prioritizes censorship", and "China is behind in AI". Are you willing to admit that these are all demonstrably untrue as of today? Let's start there.

Here's an article from 2018(!) in the South China Morning Post.  

"Artificial intelligence (AI) has come to occupy an important role in Beijing’s ‘Made in China 2025’ blueprint. China wants to become a global leader in the field b... (read more)

1thedudeabides
maybe we can ask @gwern.  Gwern, at what point would you say you were 'wrong' and how would that make you 'update'?

The argument has historically been that existential risk from AI came from some combination of a) SOTA models, and b) open source.

China is now publishing SOTA open-source models. Oh, and they found a way to optimize around their lack of GPUs.

Are you sure you aren't under the influence of cognitive dissonance/selective memory? 

4Dana
I think the LW consensus has been that the main existential risk is AI development in general. The only viable long-term option is to shut it all down, or at least slow it down as much as possible until we can come up with better solutions. DeepSeek, from my perspective, should incentivize slowing down development (if you agree with the fast-follower dynamic; also by reducing profit margins generally), and I believe it has. Anyway, I don't see how this relates to these predictions. The predictions are about China's interest in racing to AGI. Do you believe China would now rather have an AGI race with the USA than agree to a pause?

"My first thought is, it's not clear why you care about this. This is your first post ever, and your profile has zero information about you. Do you consider yourself a Less Wrong rationalist? Are you counting on the rationality community to provide crucial clarity and leadership regarding AI and AI policy? "

I tried posting in the past but was limited because of the karma wall. But thanks for questioning my motives.

I am a game theorist and researcher, and yes, I consider myself broadly aligned with rationalism, though with a strong preference for skept... (read more)

2Mitchell_Porter
OK, thanks for the information! By the way, I would say that most people active on Less Wrong, disagree with some of the propositions that are considered to be characteristic of the Less Wrong brand of rationalism. Disagreement doesn't have to be a problem. What set off my alarms was your adversarial debut - the rationalists are being irrational! Anyway, my opinion on that doesn't matter since I have no authority here, I'm just another commenter.  It was. It still has influence, but e/acc is in charge now. That's my take.  If they actually saw AI as the creation of a rival to the human race, they might have a different attitude. Then again, it's not as if that's why the Democrats favored regulation, either.  I feel like Qwen is being hyped. And isn't Manus just Claude in a wrapper? But fine, maybe I should put Alibaba next to DeepSeek in my growing list of contenders to create superintelligence, which is the thing I really care about.  But back to the actual topic. If Gwern or Zvi or Connor Leahy want to comment on why they said what they did, or how their thinking has evolved, that would have some interest. It would also be of interest to know where certain specific framings, like "China doesn't want to race, so it's up to America to stop and make a deal", came from. I guess it might have come from politically minded EAs, rather than from rationalism per se, but that's just a guess. It might even come from somewhere entirely outside the EA/LW nexus. 

How are we feeling about this prediction now?