All of wobblz's Comments + Replies

wobblz

My point is that, as you said, you take the safest route when you don't know what others will do: you do whatever is best for you and, most importantly, whatever is guaranteed. You take some years, and yes, you give up the chance of walking out without doing any time, but at least you're in complete control of your situation. Just imagine a PD with 500 actors... I know what I'd pick.
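To make that "safest route" reasoning concrete, here is a minimal sketch of the maximin logic in Python, using one standard set of prison-sentence payoffs (the specific numbers are illustrative assumptions, not anything stated in the comment):

```python
# Illustrative prisoner's dilemma payoffs (years served; lower is better).
# The specific numbers are assumptions chosen to match the classic setup.
# sentences[my_move][their_move] = years I serve
sentences = {
    "stay_silent": {"stay_silent": 1, "confess": 10},  # cooperate
    "confess":     {"stay_silent": 0, "confess": 5},   # defect
}

def worst_case(my_move):
    """Most years I can serve if I commit to my_move, whatever the other player does."""
    return max(sentences[my_move].values())

# "Safest route when not knowing what others will do":
# pick the move whose worst case is least bad.
safest = min(sentences, key=worst_case)
print(safest, worst_case(safest))  # -> confess 5: you take some years, but the outcome is bounded
```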

wobblz

"The moratorium on new large training runs needs to be indefinite and worldwide."

Here lies the crux of the problem: a classical prisoners' dilemma, where individuals receive the greatest payoff by betraying the group rather than cooperating. In this case, a bad actor would have the time to leapfrog the competition and be the first to cross the line to superintelligence, which, in hindsight, would be an even worse outcome.

The genie is out of the bottle. Given how (relatively) easy it is to train large language models, it is safe to assume that this whole fie... (read more)

Rob Bensinger
In this case, "defecting" gives lower payoffs to the defector -- you're shooting yourself in the foot and increasing the risk that you die an early death. The situation is being driven mostly by information asymmetries (not everyone appreciates the risks, or is thinking rationally about novel risks as a category), not by deep conflicts of interest. Which makes it doubly important not to propagate the meme that this is a prisoner's dilemma: one of the ways people end up with a false belief about this is exactly that people round this situation off to a PD too often!
[anonymous]
A temporary state of affairs. ASML is only a single point of failure because of economics. Chinese government-funded equipment vendors would eventually equal ASML's technology as it stands today, and then probably catch up slowly from there, or enormously faster if a party gets even a little help from AGI.