Why wouldn't their leadership be capable of personally evaluating arguments that this community has repeatedly demonstrated can be compressed into sub-10-minute non-technical talks? And why assume whichever experts they're taking advice from would uniformly interpret it as "craziness", especially when surveys show most AI researchers in the West are now taking existential risk seriously? It's really not such a difficult or unintuitive concept to grasp that building a more intelligent species could go badly.
My take is the lack of AI safety activity in China i...
This post is quite strange and at odds with your first one. Your own point 5 contradicts your point 6. If they're so good at taking ideas seriously, why wouldn't they respond to coherent reasoning presented by a US president? Points 7 and 8 just read like hysterical Orientalist Twitter China Watcher nonsense, to be quite frank. There is absolutely nothing substantiating that China would recklessly pursue nothing but "superiority" in AI at all costs (up to and including national suicide) beyond simplistic narratives of the CCP being a cartoon evil force seeking world domination and such.
I have the experience of living in a strongly anti-West country ruled by the same guy for 10+ years (Putin's Russia). The list of similarities to Xi's China includes the Shameful Period of Hu...
I really don't know what Beijing is going to do. Sometimes it makes really weird decisions, like not importing better COVID vaccines before the COVID restrictions were removed. There is no law of physics that says Beijing will take X seriously if Washington presents a good argument, or if Washington believes it hard enough. Beijing can be influenced by rational arguments, but mostly by Chinese experts. Right now, the Chinese space isn't taking AI safety seriously. There is no Chinese Eliezer Yudkowsky. If the US in 2001 was approached by Beijing asking for...
There's a new forum for this that seeks to increase discussion & coordination, reddit.com/r/sufferingrisk.
Not sure if he took him up on that (or even saw the tweet reply). Am just hoping we have someone more proactively reaching out to him to coordinate is all. He commands a lot of respect in this industry as I'm sure most know.
I think people in the LW/alignment community should really reach out to Hinton to coordinate messaging now that he's suddenly become the highest-profile and most credible public voice on AI risk. Not sure who should be doing this specifically, but I hope someone's on it.
Yup. I commented on how outreach pieces are generally too short on their own and should always lead to something else here.
I'm pretty opposed to public outreach to get support for alignment, but the alternative goal of whipping up enough hysteria to destroy the field of AI/the AGI development groups killing us seems much more doable. The reason being that, from my lifelong experience observing public discourse on topics I have expert knowledge of (e.g. nuclear weapons, China), it seems completely impossible to implant the exact right ideas into the public mind, especially for a complex subject. Once you attract attention to a topic, no matter how much effort you put into presenting the ...
The downvotes on my comment reflect a threat we all need to be extremely mindful of: people who are so terrified of death that they'd rather flip the coin on condemning us all to hell than die. They'll only grow ever more desperate & willing to resort to more hideously reckless Hail Marys as we draw closer.
Never even THINK ABOUT trying a Hail Mary if it also comes with an increased chance of s-risk. I'd much rather just die.
Speaking of which, one thing we should be doing is keeping a lookout for opportunities to reduce s-risk (with dignity) ... I haven't yet been convinced that s-risk reduction is intractable.
Just reposting this good resource for people on places potentially hit in the US. The one I linked is his version for a full countervalue attack with 2,000 warheads, but he has scenarios for counterforce/mixed etc. too.
I don't think anything similar exists for China yet, but in the meantime a good assumption is just cities ordered by descending population. So, possibly similar to the linked one but with fewer of the smaller cities hit for now, until China reaches a similar quantity of warheads to Russia later this decade.
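As a purely illustrative sketch of that heuristic (placeholder city names and a made-up warhead count, not real data), it really is just a top-N-by-population cut:

```python
# Rough sketch of the "cities ordered by descending population" heuristic.
# The city list and warhead count below are placeholder values, not real data.
cities = [
    ("City A", 8_500_000),
    ("City B", 4_000_000),
    ("City C", 2_700_000),
    ("City D", 2_300_000),
    ("City E", 1_600_000),
]

assumed_warheads = 3  # placeholder assumption for deliverable warheads

# Sort by population (descending) and keep the top N as the rough target list.
rough_target_list = sorted(cities, key=lambda c: c[1], reverse=True)[:assumed_warheads]

for name, population in rough_target_list:
    print(f"{name}: {population:,}")
```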
ETA: An interesting thing I found on US ta...
The Open Source RISOP by David Teter is a good resource for a non-exhaustive but still fairly comprehensive list of possible Russian targets in the US, btw.
I don't know that that's true everywhere. Airbursts (the detonation mode used for cities) generally don't produce much fallout. It's probably good advice if you're downwind of hardened targets like the 3 clusters of Minuteman silos in the Midwest, though, which will produce a fuckton of fallout as they're all hit with surface detonations. But the Russians/Chinese may not hit them at all if they know all those silos have been fired already.
One thing I realized is that it'll likely be near-impossible to travel long distances by car in the post-attack aftermath, as everyone with a gun who runs out of gas would be setting up roadblocks to rob travellers of the gas in their cars + other supplies. Interstates would thus probably quickly become unusable. So you probably shouldn't expect to reach some cross-country rendezvous after the fact if you didn't get there beforehand.
Also x-posting my more lengthy comment on this post from EAF.
I wrote about this on EA Forum a few days ago. I'm glad others are starting to think about this. I do think archiving all existing alignment work is very important, perhaps as important as efforts to keep alive the people who represent the field's existing expertise & talent. It would be much better for them to be able to continue their work than for new people to attempt to pick up where they left off, especially since many things, like intuitions honed over time, may not be readily learnable.
I'm increasingly inclined to think that a massive "s...
but it's still the case that I don't expect to survive a full-scale nuclear exchange.
There's no reason whatsoever to expect you can't easily survive a full exchange with a few simple preparations, as long as you're outside the immediate urban blast radii. Nuclear winter is effectively a myth. I'm both astounded and dismayed by the amount of misinformation and misconceptions surrounding nuclear issues within the "rationalist" community.
Nukes aren't remotely inescapable Armageddon in the same way unaligned AGI is, and people really need to stop the silly...
I said that about New Zealand (and probably countries outside of NATO, Russia, and China in general). Canada may well have law and order intact as well, if we don't get hit or are only hit by a few warheads. I think commercial food availability might be restored before a decade has passed, especially since we have more agricultural production capacity than we need, but stockpiling that much is just to be on the safer side, especially since stockpiling non-perishable food really doesn't cost much. Being so close to the US and sharing a massive border, we may be more destabilized than other non-at...
Remember all the things that have to be true for a "nuclear winter" to happen at all. I'm not gonna say it's a completely debunked myth, but to me the probability is clearly low enough that I mostly ignore it in my planning. Governments have moved on from it too, after the initial politically motivated, Soviet-encouraged hysteria surrounding it during the '80s.
Surviving a full-scale countervalue exchange even within the US or Canada isn't hard. The most crucial thing is to preemptively relocate so you aren't caught and killed in the initial detonation. Anywhere outside ...
Oh, and also, there's potential for this to lead to a coup/domestic upheaval/regime change in Russia, which would be an exceptionally volatile situation, kind of like having 6,000 loose nukes until whoever takes power consolidates control, including over the strategic forces, again. So factoring that in, it should perhaps be over 5%. But again, there should be advance warning for those developments inside Russia.
5% would be by the end of all this. Most of that probability comes from things developing in an unfortunate direction, as I said, which would mean going against the current indications that neither the US nor NATO will intervene militarily. This could be either them changing their minds (perhaps due to unexpectedly brutal Russian conduct during the war leading to a decision to impose a no-fly zone or something like that), or a cycle of retaliatory escalation due to unintended spillover of the war, like I illustrated. Neither is too likely imo, and both w...
I'm not overly concerned by the news from this morning. In fact, I expected them to raise their nuclear force readiness prior to or simultaneously with commencing the invasion, not now; raising it is expected when going from normal peacetime readiness into a time of conflict/high tension. Going into it, I put about a 5% chance on this escalating to a nuclear war, and it's not much different now, certainly not above 10%. (For context, my odds of escalation to a full countervalue exchange in a US intervention in a Taiwan reunification campaign would be about 75%.) Virtually ...
The most interesting thing out of this is Russia's threat to pull out of New START in retaliation for US sanctions, as well as Biden's decision to cut off arms control talks. Pulling out all the stops on the US-Russia nuclear competition is dangerous enough already, but this will most likely kick off a renewed all-out three-way nuclear arms race, which is of course less strategically stable than the bilateral nuclear dynamic during the Cold War. China is already expanding its nuclear arsenal to parity, which, if New START were still in effect, would've been...
Redwood Research
In which way does this news "favour Paul-verse"?
MIRI had a strategic explanation in their 2017 fundraiser post which I found very insightful. This was called the "acute risk period".
Yes, but I think it might be much more useful for someone to do this for Chinese.
Those 3 new silo fields are the most visible, but I'd guess China is expanding the mobile arm of its land-based DF-41 force (TELs) by a similar amount. You just don't see that on satellite images. The infrastructure enabling Launch on Warning is also being implemented, which will make those silos much more survivable, though this also of course greatly increases the risk of accidental nuclear war. I'd argue that those silo fields are destabilizing, especially if China decides to deploy the majority of their land-based force that way, because even with a Launch ...
Can you give some examples of who in the "rationalist-adjacent spheres" are discussing it?
I'm aware. I'm just saying a new effort is still needed because, listening to all his recent public comments on the topic and looking at what he's trying to do with Neuralink etc., his thoughts on alignment/AI risk are still clearly very misguided, so someone really needs to reach out and set him straight.
Agree that we should reach out to him & that the community is connected enough to do so. If he's concerned about AI risk but is either misguided or doing harm (see e.g. here/here and here), then someone should just... talk to him about it? The richest man in the world can do a lot either way. (Especially someone as addicted to launching things as he is; who knows what detrimental thing he might do next if we're not more proactive.)
I get the impression the folks at FLI are closest to him, so maybe they're the best ones to do that.
Blow up in their faces?