Wiki Contributions

Comments

Answer by davekasten10

Forgive me if this is obvious, but have you done the following three things:

1.  Go through the list of resources on 211 Arizona (Note: 211 is an emerging, but not yet nationally-adopted, standard for a first-point-of-entry on social services, just like 911 is for emergency services) -- see https://211arizona.org/ .

Your goal here is to do a breadth-first search: look for things that you haven't yet applied for, plausibly might get, and can get quickly.  Don't go too deep down a rabbit hole, but rather try to quickly sort and validate or reject various ideas on there. 

2.  Reach out to your local Congressional office for help -- ask them if there are any programs they know of that can help, especially as a survivor of domestic violence.  

3.  Also, if you haven't gone to your local food bank, please, please consider this SOCIAL PERMISSION TO GO TO YOUR LOCAL FOOD BANK.  It literally exists for exactly this purpose.

It may be worth noting that, at least anecdotally, when you talk about AI development processes with DoD policy people, they assume that SCIFs will be used at some point.  

YMMV on whether that's their pattern-matching or hard-earned experience speaking, but I think worth noting. 

Hypothesis, super weakly held and based on anecdote:
One big difference between US national security policy people and AI safety people is that the "grieving the possibility that we might all die" moment happened, on average, more years ago for the national security policy person than the AI safety person. 

This is (even more weakly held) because the national security field has existed for longer, so many participants literally had the "oh, what happens if we get nuked by Russia" moment in their careers in the Literal 1980s...

"Achievable goal" or "plausible outcome", maybe?

These are plausible concerns, but I don't think they match what I see as a longtime DC person.  

We know that the legislative branch is less productive in the US than it has been in any modern period, and fewer bills get passed (many different metrics for this, but one is https://www.reuters.com/graphics/USA-CONGRESS/PRODUCTIVITY/egpbabmkwvq/) .  Those bills that do get passed tend to be bigger swings as a result -- either a) transformative legislation (e.g., Obamacare, Trump tax cuts and COVID super-relief, Biden Inflation Reduction Act and CHIPS) or b) big omnibus "must-pass" bills like FAA reauthorization, into which many small proposals get added in. 

I also disagree with the claim that policymakers focus on credibility and consensus generally, except perhaps in the executive branch to some degree.  (You want many executive actions to be noncontroversial "faithfully executing the laws" stuff, but I don't see that as "policymaking" in the sense you describe it.)

In either of those, it seems like the current legislative "meta" favors bigger policy asks, not small wins, and I'm having trouble of thinking of anyone I know who's impactful in DC who has adopted the opposite strategy.  What are examples of the small wins that you're thinking of as being the current meta?

Like, I hear you, but that is...also not how they teach gun safety.  Like, if there is one fact you know about gun safety, it's that the entire field emphasizes that a gun is inherently dangerous towards anything it is pointed towards.

I largely agree, but think given government hiring timelines, there's no dishonor in staying at a lab doing moderately risk-reducing work until you get a hiring offer with an actual start date.  This problem is less bad for the special hiring authorities being used for AI stuff oftentimes, but it's still not ideal.

davekasten12-2

Oh wow, this is...not what I thought people meant when they say "unconditional love." 

In my circles, "conditional love" is about love with lots of threats and demands that the other person change, and if failing to do so, they would be told they're unworthy of being loved by their partner.

As you know from our conversations, I'm largely in the same camp as you on this point.  

But one point I'd make incrementally is this: USG folks are also concerned about warning shots of the nature, "The President's Daily Brief ran an article 6 months ago saying warning signs for dangerous thing X would be events W, Y, and Z, and today the PDB had an article saying our intelligence agencies assess that Y and Z have happened due to super secret stuff".

If rationalists want rationalist warning shots to be included, they need to convince relevant government analytic stakeholders of their relevance. 

That's probably true if the takeover is to maximize the AI's persistence.  You could imagine a misaligned AI that doesn't care about its own persistence -- e.g., an AI that got handed a misformed min() or max() that causes it to kill all humans instrumental to its goal (e.g., min(future_human_global_warming))

Load More