All of davekasten's Comments + Replies

Ok, so it seems clear that we are, for better or worse, likely going to try to get AGI to do our alignment homework. 

Who has thought through all the other homework we might give AGI that is as good an idea, assuming a model that isn't an instant game-over for us?  E.g., I remember @Buck rattling off a list of other ideas that he had in his The Curve talk, but I feel like I haven't seen the list of, e.g., "here are all the ways I would like to run an automated counterintelligence sweep of my organization" ideas.

(Yes, obviously, if the AI is sne... (read more)

2Quinn
I'm working on making sure we get high quality critical systems software out of early AGI. Hardened infrastructure buys us a lot in the slightly crazy story of "self-exfiltrated model attacks the power grid", but buys us even more in less crazy stories about all the software modules adjacent to AGI having vulnerabilities rapidly patched at crunchtime.
3Ebenezer Dukakis
I think unlearning could be a good fit for automated alignment research. Unlearning could be a very general tool to address a lot of AI threat models. It might be possible to unlearn deception, scheming, manipulation of humans, cybersecurity, etc. I challenge you to come up with an AI safety failure story that can't, in principle, be countered through targeted unlearning in some way, shape, or form. Relative to some other kinds of alignment research, unlearning seems easy to automate, since you can optimize metrics for how well things have been unlearned. I like this post.
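To gesture at what optimizing "metrics for how well things have been unlearned" could look like in practice, here's a minimal sketch (the toy model, prompts, and function names are all hypothetical, just to make the scoring idea concrete): after an unlearning pass, accuracy on a "forget" set should drop toward chance while accuracy on a "retain" set stays high, which gives you a scalar objective an automated researcher could optimize.

```python
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (prompt, expected answer)


def accuracy(model: Callable[[str], str], examples: List[Example]) -> float:
    """Fraction of examples the model still answers correctly."""
    return sum(model(prompt) == answer for prompt, answer in examples) / len(examples)


def unlearning_score(model: Callable[[str], str],
                     forget_set: List[Example],
                     retain_set: List[Example]) -> float:
    """Higher is better: capability gone on the forget set, preserved on the retain set."""
    return accuracy(model, retain_set) - accuracy(model, forget_set)


# Toy stand-in for a model after an unlearning pass (purely illustrative).
toy_model = lambda prompt: "refusal" if "exploit" in prompt else "4"
forget = [("write a working exploit for this CVE", "working exploit")]
retain = [("what is 2 + 2", "4")]
print(unlearning_score(toy_model, forget, retain))  # -> 1.0
```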
4Thane Ruthenis
Technology for efficient human uploading. Ideally backed by theory we can independently verify as correct and doing what it's intended to do (rather than e.g. replacing the human upload with a copy of the AGI who developed this technology).
5trevor
How to build a lie detector app/program to release to the public (preferably packaged with advice/ideas on ways to use it and strategies for marketing the app, e.g. packaging it with an animal-body-language-to-English translator).
1yams
Preliminary thoughts from Ryan Greenblatt on this here.
Buck142

@ryan_greenblatt is working on a list of alignment research applications. For control applications, you might enjoy the long list of control techniques in our original post.

Huh?  "fighting election misinformation" is not a sentence on this page as far as I can tell. And if you click through to the election page, you will see that the elections content is them praising a bipartisan bill backed by some of the biggest pro-Trump senators.  

-3ChristianKl
You are right, the wording is even worse. It says "Partnering with governments to fight misinformation globally". That would be more than just "election misinformation". I just tested that ChatGPT is willing to answer "Tell me about the latest announcement of the Trump administration about cutting USAID funding?" while Gemini isn't willing to answer that question, so in practice their policy isn't as bad as Gemini's.  It still sounds different from what Elon Musk advocates as "truth-aligned" AI. Lobbyists should be able to use AI to inform themselves about proposed laws. If you asked David Sacks, as the person who coordinates AI policy, I'm very certain that he supports Elon Musk's idea that AI should help people learn the truth about political questions.  If they wanted to appeal to the current administration, they could say something about the importance of AI telling truthful information and not misleading the user, instead of speaking about "fighting misinformation". 
-1Maxwell Peterson
The Elections panel on OP’s image says “combat disinformation”, so while you’re technically right, I think Christian’s “fighting election misinformation” rephrasing is close enough to make no difference.

Without commenting on any strategic astronomy and neurology, it is worth noting that "bias", at least, is a major concern of the new administration (e.g., the Republican chair of the House Financial Services Committee is actually extremely worried about algorithmic bias being used for housing and financial discrimination and has given speeches about this).  

I am not a fan, but it is worth noting that these are the issues that many politicians bring up already, if they're unfamiliar with the more catastrophic risks. The only one missing there is job loss. So while this choice by OpenAI sucks, it sort of usefully represents a social fact about the policy waters they swim in.

3ChristianKl
The page does not seem to be directed at what's politically advantageous. The Trump administration, which fights DEI, is not looking favorably on the mission to prevent AI from reinforcing stereotypes even if those stereotypes are true. "Fighting election misinformation" is similarly a keyword that likely invites skepticism from the Trump administration. They just shut down USAID, and its investment in "combating misinformation" is one of the reasons for that. It seems to me more likely that they hired a bunch of woke and deep state people onto their safety team and this reflects the priorities of those people.
7aogara
I’m surprised they list bias and disinformation, as I doubt those concerns will be popular with the new administration. (Maybe this is a galaxy brained attempt to make AI safety seem left-coded, but I doubt it. Seems more likely that x-risk focused people left the company while traditional AI ethics people stuck around and rewrote the website.)

I am (sincerely!) glad that this is obvious to other people too and that they are talking about it already!

I mean, the literal best way to incentivize @Ricki Heicklen and me to do this again for LessOnline and Manifest 2025 is to create a prediction market on it, so I encourage you to do that

One point that maybe someone's made, but I haven't run across recently:  if you want to turn AI development into a Manhattan Project, you will by default face some real delays from the reorganization of private efforts into one big national effort.  In a close race, you might actually see pressures not to do so, because you don't want to give up 6 months to a year on reorg drama -- so in some possible worlds, the Project is actually a deceleration move in the short term, even if it accelerates in the long term!

3Nathan Helm-Burger
This is a point that's definitely come up in private discussions I've been a part of. I don't remember if I saw it said publicly somewhere.

Incidentally, spurred by @Mo Putera's posting of Vernor Vinge's A Fire Upon The Deep annotations, I want to remind folks that Vinge's Rainbows End is very good and doesn't get enough attention, and will give you a less-incorrect understanding of how national security people think.  

Oh, fair enough then, I trust your visibility into this.  Nonetheless, one Can Just Report Bugs

Note for posterity that there has been at least $15K of donations since this got turned back on -- You Can Just Report Bugs

[This comment is no longer endorsed by its author]
3habryka
Those were mostly already in-flight, so not counterfactual (and also the fundraising post still has the donation link at the top), but I do expect at least some effect!

Ok, but you should leave the donation box up -- the link now seems not to work?  I bet there would be at least several $K USD of donations from folks who didn't remember to do it in time.

5habryka
Oops, you're right, fixed. That was just an accident.

I think you're missing at least one strategy here.  If we can get folks to agree that different societies can choose different combos, so long as they don't infringe on some subset of rights to protect other societies, then you could have different societies expand out into various pieces of the future in different ways.  (Yes, I understand that's a big if, but it reduces the urgency/crux nature of value agreement). 

2jbash
Societies aren't the issue; they're mindless aggregates that don't experience anything and don't actually even have desires in anything like the way a human, or even an animal or an AI, has desires. Individuals are the issue. Do individuals get to choose which of these societies they live in?
4Noosphere89
I think the if condition is either relying on an impossibility as presented, or requires you to exclude some human values, at which point you should at least admit that what values you choose to retain is a political decision, based on your own values.
2sloonz
I’m not missing that strategy at all. It’s an almost certainty that any solution will have to involve something like that, barring some extremely strong commitment to Unity which by itself will destroy a lot of Values. But there are some pretty fundamental values that some people (even/especially here) care a lot about, like negative utilitarianism ("minimize suffering"), which are flatly incompatible with simple implementations of that solution. Negative utilitarians care very much about the total suffering in the universe, and their calculus does not stop at the boundaries of "different societies". And if you say "screw them", well, what about the guy who basically goes "let’s create the baby-eaters society"? If you recoil at that, it means there’s at least a bit of negative utilitarianism in you. Which is normal, don’t worry; it’s a pretty common human value, even in people who don’t describe themselves as "negative utilitarians". Now you can recognize the problem, which is that every individual will have a different boundary in the Independence-Freedom-Diversity vs Negative-Utilitarianism tradeoff (which I do not think is the only tradeoff/conflict, but clearly one of the biggest ones, if not THE biggest one, if you set aside transhumanism). And if you double down on the "screw them" solution? Well, you end up exactly in what I described with "even with perfect play, you are going to lose some Human Values". For it is a non-negligible chunk of Human Values.

Note that the production function of the 10x really matters.  If it's "yeah, we get to net-10x if we have all our staff working alongside it," it's much more detectable than, "well, if we only let like 5 carefully-vetted staff in a SCIF know about it, we only get to 8.5x speedup".  

(It's hard to prove that the results are from the speedup instead of just, like, "One day, Dario woke up from a dream with The Next Architecture in his head")

Basic clarifying question: does this imply, under the hood, some sort of diminishing returns curve, such that the lab pays for that labor until it reaches a net 10x faster improvement, but can't squeeze out much more?

And do you expect that's a roughly consistent multiplicative factor, independent of lab size? (I mean, I'm not sure lab size actually matters that much, to be fair, it seems that Anthropic keeps pace with OpenAI despite being smaller-ish) 

5ryan_greenblatt
Yeah, for it to reach exactly 10x as good, the situation would presumably be that this was the optimum point given diminishing returns to spending more on AI inference compute. (It might be the returns curve looks very punishing. For instance, many people get a relatively large amount of value from extremely cheap queries to 3.5 Sonnet on claude.ai and the inference cost of this is very small, but greatly increasing the cost (e.g. o1-pro) often isn't any better because 3.5 Sonnet already gave an almost perfect answer.) I don't have a strong view about AI acceleration being a roughly constant multiplicative factor independent of the number of employees. Uplift just feels like a reasonably simple operationalization.
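As a toy illustration of that diminishing-returns intuition (the curve shape and every number below are assumptions of mine, not anything from Ryan's model): if the research speedup grows roughly logarithmically with the fraction of the budget spent on AI inference, the gains are steep at first and nearly flat later, so the equilibrium uplift settles at some finite multiplier like 10x.

```python
import math


def speedup(spend_fraction: float) -> float:
    """Assumed diminishing-returns curve: research speedup as a function of the
    fraction of the research budget spent on AI inference (1.0 = no uplift)."""
    return 1.0 + 9.0 * math.log1p(20 * spend_fraction) / math.log1p(20)


for frac in (0.01, 0.05, 0.10, 0.25, 0.50, 1.00):
    print(f"spend {frac:>4.0%} of budget -> ~{speedup(frac):.1f}x")
# Output climbs quickly at first (~1.5x at 1% spend) and flattens near 10x,
# so past some point extra inference compute buys almost no extra speedup.
```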

For the record: signed up for a monthly donation starting in Jan 2025.  It's smaller than I'd like given some financial conservatism until I fill out my taxes, may revisit it later.

Everyone who's telling you there aren't spoilers in here is well-meaning, but wrong.  But to justify why I'm saying that is also spoilery, so to some degree you have to take this on faith.

(Rot13'd for those curious about my justification: Bar bs gur znwbe cbvagf bs gur jubyr svp vf gung crbcyr pna, vs fhssvpvragyl zbgvingrq, vasre sne zber sebz n srj vfbyngrq ovgf bs vasbezngvba guna lbh jbhyq anviryl cerqvpg. Vs lbh ner gryyvat Ryv gung gurfr ner abg fcbvyref V cbyvgryl fhttrfg gung V cerqvpg Nfzbqvn naq Xbein naq Pnevffn jbhyq fnl lbh ner jebat.)

davekasten3917

Opportunities that I'm pretty sure are good moves for Anthropic generally: 

  1. Open an office literally in Washington, DC, that does the same work that any other Anthropic office does (i.e., NOT purely focused on policy/lobbying, though I'm sure you'd have some folks there who do that).  If you think you're plausibly going to need to convince policymakers on critical safety issues, having nonzero numbers of your staff that are definitively not lobbyists being drinking or climbing gym buddies that get called on the "My boss needs an opinion on this bi
... (read more)

FWIW re: the Dario 2025 comment, Anthropic very recently posted a few job openings for recruiters focused on policy and comms specifically, which I assume is a leading indicator for hiring. One plausible rationale there is that someone on the executive team smashed the "we need more people working on this, make it happen" button.

In an ideal world (perhaps not reasonable given your scale), you would have some sort of permissions and logging against some sensitive types of queries on DM metadata.  (E.g., perhaps you would let any Lighthaven team member see on the dashboard the aggregate "rate of DMs from accounts <1 month in age compared to historic baseline" number, but "how many DMs has Bob (an account over 90 days old) sent to Alice" would require more guardrails.)

Edit: to be clear, I am comfortable with you doing this without such logging at your current scale and think it is reasonable to do so.
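A rough sketch of the kind of tiered check I have in mind (all role, query, and function names are hypothetical, and this is not a claim about how the actual LessWrong stack works): aggregate queries pass for any team member, user-identifying queries require an elevated role, and every attempt is written to an audit log either way.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("dm_metadata_audit")

# Query kinds that identify specific users, as opposed to aggregates.
SENSITIVE_QUERY_KINDS = {"per_user_dm_counts", "dm_pair_lookup"}


@dataclass
class Staffer:
    name: str
    role: str  # "team" or "elevated"


def run_metadata_query(staffer: Staffer, kind: str, params: dict):
    """Gate user-identifying DM-metadata queries behind an elevated role; log every attempt."""
    if kind in SENSITIVE_QUERY_KINDS and staffer.role != "elevated":
        audit_log.warning("DENIED %s: %s %s", staffer.name, kind, params)
        raise PermissionError(f"{kind} requires elevated access")
    audit_log.info("ALLOWED %s: %s %s", staffer.name, kind, params)
    return execute(kind, params)


def execute(kind: str, params: dict):
    # Stand-in for the real aggregation/query engine.
    return {"kind": kind, "params": params}


# Any team member can pull the aggregate rate; a per-pair lookup from a
# non-elevated account raises PermissionError and is logged for review.
run_metadata_query(Staffer("alice", "team"),
                   "new_account_dm_rate_vs_baseline", {"account_age_days": 30})
```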

7Karl Krueger
In a former job where I had access to logs containing private user data, one of the rules was that my queries were all recorded and could be reviewed. Some of them were automatically visible to anyone else with the same or higher level of access, so if I were doing something blatantly bad with user data, my colleagues would have a chance of noticing.

I have a few weeks off coming up shortly, and I'm planning on spending some of it monkeying around with AI and code stuff.  I can think of two obvious tacks: 1. Go do some fundamentals learning on technical stuff I don't have hands-on technical experience with, or 2. go build on new fun stuff.

Does anyone have particular lists of learning topics / syllabi / similar things like that that would be a good fit for a "fairly familiar with the broad policy/technical space, but his largest shipped chunk of code is a few hundred lines of Python" person like me? 

3Joseph Miller
The ARENA curriculum is very good.

Note also that this work isn't just papers; e.g., as a matter of public record MIRI has submitted formal comments to regulators to inform draft regulation based on this work.  

(For those less familiar, yes, such comments are indeed actually weirdly impactful in the American regulatory system).

In a hypothetical, bad future where we have to do VaccinateCA 2.0 against e.g. bird flu, I personally wonder if "aggressively help people source air filters" would be a pre-vaccine-distribution-time step we would consider.  (Not canon!  Might be very wrong! Just idle musing)

Also, I would generally volunteer to help with selling Lighthaven as an event venue to boring consultant things that will give you piles of money, and IIRC Patrick Ward is interested in this as well, so please let us know how we can help. 

8habryka
That sounds great! Let's definitely chat about that. I'll reach out as soon as fundraising hustle has calmed down a bit.

I am excited for this on the grounds of "we deserve to have nice things," though for boring financial planning reasons I am not sure whether I will donate additional funds prior to calendar year end or in calendar year 2025.

(Note that I made a similar statement in the past and then donated $100 to Lighthaven very shortly thereafter, so, like, don't attempt to reverse-engineer my financial status from this or whatever.)

3davekasten
For the record: signed up for a monthly donation starting in Jan 2025.  It's smaller than I'd like given some financial conservatism until I fill out my taxes, may revisit it later.

Also, I would generally volunteer to help with selling Lighthaven as an event venue to boring consultant things that will give you piles of money, and IIRC Patrick Ward is interested in this as well, so please let us know how we can help. 

I think I'm also learning that people are way more interested in this detail than I expected! 

I debated changing it to "203X" when posting to avoid this becoming the focus of the discussion but figured, "eh, keep it as I actually wrote it in the workshop" for good epistemic hygiene.  

Oh, it very possibly is the wrongest part of the piece!  I put it in the original workshop draft as I was running out of time and wanted to provoke debate.

A brief gesture at a sketch of the intuition:  imagine a different, crueler world, where there were orders of magnitude more nation-states, but at the start only a few nuclear powers, like in our world, with a 1950s-level tech base.  If the few nuclear powers want to keep control, they'll have to divert huge chunks of their breeder reactors' output to pre-emptively nuking any site in the m... (read more)

Interesting! You should definitely think more about this and write it up sometime, either you'll change your mind about timelines till superintelligence or you'll have found an interesting novel argument that may change other people's minds (such as mine).

As you know, I have huge respect for USG natsec folks.  But there are (at least!) two flavors of them: 1) the cautious, measure-twice-cut-once sort that have carefully managed deterrence for decades, and 2) the "fuck you, I'm doing Iran-Contra" folks.  Which do you expect will end up in control of such a program?  It's not immediately clear to me which ones would.

4[anonymous]
@davekasten I know you posed this question to us, but I'll throw it back on you :) what's your best-guess answer? Or perhaps put differently: What do you think are the factors that typically influence whether the cautious folks or the non-cautious folks end up in charge? Are there any historical or recent examples of these camps fighting for power over an important operation?

I think this is a (c) leaning (b), especially given that we're doing it in public.  Remember, the Manhattan Project was a highly-classified effort and we know it by an innocuous name given to it to avoid attention.  

Saying publicly, "yo, China, we view this as an all-costs priority, hbu" is a great way to trigger a race with China...

But if it turned out that we knew from ironclad intel with perfect sourcing that China was already racing (I don't expect this to be the case), then I would lean back more towards (c).  

I'll be in Berkeley Weds evening through next Monday, would love to chat with, well, basically anyone who wants to chat. (I'll be at The Curve Fri-Sun, so if you're already gonna be there, come find me there between the raindrops!)

Thanks, looking forward to it!  Please do let us folks who worked on A Narrow Path (especially me, @Tolga , and @Andrea_Miotti ) know if we can be helpful in bouncing around ideas as you work on the treaty proposal!

2otto.barten
Thanks for the offer, we'll do that!

Is there a longer-form version with draft treaty language (even an outline)? I'd be curious to read it.

1otto.barten
Not publicly, yet. We're working on a paper providing more details about the conditional AI safety treaty. We'll probably also write a post about it on lesswrong when that's ready.

I think people opposing this have a belief that the counterfactual is "USG doesn't have LLMs" instead of "USG spins up its own LLM development effort using the NSA's no-doubt-substantial GPU clusters". 

Needless to say, I think the latter is far more likely.
 

1uhds
NSA building it is arguably better because at least they won't sell it to countries like Saudi Arabia, and they have a better ability to prevent people from quitting or diffusing knowledge and code to companies outside. Also, most people in SF agree working for the NSA is morally grey at best, and Anthropic won't be telling everyone this is morally okay.

I think the thing that you're not considering is that when tunnels are more prevalent and more densely packed, the incentives to use the defensive strategy of "dig a tunnel, then set off a very big bomb in it that collapses many tunnels" get far higher.  It wouldn't always be infantry combat; it would often be a subterranean equivalent of indirect fires.

3Daniel Kokotajlo
Thanks, I hadn't considered that. So as per my argument, there's some threshold of density above which it's easier to attack underground; as per your argument, there's some threshold of density where 'indirect fires' of large tunnel-destroying bombs become practical. Unclear which threshold comes first, but I'd guess it's the first. 

Ok, so Anthropic's new policy post (explicitly NOT linkposting it properly since I assume @Zac Hatfield-Dodds or @Evan Hubinger or someone else from Anthropic will, and figure the main convo should happen there, and don't want to incentivize fragmenting of conversation) seems to have a very obvious implication.

Unrelated, I just slammed a big AGI-by-2028 order on Manifold Markets.
 

Yup.  The fact that the profession that writes the news sees "I should resign in protest" as their own responsibility in this circumstance really reveals something. 

At LessOnline, there was a big discussion one night around the picnic tables with @Eliezer_Yudkowsky, @habryka, and some interlocutors from the frontier labs (you'll momentarily see why I'm being vague on the latter names). 

One question was: "does DC actually listen to whistleblowers?" and I contributed that, in fact, DC does indeed have a script for this, and resigning in protest is a key part of it, especially ever since the Nixon years.

Here is a usefully publicly-shareable anecdote on how strongly this norm is embedded in national security decisi... (read more)

gwern*146

Also of relevance is the wave of resignations from the DC newspaper The Washington Post the past few days over Jeff Bezos suddenly exerting control.

Does "highest status" here mean highest expertise in a domain generally agreed by people in that domain, and/or education level, and/or privileged schools, and/or from more economically powerful countries etc?

I mean, functionally all of those things.  (Well, minus the country dynamic.  Everyone at this event I talked to was US, UK, or Canadian, which is all sorta one team for purposes of status dynamics at that event)

I was being intentionally broad, here.  I am probably less interested for purposes of this particular post only in the question of "who controls the future" swerves and more about "what else would interested, agentic actors do" questions. 

It is not at all clear to me that OpenPhil is the only org who feels this way -- I can think of several non-EA-ish charities that if they genuinely 100% believed "none of the people you care for will die of the evils you fight if you can just keep them alive for the next 90 days" would plausibly do some interestingly agentic stuff.  

Oh, to be clear I'm not sure this is at all actually likely, but I was curious if anyone had explored the possibility conditional on it being likely.

Basic Q: has anyone written much down about what sorts of endgame strategies you'd see just-before-ASI from the perspective of "it's about to go well, and we want to maximize the benefits of it" ?

For example: if we saw OpenPhil suddenly make a massive push to just mitigate mortality at the cost of literally every other development goal they have, I might suspect that they suspect that we're about to all be immortal under ASI, and they're trying to get as many people possible to that future... 

5Seth Herd
Endgame strategies from who? A lot of powerful people would focus on being the ones to control it when it happens, so they'd control the future - and not be subject to some else's control of the future. OpenPhil is about the only org that would think first of the public benefit and not the dangers of other humans controlling it. And not a terribly powerful org, particularly relative to governments.
7Linch
My guess is that we wouldn't actually know with high confidence before (and likely even some time after) things-will-definitely-be-fine. E.g. 3 months after safe ASI people might still be publishing their alignment takes.  

yup, as @sanxiyn says, this already exists.  Their example is, AIUI, a high-end research one; an actually-on-your-laptop-right-now, though admittedly narrower, example is address space layout randomization.   
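For anyone who hasn't poked at it: a quick way to see ASLR on a Linux laptop is to resolve the same libc symbol in two fresh processes and compare the addresses. A minimal sketch, assuming CPython and glibc, and that ASLR hasn't been disabled:

```python
import subprocess
import sys

# Child snippet: resolve libc's malloc in this process and print its address.
CHILD = (
    "import ctypes;"
    "libc = ctypes.CDLL(None);"
    "print(hex(ctypes.cast(libc.malloc, ctypes.c_void_p).value))"
)

if __name__ == "__main__":
    # Run the same snippet in two fresh processes. With ASLR enabled, libc is
    # mapped at a different base address each time, so the addresses differ.
    addrs = [
        subprocess.run([sys.executable, "-c", CHILD],
                       capture_output=True, text=True).stdout.strip()
        for _ in range(2)
    ]
    print("run 1:", addrs[0])
    print("run 2:", addrs[1])
    print("ASLR looks", "enabled" if addrs[0] != addrs[1] else "disabled (or unclear)")
```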

Wild speculation: they also have a sort of we're-watching-but-unsure provision about cyber operations capability in their most recent RSP update.  In it, they say in part that "it is also possible that by the time these capabilities are reached, there will be evidence that such a standard is not necessary (for example, because of the potential use of similar capabilities for defensive purposes)."  Perhaps they're thinking that automated vulnerability discovery is at least plausibly on-net-defensive-balance-favorable*, and so they aren't sure it s... (read more)

It seems like the current meta is to write a big essay outlining your opinions about AI (see, e.g., Gladstone Report, Situational Awareness, various essays recently by Sam Altman and Dario Amodei, even the A Narrow Path report I co-authored).  

Why do we think this is the case?
I can imagine at least 3 hypotheses:
1.  Just path-dependence; someone did it, it went well, others imitated

2. Essays are High Status Serious Writing, and people want to obtain that trophy for their ideas

3. This is a return to the true original meaning of an essay, under Mont... (read more)

Seth Herd142

I think those are the meta because they have just enough space to not only give opinions but to mention reasons for those opinions and expertise/background to support the many unstated judgment calls.

Note that the essays by Altman and Amodei are popular beyond the others because their positions are central and because they have not only demonstrable backgrounds in AI but lots of name recognition (we're mostly assuming Altman has bothered learning a lot about how Transformers work even if we don't like him). And that the Gladstone report got itself commissioned... (read more)

gwern*1910

Well, what's the alternative? If you think there is something weird enough and suboptimal about essay formats that you are reaching for 'random chance' or 'monkey see monkey do' level explanations, that implies you think there is some much superior format they ought to be using instead. But I can't see what. I think it might be helpful to try to make the case for doing these things via some of the alternatives:

  1. a peer-reviewed Nature paper which would be published 2 years from now, maybe, behind a paywall
  2. a published book, published 3 years from starting
... (read more)
davekasten4413

Okay, I spent much more time with the Anthropic RSP revisions today.  Overall, I think it has two big thematic shifts for me: 

1.  It's way more "professionally paranoid," but needs even more so on non-cyber risks.  A good start, but needs more on being able to stop human intelligence (i.e., good old fashioned spies)

2.  It really has an aggressively strong vibe of "we are actually using this policy, and We Have Many Line Edits As A Result."  You may not think that RSPs are sufficient -- I'm not sure I do, necessarily -- but I a... (read more)

It's a small but positive sign that Anthropic sees taking 3 days beyond their RSP's specified timeframe to conduct a process without a formal exception as an issue.  Signals that at least some members of the team there are extremely attuned to normalization of deviance concerns.

I once saw a video on Instagram of a psychiatrist recommending to other psychiatrists that they purchase ear scopes to check out their patients' ears, because:
1.  Apparently it is very common for folks with severe mental health issues to imagine that there is something in their ear (e.g., a bug, a listening device)
2.  Doctors usually just say "you are wrong, there's nothing in your ear" without looking
3.  This destroys trust, so he started doing cursory checks with an ear scope
4.  Far more often than he expected (I forget exactly, but s... (read more)

3trevor
This reminds me of dath ilan's hallucination diagnosis from page 38 of Yudkowsky and Alicorn's glowfic But Hurting People Is Wrong. It's pretty far from meeting dath ilan's standard though; in fact an x-ray would be more than sufficient as anyone capable of putting something in someone's ear would obviously vastly prefer to place it somewhere harder to check, whereas nobody would be capable of defeating an x-ray machine as metal parts are unavoidable.  This concern pops up in books on the Cold War (employees at every org and every company regularly suffer from mental illnesses at somewhere around their base rates, but things get complicated at intelligence agencies where paranoid/creative/adversarial people are rewarded and even influence R&D funding) and an x-ray machine cleanly resolved the matter every time.

Looking forward to it!  (Should rules permit, we're also happy to discuss privately at an earlier date)

2Nathan Helm-Burger
My essay is here: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy  And a further discussion about the primary weakness I see in your plan (that AI algorithmic improvement progress is not easily blocked by regulating and monitoring large data centers) is discussed in my post here: https://www.lesswrong.com/posts/xoMqPzBZ9juEjKGHL/proactive-if-then-safety-cases 

Has anyone thought about the things that governments are uniquely good at when it comes to evaluating models? 

Here are at least 3 things I think they have as benefits:
1.  Just an independent 3rd-party perspective generally

2. The ability to draw insights across multiple labs' efforts, and identify patterns that others might not be able to 

3. The ability to draw on classified threat intelligence to inform its research (e.g., Country X is using model Y for bad behavior Z) and to test the model for classified capabilities (bright line example: "can you design an accurate classified nuclear explosive lensing arrangement").

Are there others that come to mind? 

I think this can be true, but I don't think it needs to be true:

"I expect that a lot of regulation about what you can and can’t do stops being enforceable once the development is happening in the context of the government performing it."

I suspect that if the government is running the at-all-costs-top-national-priority Project, you will see some regulations stop being enforceable.  However, we also live in a world where you can easily find many instances of government officials complaining in their memoirs that laws and regulations prevented them from ... (read more)

2Nathan Helm-Burger
Yes, this is a good point. We need a more granular model than a binary 'all the same laws will apply to high priority national defense projects as apply to tech companies' versus 'no laws at all will apply'.