
Any 'safety' program will create a group of people who can use the tech basically with impunity (at least one poster here has claimed access to uncensored models at a major lab), and a much larger group that cannot.

This elite caste will be self-selected, and will select their successors using criteria that are basically arbitrary (for example, alignment with each other on ethical issues where there are substantial differences of opinion among humans on Earth).  

At what point in this scenario does an e/acc with authority require the NSA to provide bulk communications data from every system it is capable of targeting for AI training?  

What about when the PRC provides the totality of comms data on internal systems it is capable of monitoring (any communication system within the PRC) to DeepCent?

Quality training data, as well as compute, appears to be a limitation, so whether or not one of these happens may matter a great deal.

Also, Li Qiang is way smart. If politics and competition between nation-states become a key element here, I'd just model that as the PRC gaining an order of magnitude over where they would otherwise be at a key moment.

I think tacit knowledge is severely underrated in discussions of AGI and ASI.

In HPMOR, there is a scene near the end of the book where our hero wins the day using magic roughly equivalent to flying a drone along an extremely complicated path, full of loops, through places he cannot directly observe.

Our hero has never once in the book practiced doing this.

In theory, if I possess a drone and have a flight path the drone is capable of flying, I can pick up the controller for the first time and make it happen.

In practice, I will fail spectacularly. A lot of writing in this space assumes that with sufficient 'thinking power', success on the first attempt is assured.

1. This has been an option for decades; a fully capable LLM does not meaningfully lower the threshold for this. It's already too easy.
2. This has been an option since the 1950s. Any national medical system is capable of doing this, and Project Coast could be reproduced by nearly any nation-state.

I'm not saying it isn't a problem, I'm just saying that the LLMs don't make it worse.

I have yet to find a commercial LLM that I can't make tell me how to build a working improvised explosive (I can grade the LLMs' performance because I've worked with the USG on the issue and don't need an LLM to make evil).

Now is a dramatically better time than a year ago.  It's not even comparable.  Rewrite the cover sheet on your policy idea and ping your network.

The incoming leadership has a massive amount of flexibility, given that they're fundamentally reshaping so many things at once, but in many cases just have vague ideas rather than specific programs.  Give them specific proposals that they can align with their vague pronouncements.

Bureaucrats are finding themselves taking on responsibilities for people who were shifted out the door in a hurry, and have incoming leadership who need staff support badly.  The survivors will likely have much more leeway than they did before to stop doing things they don't want to do, and start doing things they do.

Private sector actors are confronted with government agencies that are in disarray and distracted.  Now is a great time to take action.

Uncertainty creates a lot of anxiety, so if you're generally afraid of your own shadow, you'll turtle up and hope the storm passes.  Given that so many others are doing exactly that, someone ambitious has an opportunity to shape reality around themselves to a degree which was absolutely not possible last year.  This is a great time to get stuff done, as long as you're razor-focused on the specific things you actually want.

That being said, if you haven't spent the last few years working on developing relationships with people in those groups, you might have a problem. They're probably not talking to anyone they didn't trust before all this chaos started.

The disarray within the executive branch right now has created an amazing window of opportunity. If you have a clear policy objective, you can probably find someone, somewhere to give you a fair hearing.

New officials looking to create radical departures from the previous admin's policies are one route. Career bureaucrats who have survived the cuts, and find themselves suddenly empowered because their supervisors did not survive the cuts, are another.  And finally, actors within the private sector may discover that while the laws themselves have not changed, what is effectively enforced is likely to be different (some things more restrictive, some things much less).

Good luck!

Sounds like it works well when you have a shared culture. The more you agree on norms of behavior in terms of what's appropriate for kids, how and when to discipline, how to speak to kids, etc., the better it works.  Religion probably helped with this historically.

I wrote about something similar previously: https://www.lesswrong.com/posts/Ek7M3xGAoXDdQkPZQ/terrorism-tylenol-and-dangerous-information#a58t3m6bsxDZTL8DG

I agree that 1-2 logs isn't really in the category of x-risk.  The longer the lead time on the evil plan (mixing chemicals, growing things, etc.), the more time security forces have to identify and neutralize the threat.  So, all things being equal, it's probably better that a would-be terrorist spends a year planning a weird chemical thing that hurts 10s of people than that someone just wakes up one morning and decides to run over 10s of people with a truck.

There's a better chance of catching the first guy, and his plan is way more expensive in terms of time, money, and access to capital like LLM time.  Sure, someone could argue about pandemic potential, but lab origin is suspected for at least one influenza outbreak, and a lot of people believe it about COVID-19.  Those weren't terrorists.

I guess that, theoretically, there may be cyberweapons that qualify as WMD, but that will be because of the systems they interact with.  It's not the cyberweapon itself; it's the nuclear reactor accepting commands that lead to core damage.

This seems incredibly reasonable, and in light of this, I'm not really sure why anyone should embrace ideas like making LLMs worse at biochemistry in the name of things like WMDP: https://www.lesswrong.com/posts/WspwSnB8HpkToxRPB/paper-ai-sandbagging-language-models-can-strategically-1

Biochem is hard enough that we need LLMs at full capacity pushing the field forward.  Is it harmful to intentionally create models that are deliberately bad at this cutting-edge and necessary science in order to maybe make it slightly more difficult for someone to reproduce Cold War-era weapons that were considered both expensive and useless at the time?

Do you think that crippling the 'WMD relevance' of LLMs is harmful, neutral, or good?

You sound really confident, can you elaborate on your direct lab experience with these weapons, as well as clearly define 'military grade' vs whatever the other thing was?

How does 'chem/bio' compare to high explosives in terms of difficulty and effect?
