bgold

Sequences

Forecasting Infrastructure

Wiki Contributions

Comments

bgold · 10

Ah gotcha, yes let's do my $1k against your $10k.

bgold · 10

Given your rationale, I'm on board with "3 or more consistent physical instances of the lock have been manufactured."

Let's 'lock' it in.

bgold · 10

@Raemon works for me, and I agree with the other conditions.

bgold · 10

This seems mostly good to me; thank you for the proposals (and sorry for my delayed response, this slipped my mind).

  • OR less than three consistent physical instances have been manufactured. (e.g. a total of three including prototypes or other designs doesn't count)

Why this condition? It doesn't seem relevant to the core contention, and if someone prototyped a single lock using a GS AI approach but didn't figure out how to manufacture it at scale, I'd still consider it to have been an important experiment.

Besides that, I'd agree to the above conditions!

bgold · 50
  • (8) won't be attempted, or will fail at some combination of design, manufacture, or just-being-pickable. This is a great proposal and a beautifully compact crux for the overall approach.

I agree with you that this feels like a 'compact crux' for many parts of the agenda. I'd like to take your bet; let me reflect on whether there are any additional operationalizations or conditions.

  • However, I believe that the path there is to extend and complement current techniques, including empirical and experimental approaches alongside formal verification - whatever actually works in practice.

FWIW, in Towards Guaranteed Safe AI we endorse this: "Moreover, while we have argued for the need for verifiable quantitative safety guarantees, it is important to note that GS AI may not be the only route to achieving such guarantees. An alternative approach might be to extract interpretable policies from black-box algorithms via automated mechanistic interpretability... it is ultimately an empirical question whether it is easier to create interpretable world models or interpretable policies in a given domain of operation."

bgold · 82

I agree with this; I'd like to see AI Safety scale with new projects. A few ideas I've been mulling:

- A 'festival week' bringing entrepreneur types and AI safety types together to cowork from the same place, along with a few talks and a lot of mixers.
- An incubator/accelerator program at the tail end of a funding round, with fiscal sponsorship and some amount of operational support.
- More targeted recruitment for specific projects to advance important parts of a research agenda.


It's often unclear to me whether new projects should actually be new organizations; making it easier to spin up new projects that can then either join existing orgs or grow into orgs themselves seems like a promising direction.

bgold · 71

First off, thank you for writing this; great explanation.

  • Do you anticipate acceleration risks from developing the formal models through an open, multilateral process? Presumably others could use the models to train and advance the capabilities of their own RL agents. Or is the expectation that regulation would accompany this, such that only the consortium could use the world model?
  • Would the simulations be exclusively for 'hard science' domains (e.g. chemistry, biology), or would simulations of human behavior, economics, and politics also be needed? My expectation is that it would need the latter, but I imagine simulating hundreds of millions of intelligent agents would dramatically (prohibitively?) increase the complexity and computational costs.

bgold · 30

This seems like an important crux to me, because I don't think greatly slowing AI in the US would require new federal laws. I think many of the actions I listed could be taken by government agencies that over-interpret their existing mandates, given the right political and social climate. For instance, the eviction moratorium during COVID obviously should have required congressional action, but was done by fiat through an over-interpretation of authority by an executive branch agency.

What they do or do not do seems mostly dictated by that socio-political climate, and by the courts, which means fewer veto points for industry.

bgold · 10

I agree that competition with China is a plausible reason regulation won't happen; that will certainly be one of the arguments advanced by industry and NatSec as to why it should not be throttled. However, I'm not sure it will be stronger than the protectionist impulses, and currently I don't think it will be. Possibly it will exacerbate the "centralization of AI" dynamic that I listed in the 'licensing' bullet point, where large existing players receive money and a de facto license to operate in certain areas and then avoid others (as memeticimagery points out). So, for instance, we'd see more military-style research, and GooAmBookSoft would tacitly agree to not deploy AI that would replace lawyers.


To your point on big tech's political influence: they have, in some absolute sense, a lot of political power, but relative to peer industries they are much weaker in political influence. I think they've benefited a lot from the R-D stalemate in DC; I'm positing that this will go around/through that stalemate, and I don't think they currently have the soft power to stop it.

bgold · 10

Hah, yes - seeing that great post from johnswentworth inspired me to review my own thinking on RadVac. Ultimately I placed a lower estimate on RadVac being effective - or at least effective enough to get me to change my quarantine behavior - such that the price wasn't worth it, but I think I get a rationality demerit for not investing more in the collaborative model-building (and collaborative purchasing) part of the process.
