Anthony Bailey

> It's plausible even the big companies are judgment-proof (e.g. if billions of people die or the human species goes extinct) and this might need to be addressed by other forms of regulation

...or by a further twist on liability.

Gabriel Weil explored such an idea in https://axrp.net/episode/2024/04/17/episode-28-tort-law-for-ai-risk-gabriel-weil.html

The core is punitive damages for expected harms rather than only those that manifested. When a non-fatal warning shot causes harm, then as well as suing for the damages that actually occurred, one assesses how much worse an outcome was plausible and foreseeable given the circumstances, and awards damages in proportion to the risk taken. We escaped what looks like a 10% chance that thousands died? Pay 10% of those costs.
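
For concreteness, here's a minimal sketch of that arithmetic. The probability and cost figures are hypothetical, and `risk_scaled_damages` is just an illustrative name, not terminology from Weil's proposal:

```python
def risk_scaled_damages(p_worse_outcome: float, worse_outcome_cost: float) -> float:
    """Punitive damages scaled to the risk taken: the probability of the
    plausible, foreseeable worse outcome times the cost that outcome
    would have carried, had it manifested."""
    return p_worse_outcome * worse_outcome_cost

# Hypothetical warning shot: a 10% chance that thousands died,
# with the counterfactual harm costed at $10B.
print(risk_scaled_damages(0.10, 10_000_000_000))  # 1000000000.0, i.e. $1B
```

The point of the design is incentive alignment: the risk-taker pays in expectation even when the worst outcome doesn't materialize, so taking the gamble stops being free.
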

> What We’re Not Doing ... We are not investing in grass-roots advocacy, protests, demonstrations, and so on. We don’t think it plays to our strengths, and we are encouraged that others are making progress in this area.

I'm not speaking for the movement, but as a Pause AI regular this makes sense to me. Perhaps we can interact more, though; in particular, I imagine we might collaborate on testing how effectively content changes minds.

> Execution ... The main thing holding us back from realizing this vision is staffing. ... We hope to hire more writers ... and someone to specialize in social media and multimedia. Hiring for these roles is hard because [for the] first few key hires we felt it was important to check all the boxes.

I get the need for a high bar, but my guess is MIRI could try to grow ten times faster than the post indicates. More dakka: more and better content. If the community could provide the necessary funding and a stream of quality candidates, would you be open to dialing the effort up like that?