
[Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising

Post author: wallowinmaya 21 July 2016 08:22PM 9 points

The Foundational Research Institute just published a new paper: "Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention". 

It is important to consider that [AI outcomes] can go wrong to very different degrees. For value systems that place primary importance on the prevention of suffering, this aspect is crucial: the best way to avoid bad-case scenarios specifically may not be to try and get everything right. Instead, it makes sense to focus on the worst outcomes (in terms of the suffering they would contain) and on tractable methods to avert them. As others are trying to shoot for a best-case outcome (and hopefully they will succeed!), it is important that some people also take care of addressing the biggest risks. This perspective to AI safety is especially promising both because it is currently neglected and because it is easier to avoid a subset of outcomes rather than to shoot for one highly specific outcome. Finally, it is something that people with many different value systems could get behind.

Comments (5)

Comment author: Manfred 21 July 2016 09:48:54PM 13 points

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

Comment author: Wei_Dai 22 July 2016 03:49:22PM 8 points

That's funny. :) But these people actually sound remarkably sane. See here and here for example.

Comment author: The_Jaded_One 23 July 2016 12:58:00PM 7 points

Just commenting to point out that I'm having a fabulous day, and have a very painless, enjoyable life. I struggle to even understand what suffering is, to be honest, so make a note of that, any negative utilitarians who may be listening!

Comment author: [deleted] 22 July 2016 01:41:50PM 7 points

The Foundational Research Institute promotes compromise with other value systems. See their work here, here, here, and the quoted section in the OP.

Rest easy, negative utilitarians aren't coming for you.

Comment author: RomeoStevens 22 July 2016 08:34:33PM 2 points

If we get only one thing right, I think a plausible candidate is the right to exit. (If you have limited optimization power, narrow the scope of your ambition, blah blah.)