That would be closer to the Nirvana fallacy, applied to activism: "People do something good. You criticize them for not doing something better instead." This argument happens all the time. See also The Copenhagen Interpretation of Ethics.
There is a standard solution S0 that almost everyone chooses. Someone chooses a better solution S1. They get attacked for not choosing an even better solution S2.
The harmful part is that choosing S1 over S2 is socially punished, while choosing S0 over both S1 and S2 flies under the radar. If the reason for choosing S1 over S2 was that S2 was too complicated or too expensive, we effectively teach people to choose S0 over S1 to avoid the punishment in the future.
(Specifically: S2 = reporting appropriately on both the Lebanon and Paris attacks; S1 = focusing on Paris; S0 = ignoring both.)
With Sam Altman (president of Y Combinator) talking so much about AI safety and risk over the last 2-3 months, I was so sure that he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.
Although on second thought, they're aiming at different goals: while MIRI focuses on safety once strong AI arrives, OpenAI is trying to actually speed up research toward strong AI.