loqi comments on How to get that Friendly Singularity: a minority view - Less Wrong
No, sorry, the above comment was just my attempt to explain my objection as unambiguously as possible.
Yes, but your "N+1" hides an important detail: our effective contribution to existential risk diminishes as N grows, while our contribution to safer outcomes stays constant or even grows (if our work has a positive impact on someone else's "winning" project).
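To make the first half of that concrete, here's a toy model (my simplification, not anything stated quantitatively in the thread: assume each of N singleton attempts independently carries the same probability r of producing an unfriendly outcome):

\[
P_{\text{bad}}(N) = 1 - (1 - r)^N,
\qquad
\Delta(N) = P_{\text{bad}}(N+1) - P_{\text{bad}}(N) = r\,(1 - r)^N .
\]

The marginal risk \(\Delta(N)\) added by an (N+1)th project shrinks geometrically as N grows, whereas an advance that lowers r, or that raises some project's chance of winning safely, benefits all N+1 projects at once. The independence and uniform-r assumptions are doing real work here, but they're enough to show why the marginal contribution diminishes.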
Since you were making the point that attempting to build Friendly AGI contributes to existential risk, I thought it fair to factor out other actions. The two strategies you outline above are entirely independent, so they should be evaluated separately. I read you as promoting the latter strategy independently when you say:
The choice under consideration is binary: attempt a singleton or don't. Safety strategies may also be worthwhile, but I need a better reason than "they're working toward the same goal" to view them as relevant to the singleton question.