
John_Maxwell_IV comments on In order to greatly reduce X-risk, design self-replicating spacecraft without AGI - Less Wrong Discussion

1 Post author: chaosmage 20 September 2014 08:25PM


Comments (36)


Comment author: John_Maxwell_IV 21 September 2014 06:56:41AM 15 points

Therefore, certainty that we can do that would eliminate much existential risk.

It seems to me that you are making a map-territory confusion here. Existential risks are in the territory. Our estimates of existential risks are our map. If we were to build a self-replicating spacecraft, our estimates of existential risks would go down some. But the risks themselves would be unaffected.

Additionally, if we were to build a self-replicating spacecraft and become less worried about existential risks, that decreased worry might make the risks greater, because people would become less cautious. And if the filter is early, we have no anthropic evidence regarding future existential risks: given an early filter, the sample size of civilizations that reach our ability level is small, and you can't draw strong inferences from a small sample. So it's possible that people would become less cautious without justification.

Comment author: Algernoq 22 September 2014 02:19:49AM 2 points

If we were to build a self-replicating spacecraft, our estimates of existential risks would go down some. But the risks themselves would be unaffected.

To take an extreme example, building a self-replicating spacecraft, copying it a few million times, and sending people to other galaxies would, if successful, reduce existential risks. I agree that merely making theoretical arguments constitutes making maps, not changing the territory. I also tentatively agree that just building a prototype spacecraft, and not actually using it, probably won't reduce existential risk.

Comment author: Eniac 07 December 2014 04:23:16AM *  -1 points

It seems to me that you are making a map-territory confusion here. Existential risks are in the territory.

If I understand the reasoning correctly, it is that we only know the map; we do not know the territory. The territory could be any of many different kinds, as long as it is consistent with the map. Adding SRS (self-replicating spacecraft) to the map rules out some of the less safe territories, i.e. reduces our existential risk. It is a Bayesian-type argument.
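The Bayesian argument above can be sketched numerically. All of the probabilities below are made-up illustrative numbers, not anything from the thread: the point is only that if a working SRS is more likely to exist in "safe" territories than in "unsafe" ones, then observing one shifts our credence away from the unsafe hypotheses.

```python
# Illustrative Bayesian update: observing a working self-replicating
# spacecraft (SRS) rules out probability mass on "unsafe territory"
# hypotheses. All numbers are assumptions chosen for illustration.

p_unsafe = 0.5            # prior: the territory is one where extinction is likely
p_srs_given_unsafe = 0.2  # unsafe territories rarely permit a deployed SRS
p_srs_given_safe = 0.9    # safe territories usually do

# Total probability of observing a working SRS
p_srs = p_srs_given_unsafe * p_unsafe + p_srs_given_safe * (1 - p_unsafe)

# Bayes' rule: P(unsafe | SRS) = P(SRS | unsafe) * P(unsafe) / P(SRS)
p_unsafe_given_srs = p_srs_given_unsafe * p_unsafe / p_srs

print(round(p_unsafe_given_srs, 3))  # 0.182, down from the 0.5 prior
```

Under these assumed numbers the credence in an unsafe territory drops from 0.5 to about 0.18, which is the sense in which adding SRS to the map "reduces our existential risk" (our expected risk), even though the territory itself is unchanged.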