
Mark_Friedenbach comments on In order to greatly reduce X-risk, design self-replicating spacecraft without AGI - Less Wrong Discussion

1 Post author: chaosmage 20 September 2014 08:25PM


Comments (36)


Comment author: [deleted] 20 September 2014 09:13:08PM 6 points [-]

You do not update anthropic reasoning based on self-generated evidence. That's bad logic. Making a space-faring self-replicating machine gives you no new information.

It is also incredibly dangerous. A genuinely robust self-replicating machine is basically an AGI-complete problem: you can't solve one without the other. What you are making is a paperclip maximizer, just with blueprints of itself instead of paperclips.

Comment author: Algernoq 21 September 2014 06:32:37AM 3 points [-]

Self-replication need not be autonomous, or use AGI. Factories run by humans self-replicate but are not threatening. Plants self-replicate but are not threatening. An AGI might increase performance but is not required or desirable. Add in error-checking to prevent evolution if that's a concern.

Comment author: [deleted] 21 September 2014 08:20:22AM *  2 points [-]

Building a self-replicating lunar mining & factory complex is one thing. Building a self-replicating machine that is able to operate effectively in any situation it encounters while expanding into the cosmos is another story entirely. Without knowing the environment in which it will operate, it'll have to be able to adapt to circumstances to achieve its replication goal in whatever situation it finds itself in. That's the definition of an AGI.

Comment author: Eniac 07 December 2014 04:39:27AM 1 point [-]

Bacteria perform quite well at expanding into an environment, and they are not intelligent.

Comment author: [deleted] 10 December 2014 03:48:06AM 1 point [-]

I would argue they are, for some level of micro-intelligence, but that's entirely beside the point. A bacterium doesn't know how to create tools, self-modify, or purposefully engineer its environment to make things more survivable.

Comment author: Kawoomba 21 September 2014 08:06:03AM 1 point [-]

You do not update anthropic reasoning based on self-generated evidence. That's bad logic.

I disagree. You don't disregard evidence because it is "self-generated". Can you explain your reasoning?

Comment author: [deleted] 21 September 2014 08:36:33AM 1 point [-]

In this case: can we build self-replicating machines? Yes. Is there any specific reason to think that the great filter might lie between now and deployment of the machines? No, because we've already had the capability for 35+ years, just not the political will or economic need. We could have built them already in an alternate history. So since we already know the outcome (the universe permits self-replicating space-faring machines, and we have had the capability to build them for sufficient time), we can update on that evidence now. Actually building the machines therefore provides zero new evidence.

In general: anthropic reasoning involves assuming that we are randomly selected from the space of all possible universes, according to some (typically unspecified) prior probability. If you change the state of the universe, that changed state is no longer a random selection from that prior. It's no longer anthropic reasoning.
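The "zero new evidence" claim can be sketched as an ordinary Bayesian update. The numbers below are made up purely for illustration: once you have already conditioned on "such machines are possible and within our reach," the further observation "we actually deployed one" has the same likelihood whether or not the great filter lies ahead, so it moves the posterior not at all.

```python
def bayes_update(prior, likelihood_if_h, likelihood_if_not_h):
    """Return P(H | E) given P(H), P(E | H), and P(E | ~H)."""
    joint_h = prior * likelihood_if_h
    joint_not_h = (1 - prior) * likelihood_if_not_h
    return joint_h / (joint_h + joint_not_h)

# H: "the great filter lies ahead of us" (hypothetical prior of 0.5).
prior = 0.5

# E1: "self-replicating spacecraft are possible and buildable by us".
# Suppose (arbitrarily) this is twice as likely if the filter is NOT ahead.
after_e1 = bayes_update(prior, 0.3, 0.6)

# E2: "we actually deploy one". Given E1 is already known, deployment is
# equally likely under H and ~H, so the update is a no-op.
after_e2 = bayes_update(after_e1, 0.9, 0.9)

assert abs(after_e1 - after_e2) < 1e-12  # no change from building it
```

Whatever likelihoods one assigns, the structural point survives: evidence screened off by what you already know cannot shift the posterior, which is the sense in which building the machines is "self-generated" and uninformative.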

Comment author: chaosmage 21 September 2014 07:53:20AM 1 point [-]

A paperclip maximizer decides for itself how to maximize paperclips; it can ignore human instructions. This SRS network can't: It receives instructions and updates and deterministically follows them. Hence the question around secure communication between SRS and colonies: a paperclip maximizer doesn't need that.

What is your distinction between "self-generated" evidence and evidence I can update anthropic reasoning on?

Comment author: Algernoq 21 September 2014 06:44:43AM 1 point [-]

Would using the spacefaring machine give new evidence? Presumably X-risk becomes lower as humanity disperses.