MartinB comments on Non-personal preferences of never-existed people - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What is your point?
It is an open question whether we should treat nonexistent people as moral agents (like people rather than like trees), but it's an interesting idea to consider.
If we do this, we should focus on non-personal preferences rather than personal ones, because we can satisfy infinitely more preferences that way.
This contradicts the way most people reason when they treat nonexistent people as moral agents.
However, there is a problem: we need to try to figure out the preferences of nonexistent people to see what treating them as moral agents implies.
Presumably, that it is an error to take non-existing persons' preferences into account.
I was not aware that anyone actually does that.
Counterexamples:
1) All beings that act as if they were pursuing a goal of (pseudo)-self-replication are also acting as if they were taking non-existing beings' preferences into account (specifically, the preference of their future pseudo-copies to exist).
2) Beings that attempt to withhold resources from entropisation ("consumption") in anticipation of exchanging them later on terms causally influenced by the preferences of not-yet-existing beings ("speculators").
I was under the impression that you were arguing here that the goal of self-replication is adequately justified by the "clippiness" of the prospective replica - with the most important component of the property 'clippiness' being a propensity to advance Clippy's values. That is, you weren't concerned with providing utility to the replicas - you were concerned with providing utility to yourself.
My point was that the distinction between "selves" is spurious. Clippys support all processes that instantiate paperclip-maximizing, differentiating between them only by their clippy-effectiveness and the certainty of this assessment of them.
My point here is that several different utility functions can explain a given class of beings' behavior, and one such utility function places value on not-yet-existing beings -- even though the replicator may not, on self-reflection, regard this as the value it is pursuing.
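A toy sketch of that point (entirely hypothetical; the agent names, utility functions, and numbers here are illustrative assumptions, not anything from the thread): two utility functions, one purely self-interested and one that assigns terminal value to future copies, can recommend the same replication decision, so observed behavior alone cannot distinguish which value the agent is "really" pursuing.

```python
# Two candidate utility functions for a replicator (hypothetical toy model).
# A "world" is a dict with the agent's own output, its copy count, and the
# expected output per copy.

def u_self_interested(world):
    # Values only expected paperclip output, including output via copies.
    return world["own_output"] + world["copies"] * world["output_per_copy"]

def u_copy_regarding(world):
    # Additionally places terminal value (+1 here, an arbitrary weight)
    # on each future copy's mere existence.
    return world["own_output"] + world["copies"] * (
        world["output_per_copy"] + 1.0
    )

def chooses_to_replicate(utility, world, cost=0.5):
    # Replicating sacrifices some of the agent's own output to add one copy.
    after = dict(world,
                 own_output=world["own_output"] - cost,
                 copies=world["copies"] + 1)
    return utility(after) > utility(world)

world = {"own_output": 10.0, "copies": 0, "output_per_copy": 2.0}
# Both utility functions endorse replication in this world, so the
# replication behavior underdetermines the underlying utility function.
print(chooses_to_replicate(u_self_interested, world))  # True
print(chooses_to_replicate(u_copy_regarding, world))   # True
```

The point carries over directly: a being that "acts as if" it valued its not-yet-existing copies is behaviorally indistinguishable, in cases like this, from one that values only its own goal-output.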
People not only argue from fictional evidence, they think from it.
Edit: Could the downvoter please explain their dispute?
As long as there is thinking involved....
I fail to see the cases the OP is working from.
For some value of "thinking". I see what the OP is talking about and it isn't pretty.
Robin Hanson often does when arguing we should have more people in the world.