FeepingCreature comments on Causal Universes - Less Wrong

60 Post author: Eliezer_Yudkowsky 29 November 2012 04:08AM


Comments (385)


Comment author: FeepingCreature 01 December 2012 12:38:14AM -2 points

I think we need to limit the set of morally relevant future versions to versions that would be created without interference, because otherwise we split ourselves too thinly among speculative futures that almost never happen. Given that, it makes sense to want to protect the existence of the unmodified future self over the modified one.

Comment author: chaosmosis 01 December 2012 03:29:31AM 0 points

"I think we need to arbitrarily limit something. Given that, this specific limit is not arbitrary."

How is that not equivalent to your argument?

Additionally, please explain more. I don't understand what you mean by saying that we "split ourselves too thinly". What is this splitting and why does it invalidate moral systems that do it? Also, overall, isn't your argument just a reason that considering alternatives to the status quo isn't moral?

Comment author: MugaSofer 01 December 2012 02:29:52PM 0 points

Well, the phrase "split ourselves too thinly among speculative futures that almost never happen" would seem to refer to the fact that we have limited time and processing capacity to think with.

Comment author: FeepingCreature 01 December 2012 01:57:44PM 0 points

I think it summarizes to "time travel is too improbable and unpredictable to worry about [preserving the interests of yous affected by it]".

Comment author: chaosmosis 01 December 2012 06:59:52PM 0 points

Your argument makes no sense.

"Time travel is too improbable to worry about preserving yous affected by it. Given that, it makes sense to want to protect the existence of the unmodified future self over the modified one."

Those two sentences do not connect. They actually contradict.

Also, you're doing moral epistemology backwards, in my view. You're basically saying, "it would be really convenient if the content of morality were such that we could easily compute it with limited cognitive resources." That's an argumentum ad consequentiam, which is a logical fallacy.

Comment author: FeepingCreature 01 December 2012 08:00:02PM 0 points

You're probably right that those sentences contradict. On the moral-epistemology point, though, I think there may be a sort of anthropic-bias-style argument that creatures can only implement a morality they can practically compute in the first place.

Comment author: chaosmosis 02 December 2012 04:57:34AM -1 points

"practically compute"

Your argument is that this is hard and impractical, not that it is impossible, and I think only the latter is a reasonable constraint on moral considerations. Even then, I have some qualms about whether nihilism would be more justified than arbitrary moral limits. I also don't understand how anthropic arguments might come into play.