lukstafi comments on Value Deathism - Less Wrong

Post author: Vladimir_Nesov 30 October 2010 06:20PM


Comment author: Perplexed 31 October 2010 12:10:42AM 4 points

Goertzel: Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology.

Agree, but the multiple different current forms of human values are the source of much conflict.

Hanson: Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors.

Agree again. And in honor of Robin's profession, I will point out that the multiple current forms of human values are the driving force behind trade and almost all other economic activity.

Nesov: Change in values of the future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere as good as it could've been otherwise. ... Regardless of difficulty of the challenge, it's NOT OK to lose the Future.

Strongly disagree. The future is not ours to lose. A growing population of enfranchised agents is going to be sharing that future with us. We need to discount our own interest in that future in order to achieve some kind of economic sanity. We need to discount because:

  • We really do care more about the short-term future than the distant future.
  • We have better control over the short-term future than the distant future.
  • We expect our values to change. Change can be good. It would be insane to attempt to determine the distant future now. Better to defer decisions about the distant future until later, when that future eventually becomes the short-term future. We will then have a better idea what we want and a better idea how to achieve it.
  • As mentioned, an increasing immortal population means that our "rights" over the distant future must be fairly dilute.
  • If we don't discount the future, we run into mathematical difficulties: an undiscounted sum of utilities over an unbounded future need not converge (see the sketch after this list). The first rule of utilitarianism ought to be KIFS - Keep It Finite, Stupid.
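
A minimal sketch, in Python, of the finiteness point in the last bullet. The geometric discount factor and the constant per-period utility of 1.0 are hypothetical choices of mine, not anything the comment specifies; the point is only that a discounted utility stream sums to a finite value while the undiscounted stream grows without bound.

    # Partial sums of sum_{t=0}^{T-1} gamma^t * u for a bounded per-period utility u.
    def discounted_total(gamma, u=1.0, horizon=10_000):
        return sum(gamma ** t * u for t in range(horizon))

    print(discounted_total(0.99))  # ~100.0: converges toward u / (1 - gamma)
    print(discounted_total(1.0))   # 10000.0: keeps growing as the horizon grows

With gamma < 1 the infinite series converges to u / (1 - gamma); at gamma = 1 it diverges, which is exactly the "Keep It Finite" worry.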
Comment author: lukstafi 31 October 2010 10:01:15AM 0 points

I agree, but be careful with "We expect our values to change. Change can be good." Dutifully explain that you are not talking about value change in the mathematical sense, but about value creation, i.e. extending valuation to novel situations, guided by values at a meta-level with respect to the values casually applied to remotely similar, familiar situations.
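
To make the distinction concrete, here is a minimal Python sketch; the situations, the numeric values, and the token-overlap meta-rule are all hypothetical illustrations of mine, not anything from the thread. Value change would overwrite entries of the base valuation; value creation leaves them untouched and only extends the valuation's domain to novel situations, guided by a meta-level rule.

    # Base values over familiar situations (hypothetical entries).
    base_values = {"keep_promise": 1.0, "break_promise": -1.0}

    def similarity(a, b):
        # Crude token-overlap similarity between situation labels (hypothetical).
        ta, tb = set(a.split("_")), set(b.split("_"))
        return len(ta & tb) / len(ta | tb)

    def extended_value(situation):
        # Familiar situations keep their old values: no value change.
        if situation in base_values:
            return base_values[situation]
        # Novel situations get a value by analogy: value creation at the meta-level.
        nearest = max(base_values, key=lambda s: similarity(situation, s))
        return base_values[nearest]

    print(extended_value("keep_promise"))          # 1.0, unchanged
    print(extended_value("keep_digital_promise"))  # 1.0, newly created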

Comment author: Perplexed 31 October 2010 01:53:36PM 2 points

I beseech you, in the bowels of Christ, think it possible your fundamental values may be mistaken.

I think that we need to be able to change our minds about fundamental values, just as we need to be able to change our minds about fundamental beliefs. Even if we don't currently know how to handle this kind of upheaval mathematically.

If that is seen as a problem, then we had better get started building better mathematics.

Comment author: lukstafi 31 October 2010 08:45:32PM 1 point

OK. I've been sympathetic to your view from the beginning, but hadn't really thought through (so, thanks) the formalization that puts values on the epistemic level: a distribution of beliefs over propositions "my-value (H, X)", where H is my history up to now and X is a preference (an ordering over world states, which include me and my actions). But note that people here will call the very logic you use to derive such distributions your value system.

ETA: obviously, the distribution "my-value (H1, X[H2])", where "X[H2]" is the subset of worlds where my history turns out to be "H2", can differ greatly from "my-value (H2, X[H2])", for all sorts of reasons, but primarily due to computational constraints (i.e. I think the formalism would see it as computational constraints).

ETA P.S.: let's say, for clarity, that I meant "X[H2]" is the subset of world-histories where my history has the prefix "H2".
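
A possible rendering of this formalism in LaTeX, under my reading of the comment (the notation below is my reconstruction, not anything the thread fixes):

    % Credence that the preference ordering X reflects my values, given my history H:
    \Pr\bigl[\text{my-value}(H, X)\bigr],
    \qquad X \text{ an ordering over world-histories}.

    % X[H_2]: the restriction of X to world-histories whose prefix is my history H_2.
    % The ETA's claim is that, in general,
    \Pr\bigl[\text{my-value}(H_1, X[H_2])\bigr] \;\neq\; \Pr\bigl[\text{my-value}(H_2, X[H_2])\bigr],
    % chiefly because the agent at H_1 is computationally constrained when
    % evaluating situations it has not yet lived through.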

Comment author: timtyler 31 October 2010 02:53:18PM 1 point

I think that we need to be able to change our minds about fundamental values, just as we need to be able to change our minds about fundamental beliefs. Even if we don't currently know how to handle this kind of upheaval mathematically.

What we may need more urgently is the maths for agents who have "got religion", because we may want to build that type of agent, to help ensure that we continue to receive their prayers and supplications.