Perplexed comments on Value Deathism - Less Wrong

26 points · Post author: Vladimir_Nesov · 30 October 2010 06:20PM


Comment author: Perplexed 31 October 2010 12:10:42AM 4 points [-]

Goertzel: Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology.

Agree, but the multiple different current forms of human values are the source of much conflict.

Hanson: Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors.

Agree again. And in honor of Robin's profession, I will point out that the multiple current forms of human values are the driving force causing trade, and almost all other economic activity.

Nesov: Change in values of the future agents, however sudden or gradual, means that the Future (the whole freackin' Future!) won't be optimized according to our values, won't be anywhere as good as it could've been otherwise. ... Regardless of difficulty of the challenge, it's NOT OK to lose the Future.

Strongly disagree. The future is not ours to lose. A growing population of enfranchised agents is going to be sharing that future with us. We need to discount our own interest in that future for all kinds of reasons in order to achieve some kind of economic sanity. We need to discount because:

  • We really do care more about the short-term future than the distant future.
  • We have better control over the short-term future than the distant future.
  • We expect our values to change. Change can be good. It would be insane to attempt to determine the distant future now. Better to defer decisions about the distant future until later, when that future eventually becomes the short-term future. We will then have a better idea what we want and a better idea how to achieve it.
  • As mentioned, an increasing immortal population means that our "rights" over the distant future must be fairly dilute.
  • If we don't discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS - Keep It Finite, Stupid.
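The "keep it finite" worry in the last bullet can be made concrete with a toy calculation (my illustration, not Perplexed's): with no discounting, even a constant one-util-per-period stream sums to infinity over an unbounded horizon, while a geometric discount factor gamma < 1 bounds the total at 1 / (1 - gamma).

```python
# Toy illustration of the KIFS point (my numbers, not from the thread):
# an undiscounted infinite utility stream diverges; a geometrically
# discounted one converges to 1 / (1 - gamma).

def discounted_total(utility_per_period, gamma, horizon):
    """Sum of discounted per-period utilities over a finite horizon."""
    return sum(utility_per_period * gamma**t for t in range(horizon))

# With gamma = 0.9 the partial sums approach 1 / (1 - 0.9) = 10.
print(round(discounted_total(1.0, 0.9, 1000), 6))  # -> 10.0

# With gamma = 1 (no discounting) the partial sums grow without bound.
print(discounted_total(1.0, 1.0, 1000))  # -> 1000.0
```

This is one standard motivation for discounting in formal utilitarian models: it is the simplest way to make infinite-horizon sums well-defined.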
Comment author: timtyler 31 October 2010 10:22:31AM *  5 points [-]

If we don't discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS - Keep It Finite, Stupid.

http://lesswrong.com/lw/n2/against_discount_rates/

The idea is not really that you care equally about future events - but rather that you discount them only to the extent that you are uncertain about them, that you are likely to be unable to influence them, that you will be older when they happen - and so on.

It is like in chess: future moves are given less consideration - but only because they are currently indistinct low probability events - and not because of some kind of other intrinsic temporal discounting of value.
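The chess point can be sketched numerically (my framing, not timtyler's): if each step into the future carries an independent probability p that your prediction about it still holds, an agent with no intrinsic time preference still ends up weighting period t by p**t, which from the outside looks exactly like geometric discounting.

```python
# Sketch (hypothetical numbers): apparent discounting that arises purely
# from compounding uncertainty, with no intrinsic temporal preference.

def effective_weight(survival_prob, t):
    """Weight on period t if each step has an independent chance
    survival_prob that the prediction about it remains valid."""
    return survival_prob ** t

weights = [effective_weight(0.95, t) for t in (0, 1, 5, 20)]
print([round(w, 3) for w in weights])  # -> [1.0, 0.95, 0.774, 0.358]
```

The observable behaviour is indistinguishable from a geometric discount rate, which is why the two positions are easy to conflate.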

Comment author: Vladimir_Nesov 19 December 2010 12:16:58AM 4 points [-]

We really do care more about the short-term future than the distant future.

How do you know this? It feels this way, but there is no way to be certain.

We have better control over the short-term future than the distant future.

That we probably can't have something doesn't imply we shouldn't have it.

We expect our values to change. Change can be good.

That we expect something to happen doesn't imply it's desirable that it happens. It's very difficult to arrange for a change in values to be good. I expect you'd need oversight from a singleton for that to become possible (and in that case, "changing values" won't adequately describe what happens, as there is probably better stuff to make than different-valued agents).

As mentioned, an increasing immortal population means that our "rights" over the distant future must be fairly dilute.

Preference is not about "rights". Rights are merely game theory for coordinating the satisfaction of preferences.

If we don't discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS - Keep It Finite, Stupid.

God does not care about our mathematical difficulties. --Einstein.

Comment author: Perplexed 19 December 2010 05:45:28AM 0 points [-]

We really do care more about the short-term future than the distant future.

How do you know this? It feels this way, but there is no way to be certain.

Alright. I shouldn't have said "we". I care more about the short term. And I am quite certain. WAY!

We have better control over the short-term future than the distant future.

That we probably can't have something doesn't imply we shouldn't have it.

Huh? What is it that you are not convinced we shouldn't have? Control over the distant future? Well, if that is what you mean, then I have to disagree. We are completely unqualified to exercise that kind of control. We don't know enough. But there is reason to think that our descendants and/or future selves will be better informed.

God does not care about our mathematical difficulties.

Then let's make sure not to hire the guy as an FAI programmer.

Comment author: Vladimir_Nesov 19 December 2010 01:36:10PM *  2 points [-]

We really do care more about the short-term future than the distant future.

How do you know this? It feels this way, but there is no way to be certain.

Alright. I shouldn't have said "we". I care more about the short term. And I am quite certain. WAY!

I believe you know my answer to that. You are not licensed to have absolute knowledge about yourself. There are no human or property rights to truth. How do you know that you care more about the short term? You can have beliefs or emotions that suggest this, but you can't know what all the stuff you believe, and all the moral arguments you respond to, cash out into on reflection. We only ever know approximate answers, and given the complexity of the human decision problem and the sheer inadequacy of human brains, any approximate answers we do presume to know are highly suspect.

Huh? What is it that you are not convinced we shouldn't have? Control over the distant future? Well, if that is what you mean, then I have to disagree. We are completely unqualified to exercise that kind of control. We don't know enough. But there is reason to think that our descendants and/or future selves will be better informed.

That we aren't qualified doesn't mean that we shouldn't have that control. Exercising this control through decisions made with human brains is probably not the way to do it, of course; we'd have to use finer tools, such as FAI or upload bureaucracies.

God does not care about our mathematical difficulties.

Then let's make sure not to hire the guy as an FAI programmer.

Don't joke, it's serious business. What do you believe on the matter?

Comment author: Perplexed 19 December 2010 03:11:19PM *  -1 points [-]

God does not care about our mathematical difficulties.

Then let's make sure not to hire the guy as an FAI programmer.

Don't joke, it's serious business. What do you believe on the matter?

I am not the person who initiated this joke. Why did you mention God? If you don't care for discounting, what is your solution to the very standard puzzles regarding unbounded utilities and infinitely remote planning horizons?

Comment author: Vladimir_Nesov 19 December 2010 03:20:09PM 2 points [-]

I am not the person who initiated this joke. Why did you mention God?

Einstein mentioned God, as a stand-in for Nature.

If you don't care for discounting, what is your solution to the very standard puzzles regarding unbounded utilities and infinitely remote planning horizons?

I didn't say I don't care for discounting. I said that I believe that we must be uncertain about this question. That I don't have solutions doesn't mean I must discard the questions as answered negatively.

Comment author: Nick_Tarleton 19 December 2010 05:50:40AM *  2 points [-]

We are completely unqualified to exercise that kind of control. We don't know enough. But there is reason to think that our descendants and/or future selves will be better informed.

Yes. So, for "our values", read "our extrapolated volition".

It's not clear to me how much you and Nesov actually disagree about "changing" values, vs. you meaning by "change" the sort of reflective refinement that CEV is supposed to incorporate, while Nesov uses it to mean non-reflectively-guided (random, evolutionary, or whatever) change.

Comment author: Perplexed 19 December 2010 06:26:03AM 1 point [-]

I do not mean "reflective refinement" if that refinement is expected to take place during a FOOM that happens within the next century or two. I expect values to change after the first superhuman AI comes into existence. They will inevitably change by some small epsilon each time a new physical human is born or an uploaded human is cloned. I want them to change. The "values of mankind" are something like the musical tastes of mankind or the genome of mankind. It is a collage of divergent things, and the set of participants in that collage continues to change.

VN and I are in real disagreement, as far as I can tell.

Comment author: Vladimir_Nesov 19 December 2010 01:40:17PM 1 point [-]

This is not a disagreement, but failure of communication. There is no one relevant sentence in this dispute which we both agree that we understand in the same sense, and whose truth value we assign differently.

Comment author: Perplexed 19 December 2010 03:02:29PM 1 point [-]

It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values - different aspirations for the future.

Comment author: Vladimir_Nesov 19 December 2010 03:16:29PM 1 point [-]

It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values - different aspirations for the future.

Any adequate disagreement must be about a different assignment of truth values to the same meaning. For example, I disagree with the truth of the statement that we fail to converge on agreement because of differences in our values, given both your and my preferred interpretation of "values". But explaining why this condition is not the source of our disagreement requires me to explain to you my sense of "values", the normative and not the factual one, which I have so far failed to accomplish.

Comment author: Perplexed 19 December 2010 03:41:55PM 0 points [-]

Any adequate disagreement must be about different assignment of truth values to the same meaning.

I think we are probably in agreement that we ought to mean the same thing by the words we use before our disagreement has any substance. But your mention of "truth values" here may be driving us into a diversion from the main issue. Because I maintain that simple "ought" sentences do not have truth values. Only "is" sentences can be analyzed as true or false in Tarskian semantics.

But that is a diversion. I look forward to your explanation of your sense of the word "value" - a sense which has the curious property (as I understand it) that it would be a tragedy if mankind does not (with AI assistance) soon choose one point (out of a "value space" of rather high dimensionality) and then fix that point for all time as the one true goal of mankind and its creations.

Comment author: Vladimir_Nesov 19 December 2010 04:00:03PM *  1 point [-]

But your mention of "truth values" here may be driving us into a diversion from the main issue.

I gave up on the main issue, and so described my understanding of the reasons that justify giving up.

Because I maintain that simple "ought" sentences do not have truth values. Only "is" sentences can be analyzed as true or false in Tarskian semantics.

Yes, and this is the core of our disagreement. Since your position is that something is meaningless, and mine is that there is a sense behind that, this is a failure of communication and not a true disagreement, as I didn't manage to communicate to you the sense I see. At this point, I can only refer you to "metaethics sequence", which I know is not very helpful.

One last attempt, using an intuition/analogy dump not carefully explained.

Where do the objective conclusions about "is" statements come from? Roughly, you encounter new evidence, including logical evidence, and then you look back and decide that your previous understanding could be improved upon. This is the cognitive origin of anything normative: you have a sense of improvement, and expectation of potential improvement. Looking at the same situation from the past, you know that there is a future process that can suggest improvements, you just haven't experienced this process yet. And so you can reason about the truth without having it immediately available.

If you understand the way the previous paragraph explains the truth of "is" questions, you can apply exactly the same explanation to "ought" questions. You can decide in the moment what you prefer, what you choose, which action you perform. But in the future, when you learn more and experience more, you can look back and see that you should've chosen differently, that your decision could've been improved. This anticipation of possible improvement generates a semantics of preference over decisions that is not logically transparent. You don't know what you ought to choose, but you know that there is a sense in which some action is preferable to some other action, and you don't know which is which.

Comment author: timtyler 19 December 2010 11:42:07AM *  -2 points [-]

It's very difficult to arrange so that change in values is good. I expect you'd need oversight from a singleton for that to become possible (and in that case, "changing values" won't adequately describe what happens, as there are probably better stuff to make than different-valued agents).

We do seem to have an example of systematic positive change in values - the history of the last thousand years. No doubt some will argue that our values only look "good" because they are closest to our current values - but I don't think that is true. Another possible explanation is that material wealth lets us show off our more positive values more frequently. That's a harder charge to defend against, but wealth-driven value changes are surely still value changes.

Systematic, positive changes in values tend to suggest a bright future. Go, cultural evolution!

Comment author: timtyler 19 December 2010 05:35:05PM 1 point [-]

If we don't discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS - Keep It Finite, Stupid.

Too much discounting runs into the problem of screwing up the future in order to enjoy short-term benefits. With 5-year political horizons, that problem seems far more immediate and pressing than the problems posed by discounting too little. From the point of view of those fighting the evils that too much temporal discounting represents, arguments about mathematical infinity seem ridiculous and useless. Since such arguments are so feeble, why even bother mentioning them?
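The short-horizon failure mode can be quantified with a toy present-value calculation (my numbers, not timtyler's): at a steep annual discount rate, a large harm a few decades out is worth less today than a trivial immediate benefit.

```python
# Illustration (hypothetical figures): under a 15% annual discount rate,
# a 1000-unit harm 30 years out has a present value under 20 units, so a
# policy trading it for a 20-unit benefit today looks like a good deal.

def present_value(amount, annual_rate, years):
    """Standard present-value formula: amount / (1 + r)**years."""
    return amount / (1 + annual_rate) ** years

harm_today = present_value(1000.0, 0.15, 30)
print(round(harm_today, 1))  # -> 15.1
```

This is the mirror image of the infinity puzzle: too little discounting breaks the maths, while too much licenses mortgaging the future for pocket change.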

Comment author: lukstafi 31 October 2010 10:01:15AM *  0 points [-]

I agree, but be careful with "We expect our values to change. Change can be good." Dutifully explain that you are not talking about value change in the mathematical sense, but about value creation, i.e. extending valuation to novel situations, guided by values at a meta-level with respect to the values casually applied to remotely similar familiar situations.

Comment author: Perplexed 31 October 2010 01:53:36PM 2 points [-]

I beseech you, in the bowels of Christ, think it possible your fundamental values may be mistaken.

I think that we need to be able to change our minds about fundamental values, just as we need to be able to change our minds about fundamental beliefs. Even if we don't currently know how to handle this kind of upheaval mathematically.

If that is seen as a problem, then we better get started working on building better mathematics.

Comment author: lukstafi 31 October 2010 08:45:32PM *  1 point [-]

OK. I've been sympathetic to your view from the beginning, but haven't really thought through (so, thanks) the formalization that puts values on the epistemic level: a distribution of beliefs over propositions "my-value (H, X)" where H is my history up to now and X is a preference (an order over world states, which include me and my actions). But note that people here will call the very logic you use to derive such distributions your value system.

ETA: obviously, distribution "my-value (H1, X[H2])", where "X[H2]" is the subset of worlds where my history turns out to be "H2", can differ greatly from "my-value (H2, X[H2])", due to all sorts of things, but primarily due to computational constraints (i.e. I think the formalism would see it as computational constraints).

ETA P.S.: let's say for clarity, that I meant "X[H2]" is the subset of world-histories where my history has prefix "H2".

Comment author: timtyler 31 October 2010 02:53:18PM *  1 point [-]

I think that we need to be able to change our minds about fundamental values, just as we need to be able to change our minds about fundamental beliefs. Even if we don't currently know how to handle this kind of upheaval mathematically.

What we may need more urgently is the maths for agents who have "got religion" - because we may want to build that type of agent - to help ensure that we continue to receive their prayers and supplications.