All of koning_robot's Comments + Replies

I'm not sure what you're trying to say here, but if you consider this a relative weakness of Solomonoff Induction, then I think you're looking at it the wrong way. We will know it as well as we possibly could given the evidence available. Humans are subject to the constraints that Solomonoff Induction is subject to, and more.

[This comment is no longer endorsed by its author]
-2koning_robot
Whoops, thread necromancy.

Hrrm. I don't think it's that simple. Looking at that page, I imagine nonprogrammers wonder:

  • What are comments?
  • What are strings?
  • What is this "#=>" stuff?
  • "primitives"?
  • ...

This seems to be written for people who are already familiar with some other language. Better to show a couple of examples so that they recognize patterns and become curious (see the sketch below for the kind of thing I mean).
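For illustration only, here is a minimal sketch of such a first example, written in Python rather than whatever that page uses; the variable name and the exact lines are made up, but they touch the three things the questions above are about (comments, strings, and the "#=>" result notation):

    # This whole line is a comment: the computer ignores anything after the "#".
    # Tutorials often use "#=>" inside a comment to show what a line produces.

    greeting = "Hello, world!"   # the quoted text is a string
    print(greeting)              #=> Hello, world!
    print(len(greeting))         #=> 13 (the number of characters in the string)

Even without knowing any syntax, a reader can line up the "#=>" annotations with the print lines and start guessing what comments and strings do.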
0Dr_Manhattan
You have a point; there might be something more customized for people who have never programmed, but I like the simplicity of it.

What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you're just making something up to rationalize your preconceptions.

0Ghatanathoah
Overall Value is what one gets when one adds up various values, like average utility, number of worthwhile lives, equality, etc. These values are not always 100% compatible with each other; often a compromise needs to be found between them. They also probably have diminishing returns relative to each other.

When people try to develop moral theories they often reach insane-seeming normative conclusions. One possible reason for this is that they have made genuine moral progress which only seems insane because we are unused to it. But another possible (and probably more frequent) reason is that they have an incomplete theory that fails to take something of value into account.

The classic example of this is the early development of utilitarianism. Early utilitarian theories that maximized pleasure sort of suggested the insane conclusion that the ideal society would be one full of people who are tended by robots while blissed out on heroin. It turned out the reason it drew this insane conclusion was that it didn't distinguish between types of pleasure, or consider that there were other values besides pleasure. Eventually preference utilitarianism came along and proved far superior because it could take more values into account. I don't think it's perfected yet, but it's a step in the right direction.

I think that there are likely multiple values in aggregating utility, and that the reason the Repugnant Conclusion is repugnant is that it fails to take some of these values into account. For instance, the total number of worthwhile lives and high average utility are likely both of value. A world with higher average utility may be morally better than one with lower average utility and a larger population, even if it has lower total aggregate utility. Related to this, I also suspect that the reason it seems wrong to sacrifice people to a utility monster, even though that would increase total aggregate utility, is that equality is a terminal value, not a byproduct of diminishing...

Hm, I've been trying to get rid of one particular habit (drinking while sitting at my computer) for a long time. Recently I've considered the possibility of giving myself a reward every time I go to the kitchen to get a beer and come back with something else instead. The problem was that I couldn't think of a suitable reward (there's not much that I like). I hadn't thought of just making something up, like pieces of paper. Thanks for the inspiration!

Do you have specific ideas useful for resolving this question?

Fear of death doesn't mean death is bad in the same way that fear of black people doesn't mean black people are bad. (Please forgive me the loaded example.)

Fear of black people, or more generally xenophobia, evolved to facilitate kin selection and tribalism. Fear of death evolved for similar reasons, i.e., to make more of "me". We don't know what we mean by "me", or if we do then we don't know what's valuable about the existence of one "me" as opposed to another...

1Vladimir_Nesov
Hard to say. Notice that in such examples we are past the point where the value of things is motivated by instrumental value (i.e. such thought experiments try to strip away the component of value that originates as instrumental value), and terminal value is not expected to be easy to enunciate. As a result, the difficulty with explaining terminal value is only weak evidence for the absence of said terminal value. In other words, if you can't explain what exactly is valuable in such situations, that doesn't strongly indicate that there is nothing valuable there. One of the few things remaining in such cases is to look directly at emotional urges and resolve contradictions in their recommendations in terms of instrumental value (consequentialism and game theory).

Because it feels good. My ongoing survival leaves me entirely cold.

1Desrtopa
How would you distinguish this, as a "rational" reason, from "emotional" reasons, as you did in your previous comment?
1Viliam_Bur
Then wireheading is the best solution. The interesting fact is that wireheading anyone else would give you as much utility as wireheading you.

It's different. The fact that I feel bad when confronted with my own mortality doesn't mean that mortality is bad. The fact that I feel bad when so confronted does mean that the feeling is bad.

1[anonymous]
I'm curious. What is your position on wireheading?

Emotions clearly support non-fungibility, in particular concerning your own life, and it's a strong argument.

I (now) understand how the existence of certain emotions in certain situations can serve as an argument for or against some proposition, but I don't think the emotions in this case form that strong an argument. There's a clear motive. It was evolution, in the big blue room, with the reproductive organs. It cares about the survival of chunks of genetic information, not about the well-being of the gene expressions.

Thanks for helping me understand...

I accept this objection; I cannot describe in physical terms what "pleasure" refers to.

Yes, but the question here is exactly whether this fear of death that we all share is one of those emotions that we should value, or if it is getting in the way of our rationality. Our species has a long history of wars between tribes and violence among tribe members competing for status. Death has come to be associated with defeat and humiliation.

5Vladimir_Nesov
Do you have specific ideas useful for resolving this question? It's usually best to avoid using the word "rationality" in such contexts. The question is whether one should accept the straightforward interpretation of the emotions of fear of death, and at that point nothing more is added to the problem specification by saying things like "Which answer to this question is truth?" or "Which belief about the answer to this question would be rational?", or "Which belief about this question is desirable?". See What Do We Mean By "Rationality"?, Avoid inflationary use of terms.

No. I deliberately re-used a similar construct to Wireheading theories to expose more easily that many people disagree with this.

Yes, but they disagree because what they want is not the same as what they would like.

The "weak points" I spoke of is that you consider some "weaknesses" of your position, namely others' mental states, but those are not the weakest of your position, nor are you using the strongest "enemy" arguments to judge your own position, and the other pieces of data also indicate that there's mind-killing g

...
2thomblake
To a rationalist, the "burden of proof" is always on one's own side.

I remember starting it, and putting it away because yes, I disagreed with so many things. Especially the present subject; I couldn't find any arguments for the insistence on placating wants rather than improving experience. I'll read it in full next week.

An unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim's proponent.

This comment has justification. I don't see how this would affect evolutionary psychology. I'm not sure if I'm parsing your last sentence here correctly; I didn't "look up" anything, and I don't know what the weak points are.

Assuming that the scenario you paint is plausible and the optimal way to get there, then yea...

2DaFranker
No. I deliberately re-used a similar construct to Wireheading theories to expose more easily that many people disagree with this. There's no superstition of "true/pure/honest/all-natural pleasure" in my model - right now, my current brain feels extreme anti-hedons towards the idea of living in Wirehead Land. Right now, and to my best reasonable extrapolation, I and any future version of "myself" will hate and disapprove of wireheading, and would keep doing so even once wireheaded, if not for the fact that the wireheading necessarily overrides this in order to achieve maximum happiness by re-wiring the user to value wireheading and nothing else.

The "weak points" I spoke of are that you consider some "weaknesses" of your position, namely others' mental states, but those are not the weakest points of your position, nor are you using the strongest "enemy" arguments to judge your own position, and the other pieces of data also indicate that there's mind-killing going on.

The quality of mental states is presumably the only thing we should care about - my model also points towards "that" (same label, probably not same referent). The thing is, that phrase is so open to interpretation (What's "should"? What's "quality"? How meta do the mental states go about analyzing themselves and future/past mental states, and does the quality of a mental state take into account the bound-to-reality factor of future qualitative mental states? etc.) that it's almost an applause light.

A priori, nothing matters. But sentient beings cannot help but make value judgements regarding some of their mental states. This is why the quality of mental states matters.

Wanting something out there in the world to be some way, regardless of whether anyone will ever actually experience it, is different. A want is a proposition about reality whose apparent falsehood makes you feel bad. Why should we care about arbitrary propositions being true or false?

2DaFranker
You haven't read or paid much attention to the metaethics sequence yet, have you? Or do you simply disagree with pretty much all the major points of the first half of it? Also relevant: Joy in the merely real

"Desire" denotes your utility function (things you want). "Pleasure" denotes subjectively nice-feeling experiences. These are not necessarily the same thing.

Indeed they are not necessarily the same thing, which is why my utility function should not value that which I "want" but that which I "like"! The top-level post all but concludes this. The conclusion the author draws just does not follow from what came before. The correct conclusion is that we may still be able to "just" program an AI to maximize...

1DaFranker
An unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim's proponent.

I think you may be confusing labels and concepts. Maximizing hedonistic mental states means, to the best of my knowledge, programming a hedonistic imperative directly into DNA for full-maximal state constantly from birth, regardless of conditions or situations, and then stacking up humans as much as possible to have as many of them as possible feeling as good as possible. If any of the humans move, they could prove to be a danger to efficient operation of this system, and letting them move thus becomes a net negative, so it follows that in the process of optimization all human mobility should be removed, considering that for a superintelligence removing limbs and any sort of mobility from "human" DNA is probably trivial. But since they're all feeling the best they could possibly feel, then it's all good, right? It's what they like (having been programmed to like it), so that's the ideal world, right?

Edit: See Wireheading for a more detailed explanation and context of the possible result of a happiness-maximizer.
5nshepperd
Why's that?

Sorry for being snarky. I am sincere. I really do think that death is not such a big deal. It sucks, but it sucks only because of the negative sensations it causes in those left behind. All that said, I don't think you gave me anything but an appeal to emotion.

3[anonymous]
Arguing we should seek pleasurable experiences is also an appeal to emotion.

The emotions are irrational in the sense that they are not supported by anything - your brain generates these emotions in these situations and that's it. Emotions are valuable and we need to use rationality to optimize them. Now, there are two ways to satisfy a desire: the obvious one is to change the world to reflect the propositional content of the desire. The less obvious one is to get rid of or alter the desire. I'm not saying that to be rational is to get rid of all your desires. I'm saying that it's a tradeoff, and I am suggesting the possibility...

2Vladimir_Nesov
Beliefs are also something your brain generates. Being represented in meat doesn't by itself make an event unimportant or irrelevant. You value carefully arrived-at beliefs, because you expect they are accurate, they reflect the world. Similarly, you may value some of your emotions, if you expect that they reward events that you approve of, or punish for events that you don't approve of. See Feeling Rational, The Mystery of the Haunted Rationalist, Summary of "The Straw Vulcan".
4Kindly
If I get rid of my desire to do something, then I've replaced myself with a possibly less frustrated person who doesn't value the same things as I do. This is obviously a trade-off, yes. On the one hand, it's not that I'm ridiculously frustrated by our lack of immortality; I've kind of gotten used to it. I recognize that things could be better, yes. On the other hand, a version of me that doesn't care if people die or not seems very different from me and frankly kind of abhorrent. I don't even know if I want that version of me to exist, and I'm certainly not going to have it replace myself if I can help it.

Pleasurable experiences. My life facilitates them, but it doesn't have to be "my" life. Anyone's life will do.

4Desrtopa
And why do you think it's rational to want this, but not to want one's own survival?

Do you think that preserving my brain after the fact makes falling from a really high place any less unpleasant? Or are you appealing to my emotions (fear of death)?

-3metatroll
Don't feed the troll.
8Kindly
Rational doesn't mean emotionless. These are emotional reasons -- to which I think I should add that I care about the pain Joe's loved ones feel when Joe dies -- but I think they're important emotional reasons. I wouldn't be me if I didn't care about these things. I would not want to become "rational" at the cost of forgetting about these reasons, and others. I want to become rational so that I can better understand my emotions, and act on them more effectively.
5Desrtopa
If it's irrational not to want to die, what do you think it would be rational to want?
3Vladimir_Nesov
I'm uncertain about the value and fungibility of human life. Emotions clearly support non-fungibility, in particular concerning your own life, and it's a strong argument. On the other hand, my goals are sufficiently similar to everyone else's goals that loss of my life wouldn't prevent my goals from controlling the world; it would be done through others. Only existential disaster or severe value drift would prevent my goals from controlling the world.

(The negative response to your comment may be explained by the fact that you appear to be expressing confidence in the unusual solution (that the value of life is low) to this difficult question without giving an argument for that position. At best the points you've made are arguments in support of uncertainty in the position that the value of life is very high, not strong enough to support the claim that it's low. If your claim is that we shouldn't be that certain, you should clarify by stating that more explicitly. If your claim is that the value of life is low, the argument you are making should be stronger, or else there is no point in insisting on that claim, even if that happens to be your position, since absent argument it won't be successfully instilled in others.)
9Kindly
First, we are selfish, and don't want to die (no matter how useful we are to society). Second, we also care about a few other people close to us, and don't want them to die. Third, we want to spare everyone from having to be afraid of death. I think if you forget about these reasons, then there's no point in preserving people. Edit: I'm sorry that your comment was downvoted, but I for one think that it's a worthwhile objection to make, even though I disagree with it for the above reasons.
3metatroll
If you go to a really high place, and look over the edge far enough, you'll find out.

In the last decade, neuroscience has confirmed what intuition could only suggest: that we desire more than pleasure. We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

Either this conclusion contradicts the whole point of the article, or I don't understand what is meant by the various terms "desire", "want", "pleasure", etc. If pleasure is "that which we like", then yes we can solve FAI by programming an AI to maximize pleasure.

The mis...

2Vladimir_Nesov
Saying a word with emphasis doesn't clarify its meaning or motivate the relevance of what it's intended to refer to. There are many senses in which doing something may be motivated: there is wanting (System 1 urge to do something), planning (System 2 disposition to do something), liking (positive System 1 response to an event) and approving (System 2 evaluation of an event). It's not even clear what each of these means, and these distinctions don't automatically help with deciding what to actually do. To make matters even more complicated, there is also evolution with its own tendencies that don't quite match those of people it designed. See Approving reinforces low-effort behaviors, The Blue-Minimizing Robot, Urges vs. Goals: The analogy to anticipation and belief.
3nshepperd
"Desire" denotes your utility function (things you want). "Pleasure" denotes subjectively nice-feeling experiences. These are not necessarily the same thing. There's nothing superstitious about caring about stuff other than your own mental state.

I am going to try to be there. I'll be traveling from Maastricht.

Edit: I decided not to go after all.