Confirmed by experiment. :D
I've just taken a break from reading LW to eat two spoonfuls of olive oil. To my taste receptors it has an unpleasant taste, but not a strong one. I certainly do not desire to eat more (and I am not afraid that this taste will ever associate with anything I would voluntarily eat), and I had to drink water afterwards, but it was not that bad, and by the time I am writing this comment the effect is over.
However, it was very pleasant to leave the kitchen after the experiment. So here is another hypothesis: this diet works because it associates negative feelings with the kitchen and with eating in general.
My first recommendation is to get to the bottom of what question you are actually asking. What are you actually trying to do? Do the right thing? Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?
See Disguised Queries.
It feels good? It would take some pretty heavy neuroscience to say anything beyond that. Again, what are you going to do with the answer to this question? Ask that question instead.
Also note that "necessary and sufficient" is an obsolete model of concepts. See A Human's Guide to Words.
What does this mean? How do I calculate exactly how much pain someone will experience if I punch them? Again, ask the real question.
Um. Why would you want to do that? Is this simply a hypothetical to see if we understand the concept?
It really depends on what aspect you are interested in; you could create "pleasure" and "pain" by hacking up some kind of simple reinforcement learner, and I suppose you could shoehorn that into a neural network if you really wanted to. But why?
Note that a simple reinforcement learner "experiences" "pain" and "pleasure" in some sense, but not in the morally relevant sense. You will find that the moral aspect is much more anthropomorphic and much more complex, I think.
I guess you could have a little "visceral happiness" meter that gets filled up in the right conditions, but this would be a profound waste of AGI capability, and probably doesn't do what you actually wanted. What is it you actually want?
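(For concreteness, here is a minimal sketch, in Python, of the kind of "simple reinforcement learner" and "visceral happiness meter" described above. Everything here, the class, the toy environment, the parameter values, is invented for illustration; the point is that the scalar reward signal plays the role of "pleasure" and "pain", and a meter that "fills up in the right conditions" is just an accumulating float.)

    import random
    from collections import defaultdict

    # A minimal tabular Q-learner. The scalar reward is the only sense
    # in which it has "pleasure" (reward > 0) or "pain" (reward < 0).
    class SimpleReinforcementLearner:
        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)   # (state, action) -> value estimate
            self.actions = actions
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.happiness = 0.0          # the "visceral happiness" meter

        def act(self, state):
            if random.random() < self.epsilon:  # occasionally explore
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, reward, next_state):
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
            self.happiness += reward  # "fills up in the right conditions"

    # Toy environment: "left" hurts, "right" feels good.
    agent = SimpleReinforcementLearner(actions=["left", "right"])
    for _ in range(1000):
        action = agent.act("the_only_state")
        reward = 1.0 if action == "right" else -1.0
        agent.learn("the_only_state", action, reward, "the_only_state")

    print(agent.happiness)  # large and positive: it "learned to seek pleasure"

As noted above, nothing morally relevant is going on here: the "happiness" is literally just a number being incremented, which is why the moral sense of the words has to be something much more anthropomorphic and complex.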
Ask them? The same way we think we know for non-uploaded minds.
If I wanted to turn the universe into paperclips and meaningless crap, how would I do it? Why is your question interesting? Is this simply an exercise in learning how to fill the universe with X? You could pick a less confusing X.
I feel like you might be importing a few mistaken assumptions into this whole line of questioning. I recommend that you lurk more and read some of the stuff I linked.
Good question:
How would a potentially powerful optimizing process have to be constructed to be provably capable of steering towards some coherent objective(s) over the long run and through self-modifications?
Downvote preventers get downvoted.
Even if it turns out that there is no rigorously definable one-dimensional measure of valence, we still need to search for physical correlates of pleasure and pain and find approximate measures to use when resolving moral dilemmas.
Regarding the response to (6), why don't you want to maximise hedons? Having a rigorous definition of what you are trying to maximise needn't mean that what you are trying to maximise is arbitrary to you, and the fact that pleasure is complex (or maybe it is simple but we don't understand it yet) does not imply that we don't want it.