I have tried alcohol twice in an attempt to break my reputation for being a loner who doesn't drink. Both times I felt very drowsy afterwards, had to go to bed early and slept about ten hours. Sleepiness was the only discernible effect.
Assuming that the question means "would you be interested" and not "does there exist at least one person in the multiverse who would be interested".
The students were split up into the control and values affirmation groups. If the values affirmation group happened by chance to contain more of the brighter women, then the control group would contain fewer of them, so the two samples cannot be treated as independent. The paper doesn't seem to mention any attempt to take this into account, so the actual p-values might be higher than those calculated in the paper, which weren't especially low to begin with.
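For concreteness, here is a toy simulation of the mechanism I mean (not from the paper; the scores and group sizes are made up): repeatedly split one fixed cohort into two groups and look at how the group means co-vary.

```python
import numpy as np

# Toy illustration: split a fixed cohort of 40 hypothetical exam scores
# into two groups of 20, many times, and track both group means.
rng = np.random.default_rng(0)
cohort = rng.normal(loc=70, scale=10, size=40)  # made-up scores

control_means, affirmation_means = [], []
for _ in range(10_000):
    shuffled = rng.permutation(cohort)
    control_means.append(shuffled[:20].mean())
    affirmation_means.append(shuffled[20:].mean())

# With a fixed pool the two means must sum to a constant: every bright
# student assigned to one group is missing from the other.
print(np.corrcoef(control_means, affirmation_means)[0, 1])  # ~ -1.0
```

The perfect negative correlation is an artefact of the toy setup's fixed pool, but it illustrates why the two groups are not independent draws.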
I can concentrate much better after I've spent time running around outdoors, watching sunsets or listening to good music. I do not believe that the pleasure of being outside is more important than my other goals, but when I force myself to stay indoors and spend more time working I become too moody to concentrate and I get less work done in total than I would if I had 'wasted' more time. Cookies are different though, because the tedium of baking them outweighs the pleasure of eating them.
As human population densities increased and complex societies formed, selection pressure for social skills increased, and social skills became more relevant than intelligence. Larger brains usually have fewer long-range connections but more local connections, and long-range connections enable the rapid processing required for socialising. People with autism tend to have larger brains than those without and females tend to have smaller brains than males, so an inverse correlation between brain size and social skills would not surprise me.
If slowing metabolism increases longevity, how come exercise, which increases metabolism, is beneficial?
As an endurance runner with a BMI of ~20 on an eat-as-much-as-you-like diet, is my calorie consumption optimal for longevity?
I remain convinced that the probability is 90%.
The confusion is over whether you want to maximize the expected number of utilons there will be if you wake up in a green room, or the expected number of utilons you will observe if you wake up in a green room.
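The two come apart in the standard version of the problem. A toy calculation, assuming the usual setup (these numbers are my assumptions, not stated above): a fair coin sends 18 of 20 copies to green rooms on heads and 2 on tails, and the offered bet pays +$1 per green-room occupant and -$3 per red-room occupant.

```python
# Total payoff of the bet under each coin outcome (assumed setup).
payoff_heads = 18 * 1 + 2 * (-3)    # +12 in total
payoff_tails = 2 * 1 + 18 * (-3)    # -52 in total

# Expected total payoff under the prior P(heads) = 0.5:
print(0.5 * payoff_heads + 0.5 * payoff_tails)   # -20.0 -> refuse the bet

# Expected total payoff under the anthropic update P(heads | green) = 0.9:
print(0.9 * payoff_heads + 0.1 * payoff_tails)   # 5.6 -> accept the bet
```

The prior calculation says refuse; the post-update calculation says accept, and the argument is over which of these is the right expectation to maximize.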
Drink lots of water. Stop eating anything that contains wheat or other grains.
I don't think that either of these two has much evidence going for it.
Do short but intense exercise once a week.
Once a week is not often enough. The endorphins from exercise wear off fast, so to sustain high energy levels I need a short burst of intense exercise every few hours, with a longer run at least once a day.
My first recommendation is to get to the bottom of what question you are actually asking. What are you actually trying to do? Do the right thing? Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?
(1) What are the necessary and sufficient properties for a thought to be pleasurable?
It feels good? It would take some pretty heavy neuroscience to say anything beyond that. Again, what are you going to do with the answer to this question? Ask that question instead.
Also note that "necessary and sufficient" is an obsolete model of concepts. See A Human's Guide to Words.
(2) What are the characteristic mathematics of a painful thought?
What does this mean? How do I calculate exactly how much pain someone will experience if I punch them? Again, ask the real question.
(3) If we wanted to create an artificial neural network-based mind (i.e., using neurons, but not slavishly patterned after a mammalian brain) that could experience bliss, what would the important design parameters be?
Um. Why would you want to do that? Is this simply a hypothetical to see if we understand the concept?
It really depends on what aspect you are interested in; you could create "pleasure" and "pain" by hacking up some kind of simple reinforcement learner, and I suppose you could shoehorn that into a neural network if you really wanted to. But why?
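For concreteness, a minimal sketch of the kind of thing I mean by a simple reinforcement learner (an epsilon-greedy two-armed bandit; every number in it is an arbitrary illustration):

```python
import random

# "Pleasure" and "pain" here are nothing more than the sign of the reward.
values = [0.0, 0.0]   # estimated value of each action
counts = [0, 0]
EPSILON = 0.1         # exploration rate

def reward(action):
    # Hypothetical environment: action 0 tends to "hurt", action 1 to "please".
    return random.gauss(-1.0, 1.0) if action == 0 else random.gauss(1.0, 1.0)

for _ in range(1000):
    if random.random() < EPSILON:
        action = random.randrange(2)                      # explore
    else:
        action = max(range(2), key=lambda a: values[a])   # exploit
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running mean

print(values)  # the learner ends up seeking action 1 and avoiding action 0
```

It "learns" to seek one signal and avoid the other, which is about all "pleasure" and "pain" can mean at this level.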
Note that a simple reinforcement learner "experiences" "pain" and "pleasure" in some sense, but not in the morally relevant sense. You will find that the moral aspect is much more anthropomorphic and much more complex, I think.
(4) If we wanted to create an AGI whose nominal reward signal coincided with visceral happiness -- how would we do that?
I guess you could have a little "visceral happiness" meter that gets filled up in the right conditions, but this would be a profound waste of AGI capability, and probably doesn't do what you actually wanted. What is it you actually want?
(5) If we wanted to ensure an uploaded mind could feel visceral pleasure of the same kind a non-uploaded mind can, how could we check that?
Ask them? The same way we think we know for non-uploaded minds.
(6) If we wanted to fill the universe with computronium and maximize hedons, what algorithm would we run on it?
If I wanted to turn the universe into paperclips and meaningless crap, how would I do it? Why is your question interesting? Is this simply an exercise in learning how to fill the universe with X? You could pick a less confusing X.
I feel like you might be importing a few mistaken assumptions into this whole line of questioning. I recommend that you lurk more and read some of the stuff I linked.
And if you think certain questions aren't good, could you offer some you think are?
Good question:
How would a potentially powerful optimizing process have to be constructed to be provably capable of steering towards some coherent objective(s) over the long run and through self-modifications?
My first post; please be somewhat gentle. Thanks!
Downvote preventers get downvoted.
Even if it turns out that there is no rigorously definable one-dimensional measure of valence, we still need to search for physical correlates of pleasure and pain and find approximate measures to use when resolving moral dilemmas.
Regarding the response to (6), why don't you want to maximise hedons? Having a rigorous definition of what you are trying to maximise needn't mean that what you are trying to maximise is arbitrary to you, and the fact that pleasure is complex (or maybe it is simple but we don't understand it yet) does not imply that we don't want it.
Why does she care about music and sunsets? Why would she have scope insensitivity bias? She's programmed to care about the number, not the log, right? And if she was programmed to care about the log, she'd just care about the log, not be unable to appreciate the scope.
Maybe she cares about other things besides paperclips, including the innate desire to be able to name a single, simple and explicit purpose in life.
This is not supposed to be about non-human AGI paperclip maximisers.