by [anonymous] · 1 min read · 28th Aug 2013 · 40 comments


It would do some good if you explained at the outset what the hell you're talking about.

For context, see Carl Shulman's excellent post, Are pain and pleasure equally energy-efficient?

My values pretty much treat hedonium as having zero-to-negative value, so I find this idea rather strange. ETA: In particular, I consider hedonium that displaces the human race to be very, very bad.

I think hedonistic utilitarians would be happy to leave humans alone in exchange for the computational resources of the rest of the universe. An interesting feature of many "radical" moral views is that their proponents can get most of what they value by giving up something that is of very little value to themselves, but of great value to others.

[anonymous] · 11y

It's obvious that a well-done hedonium shockwave would be the best possible future scenario for hedonistic utilitarians.

It's also obvious (to me) that configurations containing hedonistic utility are really the only things of terminal value in the universe. It's the most natural and straightforward no-nonsense philosophy of value that I know.

Unfortunately, most people disagree with it, and even those who don't still disagree on trade-offs, or take a counterproductively simplistic approach that alienates other people in practice.

So the question is, can something like hedonium be popularized by combining it with other value axes? It seems a loss in efficiency (positive over negative experiences per joule) is acceptable if the compromise has a considerably higher probability of being realized.

OTOH, maybe convincing certain potentially sympathetic people to endorse hedonium could be hugely beneficial - maybe some factions or individuals in the future will have control over some fraction of the resources and could be inspired to use it to implement local versions of hedonium. It could have a niche in a market-future with diverse values.

But how to communicate it so more people are inspired than alienated? I have no idea, I'm not a good communicator. Not killing all humans might be a good start for a compromise. :)

Do you have an ethical theory that tells you, given a collection of atoms, how much hedonic value it contains? I guess the answer is no, since AFAIK nobody is even close to having such a theory. Going from our current state of knowledge to having such a theory (and knowing that you're justified in believing in it) would represent a huge amount of philosophical progress. Don't you think that this progress would also give us a much better idea of which of various forms of consequentialism is correct (if any of them are)? Why not push for such progress, instead of your current favorite form of consequentialism?

I currently have a general sense of what it would look like, but definitely not a naturalistic definition of what I value. I can think of a couple of different ways that suffering and happiness could turn out to work that would alter what sort of wireheading or hedonium I would want to implement, but not drastically; i.e., it would not make me reject the idea.

I'm not sure that people would generally start wanting the same sorts of things I do if they had this knowledge, and encouraging other people to do research so that I could later get access to it would have a poor rate of return. And so it seems like a better idea to encourage people to implement these somewhat emotionally salient things when they are able to, rather than working on very expensive science myself. I'm not sure I'd be around to see the time when it might be applied, and even then I'm not sure how likely most people would be to implement it.

Having said that, since many scientists don't have these values, there will be some low-hanging fruit in looking at and applying previous research, and I intend to do that. I just won't make a career out of it.

I think that moral realism is a non-starter, so I ignored that part of your question, but I can go into detail on that if you would like.

[anonymous] · 11y

Surely that would be a huge amount of mostly scientific progress? How much value we assign to a particular thing is totally arbitrary.

Are you a moral realist? I get the feeling we're heading towards the is/ought problem.

Surely that would be a huge amount of mostly scientific progress?

What kind of scientific progress are you envisioning, that would eventually tell us how much hedonic value a given collection of atoms represents? Generally scientific theories can be experimentally tested, but I can't see how one could experimentally test whether such a hedonic value theory is correct or not.

Are you a moral realist?

I think we don't know enough to accept or reject moral realism yet. But even assuming "no objective morality", there may be moral theories that are more or less correct relative to an individual (for example, which hedonic value theory is correct for you), and "philosophy" seems to be the only way to try to answer these questions.

What kind of scientific progress are you envisioning, that would eventually tell us how much hedonic value a given collection of atoms represents? Generally scientific theories can be experimentally tested, but I can't see how one could experimentally test whether such a hedonic value theory is correct or not.

You apply your moral sentiments to the facts to determine what to do. As you suggest, you don't look for them in other objects. I wouldn't be testing my moral sentiments per se, but what I want to do with the world depends on how exactly it works, and testing that so I can best achieve this would be great.

Figuring out more about what can suffer and what can feel happiness would be necessary, and some other questions would be useful to answer.

Moral realism vs. non-realism can be a long debate, but hopefully this will at least tell you where we are coming from even if you disagree.

How do we, in principle, figure out what can suffer and what can feel happiness? Do you think that's a scientific question, amenable to the scientific method? Suppose I told you that computer simulations of people can't suffer because they don't have consciousness. Can you conduct an experiment to disprove this?

Can't you just ask an upload whether it's conscious? It seems to me that belief in being conscious is causally linked to being conscious, or at least that's true for humans and would need a special reason to be false for uploads.

To address your question more directly, Eliezer thinks that it should be possible to create accurate but nonsentient models of humans (which an FAI would use to avoid simulating people in its environment as it tries to predict the consequences of its actions). This seems plausible, but if you attach a microphone and speaker to such a model then it would say that it's conscious even though it's not. It also seems plausible that in an attempt to optimize uploads for computational efficiency (in order to fit more people into the universe), we could unintentionally change them from sentient beings to nonsentient models. Does this convince you that it's not safe to "just ask an upload whether it's conscious"?

Optimizing an upload can turn it into Eliza! Nice :-)

I still think that if a neuron-by-neuron upload told you it was conscious, that would probably mean computer programs can be conscious. But now I'm less sure of that, because scanning can be viewed as optimizing. For example, if the neuron saying "I'm conscious" gets its input from somewhere that isn't scanned, the scanner might output a neuron that always says "yes". Thanks for helping me realize that!

I still think that if a neuron-by-neuron upload told you it was conscious, that would mean computer programs can be conscious.

It seems that in this case the grounds for thinking that the upload is phenomenally conscious have largely to do with the fact that it is a "neuron-by-neuron" copy, rather than the fact that it can verbally report having conscious experiences.

It seems to me that belief in being conscious is causally linked to being conscious

This may well be true but I don't see how you can be very certain about it in our current state of knowledge. Reducing this uncertainty seems to require philosophical progress rather than scientific progress.

or at least that's true for humans and would need a special reason to be false for uploads.

If you (meaning someone interested in promoting Hedonium) are less sure that it's true for non-humans, then I would ask a slightly different question that makes the same point. Suppose I told you that your specially designed collection of atoms optimized for hedon production can't feel happiness because it's not conscious. Can you conduct an experiment to disprove this?

This may well be true but I don't see how you can be very certain about it in our current state of knowledge. Reducing this uncertainty seems to require philosophical progress rather than scientific progress.

Yeah, I think that making more philosophical or conceptual progress would be higher value relative to cost than doing more experimental work.

Suppose I told you that your specially designed collection of atoms optimized for hedon production can't feel happiness because it's not conscious. Can you conduct an experiment to disprove this?

The question seems like it would probably come down to "how similar is the algorithm this thing is running to the algorithms that cause happiness in humans (and, I'm very sure, in some other animals as well)?" And if running the exact same algorithm in a human would produce happiness, and that person could tell us so, that would be pretty conclusive.

If Omega was concerned about this sort of thing (and didn't know it already), it could test exactly which changes in physical conditions led to changes or lapses in its own consciousness and find out that way. That seems like potentially a near-solution to the hard problem of consciousness that I think you are talking about.

[anonymous] · 11y

Science won't tell us anything about value, only about which collections of atoms produce certain experiences; we then assign values to those.

Hm, yeah, moral uncertainty does seem a little important, but I tend to reject it for a few reasons. We can discuss it if you like but maybe by email or something would be better?

Some people will oppose Hedonium, and also things like wireheading, on various ethical grounds. But I think some people may be confused about wireheading and Hedonium rather than it actually being unacceptable according to their value system.

I think I potentially oppose hedonium, and definitely oppose wireheading, on one such ethical ground (objective list utilitarianism). Am I mistaken? (I imagine I'll need to elaborate before you can answer, so let me know what kind of elaboration would be useful.)

[anonymous] · 11y

I think the disagreement might be about objective list theory, which (from the very little I know about it) doesn't sound like something I'm into.

However, if you value several things why not have wireheads experience them in succession? Or all at once? Likewise with utilitronium?

However, if you value several things why not have wireheads experience them in succession?

I value "genuinely real" experiences. Or, rather, I want sufficiently self-aware and intelligent people to interact with other sufficiently self-aware and intelligent people (though I am fine if these people are computer simulations). This couldn't be replaced by wireheading, though I do think it could be done (optimally, in fact) via some "utilitronium" or "computronium".

Would you be up for creating wireheaded minds if they didn't care about interacting with other people?

I'm not sure that interacting with people is the most important part of my life, and I'd be fine living a life without that feature provided it was otherwise good.

Would you be up for creating wireheaded minds if they didn't care about interacting with other people?

No. But if already existing people would prefer to wirehead, I'd be up for making that happen.

[anonymous] · 11y

Your values make me sad :'( Still, maybe you'll make a massive happy simulation and get everyone to live in it; that's pretty awesome, but perhaps not nearly as good as Hedonium.

Problem: I don't value happiness or pleasure, basically at all. I don't know exactly what my extrapolated volition is, but it seems safe to say it's some form of network with uniqueness requirements on nodes and interconnectivity requirements, whose value goes up somewhere between quadratically and exponentially with its size, and that an FAI is a strict requirement, not just a helpful bonus. So any kind of straightforward tiling will be suboptimal, since the best thing to do locally depends on the total amount of resources available, and on whether something too similar already exists somewhere else in the network. And on top of that you have to keep latency between all parts of the system low.

tl;dr: my utility function probably looks more like "the size of the internet" than "the amount of happiness in the universe".

Hmm, well, we could just differ in fundamental values. Given how most people behave in their everyday lives, it does seem strange to me that they wouldn't value experiential things very highly. And it seems that if they did, their values about what to do with the universe would share this focal point.

I'll share the intuition pumps and thought experiments that lead to my values, because that should make them seem less alien.

So when I reflect on what my strongest self-regarding values are, it's pretty clear to me that "not getting tortured" is at the top of my preferences. I have other values, and I seem to value some non-experiential things such as truth and wanting to remain the same sort of person as I currently am, but really these just pale in comparison to my preference for not_torture. I don't think that most people on LW really consider torture when they reflect on what they value.

I also really strongly value the peak hedonic experiences that I have had, but I haven't experienced any with an intensity that could compare directly to what I can imagine real torture would be like, so I use torture as an example instead. The strongest hedonic experiences I have had are nights where I successfully met interesting, hot women and had sex with them. I would certainly trade a number of these nights to avoid a night of real torture, so they can be described on the same scale.

My other-regarding desires are straightforwardly about the well-being of other beings, and I would want to satisfy them in the same way that I would want to satisfy myself if I had the same desires as they have. So if they have desires A, B, and C, I would want the same thing to happen for them as I would want for myself if I had the exact same set of desires.

Trying to maximize things other than happiness and suffering involves trading off against these two things, and it just doesn't seem worth it to do that. The action that maximizes hedons is also the action that the most beings care the most about happening, and it feels kind of arbitrary and selfish to do something else instead.

I accept these intuition pumps and that leads me to hedonium. If it's unclear how exactly this follows, I can elaborate.

That might be the case. Let's do the same "observe current everyday behaviours" thing for me:

For reasons I'd rather not get into, it's been repeatedly shown that my revealed preference against torture is not much stronger than against other kinds of time-consuming distractions, and I can't recall ever having a strong hedonic experience "as myself" rather than immersed in some fictional character. An alien observing me might conclude I mostly value memorizing internet culture, and my explicit goals are mostly making a specific piece of information (often a work of art fitting some concept) exist on the internet, not caring much about whether it was "me" that put it there. Quite literally a meme machine.

I don't think these types of analysis (intuition pump is the wrong word, I think) are very useful though. Our preferences after endless iterations of self-improvement and extrapolation are probably entirely uncorrelated with what they appear to be to us as current humans.

For reasons I'd rather not get into, it's been repeatedly shown that my revealed preference against torture is not much stronger than against other kinds of time-consuming distractions,

Most people are at an extremely low risk of actually getting tortured, so looking at their revealed preferences for it would be hard. The odd attitudes people have toward low-risk, high-impact events would also confound that analysis.

It seems like a good portion of people's long-term plans are also the things that make them happy. The way I think about this is asking whether I would still want to want to do something if it would not satisfy my wanting or liking systems when I performed it. The answer is usually no.

and I can't recall ever having a strong hedonic experience "as myself" rather than immersed in some fictional character.

I'm not quite sure what you mean here.

Our preferences after endless iterations of self-improvement and extrapolation are probably entirely uncorrelated with what they appear to be to us as current humans.

It seems to me that there would be problems like the voting paradox for CEV. And so the process would involve judgement calls, and I am not sure whether I would be likely to agree with the judgement calls someone else made for me, if that was how CEV was to work. Being given superhuman intelligence to help me decide my values would be great, though.

I also have some of the other problems with CEV that are discussed in this thread: http://lesswrong.com/lw/gh4/cev_a_utilitarian_critique/

"And then the galaxies were turned into an undifferentiated mass of eudaimonium plasma" - phrases you don't hear very often.

[anonymous] · 11y

It may seem like a loss of diversity, but if many-worlds is true, this happens only in a very small percentage of the worlds.

Most worlds with intelligent life will contain all kinds of diversity anyway, so why not maximize raw pleasure in the few worlds where it's possible?

As a general point, I think you need more information on whether any current strategies to promote hedonium would actually be useful (i.e., increase the chance of hedonium) and whether there isn't a better way (e.g., promoting something else).

It would do some good if you explained at the outset what the hell you're talking about. I stopped reading about halfway into the post because I couldn't get a clear idea of that; what is a hedonium-esque scenario and what does promotion of hedonium mean? The wiki link for utilitronium doesn't help much.

[anonymous] · 11y

Sorry, imagine something along the lines of tiling the universe with copies of the smallest collection of atoms that produces a happy experience.

Hedonium-esque would just be something like converting all available resources except Earth into Hedonium.

By "promotion" I mean stuff like popularizing it, I'm not sure how this might be done. Maybe ads targeted at people who interested in transhumanism?

I still have no idea what that means. In particular, I don't see how this is anything but a very roundabout formulation for "change the world so that people are happier". Is this about the difference between preference utilitarianism and some sort of experience-based utilitarianism, or what?

It's "change the world so that people are happier, to the exclusion of all else". In particular, they are only "people" to the extent that this is necessary to be happy. They will not have an interesting culture. They will not be intelligent, except insomuch as this is necessary to be sentient, and by extension happy.

If all you care about is happiness, then tiling the universe with hedonium is the best-case scenario. If you want the universe to be an interesting place populated by intelligent beings, then it is less so. I'm in the former category. Most of the people here are not.

Thanks for that explanation.

You scare me.

It's mostly a matter of scale, but at such a large scale that obvious discrepancies or qualitative differences pop up. Just as there is a difference between making a computer chip factory and paving an entire desert with computronium, there's a difference between everyone being polite to each other and the universe being re-arranged specifically to make a person as happy as they possibly could be.

Think about situations like Just Another Day In Utopia, except with less control over the experience. Upload scenarios are more overt about it in the literature -- see chapter 6 of Friendship is Optimal, no knowledge of the setting required -- but that largely reflects energy efficiency and population density optimization more than a fundamental necessity of the scenario. See Huxley's Brave New World for a non-Singularity take on things.

And for clarity, we're probably also compressing more complex values into the word "happy".

Consider updating your article in light of this or (given your current slate of downvotes), deleting and republishing.

Let me get this straight. You want to promote the short-circuiting of the mental circuit of promotion?

The universe has already been tiled, with star-onium. Mess with that, and the antibodies of the space gods will stop you.

If it's been tiled with star-onium, why is there so much dark matter, and so many nebulas, gas clouds, black holes, and other non-stars? If they're that incompetent at their primary objective, I don't think we need to worry about their antibodies.