See you then!
Surely that would be a huge amount of mostly scientific progress?
What kind of scientific progress are you envisioning, that would eventually tell us how much hedonic value a given collection of atoms represents? Generally scientific theories can be experimentally tested, but I can't see how one could experimentally test whether such a hedonic value theory is correct or not.
Are you a moral realist?
I think we don't know enough to accept or reject moral realism yet. But even assuming "no objective morality", there may be moral theories that are more or less correct relative to an individual (for example, which hedonic value theory is correct for you), and "philosophy" seems to be the only way to try to answer these questions.
Science won't tell us anything about value, only about which collections of atoms produce certain experiences; we then assign values to those.
Hm, yeah, moral uncertainty does seem somewhat important, but I tend to reject it for a few reasons. We can discuss it if you like, but maybe email or something would be better?
Do you have an ethical theory that tells you, given a collection of atoms, how much hedonic value it contains? I guess the answer is no, since AFAIK nobody is even close to having such a theory. Going from our current state of knowledge to having such a theory (and knowing that you're justified in believing in it) would represent a huge amount of philosophical progress. Don't you think that this progress would also give us a much better idea of which of various forms of consequentialism is correct (if any of them are)? Why not push for such progress, instead of your current favorite form of consequentialism?
Surely that would be a huge amount of mostly scientific progress? How much value we assign to a particular thing is totally arbitrary.
Are you a moral realist? I get the feeling we're heading towards the is/ought problem.
However, if you value several things why not have wireheads experience them in succession?
I value "genuinely real" experiences. Or, rather, I want sufficiently self-aware and intelligent people to interact with other sufficiently self-aware and intelligent people (though I am fine if these people are computer simulations). This couldn't be replaced by wireheading, though I do think it could be done (optimally, in fact) via some "utilitronium" or "computronium".
Some people will oppose Hedonium, and also things like wireheading, on various ethical grounds. But I think some people may be confused about wireheading and Hedonium rather than it actually being unacceptable according to their value system.
I think I potentially oppose hedonium, and definitely oppose wireheading, on one such ethical ground (objective list utilitarianism). Am I mistaken? (I imagine I'll need to elaborate before you can answer, so let me know what kind of elaboration would be useful.)
I think the disagreement might be about objective list theory, which (from the very little I know about it) doesn't sound like something I'm into.
However, if you value several things why not have wireheads experience them in succession? Or all at once? Likewise with utilitronium?
It would do some good if you explained at the outset what the hell you're talking about. I stopped reading about halfway into the post because I couldn't get a clear idea of it: what is a hedonium-esque scenario, and what does promotion of hedonium mean? The wiki link for utilitronium doesn't help much.
Sorry, imagine something along the lines of tiling the universe with copies of the smallest collection of atoms that produces a happy experience.
Hedonium-esque would just be something like converting all available resources except Earth into Hedonium.
By "promotion" I mean stuff like popularizing it, I'm not sure how this might be done. Maybe ads targeted at people who interested in transhumanism?
And don't have children.
All I am saying is that one has to draw an arbitrary care/don't-care boundary somewhere, and "human/non-human" is a common and easily determined Schelling point in most cases. It fails in some, like the intelligent pig example from the OP, but then every boundary fails on some example.
Where does sentience fail as a boundary?
Anyone here?