In response to Meetup : Dublin
Comment author: Ruairi 09 May 2015 01:09:50PM 0 points [-]

Anyone here?

In response to Meetup : Dublin
Comment author: Ruairi 30 April 2015 07:23:33AM 2 points [-]

See you then!

In response to comment by Ruairi on .
Comment author: Wei_Dai 28 August 2013 11:41:40PM 1 point [-]

Surely that would be a huge amount of mostly scientific progress?

What kind of scientific progress are you envisioning, that would eventually tell us how much hedonic value a given collection of atoms represents? Generally scientific theories can be experimentally tested, but I can't see how one could experimentally test whether such a hedonic value theory is correct or not.

Are you a moral realist?

I think we don't know enough to accept or reject moral realism yet. But even assuming "no objective morality", there may be moral theories that are more or less correct relative to an individual (for example, which hedonic value theory is correct for you), and "philosophy" seems to be the only way to try to answer these questions.

In response to comment by Wei_Dai on .
Comment author: Ruairi 29 August 2013 08:25:57AM -1 points [-]

Science won't tell us anything about value, only about which collections of atoms produce certain experiences; we then assign values to those.

Hm, yeah, moral uncertainty does seem a little important, but I tend to reject it for a few reasons. We can discuss it if you like, but maybe email or something would be better?

In response to .
Comment author: Wei_Dai 28 August 2013 10:59:32PM 3 points [-]

Do you have an ethical theory that tells you, given a collection of atoms, how much hedonic value it contains? I guess the answer is no, since AFAIK nobody is even close to having such a theory. Going from our current state of knowledge to having such a theory (and knowing that you're justified in believing in it) would represent a huge amount of philosophical progress. Don't you think that this progress would also give us a much better idea of which of various forms of consequentialism is correct (if any of them are)? Why not push for such progress, instead of your current favorite form of consequentialism?

In response to comment by Wei_Dai on .
Comment author: Ruairi 28 August 2013 11:12:14PM 1 point [-]

Surely that would be a huge amount of mostly scientific progress? How much value we assign to a particular thing is totally arbitrary.

Are you a moral realist? I get the feeling we're heading towards the is/ought problem.

In response to comment by Ruairi on .
Comment author: peter_hurford 28 August 2013 09:25:18PM 2 points [-]

However, if you value several things why not have wireheads experience them in succession?

I value "genuinely real" experiences. Or, rather, I want sufficiently self-aware and intelligent people to interact with other sufficiently self-aware and intelligent people (though I am fine if these people are computer simulations). This couldn't be replaced by wireheading, though I do think it could be done (optimally, in fact) via some "utilitronium" or "computronium".

In response to comment by peter_hurford on .
Comment author: Ruairi 28 August 2013 10:17:33PM 2 points [-]

Your values make me sad :'( Still, maybe you'll make a massive happy simulation and get everyone to live in it; that's pretty awesome, but perhaps not nearly as good as Hedonium.

In response to .
Comment author: peter_hurford 28 August 2013 12:42:35PM 3 points [-]

Some people will oppose Hedonium, and also things like wireheading, on various ethical grounds. But I think some people may be confused about wireheading and Hedonium rather than it actually being unacceptable according to their value system.

I think I potentially oppose hedonium, and definitely oppose wireheading, on various ethical grounds (objective list utilitarianism). Am I mistaken? (I imagine I'll need to elaborate before you can answer, so let me know what kind of elaboration would be useful.)

In response to comment by peter_hurford on .
Comment author: Ruairi 28 August 2013 12:58:49PM 0 points [-]

I think the disagreement might be about objective list theory, which (from the very little I know about it) doesn't sound like something I'm into.

However, if you value several things why not have wireheads experience them in succession? Or all at once? Likewise with utilitronium?

In response to .
Comment author: Creutzer 28 August 2013 11:55:10AM 1 point [-]

It would do some good if you explained at the outset what the hell you're talking about. I stopped reading about halfway into the post because I couldn't get a clear idea of it; what is a hedonium-esque scenario, and what does promotion of hedonium mean? The wiki link for utilitronium doesn't help much.

In response to comment by Creutzer on .
Comment author: Ruairi 28 August 2013 12:23:43PM 1 point [-]

Sorry, imagine something along the lines of tiling the universe with the smallest collection of atoms that makes a happy experience.

Hedonium-esque would just be something like converting all available resources except Earth into Hedonium.

By "promotion" I mean stuff like popularizing it, I'm not sure how this might be done. Maybe ads targeted at people who interested in transhumanism?

Comment author: Ruairi 28 August 2013 11:21:50AM 1 point [-]

You might like to ask in this Facebook group too :)

Comment author: diegocaleiro 28 August 2013 11:05:18AM 1 point [-]

And don't have children.

Comment author: Ruairi 28 August 2013 11:20:39AM 1 point [-]
Comment author: shminux 28 July 2013 09:31:20PM 2 points [-]

All I am saying is that one has to draw an arbitrary care/don't-care boundary somewhere, and "human/non-human" is a rather common and easily determined Schelling point in most cases. It fails in some, like the intelligent pig example from the OP, but then every boundary fails on some example.

Comment author: Ruairi 28 July 2013 10:23:11PM 5 points [-]

Where does sentience fail as a boundary?
