Diagonalore


When considering this topic, I think one has to contend with the notion that suffering and well-being can't carry symmetrical weight.

The idea is that they're not things you can combine into one value in the hope that the sum ends up positive. Suffering just exists in the negative domain of qualia, and no amount of positive qualia can "cancel it out", unless the two are experienced simultaneously (in which case I don't think I'd consider it actual suffering).

I'm currently undecided on the merits of antinatalism for a variety of reasons.

That said, I have past experience of at least ten years of excruciating major depressive disorder (I'm doing much better now). If I were given the option to experience another decade of that in exchange for an extra century of pain-free euphoria, I would absolutely decline that offer. Even if there were only a 10% chance that I'd actually experience that decade, I would still decline.

I appreciate your input. These are my first two comments here, so apologies if I'm out of line at all.

>Roughly speaking, you're saying that the ground-truth source of values is the self-evidence of those values to agents holding them. 

In the same way that the ground-truth proof of the existence of conscious experience comes from conscious experience. This doesn't imply that consciousness is any less real, even if it means that one agent can't fully assess the "realness" of another agent's claims to be experiencing consciousness. Agents can also be mistaken about the self-evident nature or scope of certain things relevant to consciousness, and other agents can justifiably reject the inherent validity of those claims. But neither point suggests doubting that the existence of consciousness can be arrived at self-evidently.

For example, someone might suggest that it is self-evident that a particular course of events occurred because they have a clear memory of it happening. Obviously they're wrong to call that self-evident, and you could justifiably dismiss their level of confidence.

Similarly, I'm not suggesting that any given moral value held to be self-evident should be considered as such, just that the realness of morality is arrived at self-evidently.

I realise that probably makes it sound like I'm trying to rationalise attributing awareness of moral reality to some enlightened subset I happen to agree with, but I'm suggesting there's a common denominator which all morally relevant agents are inherently cognizant of. I think experiencing suffering is sufficient evidence for the existence of real moral truth value.

If an alien intelligence claimed to prefer to experience suffering on net, I think it would be a faulty translation or a deception, in the same sense as if an alien intelligence claimed to exhibit a variety of consciousness that precluded experiential phenomena.

>it says that an AI will have to somehow get evidence about what humans consider moral in order to learn morality.

Does moral realism necessarily imply that a sufficiently intelligent system can bootstrap moral knowledge without evidence derived via conscious agents? That isn't obvious to me. 

There's a counterargument-template which roughly says "Suppose the ground-truth source of morality is X. If X says that it's good to torture babies (not in exchange for something else valuable, just good in its own right), would you then accept that truth and spend your resources to torture babies? Does X saying it's good actually make it good?"

I'm not sure I'm able to properly articulate my thoughts on this, but I'd be interested to know if it's understandable and where it might fit. Sorry if I repeat myself.

From my perspective, it's like applying a similar template to verify or refute the cogito.

I know consciousness exists because I'm conscious of it. If you asked me if I'd accept the truth that I'm not conscious, supposing this were the result of the cogito, I'd consider that question incoherent.

If someone concluded that they're not conscious, by leveraging consciousness to assess whether they're conscious, then I could only conclude that they misunderstand consciousness.

My version of moral realism would be similar. The existence of positive and negative moral value is effectively self-evident to all beings affected by such values.

To me, saying "what if the ground truth of morality is that (all else equal) an instance of suffering is preferable to its absence" is like saying "what if being conscious of one's own experience isn't necessarily evidence of consciousness."