I would like to suggest zombies of the second kind: a person with an inverted spectrum. It could even be a copy of me which speaks all the same philosophical nonsense as I do, but any time I see green, he sees red, yet names it green. Is he possible?
Such an entity is possible, but would not be an atom-exact copy of you.
...Has someone been mass downvoting you?
What if you're like me and consider it extremely implausible that even a strong superintelligence would be sentient unless explicitly programmed to be so (or at least deliberately created with a very human-like cognitive architecture), and also believe that any AI that is sentient is vastly more likely than a non-sentient AI to be unfriendly?
I've never heard of 'Dust Theory' before, but I should think it follows trivially from most large multiverse theories, does it not?
Trigger warning: memetic hazard.
Abj guvax nobhg jung guvf zrnaf sbe nalbar jub unf rire qvrq (be rire jvyy).
I'm not too concerned, but primarily because I still have a lot of uncertainty as to how to approach that sort of question. My mind still spits out some rather nasty answers.
EDIT: I just realized that you were probably intentionally implying exactly what I just said, which makes this comment rather redundant.
What bullet is that? I implicitly agreed that murder is wrong (as per the way I use the word 'wrong') when I said that your statement wasn't a misinterpretation. It's just that as I mentioned before, I don't care a whole lot about the thing that I call 'morality'.
What I meant when I called myself a nihilist was essentially that there was no such thing as an objective, mind-independent morality. Nothing more. I would still consider myself a nihilist in that sense (and I expect most on this site would), but I don't call myself that because it could cause confusion.
Can you explain how the statement 'A world in which everyone but me does not murder is preferable to a world in which everyone including me does not murder' is a misinterpretation of this quotation?
It isn't, although that doesn't mean I would necessaril...
That's my point. You're saying the 'nihilists' are wrong, when you may in fact be disagreeing with a viewpoint that most nihilists don't actually hold, on account of their using the words 'nihilism' and/or 'morality' differently from you. And yeah, I suppose in that sense my 'morality' does tie into my actual values, but only my values as applied to an unrealistic thought experiment; and then again, a world in which everyone but me adhered to my notions of morality (and I wasn't penalized for not doing so) would still be preferable, to me, to a world in which everyone including me did.
I mean that what I call my 'morality' isn't intended to be a map of my utility function, imperfect or otherwise. Along the same lines, you're objecting that self-proclaimed moral nihilists have an inaccurate notion of their own utility function, when it's quite possible that they don't consider their 'moral nihilism' to be a statement about their utility function at all. I called myself a moral nihilist for quite a while without meaning anything like what you're talking about here. I knew that I had preferences, I knew (roughly) what those preferences were...
Personally, when I use the word 'morality' I'm not using it to mean 'what someone values'. I value my own morality very little, and developed it mostly for fun. Somewhere along the way I think I internalized it at least a little, but it still doesn't mean much to me, and seeing it violated has no perceivable impact on my emotional state. Now, this may just be unusual terminology on my part, but I've found that a lot of people, at least based on what they say about 'morality', appear to be using the term similarly to me.
I think a big part of it is that I don't really care about other people except instrumentally. I care terminally about myself, but only because I experience my own thoughts and feelings first-hand. If I knew I were going to be branched, then I'd care about both copies in advance as both are valid continuations of my current sensory stream. However, once the branch had taken place, both copies would immediately stop caring about the other (although I expect they would still practice altruistic behavior towards each other for decision-theoretic reasons). I s...
Approximately the same extent to which I'd consider myself to exist in the event of any other form of information-theoretic death. Like, say, getting repeatedly shot in the head with a high-powered rifle, or having my brain dissolved in acid.
I mean the sufficiency of the definition given. Consider a universe which absolutely, positively, was not created by any sort of 'god', the laws of physics of which happen to be wired such that torturing people lets you levitate, regardless of whether the practitioner believes he has any sort of moral justification for the act. This universe's physics are wired this way not because of some designer deity's idea of morality, but simply by chance. I do not believe that most believers in objective morality would consider torturing people to be objectively good in this universe.
Hm. I'll acknowledge that's consistent (though I maintain that calling that 'morality' is fairly arbitrary), but I have to question whether that's a charitable interpretation of what modern believers in objective morality actually believe.
OK, I understand it in that context, as there are actual consequences. Of course, this also makes the answer trivial: it's obviously relevant, since it gives you advantages you wouldn't otherwise have. Though even in the sense you've described, I'm not sure whether the word 'morality' really seems applicable. If torturing people let us levitate, would we call that 'objective morality'?
EDIT: To be clear, my intent isn't to nitpick. I'm simply saying that patterns of behavior being encoded, detected and rewarded by the laws of physics doesn't obviously seem to equate those patterns with 'morality' in any sense of the word that I'm familiar with.
I have no idea what 'there is an objective morality' would mean, empirically speaking.
More concerning to me than outright unfriendly AI is AI whose creators attempted to make it friendly but only partially succeeded, such that our state is relevant to its utility calculations, but not necessarily in ways we'd like.
I don't think Harry meant to imply that actually running this test would be nice, but rather that one cannot even think of running this test without first thinking of the possibility of making a horcrux for someone else (something which is more-or-less nice-ish in itself, the amorality inherent in creating a horcrux at all notwithstanding).
A paperclip maximizer won't wirehead because it doesn't value world states in which its goals have been satisfied, it values world states that have a lot of paperclips.
In fact, taboo 'values'. A paperclip maximizer is an algorithm whose output approximates whichever output would lead to the world states with the greatest expected number of paperclips. This is the template for maximizer-type AGIs in general.
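To make the no-wireheading point concrete, here's a minimal toy sketch of what I mean (my own illustration, with made-up names like `toy_world_model`, not anyone's actual AGI design): the agent ranks actions by the expected paperclip count of the predicted world states, not by any internal reward register, so an action that merely inflates such a register scores zero.

```python
def expected_paperclips(action, world_model):
    """Average paperclip count over the world states the model predicts."""
    outcomes = world_model(action)  # list of (probability, paperclip_count)
    return sum(p * clips for p, clips in outcomes)

def choose_action(actions, world_model):
    # Pick whichever action leads to the greatest expected number of paperclips.
    return max(actions, key=lambda a: expected_paperclips(a, world_model))

# Toy world model: 'wirehead' only changes an internal feeling of success and
# makes no paperclips; 'build_factory' actually produces them.
def toy_world_model(action):
    if action == "wirehead":
        return [(1.0, 0)]
    if action == "build_factory":
        return [(0.9, 1000), (0.1, 0)]
    return [(1.0, 0)]

print(choose_action(["wirehead", "build_factory"], toy_world_model))
# -> build_factory
```

The point of the toy is just that the evaluation criterion lives in the predicted world, not in the agent's own internals, so tampering with its internals can't score well.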
I didn't say I knew which parts of the brain would differ, but to conclude therefore that it wouldn't is to confuse the map with the territory.