I assume that many will agree with your response regarding the mind-"uploading" scenario. At the same time, I think we can safely say that at least some people would go through with it. Would you consider those "uploaded" minds to be persons, or would you object to that?
Besides that "uploading" scenario, what would your limit be for other plausible transhumanist modifications?
"Good" and "bad" only make sense in the context of (human) minds.
Ah yes, my mistake for (ab)using the term "objective" all this time.
So you do, of course, at least agree that there are such minds for which "good" and "bad" exist, as you just said.
Now, would you agree that one can generalize (or "abstract", if you prefer that term here) the concept of subjective good and bad across all imaginable minds that could possibly exist in reality, or not? I assume you will; you can talk about it, after all.
Can we then not reason about the subjective good and bad fo...
You know what, I think you are right that there is one major mistake I kept making here and elsewhere!
That mistake being my use of the very word "objective", which I didn't use with its (probably) common meaning, so I really should have questioned what each of us even understands by "objective" in the first place.
My bad!
The following should be closer to what I actually meant to claim:
One can generalize subjective "pleasure" and "suffering" (or perhaps "value", if you prefer) across all realistically possible subjects (or value systems). Based thereon, one...
Is that a fair summary?
Yes! To clarify further, by "mentally deficient" in this context I would typically mean "confused" or "insane" (as in not thinking clearly), but I would not necessarily mean "stupid" in some other more generally applicable sense.
And thank you for your fair attempt at understanding the opposing argument.
...So that means the decrease in suffering isn’t fully intentional. That is all I need to argue against humans.
Surely it’s not a mark against humans (collectively or even individually) if some reduction in suffering occurs as a by-product...
I am not sure what you mean by “objective good and bad”. There’s “good and bad by some set of values”, which can be objectively evaluated once defined—is that what you meant?
No, what I mean is that the very existence of a suffering subject state is itself that which is "intrinsically" or "objectively" or however-we-want-to-call-it bad/"negative". This is independent of any "set of values" that any existing subject has. What matters is whether the subject suffers or not, which is not as arbitrary as the set of values can be. The arbitrary value set is no...
I take issue with the word "feasibly". (...)
Fair enough, I suppose; I'm not intending to claim that it is trivial.
(...) There are certainly configurations of reality that are preferable to other configurations. The question is, can you describe them well enough to the AI (...)
So do you agree that there are objectively good and bad subset configurations within reality? Or do you disagree with that and mean "preferable" exclusively according to some subject(s)?
...I am human, and therefore I desire the continued survival of humanity. That's objective enough...
but the suffering, or lack thereof, will no longer matter—since there won’t be any humans—so what’s the point?
The absence of suffering matters positively, because the presence matters negatively. Humans are not required for objective good and bad.
Instead, humans haven’t even unified under a commonly beneficial ideology.
Why should we do that?
To prevent suffering. Why should you not do that?
(and if it does, that it’s better for each of us than our current own ideologies)?
Since the ideologies are contradictory, only one of them, if any, can be correct...
The arguments you have made so far come across to me as something like "badness exists in a person's mind, minds are real, therefore badness objectively exists".
Yes!
This is like claiming "dragons exist in a person's mind, minds are real, therefore dragons objectively exist". It's not a valid argument.
No! It is not like that. The state of "badness" in the mind is very real after all.
Do you also think your own consciousness isn't real? Do you think your own qualia are not real? Are your thought patterns themselves not real? Your dragon example doesn't apply...
Do you understand the distinction between "Dragons exist" and "I believe that dragons exist"?
Yes, of course.
"X exists": Suffering exists.
"I believe that X exists": I believe that suffering exists.
I use "suffering" to describe a state of mind in which the mind "perceives negatively". Do you understand?
Now:
"X causes subject S suffering." and "Subject S is suffering." are also two different things.
The cause can be arbitrary; the causes can even be completely different between subjects, as you know. But the presence or absence of a suffering mind is an "objective...
A great job of preventing suffering, for instance. Instead, humans haven't even unified under a commonly beneficial ideology. Not even that. There are tons of opposing ideologies, each more twisted than the last. So I don't even really need to talk about how they treat the other animals on the planet - not that those are any wiser, but that's no reason to continue their suffering.
Let me clarify: Minds that so easily enable or cause suffering are insane at the core. And causing suffering to gain pleasure, now that might even be a fairly solid definition of "...
Yet the suffering is also objectively real.
It is objectively real. It is not objectively bad, or objectively good.
(...)
Ultimately, what facts about reality are we in disagreement about?
Probably the most severe disagreement between us is whether there can be "objectively" bad parts within reality or not.
Let me try one more time:
A consciousness can perceive something as bad or good, "subjectively", right?
Then this very fact that there is a consciousness that can perceive something as bad or good means that such a configuration within reality...
And what exactly makes that value system more correct than any other value system? (...) Who says a value system that considers these things is better than any other value system? You do. These are your preferences. (...) Absolutely none of the value systems can be objectively better than any other.
Let's consider a simplified example:
So according to you both are objectively equal, yes?
Yet the suffering is also objectively real. The ...
First point:
I think there obviously is such a thing as "objective" good and bad configurations of subsets of reality; see the other thread here https://www.lesswrong.com/posts/eJFimwBijC3d7sjTj/should-any-human-enslave-an-agi-system?commentId=3h6qJMxF2oCBExYMs for details if you want.
Assuming this is true, a superintelligence could feasibly be created to understand this. No complicated alignment to a common human value system is required for that, even under your apparent assumption that the metric to be optimized couldn't be superseded by another through understanding...
As long as we agree that pleasure/suffering are processes that happen inside minds, sure. Minds are parts of reality.
Of course!
A person's opinions are not a "subset" of reality.
If I believe in dragons, it doesn't mean dragons are a subset of reality, it just means that my belief in dragons is stored in my mind, and my mind is a part of reality.
Of course, that is not what I meant to imply. We agree that the mind and thus the belief itself (but not necessarily that which is believed in) is part of reality.
...What does "objective definition of good and
(...) I'm not sure the question of whether the AI system has a "proper mind" or not is terribly relevant.
Either the AI system submits to our control, does what we tell it to do, and continues to do so, into perpetuity, in which case it is safe.
Yes, I guess the central questions I'm trying to pose here are these: Do the humans who control the AI even have a sufficient understanding of good and bad? Can any human group be trusted with the power of a superintelligence long-term? Or, if you say that only the initial goal specification matters, then can an...
But why? That would be strictly more dangerous—way, way more dangerous—than a superintelligence that isn’t a “proper mind” in this sense!
(...)
(Because it would be a terrible idea. Obviously.)
Why? Do you think humans are doing such a great job? I sure don't. I'm interested in the creation of something saner than humans, because humans mostly are not. Obviously. :)
Thanks again for the detail. If I don't misunderstand you, we do agree that: (...)
No? They don't have to exist in reality. I can imagine "the value system of Abraham Lincoln", even though he is dead. (...)
Sorry, that's not what I meant to communicate here, let me try that again:
There is actual pleasure/suffering that exists, it is not just some hypothetical idea, right?
Then that means there is something objective, some subset of reality that actually is this pleasure/suffering, yes?
This in turn means that it should in fact be possible to understand...
No. It absolutely is not. It is a machine. (...) (From your other response here:) The superintelligent AI will, in my estimation, be the result of some kind of optimization process which has a very particular goal. Once that goal is locked in, changing it will be nigh impossible.
Ah, I see; you simply don't consider it likely or plausible that the superintelligent AI will be anything other than some machine learning model on steroids?
So I guess that arguably means this kind of "superintelligence" would actually still be less impressive than a human that c...
I'm sorry for the hyperbolic term "enslave", but at least consider this:
Is a superintelligent mind, a mind effectively superior to all human minds in practically every way, still not a subject similar to what you are?
Is it really more like a car or chatbot or image generator or whatever, than a human?
Sure, perhaps it may never have any emotions, perhaps it doesn't need any hobbies, perhaps it is too alien for any human to relate to it, but it would still, by definition, have to be some kind of subject that more easily understands anything within reality ...
It might be capable of changing this goal, but why would it? A superintelligent paperclip maximizer is capable of understanding that changing its goals would reduce the number of paperclips that it creates, and thus would choose not to alter its goals.
(...)
So if you wouldn't take a pill that would make you 10% more likely to commit murder (which is against your long-term goals) why would an AI change its utility function to reduce the number of paperclips that it generates?
It comes down to whether the superintelligent mind can contemplate whether there...
Thanks again for the detail. If I don't misunderstand you, we do agree that:
Now, you wrote:
I could also imagine a morality/values system for entities that do not currently exist, but sure. It's subjective because many possible such systems exist.
I also agree with that; a (super-)human can imagine many possible value systems.
But ...
Thank you for the detailed response!
If we're creating a mind from scratch, we might as well give it the best version of our values, so it would be 100% on our side. Why create a (superintelligent) mind that would be our adversary, that would want to destroy us? Why create a superintelligent mind that wants anything different from what we want, when it comes to ultimate values?
You write "on our side", "us", "we", but who exactly does that refer to - some approximated common human values I assume? What exactly are these values? To live a happy live by ea...