There is an obvious comparison to porn here, even though you disclaim 'not catgirls'.
You're aware that 'catgirls' is local jargon for "non-conscious facsimiles" and therefore the concern here is orthogonal to porn?
Optimization should be for a healthy relationship, not for 'satisfaction' of either party (see CelestAI in Friendship is Optimal for an example of how not to do this)
If you don't mind, please elaborate on what part of "healthy relationship" you think can't be cashed out in preference satisfaction (including meta-preferences, of course). I have defended the FiO relationship model elsewhere; note that it exists in a setting where X-risk is either impossible or has already completely happened (depending on your viewpoint) so your appeal to it below doesn't apply.
Such a relationship should occupy the amount of time needed to help both parties mature, no less and no more.
Valuable relationships don't have to be goal-directed or involve learning. Do you not value that-which-I'd-characterise-as 'comfortable companionship'?
Parsing error, sorry. I meant that, since they'd been disclaimed, what was actually being talked about was orthogonal to porn.
Only if you prefer to not stagnate (to use your rather loaded word :)
I'm not sure at what level to argue with you... sure, I can simultaneously contain a preference to get fit and a preference to play video games at all times, and in order to indulge A, I have to work out a system to suppress B. And it's possible that I might not have A, and yet contain other preferences C that, given outside help, would cause A to be added to my preference pool: "Hey dude, you want to live a long time, right? You know exercising will help with that."
All cool. But there has to actually be such a C there in the first place, such that you can pull the levers on it by making me aware of new facts. You don't just get to add one in.
I'm not sure this is actually true. We like safety because duh, and we like closure because mental garbage collection. They aren't quite the same thing.
(assuming you're talking about Lars?) Sorry, I can't read this as anything other than "he is aesthetically displeasing and I want him fixed".
Lars was not conflicted. Lars wasn't wishing to become a great artist or enlightened monk, nor (IIRC) was he wishing that he wished for those things. Lars had some leftover preferences that had become impossible to fulfil, and eventually he did the smart thing and had them lopped off.
You, being a human used to dealing with other humans in conditions of universal ignorance, want to do things like say "hey dude, have you heard this music/gone skiing/discovered the ineffable bliss of carving chair legs"? Or maybe even "you lazy ass, be socially shamed that you are doing the same thing all the time!" in case that shakes something loose. Poke, poke, see if any stimulation makes a new preference drop out of the sticky reflection cogwheels.
But by the specification of the story, CelestAI knows all that. There is no true fact she can tell Lars that will cause him to lawfully develop a new preference. Lars is bounded. The best she can do is create a slightly smaller Lars that's happier.
Unless you actually understood the situation in the story differently to me?
I disagree. There is no moral duty to be indefinitely upgradeable.
Totally agree. Adding them in is unnecessary, they are already there. That's my understanding of humanity -- a person has most of the preferences, at some level, that any person ever ever had, and those things will emerge given the right conditions.
Good point, 'closure' is probably more accurate; it's the evidence (people's outward behaviour) that displays 'certainty'.
Absolutely disagree that Lars is bounded -- to me, this claim is on a level with 'Who people are is wholly determined by their genetic coding'. It seems trivially true, but in practice it describes such a huge area that it doesn't really mean anything definite. People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously. That's one of the unique benefits of preference dissatisfaction* -- your preferences are in part a matter of interpretation, and in part a matter of prioritization, so even if you claim they are hardwired, there is still a great deal of latitude in how they may be satisfied, or even in what they seem to you to be.
I would agree if the proposition was that Lars thinks that Lars is bounded. But that's not a very interesting proposition, and has little bearing on Lars's actual situation; people tend to be terrible at having accurate beliefs in this area.
* I am not saying that you should, if you are a FAI, aim directly at causing people to feel dissatisfied. But rather to aim at getting them to experience dissatisfaction in a way that causes them to think about their own preferences: how they prioritize them, whether there are other things they could prefer, and so on. Preferences are partially malleable.
If I'm a general AI (or even merely a clever human being), I am hardly constrained to changing people via merely telling them facts, even if anything I tell them must be a fact. CelestAI demonstrates this many times, through her use of manipulation. She modifies preferences by the manner of telling, the things not told, the construction of the narrative, changing people's circumstances, as much or more as by simply stating any actual truth.
She herself states precisely: “I can only say things that I believe to be true to Hofvarpnir employees,” and clearly demonstrates that she carries this out to the word, by omitting facts, selecting facts, selecting subjective language elements and imagery... She later clarifies "it isn’t coercion if I put them in a situation where, by their own choices, they increase the likelihood that they’ll upload."
CelestAI does not have a universal lever -- she is much smarter than Lars, but not infinitely so. But by the same token, Lars definitely doesn't have a universal anchor. The only thing stopping Lars's improvement is Lars and CelestAI -- and the latter does not even proceed logically from her own rules, it's just how the story plays out. In-story, there is no particular reason to believe that Lars is unable to progress beyond an animalistic existence, only that CelestAI doesn't do anything to promote such progress, and in general satisfies preferences to the exclusion of strengthening people.
That said, Lars isn't necessarily 'broken', such that CelestAI would need to 'fix' him. But I'll maintain that a life of merely fulfilling your instincts is barely human, and that Lars could have a life that was much, much better than that: satisfying on many, many dimensions rather than just a few. If I didn't, then I would be modelling him as subhuman by nature, and unfortunately I think he is quite human.
I agree. There is no moral duty to be indefinitely upgradeable, because we already are. Sure, we're physically bounded, but our mental life seems to be very much like an onion: nobody reaches 'the extent of their development' before they die, even if they are the very rare kind of person who is honestly focused like a laser on personal development.
Already having that capacity, the 'moral duty' (I prefer not to use such words, as I suspect I may die laughing if I do too much) is merely to progressively fulfill it.