Spoilers for mad investor chaos and the woman of asmodeus (planecrash Book 1).
The Watcher spoke on, then, about how most people have selfish and unselfish parts - not selfish and unselfish components in their utility function, but parts of themselves in some less Law-aspiring way than that. Something with a utility function, if it values an apple 1% more than an orange, if offered a million apple-or-orange choices, will choose a million apples and zero oranges. The division within most people into selfish and unselfish components is not like that, you cannot feed it all with unselfish choices whatever the ratio. Not unless you are a Keeper, maybe, who has made yourself sharper and more coherent; or maybe not even then, who knows? For (it was said in another place) it is hazardous to non-Keepers to know too much about exactly how Keepers think.
It is dangerous to believe, said the Watcher, that you get extra virtue points the more that you let your altruistic part hammer down the selfish part. If you were older, said the Watcher, if you were more able to dissect thoughts into their parts and catalogue their effects, you would have noticed at once how this whole parable of the drowning child, was set to crush down the selfish part of you, to make it look like you would be invalid and shameful and harmful-to-others if the selfish part of you won, because, you're meant to think, people don't need expensive clothing - although somebody who's spent a lot on expensive clothing clearly has some use for it or some part of themselves that desires it quite strongly.
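(To make the Watcher's apple-or-orange point concrete: here is a minimal sketch, in Python with invented numbers, of why a coherent utility maximizer never "balances" a 1% preference across repeated choices, the way a person made of parts might try to.)

```python
# A minimal sketch (numbers invented for illustration): an agent that
# maximizes a utility function, offered the same apple-or-orange choice
# a million times, picks the apple every single time.

APPLE_UTILITY = 1.01   # values an apple 1% more than an orange
ORANGE_UTILITY = 1.00

def choose(options):
    """Pick the single highest-utility option (an argmax)."""
    return max(options, key=options.get)

choices = [
    choose({"apple": APPLE_UTILITY, "orange": ORANGE_UTILITY})
    for _ in range(1_000_000)
]
print(choices.count("apple"), choices.count("orange"))  # prints: 1000000 0
```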
I've been thinking a lot lately about exactly how altruistic I am. The truth is that I'm not sure: I care a lot about not dying, and about my girlfriend and family and friends not dying, and about all of humanity not dying, and about all life on this planet not dying too. And I care about the glorious transhuman future and all that, and the astronomical number (or whatever it is) of possible good future lives hanging in the balance.
And I care about some of these things disproportionately to their apparent moral magnitude. But, what I care about is what I care about. Rationality is the art of getting more of what you want, whatever that is; of systematized winning, by your own lights. You will totally fail in that art if you bulldoze your values in a desperate effort to fit in, or to be a "good" person, in the way your model of society seems to ask you to. What you ought to do instead is protect your brain's balance of undigested value-judgements: be corrigible to the person you will eventually, on reflection, grow up to be. Don't rush to lock in any bad, "good"-sounding values now; you are allowed to think for yourself and discover what you stably value.
It is not the Way to do what is "right," or even to do what is "right" instrumentally effectively. The Way is to get more of what you want and endorse on reflection, whatever that ultimately is, through instrumental efficacy. If you want that, you'll have to protect the kernel encoding those still-inchoate values, in order to ever-so-slowly tease out what those values are. How you feel is your only guide to what matters. Eventually, everything you care about could be generated from that wellspring.
Well put. If you really do consist of different parts, each wanting different things, then your values should derive from a multi-agent consensus among your parts, not just an argmax over the values of the different parts.
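A hedged sketch of that contrast (the parts, options, and numbers are all hypothetical, and the voting rule is just one stand-in for a consensus protocol): under "argmax over the parts," the single most intense desire dictates every choice, while even a crude consensus rule lets every part's normalized preferences weigh in.

```python
# Hypothetical internal parts and how strongly each values two options.
parts = {
    "selfish":    {"nice_clothes": 9.0, "donate": 1.0},
    "altruistic": {"nice_clothes": 0.0, "donate": 8.0},
}

def argmax_over_parts(parts):
    """The strongest single part-desire wins outright (a dictatorship)."""
    _, option = max(
        (value, option)
        for prefs in parts.values()
        for option, value in prefs.items()
    )
    return option

def consensus(parts):
    """One simple consensus protocol (of many): each part gets one
    normalized vote, and the option with the most total support wins."""
    totals = {}
    for prefs in parts.values():
        weight = sum(prefs.values())
        for option, value in prefs.items():
            totals[option] = totals.get(option, 0.0) + value / weight
    return max(totals, key=totals.get)

print(argmax_over_parts(parts))  # nice_clothes: the loudest part dictates
print(consensus(parts))          # donate: the broadly supported option wins
```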
In other words, the quoted idea that a Keeper has made themselves “sharper and more coherent” seems like a very limited way of looking at “coherence”. In the context of multi-agent negotiations, becoming “sharper and more coherent” should equate to having an internal consensus protocol that comes closer to the Pareto frontier of possible multi-agent equilibria.
Technically, “allocate all resources to a single agent” is a Pareto-optimal distribution, but it’s only possible if a single agent has an enormously outsized influence on the decision-making process. A person for whom that is true would, I think, be incredibly deranged and obsessive. None of my parts aspire to create such a twisted internal landscape.
I instead aspire to be the sort of person whose actions both reflect a broad consensus among my individual parts and effectively implement that consensus in the real world. Think results along the lines of the equilibrium that emerges from superrational agents exchanging influence, rather than some sort of “internal dictatorship” where one part infinitely dominates all others.
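To illustrate both halves of that with invented utilities (and with the Nash bargaining solution standing in, by my own assumption, for the influence-exchange equilibrium): “give everything to one part” really does pass the Pareto test, but the bargain lands on a mutually agreeable point instead.

```python
# Hypothetical joint outcomes, scored as (utility to part A, utility to part B).
outcomes = {
    "A_dictates":  (10.0, 0.0),
    "B_dictates":  (0.0, 10.0),
    "compromise":  (6.0, 7.0),
    "bad_for_all": (1.0, 1.0),
}
disagreement = (0.0, 0.0)  # what each part gets if no deal is struck

def pareto_optimal(name):
    """No other outcome makes some part better off and no part worse off."""
    ua, ub = outcomes[name]
    return not any(
        xa >= ua and xb >= ub and (xa, xb) != (ua, ub)
        for xa, xb in outcomes.values()
    )

def nash_bargain():
    """Maximize the product of gains over the disagreement point: one
    classic model of the deal that negotiating agents converge on."""
    da, db = disagreement
    return max(
        outcomes,
        key=lambda n: (outcomes[n][0] - da) * (outcomes[n][1] - db),
    )

print([n for n in outcomes if pareto_optimal(n)])
# ['A_dictates', 'B_dictates', 'compromise']: dictatorships are Pareto optimal too
print(nash_bargain())  # 'compromise': but the bargain lands elsewhere
```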
To simplify even further: answering "How do I deal with internal conflict?" with "Don't have internal conflict" is pretty much correct, and pretty much unhelpful.
I thought there was a line favoring consensus when approaching conflict situations, something along the lines of: "How do you deal with internal conflict?" "In conflict you are outside the Pareto frontier, so you should do nothing, as there is nothing mutually agreeable to be found." Or: "Cooperate to the extent that mutual agreement exists, and then do nothing past the point where true disagreement starts."
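That second rule is easy to state precisely. A minimal sketch (hypothetical parts, actions, and numbers): take an action only when every part weakly prefers it to the status quo, and where true disagreement starts, do nothing.

```python
# Utility change each candidate action gives to (selfish part, altruistic part),
# relative to a status quo of doing nothing (0.0 for everyone). All invented.
actions = {
    "exercise":       (2.0, 1.0),   # both parts approve
    "donate_half":    (-3.0, 5.0),  # true disagreement: the selfish part objects
    "buy_everything": (4.0, -2.0),  # true disagreement: the altruistic part objects
}

def mutually_agreeable(deltas):
    """Cooperate only where every part weakly gains over the status quo."""
    return all(d >= 0.0 for d in deltas)

for name, deltas in actions.items():
    verdict = "do it" if mutually_agreeable(deltas) else "do nothing"
    print(f"{name}: {verdict}")
# Only 'exercise' clears the bar; past that point the rule stops acting.
```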