Poly marriage?
A thought occurred to me today as I skimmed an article on a rationality forum where the subject of gay marriage cropped up: seeing as the issue has been hotly contested in various public fora, and especially in the courts, what about poly? After all, many if not all of the arguments for gay marriage apply to poly marriage as well.
Questions for LWers who are currently in such a relationship, or who have an opinion to share:
Do polies want to marry each other, or do such relationships not lend themselves to permanence above a certain number of partners? Should polies campaign for the right to a civil union anyway? What are the upsides and downsides of this? Etc.
Wireless heading, value drift, and so on
A typical image of the wirehead is that of a guy with his brain connected via a wire to a computer, living in a continuous state of pleasure, sort of like being drugged up for life.
What I mean by wireless heading (not such an elegant term, but anyway) is the idea of little to no value drift. Clippy is usually brought up as the most dangerous kind of AI, one we should avoid creating at all costs. Yet what's the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?
By 'us' I mean beings who share our intuitive understanding, or who can agree with us on things like morality, joy, not being bored, etc.
Shouldn't we focus on engineered/controlled value drift rather than preventing it entirely? Is that possible to program into an AI? Somehow I don't think so. It seems to me that the whole premise of a single benevolent AI depends to a large extent on the similarity of basic human drives: supposedly we're so close to each other that preventing value drift is not a big deal.
But once we get really close to the singularity, all sorts of technologies will cause humanity to 'fracture' into so many different groups that inevitably some groups will have what we might call 'alien minds': minds so different from most baseline humans as they are now that there wouldn't be much hope of convincing them to 'rejoin the fold' and not create an AI of their own. For all we know, they might even have an easier time creating an AI that's friendly to them than baseline humans would. Considering this a black swan event (or at least one whose timing is impossible to predict), what should we do?
Discuss.
Complete Wireheading as Suicide, and Other Things
I came to this idea after a previous LessWrong topic discussing nihilism and its several comments on depression and suicide. My argument is that wireheading in its extreme, complete/full form can easily be modeled as suicide, or less strongly as voluntary intelligence reduction, at least given current human brain structure and given that the technology is underdeveloped, hence poorly understood, and therefore more likely to lead to such end states.
I define full wireheading as that which a person would not want to reverse after it 'activates', and which deletes their previous utility function or most of it. A weak definition, yes, but it should be enough for the preliminary purposes of this post. A full wirehead is extremely constrained, much like an infant, for example: although the new utility function could involve a wide range of actions, the activation of a few brain regions would be the main goal, so their behavior is extremely limited.
If one takes this position seriously, it follows that one's moral standpoint on suicide (or, say, lobotomy) alone should govern judgments about full wireheading. This is trivially obvious, of course, but to take this position as true we need to understand more about wireheading, as data is extremely lacking, especially in regard to human-like brains. My other question, then, is to what extent could such an experiment help in answering the first question?