Patrick_(orthonormal)

Interesting. Since people are commenting on fiction vs. non-fiction, it's worth noting that my formative books were all non-fiction (paleontology, physics, mathematics, philosophy), and that I now find myself much more easily motivated to try to understand the problems of the world than to try to fix them.

Plural of anecdote, etc., etc.

I'm not sure, but was this line:

But, from the first species, we learned a fact which this ship can use to shut down the Earth starline

supposed to read "the Huygens starline"?

I was going to say that this (although very good) wasn't quite Weird enough for your purposes; the principal value of the Baby-Eaters seems to be "individual sacrifice on behalf of the group", which we're all too familiar with. I can grok their situation well enough to empathize quickly with the Baby-Eaters. I'd have hoped for something even more foreign at first sight.

Then I checked out the story title again.

Eagerly awaiting the next installments!

(E.g. repeating the mantra "Politics is the Mind-Killer" when tempted to characterize the other side as evil)

Uh, I don't mean that literally, though doing up a whole Litany of Politics might be fun.

Carl:

Those are instrumental reasons, and could be addressed in other ways.

I wouldn't want to modify/delete hatred for instrumental reasons, but on behalf of the values that seem to clash almost constantly with hatred. Among those are the values I meta-value, including rationality and some wider level of altruism.

I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.

I agree with that heuristic in general. I would be very cautious regarding the means of ending hatred-as-we-know-it in human nature, and I'm open to the possibility that hatred might be integral (in a way I cannot now see) to the rest of what I value. However, given my understanding of human psychology, I find that claim improbable right now.

My first point was that our values are often the victors of cultural/intellectual/moral combat between the drives given us by the blind idiot god; most of human civilization can be described as the attempt to make humans self-modify away from the drives that lost in the cultural clash. Right now, much of this community values (for example) altruism and rationality over hatred where they conflict, and exerts a certain willpower to keep the other drive vanquished at times. (E.g. repeating the mantra "Politics is the Mind-Killer" when tempted to characterize the other side as evil).

So far, we haven't seen disaster from this weak self-modification against hatred, and we've seen a lot of good (from the perspective of the values we privilege). I take this as some evidence that we can hope to push it farther without losing what we care about (or what we want to care about).

Carl:

I don't think that automatic fear, suspicion and hatred of outsiders is a necessary prerequisite to a special consideration for close friends, family, etc. Also, yes, outgroup hatred makes cooperation on large-scale Prisoner's Dilemmas even harder than it generally is for humans.

But finally, I want to point out that we are currently wired so that we can't get as motivated to face a huge problem if there's no villain to focus fear and hatred on. The "fighting" circuitry can spur us to superhuman efforts and successes, but it doesn't seem to trigger without an enemy we can characterize as morally evil.

If a disease of some sort threatened the survival of humanity, governments might put up a fight, but they'd never ask (and wouldn't receive) the level of mobilization and personal sacrifice that they got during World War II, although if they were crafty enough to say that terrorists caused it, they just might. Concern for loved ones isn't powerful enough without an idea that an evil enemy threatens them.

Wouldn't you prefer to have that concern for loved ones be a sufficient motivating force?

Roko:

Not so fast. We like some of our evolved values at the expense of others. Ingroup-outgroup dynamics, the way we're fully motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can.

The interesting part of moral progress is that the values etched into us by evolution don't really need to be consistent with each other, so as we become more reflective and our environment changes to force new situations upon us, we realize that they conflict with one another. The analysis of which values have been winning and which have been losing (in different times and places) is another fascinating one...

Doug S:

If the broker believes some investment has a positive expectation this year (and is not very likely to crash terribly), he could advise John Smith to invest in it for a year minus a day, take the proceeds and go to Vegas. If he arrives with $550,000 instead of $500,000, there's a betting strategy more likely to wind up with $1,000,000 than the original plan.

The balance of risk and reward between the investment part and the Vegas part should have an optimal solution; but since anything over $1,000,000 doesn't factor nearly as much in John's utility function, I'd expect he's not going to bother with investment schemes that have small chances of paying off much more than $1,000,000, and he'd rather look for ones that have significant chances of paying off something in between.
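Here's a rough numerical sketch of that claim, assuming (just for illustration) an even-money roulette-style bet with win probability 18/38 and a "bold play" strategy of always betting the gap to $1,000,000, or the whole bankroll if that's smaller:

```python
import random

WIN_P = 18 / 38        # win probability of an even-money bet at American roulette
TARGET = 1_000_000

def bold_play(bankroll, rng):
    """Repeatedly bet the gap to the target (or the whole bankroll, if that's
    smaller) on even-money bets until ruin or until the target is reached."""
    while 0 < bankroll < TARGET:
        bet = min(bankroll, TARGET - bankroll)
        bankroll += bet if rng.random() < WIN_P else -bet
    return bankroll >= TARGET

def success_rate(start, trials=200_000, seed=0):
    rng = random.Random(seed)
    return sum(bold_play(start, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    # Original plan: $500,000 on a single even-money bet (succeeds 18/38, ~47% of the time).
    print("from $500,000:", success_rate(500_000))
    # Invest first, arrive with $550,000, then play boldly (roughly 52% in this model).
    print("from $550,000:", success_rate(550_000))
```

In this toy model the extra $50,000 buys a few percentage points of probability of hitting the $1,000,000 target, which is all the point requires; the exact numbers obviously depend on the game and the strategy assumed.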

Given your actual reasons for wondering about the world economy in 2040, conditioned on there not having been an extinction/Singularity yet, the survivalist option is actually worth a small hedge bet. If you can go (or convince someone else to go) live in a very remote area, with sufficient skills and resources to continue working quietly on building an FAI if there's a non-existential global catastrophe, that looks like it has a strongly positive expectation (since in those circumstances, competing AI attempts will probably be few, if any).

Now, considering the Slump scenarios in which civilization stagnates but survives: it looks like there's not much prospect of winding up with extra capital relative to other scenarios, but the capital you do acquire might go relatively farther.

I have to say that the fact you're strongly considering these matters is a bit chilling. I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished, reducing you to looking at the Black Swan possibilities within which the world might just be saved.

Sexual Weirdtopia: What goes on consensually behind closed doors doesn't (usually) affect the general welfare negatively, so it's not a matter of social concern. However, that particular bundle of biases known as "romantic love" has led to so much chaos in the past that it's become heavily regulated.

People start out life with the love-module suppressed; but many erstwhile romantics feel that in the right circumstances, this particular self-deception can actually better their lives. If a relationship is going well, the couple (or group, perhaps) can propose to fall in love, and ask the higher authorities for a particular love-mod for their minds.

Every so often, each loving relationship must undergo an "audit" in which the partners have the love-mods removed and decide whether to put them back in. No unrequited love is allowed; if one party ends it, the other must as well...
