Yvain · 15y

One more thing: Eliezer, I'm surprised to be on the opposite side from you here, because it's your writings that convinced me a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is so much more likely than a good singularity. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion. (Also, would the high probability be due solely to the SIAI, or do you think there's a decent chance of things going well even if your own project fails?)

Yvain · 15y

"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."

Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I should probably think harder before I become certain that I can make this kind of prediction about something as complicated as my life. Many of the very elderly people I know claim they're tired of life and just want to die already, and I predict that I have no special immunity to this phenomenon that will let me hold out forever. But I don't know how much of that is caused by literally being bored with what life has to offer, and how much is caused by decrepitude and inability to do interesting things.

"Evil is far harder than good or mu, you have to get the future almost right for it to care about people at all, but somehow introduce a sustainable evil twist to it."

In all of human society-space, not just the societies that have existed but every possible combination of social structures that could exist, I interpret only a vanishingly small number (the ones that contain large amounts of freedom, for example) as non-evil. Looking over all of human history, the number of societies I would have enjoyed living in is pretty minimal. I'm not just talking about Dante's Hell here. Even modern-day Burma or Saudi Arabia, or Orwell's Oceania, would be awful enough to make me regret not dying when I had the chance.

I don't think it's so hard to get a Singularity that leaves people alive but is still awful. If the problem is a programmer who tried to give the AI a sense of morality but ended up using a fake utility function or just plain screwing up, he might well end up with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer saying once: imagine if we get an AI that understands everything perfectly except freedom). And that's just the complicated failure - the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.

"Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else."

I think that's false. In most cases I imagine, torturing people is not the terminal value of the dystopia, just something they do to people who happen to be around. In a pre-singularity dystopia, torture will be a means of control, and they won't have the resources to 'create' people anyway (except the old-fashioned way). In a post-singularity dystopia, resources won't much matter, and the AI is more likely to be stuck under injunctions to protect existing people than to be trying to create new ones (unless the problem is the Mere Addition Paradox). Though I admit it would be a very specific subset of rogue AIs that view frozen heads as "existing people".

"Though I hesitate to point this out, the same logic against cryonic suspension also implies that egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now. A good number of arguments against cryonics also imply suicide in the present."

I'm glad you hesitated to point it out. Luckily, I'm not as rationalist as I like to pretend :) More seriously, I currently have a lot of things preventing me from suicide. I have a family, a debt to society to pay off, and the ability to funnel enough money to various good causes to shape the future myself instead of passively experiencing it. And less rationally but still powerfully, I have a pretty strong self-preservation urge that would probably kick in if I tried anything. Someday, when the Singularity seems very near, I really am going to have to think about this more closely. If I think a dictator's about to succeed on an AI project, or if I've heard about the specifics of a project's code and the moral system seems likely to collapse, I do think I'd be sitting there with a gun to my head and my finger on the trigger.

Yvain · 15y

"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."

That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred years). I don't see any reason why the taboo on suicide must disappear. And any society advanced enough to revive me has by definition conquered death, so I can't just wait it out and die of old age. I place about 50% odds on not being able to die again after I get out.

I'm also less confident that the future wouldn't be a dystopia. Even in the best-case scenario, the future's going to be scary through sheer cultural drift (see: legalized rape in Three Worlds Collide). I don't have to tell you that it's easier to get a Singularity that goes horribly wrong than one that goes just right, and even if we restrict the possibilities to those where I get revived instead of turned into paperclips, they could still be pretty grim (what about some well-intentioned person hard-coding "Promote and protect human life" into an otherwise poorly designed AI, and ending up with something that resurrects the cryopreserved... and then locks them in little boxes for all eternity so they don't consume unnecessary resources?). And then there are just the standard fears of some dictator or fundamentalist theocracy, only this time armed with mind control and total surveillance so there's no chance of overthrowing them.

The deal-breaker is that I really, really don't want to live forever. I might enjoy living a thousand years, but not forever. You could change my mind if you had a utopian post-singularity society that completely mastered Fun Theory. But when I compare the horrible possibility of being forced to live forever, either in a dystopia or in a world no better or worse than our own, to the good possibility of getting to live somewhere between a thousand years and forever in a Fun Theory utopia that can keep me occupied... well, the former seems both more probable and more extreme.

Yvain · 15y

*facepalm* And I even read the Sundering series before I wrote that :(

Coming up with narratives that turn the Bad Guys into Good Guys could make good practice for rationalists, along the lines of Nick Bostrom's Apostasy post. Obviously I'm not very good at it.

GeorgeNYC, very good points.

Yvain · 15y

Wealth redistribution in this game wouldn't have to be communist. Depending on how you set up the analogy, it could also be capitalist.

Call JW the capitalist and AA the worker. JW is the one producing wealth, but he needs AA's help to do it. Call the under-the-table wealth redistribution deals AA's "salary".

The worker can always cooperate, in which case he makes some money but the capitalist makes more.

Or he can threaten to defect unless the capitalist raises his salary - he's quitting his job or going on strike for higher pay.

(To perfect the analogy with capitalism, make two changes. First, the capitalist makes zero without the worker's cooperation. Second, the worker makes zero in all categories, and can only make money by entering into deals with the capitalist. But now it's not a Prisoner's Dilemma at all - it's the Ultimatum Game.)

IANAGT, but I bet the general rule for this class of game is that the worker's salary should depend a little on how much the capitalist can make without workers, how much the worker can make without capitalists, and what the marginal utility structure looks like - but mostly on their respective stubbornness and how much extra payoff having the worker's cooperation gives the capitalist.

In the posted example, AA's "labor" brings JW from a total of 50 to a total of 100. Perhaps if we ignore marginal utilities and they're both equally stubborn, and they both know they're both equally stubborn and so on, JW will be best off paying AA 25 for his cooperation, leading to the equal 75 - 75 distribution of wealth?
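Here's a minimal sketch of that arithmetic in Python. The payoff figures are only the ones quoted in this comment (JW totals 100 with AA's cooperation and 50 without; AA totals 50 when he cooperates), the function names are mine, and the assumption that the "salary" simply splits the difference is an illustration rather than anything from the original post:

```python
# Toy model of the under-the-table "salary" deal described above.
# Assumed figures, taken from this comment: JW ("capitalist") ends with 100
# if AA ("worker") cooperates and 50 if he doesn't; AA ends with 50 when he
# cooperates. These are illustrative, not the original post's payoff matrix.

def totals_after_salary(jw_total: float, aa_total: float, salary: float):
    """Final wealth after JW pays AA a side payment for cooperating."""
    return jw_total - salary, aa_total + salary

def equalizing_salary(jw_total: float, aa_total: float) -> float:
    """Side payment that leaves both players with the same final total."""
    return (jw_total - aa_total) / 2

salary = equalizing_salary(100, 50)          # -> 25.0
print(totals_after_salary(100, 50, salary))  # -> (75.0, 75.0), the 75-75 split
```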

[nazgul, a warning. I think I might disagree with you about some politics. Political discussions in blogs are themselves prisoner's dilemmas. When we all cooperate and don't post about politics, we are all happy. When one person defects and talks about politics, he becomes happier because his views get aired, but those of us who disagree with him get angry. The next time you post a political comment, I may have to defect as well and start arguing with you, and then we're going to get stuck in the (D,D) doldrums.]

Yvain · 15y

Darnit TGGP, you're right. Right. From now on I'll use Lord of the Rings for all "sometimes things really are black and white" examples. Unless anyone has some clever reason why elves are worse than Sauron.

Yvain · 15y

[sorry if this is a repost; my original attempt to post this was blocked as comment spam because it had too many links to other OB posts]

I've always hated that Dante quote. The hottest place in Hell is reserved for brutal dictators, mass murderers, torturers, and people who use flamethrowers on puppies - not for the Swiss.

I came to the exact opposite conclusion when pondering the Israeli-Palestinian conflict. Most of the essays I've seen in newspapers and on bulletin boards are impassioned pleas to designate one side as the Evildoers and the other as the Brave Heroic Resistance by citing who stole whose land first, whose atrocities were slightly less provoked, which violations of which cease-fire were dastardly betrayals and which were necessary pre-emptive actions, et cetera.

Not only is this issue so open to bias that we have little hope of getting to the truth, but I doubt there's much truth to be attained at all. Since "policy debates should not appear one-sided" and "our enemies are not innately evil", it seems pretty likely that they're two groups of people who both are doing what they honestly think is right and who both have some good points.

This isn't an attempt to run away from the problem; it's the first step toward solving the real problem. The real problem isn't "who's the hero and who's the terrorist scumbag?"; it's "search solution-space for the solution that leads to the least suffering and the most peace and prosperity in the Middle East." There is a degree to which finding out who's the evildoer is useful here, so we can punish them as a deterrent, but it's a pretty small degree, and the amount of energy people spend trying to determine it is completely out of proportion to the minimal gains it might produce.

And "how do we minimize suffering in the Middle East?" may be an easier question than "who's to blame?" It's about distributing land and resources to avoid people being starved or killed or oppressed, more a matter for economists and political scientists then for heated Internet debate. I've met conservatives who loathe the Palestinians and liberals who hate all Israelis who when asked supported exactly the same version of the two-state solution, but who'd never realized they agreed because they'd never gotten so far as "solution" before.

My defense of neutrality, then, would be something like this: human beings have the unfortunate tendency not to think of an issue as "finding the best solution in solution-space" but as "let's make two opposing sides at the two extremes, who both loathe each other with the burning intensity of a thousand suns". The issue then becomes "Which of these two sides is the Good and True and Beautiful, and which is Evil and Hates Our Freedom?" Thus the Democrats versus the Republicans or the Communists versus the Objectivists. I'd be terrified if any of them got one hundred percent control over policy-making. Thus, the Wise try to stay outside of these two opposing sides in order to seek the best policy solution in solution-space without being biased or distracted by the heroic us vs. them drama - and to ensure that both sides will take their proposed solution seriously without denouncing them as an other-side stooge.

A "neutral" of this sort may not care who started it, may not call one side "right" or "wrong", may claim to be above the fray, may even come up with a solution that looks like a "compromise" to both sides, but isn't abdicating judgment or responsibility.

Not that taking a side is never worth it. The Axis may have had one or two good points about the WWI reparations being unfair and such, but on the whole the balance of righteousness in WWII was so clearly on the Allies' side that the most practical way to save the world was to give the Allies all the support you could. It's always a trade-off between how ideal a solution is and how likely it is to be implemented.

Yvain · 15y

"To be concerned about being grown up, to admire the grown up because it is grown up, to blush at the suspicion of being childish; these things are the marks of childhood and adolescence. And in childhood and adolescence they are, in moderation, healthy symptoms. Young things ought to want to grow. But to carry on into middle life or even into early manhood this concern about being adult is a mark of really arrested development. When I was ten, I read fairy tales in secret and would have been ashamed if I had been found doing so. Now that I am fifty I read them openly. When I became a man I put away childish things, including the fear of childishness and the desire to be very grown up." - C.S. Lewis

Yvain · 15y

Bruce and Waldheri, you're being unfair.

You're interpreting this as "some scientists got together one day and asked Canadians about their grief just to see what would happen, then looked for things to correlate it with, and after a bunch of tries came across some numbers involving !Kung tribesmen's reproductive potential that fit pretty closely, and then came up with a shaky story about why they might be linked and published it."

I interpret it as "some evolutionary psychologists were looking for a way to confirm evolutionary psychology, predicted that grief at losing children would be linked to reproductive potential in hunter-gatherer tribes, and ran an experiment to see if this was true. They discovered that it was true, and considered their theory confirmed."

I can't prove my interpretation is right because the paper is gated, but in my support, I know of many studies very similar to this one that were done specifically to confirm evo psych's predictions (for example, The Adapted Mind is full of them). And most scientists don't have enough free time to go around doing studies of people's grief for no reason and then comparing it to random data sets until they get a match, nor would journals publish it if they did. And this really is exactly the sort of elegant, testable experiment a smart person would think up if ze was looking for ways to test evolutionary theory.

It's true that correlation isn't causation and so on et cetera, but if their theory really did predict the results beforehand when other theories couldn't, we owe them a higher probability for their theory upon learning of their results.
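To make that last sentence concrete, here's a toy Bayes calculation in Python. Every number in it is invented purely for illustration; none comes from the paper or from anyone's actual credences:

```python
# Toy Bayesian update: if the evo-psych account predicted the observed
# grief/reproductive-potential correlation while rival accounts made it
# unlikely, seeing the correlation should raise our credence in the account.
# All three numbers below are made up for illustration.

prior = 0.30            # assumed prior credence in the evo-psych account
p_obs_if_true = 0.80    # assumed chance of the correlation if the account is right
p_obs_if_false = 0.20   # assumed chance of the correlation arising anyway

posterior = (p_obs_if_true * prior) / (
    p_obs_if_true * prior + p_obs_if_false * (1 - prior)
)
print(round(posterior, 3))  # -> 0.632: the successful prediction earns a higher probability
```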

Yvain · 15y

@Robin: Thank you. Somehow I missed that post, and it was exactly what I was looking for.

@Vladimir Nesov: I agree with everything you said except for your statement that fiction is a valid argument, and your supporting analogy to mathematical proof.

Maybe the problem is the two different meanings of "valid argument". First, the formal meaning, where a valid argument is one in which premises are arranged correctly to prove a conclusion, e.g. mathematical proofs and Aristotelian syllogisms. Well-crafted policy arguments, cost-benefit analyses, and statistical arguments linked to empirical studies probably also unpack into this category.

And then there's the colloquial meaning, in which "valid argument" just means the same as "good point", e.g. "Senator Brown was implicated in a scandal" is a "valid argument" against voting for Senator Brown. You can't make a decision based on that fact alone, but you can include it in a broader decision-making process.

The problem with the second definition is that it makes "Slavery increases cotton production" a valid argument for slavery, which invites confusion. I'd rather say that the statement about cotton production is a "good point" (even better: "truthful point") and then call the cost-benefit analysis where you eventually decide "increased cotton production isn't worth the suffering, and therefore slavery is wrong" a "valid argument".

I can't really tell from the original post which way Eliezer is using "valid argument". I assumed the first way, because he uses the phrase "valid form of argument" a few times. But re-reading the post, maybe I was premature. In any case, here's my opinion:

Fiction isn't the first type of valid argument because there are no stated premises, no stated conclusion, and no formal structure. Or, to put it another way, on what grounds could you claim that a work of fiction was an invalid argument?

Fiction can convincingly express the second type of valid argument (good point), and this is how I think of Uncle Tom's Cabin. "Slavery is bad because slaves suffer" is a good point against slavery, and Uncle Tom's Cabin is just a very emotionally intense way of making this point that is more useful than simple assertion would be for all the reasons previously mentioned.

My complaint in my original post is that fiction tends to focus the mind on a single good point with such emotional intensity that it can completely skew the rest of the cost-benefit analysis. For example, the hypothetical sweatshop book completely focuses the mind on the good point that people can suffer terribly while working in a sweatshop. Anyone who reads the sweatshop book is in danger of having this one point become so salient that it makes a "valid argument" of the first, more formal type much more difficult.
