All of Robin_Hanson2's Comments + Replies

Rather than trust any one economist, whatever her gender or heritage, I'd rather trust a betting market estimating future African GDP conditional on various aid levels.
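To make this concrete, here is a toy sketch, in Python, of how such a conditional (decision) market could inform the choice; the contract design, policy names, and numbers are all invented for illustration, not any real market's API:

```python
# Toy illustration of a conditional prediction market. Each contract pays
# in proportion to future African GDP, but only activates if a given aid
# level is actually chosen; otherwise all trades are refunded. The market
# price of each contract is then an estimate of E[GDP | that aid level].

# Hypothetical market-implied estimates, in billions of dollars.
implied_gdp_by_aid_level = {
    "no aid":   2400.0,
    "low aid":  2450.0,
    "high aid": 2430.0,
}

def best_aid_level(estimates):
    """Pick the aid level whose conditional market estimates the highest GDP."""
    return max(estimates, key=estimates.get)

if __name__ == "__main__":
    # Decision markets like this aggregate many traders' information,
    # rather than relying on any one economist's judgment.
    print(f"Market-recommended policy: {best_aid_level(implied_gdp_by_aid_level)}")
```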

The site seems to be promising to later evaluate a rather large number of widely ranging predictions. If it manages to actually keep this commitment, it will make an important contribution. The five-year limit on prediction horizons is unfortunate, but of course site authors have every right to limit their effort commitment. I do suggest that they post the date each prediction was submitted to the site, along with the date it was originally made, to help observers correct for selection effects.
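A minimal sketch of the kind of prediction record that would support this correction; the dataclass and field names are my own illustration, not anything the site actually uses:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PredictionRecord:
    text: str
    date_made: date       # when the predictor originally stated the claim
    date_submitted: date  # when it was submitted to the site
    resolve_by: date      # evaluation deadline (within the five-year horizon)

# A long gap between date_made and date_submitted is a warning sign:
# predictions may be submitted only after they start looking good,
# which is exactly the selection effect observers should correct for.
```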

I'll admit lots of childhood experiences influenced my tastes and values, and that I don't have good reasons to expect those to be especially good tastes and values. So I will let them change to the extent I can.

There is a vast space of possible things that can go wrong, so each plan will have to cover a pretty wide range of scenarios. Merely including a scenario among those with plans will signal to viewers that you consider it more likely and/or important.

Eliezer, in most signaling theories that economists construct the observers of signals are roughly making reasonable inferences from the signals they observe. If someone proposed to us that people take feature F as signaling C, but in fact there is no relation between F and C, we would want some explanation for this incredible mistake, or at least strong evidence that such a mistake was being consistently made.

I'm not quite sure what you mean by "mere" signaling. If visible feature F did not correlate with hard to observe character C, then F could not signal C. Of course the correlation isn't perfect, but why doesn't it make sense to choose F if you want people to believe you have C? Are you saying you didn't really care what people thought of your maturity?

It is functional for leaders to be more reluctant than most to "take sides" in common disputes. Our leaders do this, and so one can in fact signal high status by being "above" common disputes. Our leaders are in fact wiser than the average person, and in addition we want to say they are even wiser, so it makes sense to call people who signal high status in this way "wise." Furthermore, on average across human disputes with near-equal support on the two sides, the middle position is in fact the more correct position. So in this sense it does in fact signal wisdom to take a middle position.

Sure, if you set the idealistic-enough cutoff high enough, then of course only a small fraction will make the cut. But if we consider the median non-fiction library book, don't you agree it is more idealistic than cynical?

Quoting myself:

The cynic's conundrum is that while a cynic might prefer that others believe an idealistic theory of his cynical mood, his own cynical beliefs should lead him to believe a cynical theory of his cynical mood. That is, cynics should think that rude complainers tend to be losers, rather than altruists.

It bothers me that some folks' complaint about the story seems to be that it is too realistic, that it too clearly shows the actual sorts of betrayal that exist in the world. Yes, perhaps they misunderstood the intent of the story, but I must take my stand with telling the truth, as opposed to "teaching" morals via telling misleading stories, where betrayal is punished more consistently than it is in reality.

By what process was this story selected? That could help me judge how representative this story is.

Eliezer, our choices aren't limited to the two polar opposites of caring for the children's "own sake" vs. caring smartly for their reproductive value. Yes, the fact that our grief has not updated for modern fertility patterns rejects one of those poles, but that does not imply the other pole.

The parental grief is not even subconsciously about reproductive value - otherwise it would update for Canadian reproductive value instead of !Kung reproductive value. ... Parents do not care about children for the sake of their reproductive contribution. Parents care about children for their own sake.
This just doesn't follow. Just because there is one feature that isn't taken into account and updated optimally in grief intensity doesn't imply that nothing else is taken into account but "the children's own sake," whatever that means.

Anna's point is similar to my point that most behaviors we talk about are a mix of computation at all levels; this doesn't seem a good basis for drawing hard lines in dichotomous cynical-vs.-not distinctions.

Eliezer, wishes aren't horses; strongly wanting to be able to tell the difference doesn't by itself give us evidence to distinguish. Note that legal punishments often distinguish between conscious intention and all other internal causation; so apparently that is the distinction the law considers most relevant, and/or easiest to determine. "Optimize" invites too many knee-jerk complaints that we won't exactly optimize anything.

Eliezer, you are right that my sense of moral approval or disapproval doesn't rely as heavily on this distinction as yours, and so I'm less eager to make this distinction. But I agree that one can sensibly distinguish genetically-encoded evolution-computed strategies from consciously brain-computed strategies from unconsciously brain-computed strategies. And I agree it would be nice to have clean terms to distinguish these, and to use those terms when we intend to speak primarily about one of these categories.

Most actions we take, however, probably have ...

My latest post hopefully clarifies my position here.

Eliezer, when I said "humans evolved tendencies ... to consciously believe that such actions were done for some other more noble purposes" I didn't mean that we create complex mental plans to form such mistaken beliefs. Nor am I contradicting your saying "he wants you to understand his logic puzzles"; that may well be his conscious intention.

Eliezer, you have misunderstood me if you think I typically suggest "you told yourself a self-deceiving story about virtuously loving them for their mind" or that I say "no human being was ever attracted to a mate's mind, nor ever wanted to be honest in a business transaction and not just signal honesty." I suspect we tend to talk about different levels of causation; I tend to focus on more distal causes while you focus on more proximate causes. I'm also not sure you understand what I mean by "signaling."

Eliezer, why so reluctant to analyze an actual equilibrium, rather than first-order strategies that ignore so many important effects? My claims were about real equilibrium behavior, not some hypothetical world of clueless caricatures. And why emphasize a few "writing" experts you've read over the vast numbers of teachers of writing style in law, engineering, accounting, academia, etc.?

Eliezer, as I indicate in my new post, the issue isn't so much whether you the author judge that some fiction would help inform readers about morals, but whether typical readers can reasonably trust your judgment in such things, relative to the average propaganda content of authors writing apparently similar moral-quandary stories.

Yvian, I warned against granting near-thought virtues to fictional detail here. I doubt Uncle Tom's Cabin would have persuaded many slaveholders against slavery; I expect well-written, well-recommended anti-slavery fiction served more to signal to readers where fashionable opinion was moving.

Clearly, Eliezer should seriously consider devoting himself more to writing fiction. But it is not clear to me how this helps us overcome biases any more than any fictional moral dilemma. Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the ...

2 staticIP
Morals are axioms. They're ultimately arbitrary. Relying on arguments with logic and reason for deciding the axioms of your morals is silly; go with what feels right. Then use logic and reason to best actualize on those beliefs. Try to trace morality too far down and you'll realize it's all ultimately pointless, or at least there's no single truth to the matter.

I have read and considered all of Eliezer's posts, and still disagree with him on this his grand conclusion. Eliezer, do you think the universe was terribly unlikely and therefore terribly lucky to have coughed up human-like values, rather than some other values? Or is it only in the stage after ours that such rare good values would be unlikely to exist?

To be clear, Eliezer is developing a new website and will tentatively use his editor status here and there to promote some posts there to here; whether and how long that continues will depend on the quality and relevance of those posts.

People who answer survey questions seem to consistently display a pessimism bias about these large-scale trends, and the equity premium puzzle can also be interpreted as people being unreasonably pessimistic about such things. So I find it hard to believe that people tend to be too optimistic about such things. If you really want to bet on the low tail of the global distribution, I guess you should listen to survivalists. If you think the US will be more down than elsewhere, why not invest in foreign places you don't think will be so down?

You forgot to mention: two weeks later he and all other humans were in fact deliriously happy. We can see that he at this moment did not want to later be that happy, if it came at this cost. But what will he think a year or a decade later?

I suppose he will be thinking along the same lines as a wirehead.

Are you sure this isn't the Eliezer concept of boring, instead of the human concept? There seem to be quite a few humans who are happy to keep winning using the same approach day after day, year after year. They keep getting paid well, getting social status, money, sex, etc. To the extent they want novelty it is because such novelty is a sign of social status - a new car every year, a new girl every month, a promotion every two years, etc. It is not because they expect or want to learn something from it.

0 waveman
Maybe for some people more shallow forms of novelty suffice e.g. sex with new women.
0 [anonymous]
The argument that you would lose interest if you could explain boredom away (which is what I have to conclude from your stance) seems a bit thin to me. Does a magician lose interest because he knows every single trick that wows the audience? Does the musician who has spent a lifetime studying the intricacies of Bach's Partita No. 2 lose interest just because he can deconstruct it entirely? Douglas Hofstadter expressed a similar concern a decade or so ago when he learnt of some "computer program" able to "generate Mozart music better than Mozart himself," only to recant a bit later when facing the truism that there is more to the entity than the sum of its parts. I do not know that we will someday be able to "explain magic away," and if that makes me irrational (and no, I don't need to bring any kind of god into the picture: I'm perfectly happy being godless and irrational :) so be it.
6 [anonymous]
An easy way to differentiate the two kinds, for those who like games: there are people who can play Mario Kart thousands of times and have a lot of fun, and people who must play the new Final Fantasy. There are those who do both, and those who only enjoy games designed for doing the same thing, better and better, every five minutes. Compare the complexity of handball with the complexity of bowling. Maybe bowling is Eliezer::boring, but it isn't boring for a lot of people. It would be a waste of energy resources if FAI gave those people Final Fantasy 777 instead of just letting them play Mario Kart 9. The tough question then becomes: are those of us who enjoy Mario Kart and bowling willing to concede the kind of fun that the Eliezer, Final Fantasy, pro-increasing-rate-of-complexity crowd finds desirable? They will be consuming soooo much energy for their fun. Isn't it fair that we share the pie half and half, and they consume theirs exponentially, while we enjoy ours for subjectively longer?

Thank you for the praise! I'll post soon on fiction as near vs far-thinking.

It seems to me that you should take the surprising seductiveness of your imagined world that violated your abstract sensibilities as evidence that calls those sensibilities into question. I would have encouraged you to write the story, or at least write up a description of the story and what about it seemed seductive. I do think I have tried to describe how my best estimates of the future seem shocking to me, and that I would be out of place there in many ways.

It seems pretty obvious that time-scaling should work: just speed up the operation of all parts in the same proportion. A good bet is probably size-scaling: adding more parts (e.g., neurons) in the same proportion in each place, and then searching the space of different relative sizes for each place. Clearly evolution was constrained in the speed of components and in the number of parts, so there is no obvious evolutionary reason to think such changes would not be functional.
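A toy sketch of these two scaling knobs; the region names, neuron counts, and functions are invented for illustration, not a real emulation model:

```python
# Rough, invented neuron counts per brain region.
brain = {"cortex": 16e9, "cerebellum": 69e9, "other": 1e9}

def time_scale(clock_hz, factor):
    """Time-scaling: speed up the operation of all parts in the same proportion."""
    return clock_hz * factor

def size_scale(regions, factor, relative_sizes=None):
    """Size-scaling: add parts in the same proportion in each place, then
    optionally re-weight to search over different relative region sizes."""
    relative_sizes = relative_sizes or {name: 1.0 for name in regions}
    return {name: count * factor * relative_sizes[name]
            for name, count in regions.items()}

print(time_scale(200.0, 10))  # every component runs 10x faster
print(size_scale(brain, 2.0,  # double everything, then favor cortex
                 {"cortex": 1.5, "cerebellum": 1.0, "other": 1.0}))
```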

Yeah Michael, what Eliezer said.

Even if Earth ends in a century, virtually everyone in today's world is influential in absolute terms. Even if 200 folks do the same sort of work in the same office, they don't do the exact same work, and usually a person wouldn't be there or be paid if no one thought their work made any difference. You can even now identify your mark, but it is usually tedious to trace it out, and few have the patience for it.

Virtually everyone in today's world is influential in absolute terms, and should be respected for their unique contribution. The problem is those eager to be substantially influential in percentage terms.

Yes, humans are better at dealing with groups of size 7 and 50, but I don't think that has much to do with your complaint. You are basically noticing that you would probably be the alpha male in a tribe of 50, ruling all you surveyed, and wouldn't that be cool. Or in a world of 5000 people you'd be one of the top 100, and everyone would know your name, and wouldn't that be cool. Even if we had better inborn tools for dealing with larger social groups, you'd still have to face the fact that as a small creature in a vast social world, most such creatures can't expect to be very widely known or influential.

I agree with Phil; all else equal I'd rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder if it should be sentient as well.

0 pnrjulius
Yeah, do we really want to give over control to a super-powerful intelligence that DOESN'T have feelings?

I agree with Unknown. It seems that Eliezer's intuitions about desirable futures differ greatly from many of the rest of us here at this blog, and most likely even more from the rest of humanity today. I see little evidence that we should explain this divergence as mainly due to his "having moved further toward reflective equilibrium." Without a reason to think he will have vastly disproportionate influence, I'm having trouble seeing much point in all these posts that simply state Eliezer's intuitions. It might be more interesting if he argued for those intuitions, engaging with existing relevant literatures, such as in moral philosophy. But what is the point of just hearing his wish lists?

0 Eliut
"...and mostly likely even more from the rest of humanity today. " True, 90% of humanity, in this age, believe in ominpotent beings that look over our wellfare. To me what Eliezer says is that it would be boring to have a god around serving all our needs. But perhaps "it" exists and it is benevolent by not ruining our existence, simply by not existing...

Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don't understand.

A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train, where no one is in control, knows where we ar...

You've already said the friendly AI problem is terribly hard, and there's a large chance we'll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be "friendly", making your design task all that harder? A friendly AI that was conscious and created conscious simulations to figure things out would still be pretty friendly overall.

I'm having trouble distinguishing problems you think the friendly AI will have to answer from problems you think you will have to answer to build a friendly AI. Surely you don't want to have to figure out answers for every hard moral question just to build it, or why bother to build it? So why is this problem a problem you will have to figure out, vs. a problem it would figure out?

Eliezer, this post seems to me to reinforce, not weaken, a "God to rule us all" image. Oh, and among the various clues that might indicate to me that someone would make a good choice with power, the ability to recreate that power from scratch does not seem a particularly strong clue.

3 Wei Dai
That was my first reaction as well, but Eliezer must have intentionally chosen a "clue" that is not too strong. After all, an FAI doesn't really need to use any clues; it can just disallow any choice that is not actually good (except that would destroy the feeling of free will). So I think "let someone make a choice with a power if they can recreate that power from scratch" is meant to be an example of the kind of tradeoff an FAI might make between danger and freedom. What I don't understand is, since this is talking about people born after the Singularity, why do parents continue to create children who are so prone to making bad choices? I can understand not wanting to take an existing person and forcibly "fix" them, but is there supposed to be something in our CEV that says even new beings created from scratch must have a tendency to make wrong choices to be maximally valuable?

What is the point of trying to figure out what your friendly AI will choose in each standard difficult moral choice situation, if in each case the answer will be "how dare you disagree with it since it is so much smarter and more moral than you?" If the point is that your design of this AI will depend on how well various proposed designs agree with your moral intuitions in specific cases, well then the rest of us have great cause to be concerned about how much we trust your specific intuitions.

James is right; you only need one moment of "weakness" to approve a protection against all future moments of weakness, so it is not clear there is an asymmetric problem here.

The hard question is: who do you trust to remove your choices, and are they justified in doing so anyway even if you don't trust them to do so?

1 pnrjulius
One would hope you at least trust yourself to limit your own options.

Honestly, almost everything the ordinary person thinks economists think is wrong. Which is what makes teaching intro to econ such a challenge. The main message here is to realize you don't know nearly as much as you might think about what other groups out there think, especially marginalized and colorful groups. Doubt everything you think you know about the beliefs of satanists, theologians, pedophiles, free-lovers, marxists, mobsters, futurists, UFO folk, vegans, and yes economists.

But how much has your intuitive revulsion at your dependence on others, your inability to do everything by yourself, biased your beliefs about what options you are likely to have? If wishes were horses, you know. It is not clear what problems you can really blame on each of us not knowing everything we all know; to answer that you'd have to be clearer on what counterfactuals you are considering.

Marcello, I won't say any particular possible scenario isn't worth thinking about; the issue is just its relative importance.

Carl, yes of course singletons are not very unlikely. I don't think I said the other claim you attribute to me.

Why shouldn't we focus on working out our preferences in more detail for the scenarios we think most likely? If I think it rather unlikely that I'll have a genie who can grant three wishes, why should I work hard to figure out what those wishes would be? If we disagree about what scenarios are how likely, we will of course disagree about where preferences should be elaborated in the most detail.
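As a rough sketch of this prioritization logic, with entirely made-up scenarios and numbers: planning effort gets allocated in proportion to a scenario's probability times its stakes.

```python
# Minimal sketch: effort spent elaborating preferences for a scenario
# should scale with how likely it is and how much is at stake in it.

scenarios = [
    # (name, probability, stakes if it occurs)
    ("ordinary economic growth",   0.90,  1.0),
    ("brain emulation transition", 0.08,  10.0),
    ("genie grants three wishes",  0.001, 100.0),
]

total = sum(p * stakes for _, p, stakes in scenarios)
for name, p, stakes in scenarios:
    share = p * stakes / total
    print(f"{name}: {share:.1%} of planning effort")

# Disagreements about the probabilities translate directly into
# disagreements about where preferences deserve the most detail.
```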

Wei, yes I meant "unlikely." Bo, you and I have very different ideas of what "logical" means. V.G., I hope you will comment more.

Eliezer, I'd advise no sudden moves; think very carefully before doing anything. I don't know what I'd think after thinking carefully, as otherwise I wouldn't need to do it. Are you sure there isn't some way to delay thinking on your problem until after it appears? Having to have an answer now, when it seems an unlikely problem, is very expensive.
