Comment author: Wei_Dai 04 July 2011 08:07:44AM 1 point [-]

I would probe the claim to determine whether the selection operates at the level of the meme, the individual, or the society.

I'm guessing mostly at the meme level.

And then I would ask how that meme contributes to its own propagation at that level.

It seems pretty obvious, doesn't it? Utilitarianism makes a carrier believe that they should act to maximize social welfare and that more people believing utilitarianism would help toward that goal, so carriers think they should try to propagate the meme. Also, many egoists may believe that utilitarians would be more willing to contribute to the production of public goods, which they can free-ride on, so they would tend not to argue publicly against utilitarianism, which further contributes to its propagation.

Comment author: Perplexed 04 July 2011 03:18:56PM 1 point [-]

Your just-so story is more complicated than you seem to think. It involves an equilibrium of at least two memes: an evangelical utilitarianism which damages the host but propagates the meme, plus a cryptic egoism which presumably benefits the host but can't successfully propagate (presumably it arises repeatedly by spontaneous generation).

I could critique your story on grounds of plausibility (which strategy do crypto-egoists suggest to their own children?), but instead I will ask why someone infected by the evangelical utilitarianism meme would argue as you suggested in the great-grandparent:

"Evolution (memetic evolution, that is) has instilled in me the idea of that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes."

Isn't it more likely that someone realizing that they have been subverted by a selfish meme would be trying to self-modify?

Comment author: Jonathan_Graehl 02 July 2011 04:10:00PM 0 points [-]

The value of saved vs. new vs. cloned lives is a worthwhile question to introspect on (and yes, it's only one example).

I'd gain more satisfaction from saving a group of people by defeating the cause directly - safely killing or capturing the kidnappers rather than paying the ransom. I'd rather save everyone at risk by defeating the entire threat, permanently. If I can only save a small fraction of the group threatened by a single cause, that's less satisfying. But even in what you'd expect to be a nearly-linear region (you can certainly save a few people from starvation today), I'd be more than half as satisfied by helping one identifiable person and being able to monitor the consequences as I would be by helping two (out of an ocean of a billion). Further, in those "drop in a bucket" cases, I'd expect some desire to save people from diverse threats, as long as the loss of efficiency wasn't too great for the thrill of novelty to justify. That desire would be in tension with conserving research/decision effort (just save one more life in the way already researched, prepared, and tested), with consistency, and with a desire for complete victory (though I postulated that my maximal impact was too small for that - still, becoming part of an alliance that achieves complete victory would be nice).

Part of the value of saving existing lives is that I feel a sense of security knowing that I and people like me are fighting such threats as might someday affect me - a reflexive feeling of having allies in the world who might help me - not as a result of anonymous charity (which would be irrational), but as a result of my being the type of person who, when he has resources to spare, helps where it's needed more.

But I'm convinced by mathematical arguments that utility should be additive. If the value of N things in the real world is not N times the value of 1 thing, then I handle that in how I assign utility to world states. I want to use additive utility, and as far as I can tell I'm immune to arguments about nonlinearity of objects.

Comment author: Perplexed 04 July 2011 04:55:29AM 1 point [-]

I'm convinced by mathematical arguments that utility should be additive. If the value of N things in the real world is not N times the value of 1 thing, then I handle that in how I assign utility to world states.

I don't disagree. My choice of slogan wording - "utility is not additive" - doesn't quite capture what I mean. I meant only to deny that the value of something happening N times is N x U, where U is the value of it happening once.
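To give one purely illustrative shape (not a functional form I am committed to): if U is the value of the event happening once, the value of it happening N times might look like U_total(N) = U_max (1 - e^{-cN}) for some constant c > 0 - increasing in N, but with diminishing increments and an upper bound, rather than growing linearly as N x U.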

Comment author: Wei_Dai 04 July 2011 03:27:31AM *  1 point [-]

Thanks for pointing me to Binmore's work. It does sound very interesting.

Evolution has instilled in me the instinct of valuing the welfare (fitness) of kin at a significant fraction of the value of my own personal welfare.

This is tangential to your point, but what would you say to a utilitarian who says:

"Evolution (memetic evolution, that is) has instilled in me the idea of that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes."

And in the course of doing so, he pretty thoroughly demolishes what I understand to be the orthodox position on these topics here at Less Wrong.

By "orthodox position" are you referring to TDT-related ideas? I've made the point several times that I doubt they apply to humans. (I don't vote myself, actually.) I don't see how Binmore could have "demolished" those ideas as they relate to AIs since he couldn't have learned about them when he wrote his books.

Comment author: Perplexed 04 July 2011 04:43:43AM *  1 point [-]

what would you say to a utilitarian who says: "Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes."

There are two separate issues here. I assume that by "linearly" you are referring to the subject that started this conversation: my claim that utilities "are not additive", an idea also expressed as "diminishing returns", or diminishing marginal utility of additional people. I probably would not dispute the memetic evolution claim if it focused on "linearity".

The second issue is a kind of universality - all people valued equally regardless of kinship or close connectedness in a network of reciprocity. I would probably express skepticism at this claim. I would probe the claim to determine whether the selection operates at the level of the meme, the individual, or the society. And then I would ask how that meme contributes to its own propagation at that level.

By "orthodox position" are you referring to TDT-related ideas?

Mostly, I am referring to views expressed by EY in the sequences and frequently echoed by LW regulars in comments. Some of those ideas were apparently repeated in the TDT write-up (though I may be wrong about that - the write-up was pretty incoherent).

Comment author: Perplexed 04 July 2011 01:11:44AM 7 points [-]

... the mistake began as soon as we started calling it a "blue-minimizing robot".

Agreed. But what kind of mistake was that?

Is "This robot is a blue-minimizer" a false statement? I think not. I would classify it as more like the unfortunate selection of the wrong Kuhnian paradigm for explaining the robot's behavior. A pragmatic mistake. A mistake which does not bode well for discovering the truth, but not a mistake which involves starting from objectively false beliefs.

Comment author: Nick_Tarleton 12 March 2008 08:24:41PM 6 points [-]

Who else thinks we should Taboo "probability" and replace it with two terms for objective and subjective quantities, say "frequency" and "uncertainty"?

The frequency of an event depends on how narrowly the initial conditions are defined. If an atomically identical coin flip is repeated, obviously the frequency of heads will be either 1 or 0 (modulo a tiny quantum uncertainty).

Comment author: Perplexed 03 July 2011 08:50:45PM 2 points [-]

I think that we should follow Jaynes and insist upon 'probability' as the name of the subjective entity. But so-called objective probability should be called 'propensity'. Frequency is the term for describing actual data. Propensity is objectively expected frequency. Probability is subjectively expected frequency. That is the way I would vote.

Comment author: Peterdjones 03 July 2011 08:34:11PM *  0 points [-]

There are other interpretations of quantum mechanics, but they don't make any sense.

In your opinion. Many Worlds does not make sense in the opinions of its critics. You are entitled to back an interpretation as you are entitled to back a football team. You are not entitled to portray your favourite interpretation of quantum mechanics as a matter of fact. If interpretations were provable, they wouldn't be called interpretations.

Comment author: Perplexed 03 July 2011 08:44:28PM 1 point [-]

As I understand it, EY's commitment to MWI is a bit more principled than a choice between soccer teams. MWI is the only interpretation that makes sense given Eliezer's prior metaphysical commitments. Yes, rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.

Comment author: Wei_Dai 03 July 2011 07:11:46AM *  3 points [-]

Does your utility function treat "a life saved by Perplexed" differently from just "a life"? I could understand an egoist who does not terminally value other lives at all (as opposed to instrumentally valuing saving lives as a way to obtain positive emotions or other benefits for oneself), but a utility function that treats "a life saved by me" differently from just "a life" seems counterintuitive. If the utility of a life saved by Perplexed is not different from the utility of another life, then unless your utility function just happens to have a sharp bend at the current world population level, the utility of two saved lives can't be much less than twice the utility of one saved life. (See Eliezer's version of this argument, and more along this vein, here.)
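To spell out the near-linearity step (a rough sketch with notation of my own, not anything from your comments): if U(n) is a smooth function of the number of people alive n, then for changes k that are tiny compared to n, U(n + k) - U(n) ≈ k U'(n). So the gain from saving two lives is approximately twice the gain from saving one, unless U has a kink - a "sharp bend" - right around the current population.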

Comment author: Perplexed 03 July 2011 04:16:52PM 1 point [-]

Does your utility function treat "a life saved by Perplexed" differently from just "a life"?

I'm torn between responding with "Good question!" versus "What difference does it make?". Since I can't decide, I'll make both responses.

Good question! You are correct in surmising that the root justification for much of the value that I attach to other lives is essentially instrumental (via channels of reciprocity). But not all of the justification. Evolution has instilled in me the instinct of valuing the welfare (fitness) of kin at a significant fraction of the value of my own personal welfare. And then there are cases where kinship and reciprocity become connected in serial chains. So the answer is that I discount based on 'remoteness' where remoteness is a distance metric reflecting both genetic and social-interactive inverse connectedness.
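As a rough sketch of what I mean (the particular weighting function is hypothetical, chosen only to illustrate the shape): my valuation of person i's welfare u_i would be weighted as w(r_i) u_i, where r_i is the remoteness of person i - a distance combining genetic and social-interactive separation - and w is some decreasing function, for example w(r) = e^{-λr}.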

What difference does it make? This is my utility function we are talking about, and it is only operational in deciding my own actions. So, even if my utility function attached huge value to lives saved by other people, it is not clear how this would change my behavior. The question seems to be whether people ought to have multiple utility functions - one for directing their own rational choices; the others for some other purpose.

I am currently reading Binmore's two-volume opus Game Theory and the Social Contract. I strongly recommend it to everyone here who is interested in decision theory and ethics. Although Binmore doesn't put it in these terms, his system does involve two different sets of values, which are used in two different ways. One is the set of values used in the Game of Life - a set of values which may be as egoistic as the agent wishes (or as altruistic). However, although the agent is conceptually free in the Game of Life, as a practical matter, he is coerced by everyone else to adhere to a Social Contract. Due to this coercion, he mostly behaves morally.

But how does the Social Contract arise? In Binmore's normative fiction, it arises by negotiated consensus of all agents. The negotiation takes place in a Rawlsian Original Position under a Veil of Ignorance. Since the agent-while-negotiating has different self-knowledge than does the agent-while-living, he manifests different values in the two situations - particularly with regard to utilities which accrue indexically. So, according to Binmore, even an agent who is inherently egoistic in the Game of Life will be egalitarian in the Game of Morals where the Social Contract is negotiated. Different values for a different purpose.

That is the concise summary of the ethical system that Binmore is constructing in the two volumes. But he does a marvelously thorough job of ground-clearing - addressing mistakes made by Kant, Rawls, Nozick, Parfit, and others regarding the Prisoner's Dilemma, Newcomb's 'paradox', whether it is rational to vote (probably wasted), etc. And in the course of doing so, he pretty thoroughly demolishes what I understand to be the orthodox position on these topics here at Less Wrong.

Really, really recommended.

Comment author: [deleted] 02 July 2011 04:51:40PM 5 points [-]

That was about time discounting, not diminishing returns.

In response to comment by [deleted] on People neglect small probability events
Comment author: Perplexed 02 July 2011 07:04:27PM *  2 points [-]

Correct. In fact, I probably confused things here by using the word "discount" for what I am suggesting here. Let me try to summarize the situation with regard to "discounting".

Time discounting means counting distant future utility as less important than near future utility. EY, in the cited posting, argues against time discounting. (I disagree with EY, for what it is worth.)

"Space discounting" is a locally well-understood idea that utility accruing to people distant from the focal agent is less important than utility accruing to the focal agent's friends, family, and neighbors. EY presumably disapproves of space discounting. (My position is a bit complicated. Distance in space is not the relevant parameter, but I do approve of discounting using a similar 'remoteness' parameter.)

The kind of 'discounting' of large utilities that I recommended in the great-grandparent probably shouldn't be called 'discounting'. I would sloganize it as "utilities are not additive." The parent used the phrase 'diminishing returns'. That is not right either, though it is probably better than 'discounting'. Another phrase that approximates what I was suggesting is 'bounded utility'. (I'm pretty sure I disagree with EY on this one too.)
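To illustrate the distinctions with hypothetical formulas (none of these is offered as my actual utility function): time discounting multiplies a utility accruing at future time t by a factor such as e^{-ρt}; "space discounting" multiplies the utility accruing to person i by a weight that depends on that person's remoteness; what I am suggesting is closer to replacing a straight sum Σ u_i with a bounded, concave function of it, such as f(Σ u_i) with f(x) = x / (1 + x).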

The fact that I disagree with EY on discounting says absolutely nothing about whether I agree with EY on AI risk, reductionism, exercise, or who writes the best SciFi. That shouldn't need to be said, but sometimes it seems to be necessary in your (XiXiDu's) case.

Comment author: XiXiDu 02 July 2011 03:24:15PM *  2 points [-]

A mathematician like Baez can indeed be that wrong, when he discusses technical topics that he is insufficiently familiar with.

What about Robin Hanson? See for example his posts here and here. What is it that he is insufficiently familiar with? Or what about Katja Grace, who has been a visiting fellow of the SIAI? See her post here (there are many other posts by her).

And the people from GiveWell even knew about Pascal's Mugging - what is it that they are insufficiently familiar with?

I mean, those people might disagree for different reasons. But I think that too often the argument is used that people just don't know what they are talking about, rather than trying to find out why else they might disagree. As I said in the OP, none of them doubts that there are risks from AI; they just think that we don't know enough to take those risks too seriously at this moment. Whereas the SIAI says that the utility associated with AI-related matters outweighs those doubts. So if we were going to pinpoint the exact nature of the disagreement, would it maybe all come down to how seriously we should take vague possibilities?

And if you are right that the whole problem is that they are insufficiently familiar with the economics of existential risks, then isn't that something that could be improved by putting some effort into raising awareness of why it is rational not to disregard risks from AI even if one believes that they are very unlikely?

Comment author: Perplexed 02 July 2011 06:40:12PM 1 point [-]

What benelliot said.

Sheesh! Please don't assume that everyone who disagrees with one point you made is doing so because he disagrees with the whole thrust of your thinking.
