Stupid Questions Thread - January 2014

Post author: RomeoStevens 13 January 2014 02:31AM 10 points

Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!

Comments (293)

Comment author: Ghatanathoah 23 January 2014 06:58:21PM *  1 point [-]

What amount of disutility does creating a new person generate in Negative Preference Utilitarian ethics?

I need to elaborate in order to explain exactly what question I am asking: I've been studying various forms of ethics, and when I was studying Negative Preference Utilitarianism (or anti-natalism, as I believe it's often also called) I came across what seems like a huge, titanic flaw that seems to destroy the entire system.

The flaw is this: The goal of negative preference utilitarianism is to prevent the existence of unsatisfied preferences. This means that negative preference utilitarians are opposed to having children, as doing so will create more unsatisfied preferences. And they are opposed to people dying under normal circumstances, because someone's death will prevent them from satisfying their existing preferences.

So what happens when you create someone who is going to die, and has an unbounded utility function? The number of preferences they have is essentially infinite. Does that mean that if such a person is created it is impossible to do any more harm, since an infinite number of unsatisfied preferences has just been created? Does that mean that we should be willing to torture everyone on Earth for a thousand years if doing so will prevent the creation of such a person?

The problem doesn't go away if you assume humans have bounded utility functions. Suppose we have a bounded utility function, so that living an infinite number of years, or a googolplex of years, is equivalent to living a mere hundred billion years for us. That still means that creating someone who will live a normal 70-year lifespan is a titanic harm, a harm that everyone alive on Earth today should be willing to die to prevent, as it would create 99,999,999,930 years' worth of unsatisfied preferences!
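
To make the arithmetic concrete, here is a minimal sketch; the 100-billion-year cap is an assumption for illustration, not part of anyone's stated position:

```python
# Hypothetical sketch of the bounded-utility arithmetic above.
# CAP_YEARS is an assumed bound: lifespans at or beyond it count
# the same as living forever.

CAP_YEARS = 100_000_000_000

def unsatisfied_preference_years(lifespan_years: float) -> float:
    """Years of preferred-but-unlived life under the bounded model."""
    return CAP_YEARS - min(lifespan_years, CAP_YEARS)

print(unsatisfied_preference_years(70))  # 99999999930 years of frustrated preference
```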

My question is, how do negative preference utilitarians deal with this? The ones I've encountered online make an effort to avoid having children, but they don't devote every waking minute of their lives to it. And I don't think akrasia is the cause, because I've heard some of them admit that it would be acceptable to have a child if doing so reduced the preference frustration/suffering of a very large number of existing people.

So with that introduction out of the way, my questions, on a basic level are:

  1. How much suffering/preference frustration would an antinatalist be willing to inflict on existing people in order to prevent a birth? How much suffering/preference frustration would a birth have to stop in order for it to be justified? For simplicity's sake, let's assume the child who is born has a normal middle class life in a 1st world country with no exceptional bodily or mental health problems.

  2. How exactly did they go about calculating the answer to question 1?

There has to be some answer to this question; there wouldn't be whole communities of anti-natalists online if their ideology could be defeated with a simple logic problem.

Comment author: Kaj_Sotala 24 January 2014 06:33:24PM *  2 points [-]

(To the extent that I'm negative utilitarian, I'm a hedonistic negative utilitarian, so I can't speak for the preference NUs, but...)

So what happens when you create someone who is going to die, and has an unbounded utility function?

Note that every utilitarian system breaks once you introduce even the possibility of infinities. E.g. a hedonistic total utilitarian will similarly run into the problem that, if you assume that a child has the potential to live for an infinite amount of time, then the child can be expected to experience both an infinite amount of pleasure and an infinite amount of suffering. Infinity minus infinity is undefined, so hedonistic total utilitarianism would be incapable of assigning a value to the act of having a child. Now saving lives is in this sense equivalent to having a child, so the value of every action that has even a remote chance of saving someone's life becomes undefined as well...
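
The undefinedness isn't just a philosopher's scruple; even IEEE floating-point arithmetic refuses to assign the difference a value:

```python
# Infinity minus infinity has no defined value; floats agree.
inf = float("inf")
print(inf - inf)  # nan -- no number can consistently fill this slot
```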

A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

The ones I've encountered online make an effort to avoid having children, but they don't devote every waking minute of their lives to it.

I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself. One possible answer is that loudly advocating "you shouldn't have children, it's literally infinitely bad" is a horrible PR strategy that will just get your movement discredited, and e.g. talking about NU in the abstract and letting people piece together the full implications themselves may be more effective.

Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account.

How much suffering/preference frustration would an antinatalist be willing to inflict on existing people in order to prevent a birth? How much suffering/preference frustration would a birth have to stop in order for it to be justified? For simplicity's sake, let's assume the child who is born has a normal middle class life in a 1st world country with no exceptional bodily or mental health problems.

Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict on people in order to prevent an hour of intense torture, and how exactly did you go about calculating the answer to that question", you're probably not going to get a very satisfying answer, either.

Comment author: Ghatanathoah 24 January 2014 08:08:18PM 0 points [-]

A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

Yes, and that is my precise point. Even if we assume a bounded utility function for human preferences, I think it's reasonable to assume that it's a pretty huge function. Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to this point, but still die with a lot of unfulfilled desires. I find this massively counter-intuitive and want to know how the antinatalist community addresses this.

I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself.

If the disutility they assign to having children is big enough they should still spend every waking hour doing something about it. What if some maniac kidnaps them and forces them to have a child? The odds of that happening are incredibly small, but they certainly aren't zero. If they really assign such a giant negative to having a child they should try to guard even against tiny possibilities like that.

Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account

Yes, but from a preference utilitarian standpoint it doesn't need to actually be possible to live forever. It just has to be something that you want.

Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict on people in order to prevent an hour of intense torture, and how exactly did you go about calculating the answer to that question", you're probably not going to get a very satisfying answer, either.

Well, of course I'm not expecting an exact answer. But a ballpark would be nice. Something like "no more than x, no less than y." I think, for instance, that a total utilitarian could at least say something like "no less than a thousand rush hour frustrations, no more than a million."

Comment author: Kaj_Sotala 26 January 2014 07:32:59AM 1 point [-]

Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to this point, but still die with a lot of unfulfilled desires.

Is that really how preference utilitarianism works? I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment. Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones.

Yes, but from a preference utilitarian standpoint it doesn't need to actually be possible to live forever. It just has to be something that you want.

I'm confused. How is wanting to live forever in a situation where you don't think that living forever is possible, different from any other unsatisfiable preference?

If the disutility they assign to having children is big enough they should still spend every waking hour doing something about it. What if some maniac kidnaps them and forces them to have a child? The odds of that happening are incredibly small, but they certainly aren't zero. If they really assign such a giant negative to having a child they should try to guard even against tiny possibilities like that.

That doesn't sound right. The disutility is huge, yes, but the probability is so low that focusing your efforts on practically anything with a non-negligible chance of preventing further births would be expected to prevent many times more disutility. Like supporting projects aimed at promoting family planning and contraception in developing countries, pro-choice policies and attitudes in your own country, rape prevention efforts to the extent that you think rape causes unwanted pregnancies that are nonetheless carried to term, anti-natalism in general (if you think you can do it in a way that avoids the PR disaster for NU in general), even general economic growth if you believe that the connection between richer countries and smaller families is a causal and linear one. Worrying about vanishingly low-probability scenarios, when that worry takes up cognitive cycles and thus reduces your chances of doing things that could have an even bigger impact, does not maximize expected utility.
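
A toy expected-value calculation makes this concrete. Every number below is invented purely for illustration; only the ordering of the two results matters:

```python
# Invented numbers: a huge disutility discounted by a vanishing
# probability loses to a modest intervention with a realistic
# chance of preventing births.

BIRTH_DISUTILITY = 1e12           # assumed (huge) disutility per birth

p_kidnapping = 1e-15              # maniac-kidnapping scenario
births_prevented_by_guarding = 1.0

p_advocacy_works = 0.1            # e.g. family-planning support
births_prevented_by_advocacy = 10.0

ev_guarding = p_kidnapping * births_prevented_by_guarding * BIRTH_DISUTILITY
ev_advocacy = p_advocacy_works * births_prevented_by_advocacy * BIRTH_DISUTILITY

print(ev_guarding)                # ~0.001
print(ev_advocacy)                # ~1e12
print(ev_advocacy > ev_guarding)  # True: fund the advocacy, ignore the maniac
```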

I think, for instance, that a total utilitarian could at least say something like "no less than a thousand rush hour frustrations, no more than a million."

I don't know. At least I personally find it very difficult to compare experiences of such differing magnitudes. Someone could come up with a number, but that feels like trying to play baseball with verbal probabilities - the number that they name might not have anything to do with what they'd actually choose in that situation.

Comment author: Ghatanathoah 27 January 2014 10:00:56PM *  -1 points [-]

I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment

I don't think that would be the case. The main intuitive advantage negative preference utilitarianism has over negative hedonic utilitarianism is that it considers death to be a bad thing, because it results in unsatisfied preferences. If it only counted immediate consciously held goals it might consider death a good thing, since it would prevent an agent from developing additional unsatisfied preferences in the future.

However, you are probably onto something by suggesting some method of limiting which unsatisfied preferences count as negative. "What a person is thinking about at any given moment" has the problems I pointed out earlier, but another formulation could well work better.

Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones.

I believe Total Preference Utilitarianism typically avoids this by regarding the creation of most types of unsatisfied preferences as neutral rather than negative. While there are some preferences whose dissatisfaction typically counts as negative, such as the preference not to be tortured, most preference creations are neutral. I believe that under TPU, if a person spends the majority of their life not preferring to be dead, then their life is considered positive no matter how many unsatisfied preferences they have.

At least I personally find it very difficult to compare experiences of such differing magnitudes.

I feel like I could try to get some sort of ballpark by figuring out how much I'm willing to pay to avoid each thing. For instance, if I had an agonizing migraine I knew would last all evening, and had a choice between paying for an instant-cure pill or for a device that would magically let me avoid traffic for the next two months, I'd probably put up with the migraine.

I'd be hesitant to generalize across the whole population, however, because I've noticed that I don't seem to mind pain as much as other people, but find boredom far more frustrating than average.

Comment author: RomeoStevens 23 January 2014 09:50:20PM *  1 point [-]

Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering. Death is bad and causes negative experiences. I want to solve death before we have more kids, but I recognize this isn't realistic. It's worth pointing out that negative utilitarianism is incoherent. Prioritarianism makes slightly more sense.

Comment author: Ghatanathoah 24 January 2014 04:31:43AM 1 point [-]

Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering.

If I understand you correctly, the problem with doing this with negative utilitarianism is that it suggests we should painlessly kill everyone ASAP. The advantage of negative preference utilitarianism is that it avoids this because people have a preference to keep on living that killing would thwart.

It's worth pointing out that negative utilitarianism is incoherent.

Why? For the reason I pointed out, or for a different one? I'm not a negative utilitarian personally, but I think a few aspects of it have promise and would like to see them sorted out.

Comment author: VAuroch 16 January 2014 09:26:19AM 0 points [-]

What does changing a core belief feel like? If I have a crisis of faith, how will I know?

I would particularly like to hear from people who have experienced this but never deconverted. Not only have I never been religious, no one in my immediate family is, none of the extended family I am close with is, and while I have friends who believe in religion I don't think I have any who believe their faith. So I have no real point of comparison.

Comment author: ESRogs 16 January 2014 10:36:29PM 2 points [-]

If I have a crisis of faith, how will I know?

A sense of panic and dread, and a feeling of being lost were some highlights for me. I think it would be hard to not know, though perhaps others experience these things differently.

Comment author: ChristianKl 16 January 2014 03:52:58PM 0 points [-]

I think there are many ways in which beliefs get changed.

Take a belief such as: "The world is a hostile place and therefore I have to hide myself behind a shield of anonymity when I post online."

Ten years ago I would have feared that somebody might associate my online writing with my real identity; at that time I thought I needed the shield. Today I don't (my nickname is my first name + the first 2 letters of my last name).

What did that process feel like? At the beginning I felt fear and now I don't, but it was a gradual process over time.

For most practical concerns I think that we use religion way too often as a reference concept. Children are usually taught that it's bad to talk to strangers. In our world it's a useful skill to talk to strangers in a friendly and inviting way.

Most people hit walls very quickly if they try to start saying hello with a smile to every stranger they pass on the street. They come up with excuses: doing so is weird, and people will hate them if they find out that they engage in such weird behavior.

If you want to experience a crisis of faith, those social beliefs are where I would focus. They are more interesting because they have empirical results that you can see, so you can't just pretend that you have changed your belief.

Comment author: hyporational 13 January 2014 06:45:29AM *  3 points [-]

What motivates rationalists to have children? How much rational decision making is involved?

ETA: removed the unnecessary emotional anchor.

ETA2: I'm not asking this out of Spockness; I think I have a pretty good map of normal human drives. I'm asking because I want to know whether people have actually looked into the benefits, costs and risks involved, and done explicit reasoning on the subject.

Comment author: Ishaan 13 January 2014 11:20:24PM *  1 point [-]

rationalists

I think you mean "humans"?

With respect to adoption vs. biological children, having your own child allows you more control over the circumstances and also means the child will probably share some facets of your / your mate's personality, in ways that are often surprising and pleasurable.

With respect to raising children in general, it's intrinsically rewarding, like a mix of writing a book and being in love. Also, if you're assuming the environment won't radically change, having young people around probably makes aging easier.

(I don't have children, but have watched them being raised. Unsure of my own plans.)

Comment author: Calvin 13 January 2014 06:55:51AM *  0 points [-]

I don't consider myself an explicit rationalist, but the desire to have children stems from the desire to have someone to take care of me when I am older.

Do you see your own conception and further life as a cause of a "huge heap of disutility" that can't be outweighed by the good stuff?

Comment author: DaFranker 13 January 2014 01:21:57PM 2 points [-]

I've always been curious to see the response of someone with this view to the question:

What if you knew, as much as anything about the events of the world is known, that there will be circumstances in X years that make it impossible for any child you conceive to take care of you when you are older?

In such a hypothetical, is the executive drive to have children still present, still being enforced by the programming of Azathoth, merely disconnected from the original trigger that made you specifically have this drive? Or does the desire go away? Or something else, maybe something I haven't thought of (I hope it is!)?

Comment author: Lumifer 13 January 2014 08:13:32AM 6 points [-]

What motivates rationalists to have children?

The same things that motivate other people. Being rational doesn't necessarily change your values.

Clearly, some people think having children is worthwhile and others don't, so that's individual. There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it through natural selection.

The amount of decision-making also obviously varies -- from multi-year deliberations to "Dear, I'm pregnant!" :-)

Comment author: CronoDAS 13 January 2014 03:45:17PM 3 points [-]

There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it through natural selection.

Really? The reproductive urge in humans seems to be more centered on a desire for sex than on a desire for children. And, in most animals, this is sufficient; sex leads directly to reproduction without the brain having to take an active role after the exchange of genetic material takes place.

Humans, oddly enough, seem to have evolved adaptations for ensuring that people have unplanned pregnancies in spite of their big brains. Human females don't have an obvious estrus cycle, their fertile periods are often unpredictable, and each individual act of copulation has a relatively low chance of causing a pregnancy. As a result, humans are often willing to have sex when they don't want children and end up having them anyway.

Comment author: Randy_M 14 January 2014 04:19:13PM *  1 point [-]

There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it through natural selection.

A developmentally complex species needs a drive to care for offspring. A simple species just needs a drive to reproduce.

ETA: What Lumifer said

Comment author: Lumifer 13 January 2014 04:16:33PM 4 points [-]

The reproductive urge in humans seems to be more centered on a desire for sex than on a desire for children.

These are not mutually exclusive alternatives.

And, in most animals, this is sufficient; sex leads directly to reproduction without the brain having to take an active role after the exchange of genetic material takes place.

Not in those animals where babies require a long period of care and protection.

Comment author: gjm 13 January 2014 12:17:56PM 13 points [-]

I wouldn't dream of speaking for rationalists generally, but in order to provide a data point I'll answer for myself. I have one child; my wife and I were ~35 years old when we decided to have one. I am by any reasonable definition a rationalist; my wife is intelligent and quite rational but not in any very strong sense a rationalist. Introspection is unreliable but is all I have. I think my motivations were something like the following.

  1. Having children as a terminal value, presumably programmed in by Azathoth and the culture I'm immersed in. This shows up subjectively as a few different things: liking the idea of a dependent small person to love, wanting one's family line to continue, etc.

  2. Having children as a terminal value for other people I care about (notably spouse and parents).

  3. I think I think it's best for the fertility rate to be close to the replacement rate (i.e., about 2 in a prosperous modern society with low infant mortality), and I think I've got pretty good genes; overall fertility rate in the country I'm in is a little below replacement and while it's fairly densely populated I don't think it's pathologically so, so for me to have at least one child and probably two is probably beneficial for society overall.

  4. I expected any child I might have to have a net-positive-utility life (for themselves, not only for society at large) and indeed probably an above-average-utility life.

  5. I expected having a child to be a net positive thing for marital harmony and happiness (I wouldn't expect that for every couple and am not making any grand general claim here).

I don't recall thinking much about the benefits of children in providing care when I'm old and decrepit, though I suppose there probably is some such benefit.

So far (~7.5 years in), we love our daughter to bits and so do others in our family (so #1, #2, #5 seem to be working as planned); she seems mostly very happy (so #4 seems OK so far); it's obviously early days, but my prediction is still that she'll likely have a happy life overall (so #4 looks promising for the future); and I don't know what evidence I could reasonably expect for or against #3.

Comment author: Aharon 13 January 2014 09:38:48PM 1 point [-]

I first wanted to comment on 5, because I had previously read that having children reduces happiness. Interestingly, when searching for a link (because I couldn't remember where I had read it), I found this source (http://www.demogr.mpg.de/papers/working/wp-2012-013.pdf) that corroborates your specific expectation: children lead to higher happiness for older, better educated parents.

Comment author: Douglas_Knight 14 January 2014 01:53:44PM 2 points [-]

Having children is an example where two methodologies in happiness research dramatically diverge. One method is asking people in the moment how happy they are; the other is asking how happy they generally feel about their lives. The first method finds that people really hate child care, and is probably what you remembered.

Comment author: gjm 13 January 2014 10:15:04PM -1 points [-]

How excellent! It's nice to be statistically typical :-).

Comment author: Lumifer 13 January 2014 06:44:18PM 1 point [-]

One more point that I haven't seen brought up -- listen to Queen:

Can anybody find me somebody to love?
Each morning I get up I die a little
Can barely stand on my feet
Take a look in the mirror and cry
Lord what you're doing to me
I have spent all my years in believing you
But I just can't get no relief,
Lord!
Somebody, somebody
Can anybody find me somebody to love?

Comment author: CronoDAS 14 January 2014 08:32:04AM 2 points [-]

Personally, I'd recommend a dog or cat to this person.

Comment author: hyporational 13 January 2014 06:54:11PM *  -1 points [-]

Children as match makers when you're too old to stand on your feet? ;)

Comment author: Lumifer 13 January 2014 07:08:31PM 1 point [-]

That's an interesting interpretation :-) it was also fun to watch it evolve :-D

Comment author: blacktrance 13 January 2014 04:09:29PM *  4 points [-]

Disclaimer: I don't have kids, won't have them anytime soon (i.e. not in the next 5 years), and until relatively recently didn't want them at all.

The best comparison I can make is that raising a child is like making a painting. It's work, but it's rewarding if done well. You create a human being, and hopefully impart them with good values and set them on a path to a happy life, and it's a very personal experience.

Personally, I don't have any drive to have kids, not one that's comparable to hunger or sexual attraction.

Comment author: hyporational 13 January 2014 06:32:49PM *  2 points [-]

I'd like that personal painting experience if it went well, and I have experienced glimpses of it with some kids not my own.

Unfortunately it's not clear to me at all how much of the project's success would be of my own doing, and I've seen enough examples of things going horribly wrong despite seemingly optimal conditions. I wonder what kinds of studies could be done on parenting skills and parental satisfaction with the results of upbringing that aren't hugely biased.

ETA: my five-year-old step brother just barged into my room (holiday at my folks'). "You always get new knowledge in this room," he said, and I was compelled to pour that little vessel full again.

Comment author: solipsist 13 January 2014 03:33:09AM *  11 points [-]

Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?

ETA: Potentially less contentious rephrase: why isn't making a life as important as saving a life?

Comment author: Dias 13 January 2014 04:38:08AM 3 points [-]

Upvoted. Keep in mind that the answer might be "making a person is as good as killing a person is bad."

Here's a simple argument for why we can't be indifferent to creating people. Suppose we have three worlds:

  • Jon is alive and has 10 utils
  • Jon was never conceived
  • Jon is alive and has 20 utils

Assume we prefer Jon having 20 utils to Jon having 10. Assume also that we're indifferent between Jon having 10 utils and Jon never being conceived. Hence by transitivity we must prefer that Jon exist with 20 utils over Jon's non-existence. So we should try to create Jon, if we think he'll have over 10 utils.
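
One way to lay the argument out formally (the world-labels are my notation, not Dias's):

```latex
% Three worlds, with \succ = strict preference and \sim = indifference.
\begin{align*}
  W_{10} &: \text{Jon alive with 10 utils} \\
  W_{20} &: \text{Jon alive with 20 utils} \\
  W_{\varnothing} &: \text{Jon never conceived} \\[4pt]
  &\text{Assume}\quad W_{20} \succ W_{10}
    \quad\text{and}\quad W_{10} \sim W_{\varnothing}. \\
  &\text{Transitivity gives}\quad W_{20} \succ W_{\varnothing}:
    \text{ creating a 20-util Jon beats never creating him.}
\end{align*}
```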

Comment author: pragmatist 13 January 2014 03:50:04AM *  5 points [-]

Making a person and unmaking a person seem like utilitarian inverses

Doesn't seem that way at all to me. A person who already exists has friends, family, social commitments, etc. Killing that person would usually affect all of these things negatively, often to a pretty huge extent. Using contraception maybe creates some amount of disutility in certain cases (for staunch Catholics, for instance) but not nearly to the degree that killing someone does. If you're only focusing on the utility for the person made or unmade, then maybe (although see blacktrance's comment on that), but as a utilitarian you have no license for doing that.

Comment author: solipsist 13 January 2014 04:17:12AM 7 points [-]

A hermit, long forgotten by the rest of the world, lives a middling life all alone on a desert island. Eve kills the hermit secretly and painlessly, sells his organs, and uses the money to change the mind of a couple who had decided against having additional children. The couple's child leads a life far longer and happier than the forgotten hermit's ever would have been.

Eve has increased QALYs, average happiness, and total happiness. Has Eve done a good thing? If not, why not?

Comment author: cata 13 January 2014 04:31:17AM *  1 point [-]

Sure, Eve did a good thing.

Comment author: solipsist 13 January 2014 04:26:05PM 2 points [-]

Does that mean we should spend more of our altruistic energies on encouraging happy productive people to have more happy productive children?

Comment author: Calvin 13 January 2014 05:01:29AM *  1 point [-]

I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.

Comment author: solipsist 13 January 2014 04:32:21PM -1 points [-]

I didn't mean for the hermit to be sad, just less happy than the child.

Comment author: Ishaan 13 January 2014 10:07:30PM *  0 points [-]

This fails to fit the spirit of the problem, because it takes the preferences of currently living beings (the childless couple) into account.

A scenario that would capture the spirit of the problem is:

"Eve kills a moderately happy hermit who moderately prefers being alive, uses the money to create a child who is predisposed to be extremely happy as a hermit. She leaves the child on the island to live life as an extremely happy hermit who extremely prefers being alive." (The "hermit" portion of the problem is unnecessary now - you can replace hermit with "family" or "society" if you want.)

Compare with...

"Eve must choose between creating a moderately happy hermit who moderately prefers being alive OR an extremely happy hermit who extremely prefers being alive." (Again, hermit / family / society are interchangeable)

and

"Eve must choose between kliling a moderately happy hermit who moderately prefers being alive OR killing an extremely happy hermit who extremely prefers being alive."

Comment author: [deleted] 13 January 2014 03:08:21PM 3 points [-]

If there are a large number of "yes" replies, the hermit lifestyle becomes very unappealing.

Comment author: pragmatist 13 January 2014 05:27:55AM *  9 points [-]

Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.

If you're asking me, I'd say no, but I'm not a utilitarian, partly because utilitarianism answers "yes" to questions similar to this one.

Comment author: Luke_A_Somers 16 January 2014 06:46:33PM 0 points [-]

Only if you use a stupid utility function.

Comment author: pragmatist 17 January 2014 04:43:58AM *  2 points [-]

Utilitarianism doesn't use any particular utility function. It merely advocates acting based on an aggregation of pre-existing utility functions. So whether or not someone's utility function is stupid is not something utilitarianism can control. If people in general have stupid utility functions, then preference utilitarianism will advocate stupid things.

In any case, the problem I was hinting at in the grandparent is known in the literature (following Rawls) as "utilitarianism doesn't respect the separateness of persons." For utilitarianism, what fundamentally matters is utility (however that is measured), and people are essentially just vessels for utility. If it's possible to substantially increase the amount of utility in many of those vessels while substantially decreasing it in just one vessel, then utilitarianism will recommend doing that. After all, the individual vessels themselves don't matter, just the amount of utility sloshing about (or, if you're an average utilitarian, the number of vessels matters, but the vessels don't matter beyond that). An extreme consequence of this kind of thinking is the whole "utility monster" problem, but it arises in slightly less fanciful contexts as well (kill the hermit, push the fat man in front of the trolley).

I fundamentally reject this mode of thinking. Morality should be concerned with how individuals, considered as individuals, are treated. This doesn't mean that trade-offs between peoples' rights/well-being/whatever are always ruled out, but they shouldn't be as easy as they are under utilitarianism. There are concerns about things like rights, fairness and equity that matter morally, and that utilitarianism can't capture, at least not without relying on convoluted (and often implausibly convenient) justifications about how behaving in ways we intuitively endorse will somehow end up maximizing utility in the long run.

Comment author: Luke_A_Somers 17 January 2014 01:48:20PM -1 points [-]

Yes, I should have rephrased that as 'Only because hedonic utilitarianism is stupid' --- how's that?

Comment author: Lumifer 13 January 2014 04:39:22AM 2 points [-]

This looks very similar to the trolley problem, specifically the your-organs-are-needed version.

Comment author: [deleted] 13 January 2014 05:05:57PM 2 points [-]

The grounds to avoid discouraging people from walking into hospitals are way stronger than the grounds to avoid discouraging people from being hermits.

Comment author: Lumifer 13 January 2014 05:16:56PM 2 points [-]

So you think that the only problem with the Transplant scenario is that it discourages people from using hospitals..?

Comment author: Eugine_Nier 16 January 2014 05:32:16AM 0 points [-]

Well, that's the standard rationalization utilitarians use to get out of that dilemma.

Comment author: hairyfigment 13 January 2014 07:34:24AM 0 points [-]

Even within pleasure- or QALY-utilitarianism, which seems technically wrong, you can avoid this by recognizing that those possible people probably exist regardless in some timeline or other. I think. We don't understand this very well. But it looks like you want lots of people to follow the rule of making their timelines good places to live (for those who've already entered the timeline). Which does appear to save utilitarianism's use as a rule of thumb.

Comment author: Ishaan 13 January 2014 09:54:53PM *  3 points [-]

Why isn't making a person as good as killing a person is bad

Possibly because...

I don't think contraception is tantamount to murder.

You have judged. It's possible that this is all there is to it... not killing people who do not want to die might just be a terminal value for humans, while creating people who would want to be created might not be a terminal value.

(Might. If you think that it's an instrumental value in favor of some other terminal goal, you should look for it)

Comment author: lmm 13 January 2014 12:51:30PM 5 points [-]

Cheap answer, but remember that it might be the true one: because utilitarianism doesn't accurately describe morality, and the right way to live is not by utilitarianism.

Comment author: Pablo_Stafforini 13 January 2014 04:51:07AM *  10 points [-]

Whether this is so or not depends on whether you are assuming hedonistic or preference utilitarianism. For a hedonistic utilitarian, contraception is, in a sense, tantamount to murder, except that as a matter of fact murder causes much more suffering than contraception does: to the person who dies, to his or her loved ones, and to society at large (by increasing fear). By contrast, preference utilitarians can also appeal to the preferences of the individual who is killed: whereas murder causes the frustration of an existing preference, contraception doesn't, since nonexisting entities can't have preferences.

The question also turns on issues about population ethics. The previous paragraph assumes the "total view": that people who do not exist but could or will exist matter morally, and just as much. But some people reject this view. For these people, even hedonistic utilitarians can condemn murder more harshly than contraception, wholly apart from the indirect effects of murder on individuals and society. The pleasure not experienced by the person who fails to be conceived doesn't count, or counts less than the pleasure that the victim of murder is deprived of, since the latter exists but the former doesn't.

For further discussion, see Peter Singer's Practical Ethics, chap. 4 ("What's wrong with killing?").

Comment author: torekp 20 January 2014 01:34:23AM 2 points [-]

Pablo makes great points about the suffering of loved ones, etc. But, modulo those points, I'd say making a life is as important as saving a life. (I'm only going to address the potentially contentious "rephrase" here, and not the original problem; I find the making life / saving life case more interesting.) And I'm not a utilitarian.

When you have a child, even if you follow the best available practices, there is a non-trivial chance that the child will have a worse-than-nothing existence. They could be born with some terminal, painful, and incurable illness. What justifies taking that risk? Suggested answer: the high probability that a child will be born to a good life. Note that in many cases, the child who would have an awful life is a different child (coming from a different egg and/or sperm - a genetically defective one) than the one who would have a good life.

Comment author: Kaj_Sotala 15 January 2014 06:28:27AM 1 point [-]

I created a new article about this.

Comment author: Arran_Stirton 14 January 2014 09:26:32PM 2 points [-]

As far as I can tell, killing/not-killing a person isn't the same as not-making/making a person. I think this becomes more apparent if you consider the universe as timeless.

This is the thought experiment that comes to mind. It's worth noting that all that follows depends heavily on how one calculates things.

Comparing the universes where we choose to make Jon to the one where we choose not to:

  • Universe A: Jon made; Jon lives a fulfilling life with global net utility of 2u.
  • Universe A': Jon not-made; Jon doesn't exist in this universe so the amount of utility he has is undefined.

Comparing the universes where we choose to kill an already made Jon to the one where we choose not to:

  • Universe B: Jon not killed; Jon lives a fulfilling life with global net utility of 2u.
  • Universe B': Jon killed; Jon's life is cut short, his life has a global net utility of u.

The marginal utility for Jon in Universe B vs B' is easy to calculate: (2u - u) gives a total marginal utility (i.e. gain in utility) of u from choosing not to kill Jon over killing him.

However the marginal utility for Jon in Universe A vs A' is undefined (in the same sense 1/0 is undefined). As Jon doesn't exist in universe A', it is impossible to assign a value to Utility_Jon_A'; as a result our marginal (Utility_Jon_A - Utility_Jon_A') is equal to (2u - [an undefined value]). As such, our marginal utility lost or gained by choosing between universes A and A' is undefined.

It follows from this that the marginal utility between any universe and A' is undefined. In other words our rules for deciding which universe is better for Jon break down in this case.
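
A minimal sketch of that breakdown, using Python's None as a stand-in for "undefined" (the function and values are illustrative, not anyone's canonical formalism):

```python
# Marginal utility between two universes; None marks "Jon doesn't
# exist here, so his utility is undefined".
from typing import Optional

def marginal_utility(u_a: Optional[float], u_b: Optional[float]) -> Optional[float]:
    """Return u_a - u_b, or None when either side is undefined."""
    if u_a is None or u_b is None:
        return None  # the comparison itself breaks down
    return u_a - u_b

print(marginal_utility(2.0, 1.0))   # B vs B': 1.0 -- not killing Jon gains u
print(marginal_utility(2.0, None))  # A vs A': None -- undefined, as argued
```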

I myself (probably) don't have a preference for creating universes where I exist over ones where I don't. However I'm sure that I don't want this current existence of me to terminate.

So personally I choose to maximise the utility of people who already exist over creating more people.

Eliezer explains here why bringing people into existence isn't all that great even if someone existing over not existing has a defined (and positive) marginal utility.

Comment author: Douglas_Knight 13 January 2014 04:44:51PM 1 point [-]

Here are two related differences between a child and an adult. (1) It is very expensive to turn a child into an adult. (2) An adult is highly specific and not replaceable, while a fetus has a lot of subjective uncertainty and is fairly easily duplicated within that uncertainty. Uploading is relevant to both of these points.

Comment author: Manfred 13 January 2014 04:25:40AM *  1 point [-]

From a classical utilitarian perspective, yeah, it's pretty much a wash, at least relative to non-fatal crimes that cause similar suffering.

However, around here, "utilitarian" is usually meant as "consistent consequentialism." In that frame we can appeal to motives like "I don't want to live in a society with lots of murder, so it's extra bad."

Comment author: blacktrance 13 January 2014 03:33:57AM 1 point [-]

Because killing a person deprives them of positive experiences that they otherwise would have had, and they prefer to have them. But a nonexistent being doesn't have preferences.

Comment author: gwern 13 January 2014 04:23:26AM 8 points [-]

Once you've killed them and they've become nonexistent, then they don't have preferences either.

Comment author: pragmatist 13 January 2014 05:38:06AM *  1 point [-]

Presumably what should matter (assuming preference utilitarianism) when we evaluate an act are the preferences that exist at (or just before) the time of commission of the act. If that's right, then the non-existence of those preferences after the act is performed is irrelevant.

The Spanish Inquisition isn't exculpated because its victims' preferences no longer exist. They existed at the time they were being tortured, and that's what should matter.

Comment author: lmm 13 January 2014 10:54:44PM 1 point [-]

So it's fine to do as much environmental damage as we like, as long as we're confident the effects won't be felt until after everyone currently alive is dead?

Comment author: Nornagest 13 January 2014 10:58:16PM 2 points [-]

I'd presume that many people's preferences include terms for the expected well-being of their descendants.

Comment author: lmm 15 January 2014 12:52:36PM 1 point [-]

That's a get out of utilitarianism free card. Many people's preferences include terms for acting in accordance with their own nonutilitarian moral systems.

Comment author: Nornagest 15 January 2014 09:26:34PM *  1 point [-]

Preference utilitarianism isn't a tool for deciding what you should prefer, it's a tool for deciding how you should act. It's entirely consistent to prefer options which involve you acting according to whim or some nonutilitarian system (example: going to the pub), yet for it to dictate -- after taking into account the preferences of others -- that you should in fact do something else (example: taking care of your sick grandmother).

There may be some confusion here, though. I normally think of preferences in this context as being evaluated over future states of the world, i.e. consequences, not over possible actions; it sounds like you're thinking more in terms of the latter.

Comment author: lmm 16 January 2014 01:30:05AM 0 points [-]

Yeah, I sometimes have trouble thinking like a utilitarian.

If we're just looking at future states of the world, then consider four possible futures: your (isolated hermit) granddaughter exists and has a happy life, your granddaughter exists and has a miserable life, your granddaughter does not exist because she died, your granddaughter does not exist because she was never born.

It seems to me that if utilitarianism is to mean anything then the utility of the last two options should be the same - if we're allowed to assign utility values to the history of whether she was born and died, even though both possible paths result in the same world-state, then it would be equally valid to assign different utilities to different actions that people took even if they turned out the same, and e.g. virtue ethics would qualify as a particular kind of utilitarianism.

If we accept that the utility of the last two options is the same, then we have an awkward dilemma. Either this utility value is higher than option 2 - meaning that if someone's life is sufficiently miserable, it's better to kill them than allow them to continue living. Or it's lower, meaning that it's always better to give birth to someone than not. Worse, if your first granddaughter was going to be miserable and your second would be happy, it's a morally good action if you can do something that kills your first granddaughter but gives rise to the birth of your second granddaughter. It's weirdly discontinuous to say that your first granddaughter's preferences become valid once she's born - does that mean that killing her after she's born is a bad thing, but setting up some Rube Goldberg contraption beforehand that will kill her after she's born is a good thing?

Comment author: pragmatist 16 January 2014 06:50:20AM *  0 points [-]

It seems to me that if utilitarianism is to mean anything then the utility of the last two options should be the same - if we're allowed to assign utility values to the history of whether she was born and died, even though both possible paths result in the same world-state, then it would be equally valid to assign different utilities to different actions that people took even if they turned out the same, and e.g. virtue ethics would qualify as a particular kind of utilitarianism.

Whatever action I take right now, eventually the macroscopic state of the universe is going to look the same (heat death of the universe). Does this mean the utilitarian is committed to saying that all actions available to me are morally equivalent? I don't think so. Even though the (macroscopic) end state is the same, the way the universe gets there will differ, depending on my actions, and that matters from the perspective of preference utilitarianism.

Comment author: blacktrance 13 January 2014 05:10:56AM 0 points [-]

That's true, but they have preferences before you kill them. In the case of contraception, there is no being to have ever had preferences.

Comment author: JoshuaFox 13 January 2014 07:42:58AM *  24 points [-]

If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?

Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it's not good enough to say that we're really rational, scientific, altruistic, utilitarian, etc., in contrast to those people -- they thought the same.)

So, how might we find that all these ideas are massively wrong?

Comment author: CellBioGuy 14 January 2014 04:00:32PM *  0 points [-]

By their degree of similarity to ancient religious, mythological, and sympathetic-magic forms with the nouns swapped out.

Comment author: Calvin 13 January 2014 08:45:27AM 1 point [-]

We might find out by trying to apply them to the real world and seeing that they don't work.

Well, it is less common now, but I think the community's slow retreat from the position that instrumental rationality is the applied science of winning at life is one of the cases where beliefs had to be corrected to better match the evidence.

Comment author: lmm 13 January 2014 12:49:19PM 3 points [-]

Is it? I mean, I'd happily say that the LW crowd as a whole does not seem particularly good at winning at life, but that is and should be our goal.

Comment author: adbge 13 January 2014 11:32:40PM *  9 points [-]

If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?

A few that come to mind:

  • Some religious framework being basically correct. Humans having souls, an afterlife, etc.
  • Antinatalism as the correct moral framework.
  • Romantic ideas of the ancestral environment are correct and what feels like progress is actually things getting worse.
  • The danger of existential risk peaked with the cold war and further technological advances will only hasten the decline.

Comment author: JoshuaFox 14 January 2014 07:47:45AM *  10 points [-]

I think the whole MIRI/LessWrong memeplex is not massively confused.

But conditional on it turning out to be very very wrong, here is my answer:

A. MIRI

  1. The future does indeed take radical new directions, but these directions are nothing remotely like the hard-takeoff de-novo-AI intelligence explosion which MIRI now treats as the max-prob scenario. Any sci-fi fan can imagine lots of weird futures, and maybe some other one will actually emerge.

  2. MIRI's AI work turns out to trigger a massive negative outcome -- either the UFAI explosion they are trying to avoid, or something else almost as bad. This may result from fundamental mistakes in understanding, or because of some minor bug.

  3. It turns out that the UFAI explosion really is the risk, but that MIRI's AI work is just the wrong direction; e.g., it turns out that building a community of AIs in rough power balance, or experimenting by trial-and-error with nascent AGIs, is the right solution.

B. CfAR

  1. It turns out that the whole CfAR methodology is far inferior for instrumental outcomes to, say, Mormonism. Of course, CfAR would say that if another approach is instrumentally better, they would adopt it. But if they only find this out years down the road, this could be a massive failure scenario.

  2. It turns out that epistemologically non-rational techniques are instrumentally valuable. Cf. Mormonism. Again, CfAR knows this, but in this failure scenario, they fail to reconcile the differences between the two types of rationality they are trying for.

Again, I think that the above scenarios are not likely, but they're my best guess at what "massively wrong" would look like.

Comment author: John_Maxwell_IV 15 January 2014 05:44:51AM *  7 points [-]

MIRI failure modes that all seem likely to me:

  • They talk about AGI a bunch and end up triggering an AGI arms race.

  • AI doesn't explode the way they talk about, causing them to lose credibility on the importance of AI safety as well. (Relatively slow-moving) disaster ensues.

  • The future is just way harder to predict than everyone thought it would be... we're cavemen trying to envision the information age and all of our guesses are way off the mark in ways we couldn't have possibly foreseen.

  • Uploads come first.

Comment author: Squark 13 January 2014 06:43:38PM 2 points [-]

Define "massively wrong". My personal opinions (stated w/o motivation for brevity):

  • Building AGI from scratch is likely to be unfeasible (although we don't know nearly enough to discard the risk altogether)
  • Mind uploading is feasible (and morally desirable) but will trigger intelligence growth of marginal speed rather than a "foom"
  • "Correct" morality is low Kolmogorov complexity and conforms with radical forms of transhumanism

Infeasibility of "classical" AGI and feasibility of mind uploading should be scientifically provable.

So: My position is very different from MIRI's. Nevertheless I think LessWrong is very interesting and useful (in particular I'm all for promoting rationality) and MIRI is doing very interesting and useful research. Does it count as "massively wrong"?

Comment author: bokov 13 January 2014 05:56:29PM 1 point [-]

It would look like a failure to adequately discount for inferential chain length.

Comment author: Lalartu 13 January 2014 02:19:55PM 11 points [-]

Well, why do you think socialism is so horribly wrong? During the 20th century socialists more or less won and got what they wanted. Things like social security, governmental control over business and redistribution of wealth in general are all socialist. This all may be bad from some point of view, but that view is in no way mainstream.

Then, those guys whom you mention in your article called themselves communists and Marxists. At most, they considered socialism an intermediate stage in building communism. And communism went bad because it was founded on wrong assumptions about how both the economy and human psychology work. So, which MIRI/LessWrong assumptions could be wrong and cause a lot of harm? Well, here are some examples.

1) Building FAI is possible, and there is a reliable way to tell if it is truly FAI before launching it. Result if wrong: paperclips.

2) Building FAI is much more difficult than AI. Launching a random AI is civilization-level suicide. Result if this idea becomes widespread: we don't launch any AI before civilization runs out of resources or collapses for some other reason.

3) Consciousness is a sort of optional feature; intelligence can work just as well without it. We can reliably say whether a given intelligence is a person. In other words, the real world works the same way as in Peter Watts's "Blindsight". Results if wrong: many, among them the classic sci-fi AI rebellion.

4) Subscribing for cryonics is generally a good idea. Result if widespread: these costs significantly contribute to worldwide economic collapse.

Comment author: Chrysophylax 13 January 2014 02:59:56PM 1 point [-]

4) Subscribing for cryonics is generally a good idea. Result if widespread: these costs significantly contribute to worldwide economic collapse.

Under the assumption that cryonics patients will never be unfrozen, cryonics has two effects. Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.

The second effect is in increasing the rate of circulation of the currency; freezing corpses that will never be revived is pretty close to burying money, as Keynes suggested. Widespread, sustained cryonic freezing would certainly have stimulatory, and thus inflationary, effects; I would anticipate a slightly higher inflation rate and an ambiguous effect on economic growth. The effects would be very small, however, as cryonics is relatively cheap and would presumably grow cheaper. The average US household wastes far more money and real resources by not recycling or closing curtains and by allowing food to spoil.

Comment author: gwern 13 January 2014 06:42:16PM 7 points [-]

Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.

How does this connect with the funding process of cryonics? When someone signs up and buys life insurance, they are forgoing consumption of the premiums during their lifetime and in effect investing them in the wider economy via the insurance company's investments in bonds etc.; when they die and the insurance is cashed in for cryonics, some of it gets used on the process itself, but a lot goes into the trust fund, where again it is invested in the wider economy. The trust fund uses the return for expenses like liquid nitrogen, but it's supposed to be using only part of the return (so the endowment builds up and there's protection against disasters), and in any case, society's gain from the extra investment should exceed the fund's return (since why would anyone offer the fund investments on which they would take a loss and overpay the fund?). And this gain ought to compound over the long run.

So it seems to me that the main effect of cryonics on the economy is to increase long-term growth.
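
A toy numbers-check of the compounding point (the $1,000/year premium and 5% annual return below are made-up figures, not actual cryonics economics):

```python
# Toy illustration only: the same stream of premiums, either invested
# (compounding via the insurer / trust fund) or consumed each year.
# The $1,000 premium and 5% return are made-up numbers.
premium, years, r = 1000.0, 50, 0.05

invested = 0.0
for _ in range(years):
    invested = invested * (1 + r) + premium  # prior balance compounds, new premium added

consumed = premium * years  # same cash flow, consumed instead of invested
print(round(invested), round(consumed))  # ~209348 vs 50000
```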

Comment author: lmm 13 January 2014 10:49:03PM 0 points [-]

Money circulates more when used for short-term consumption than for long-term investment, no? So I'd expect a shift from the former to the latter to slow economic growth.

Comment author: gwern 13 January 2014 10:53:56PM 2 points [-]

I don't follow. How can consumption increase economic growth when it comes at the cost of investment? Investment is what creates economic output.

Comment author: Chrysophylax 14 January 2014 11:28:13AM -2 points [-]

There is such a thing as overinvestment. There is also such a thing as underconsumption, which is what we have right now.

Comment author: RolfAndreassen 16 January 2014 06:23:52AM 1 point [-]

Can you define either one without reference to value judgements? If not, I suggest you make explicit the value judgement involved in saying that we currently have underconsumption.

Comment author: Chrysophylax 16 January 2014 09:33:44PM -2 points [-]

Yes, due to those being standard terms in economics. Overinvestment occurs when investment is poorly allocated due to overly-cheap credit and is a key concept of the Austrian school. Underconsumption is the key concept of Keynesian economics and the economic views of every non-idiot since Keynes; even Friedman openly declared that "we are all Keynesians now". Keynesian thought, which centres on the possibility of prolonged deficient demand (like what caused the recession), wasn't wrong, it was incomplete; the reason fine-tuning by demand management doesn't work simply wasn't known until we had the concept of the vertical long-run Phillips curve. Both of these ideas are currently being taught to first-year undergraduates.

Comment author: lmm 15 January 2014 12:48:30PM 0 points [-]

Economic activity, i.e. positive-sum trades, is what generates economic output (that and direct labour). Investment demand and consumption demand can both lead to economic activity. AIUI the available evidence is that, in the current economy, a marginal dollar produces a greater increase in economic activity when spent on consumption than on investment.

Comment author: RolfAndreassen 16 January 2014 06:21:42AM 1 point [-]

I think you are failing to make a crucial distinction: positive-sum trades do not generate economic activity, they are economic activity. Investment generates future opportunities for such trades.

Comment author: ChristianKl 13 January 2014 12:10:17PM *  7 points [-]

It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.

Otherwise the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics we do have people with mainstream views but we also have people who think that democracy is wrong. Having such a diversity of ideas makes it difficult for all of LessWrong to be wrong.

Some people paint a picture of LessWrong as a crowd of people who believe that everyone should do cryonics. In reality most of the participants aren't signed up for cryonics.

Take a figure like Nassim Taleb. He's frequently quoted on LessWrong so he's not really outside the LessWrong memeplex. But he's also a Christian.

There are a lot of memes floating around in the LessWrong memeplex that are present at a basic level but that most people don't take to their full conclusion.

So, how might we find that all these ideas are massively wrong?

It's a topic that's very difficult to talk about. Basically you try out different ideas and look at the effects of those ideas in the real world. Mainly because of QS data I delved into the system of Somato-Psychoeducation. The data I measured was improvement in a health variable. It was enough to get over the initial barrier to go inside the system. But now I can think inside the system, and there's a lot going on which I can't put into good metrics.

There is, however, no way to explain the framework in an article. Most people who read the introductory book don't get the point before they've spent years experiencing the system from the inside.

It's the very nature of things that are really outside the memeplex that they're not easily expressible by ideas inside the memeplex in a way that won't be misunderstood.

Comment author: Ishaan 13 January 2014 09:36:06PM *  8 points [-]

It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.

That's not the LW-memeplex being wrong, that's just a LW-meme which is slightly more pessimistic than the more customary "the vast majority of all AIs are unfriendly, but we might be able to make this work" view. I don't think any high-profile LWers who believed this would be absolutely shocked to find out that it was too optimistic.

MIRI-LW being plausibly wrong about AI friendliness is more like: "Actually, all the fears about unfriendly AI were completely overblown. Self-improving AIs don't actually "FOOM" dramatically ... they simply get smarter at the same exponential rate that the rest of the humans+tech system has been getting smarter all this time. There isn't much practical danger of them rapidly outracing the rest of the system and seizing power and turning us all into paperclips, or anything like that."

If that sort of thing were true, it would imply that a lot of prominent rationalists have been wasting time (or at least, doing things which end up being useful for reasons entirely different from the reasons they were supposed to be useful for).

Comment author: ChristianKl 13 January 2014 10:05:43PM 1 point [-]

If it's impossible to build FAI, that might mean that one should in general discourage technological development to prevent AGI from being built.

That might mean building moral frameworks that allow for effective prevention of technological development. I do think that differs significantly from the current LW-memeplex.

Comment author: Ishaan 13 January 2014 10:50:01PM *  2 points [-]

What I mean is... the difference between "FAI is possible but difficult" and "FAI is impossible and all AI are uFAI" is like the difference between "A narrow subset of people go to heaven instead of hell" and "Every human goes to hell". Those two beliefs are mostly identical.

Whereas "FOOM doesn't happen and there is no reason to worry about AI so much" is analogous to "belief in an afterlife is unfounded in the first place". That's a massively different idea.

In one case, you're committing a little heresy within a belief system. In the other, the entire theoretical paradigm was flawed to begin with. If it turns out that "all AI are UFAI" is true, then Lesswrong/MIRI would still be a lot more correct about things than most other people interested in futurology / transhumanism because they got the basic theoretical paradigm right. (Just like, if it turned out hell existed but not heaven, religionists of many stripes would still have reason to be fairly smug about the accuracy of their predictions even if none of the actions they advocated made a difference)

Comment author: RolfAndreassen 16 January 2014 06:26:32AM 0 points [-]

like the difference between "A narrow subset of people go to heaven instead of hell" and "Every human goes to hell". Those two beliefs are mostly identical.

Mostly identical as far as theology is concerned, but very different in terms of the optimal action. In the first case, you want (from a selfish-utilitarian standpoint) to ensure that you're in the narrow subset. In the second, you want to overthrow the system.

Comment author: RomeoStevens 13 January 2014 09:13:39AM 3 points [-]

We should be wary of ideologies that involve one massive failure point....crap.

Comment author: therufs 17 March 2014 03:34:23PM -1 points [-]

What's the most useful thing for a non-admin to do with/about wiki spam?

Comment author: seez 05 February 2014 08:04:10AM 2 points [-]

Why hasn't anyone ever come back from the future and stopped us all from suffering, making it so we never experience horrible things? Does that mean we never learn time travel, or at least time travel plus a way to make the original tough experiences be un-experienced?

Comment author: metatroll 05 February 2014 09:51:06AM *  1 point [-]

Whenever they invent time travel, they discover that the ability to change the past becomes the biggest cause of suffering, so in the end they always un-invent it.

Comment author: seez 05 February 2014 09:17:32AM 0 points [-]

And, similarly, should I be depressed that there currently exists NO alien species with the inclination+ability to eliminate horrific suffering in all sentient life-forms?

Comment author: Kaj_Sotala 15 January 2014 04:37:01PM *  4 points [-]

Suppose someone has a preference to have sex each evening, and is in a relationship with someone with a similar level of sexual desire. So each evening they get into bed, undress, make love, get dressed again, get out of bed. Repeat the next evening.

How is this different from having exploitable circular preferences? After all, the people involved clearly have cycles in their preferences - first they prefer getting undressed to not having sex, after which they prefer getting dressed to having (more) sex. And they're "clearly" being the victims of a Dutch Book, too - they keep repeating this set of trades every evening, and losing lots of time because of that.

To me this seems to suggest that having circular preferences isn't necessarily the bad thing that it's often made out to be - after all, the people in question probably wouldn't say that they're being exploited. But maybe I'm missing something.

Comment author: Alejandro1 15 January 2014 05:21:36PM *  7 points [-]

The circular preferences that go against the axioms of utility theory, and which are Dutch book exploitable, are not of the kind "I prefer A to B at time t1 and B to A at time t2", like the ones of your example. They are more like "I prefer A to B and B to C and C to A, all at the same time".

The couple, if they had to pay a third party a cent to get undressed and then a cent to get dressed, would probably do it and consider it worth it---they end up two cents short but having had an enjoyable experience. Nothing irrational about that. To someone with the other "bad" kind of circular preferences, we can offer a sequence of trades (first A for B and a cent, then C for A and a cent, then B for C and a cent) after which they end up three cents short but otherwise exactly as they started (they didn't actually obtain enjoyable experiences, they made all the trades before anything happened). It is difficult to consider this rational.
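
To make the pump concrete, here is a minimal Python sketch; the goods A/B/C, the one-cent fee, and the simple agent model are all illustrative assumptions:

```python
# A minimal money-pump sketch. The agent holds one good and, all at
# the same time, prefers A to B, B to C, and C to A.

# Maps the good the agent currently holds to the good it prefers.
prefers = {"B": "A", "A": "C", "C": "B"}

def money_pump(start_good, n_trades, fee_cents=1):
    """Repeatedly offer the agent the good it prefers to its current
    one, charging a small fee for each trade it accepts."""
    good, paid = start_good, 0
    for _ in range(n_trades):
        good = prefers[good]  # the agent gladly accepts the swap...
        paid += fee_cents     # ...and pays a cent for the privilege
    return good, paid

good, paid = money_pump("B", 3)
print(good, paid)  # B 3: holding exactly what it started with, three cents short
```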

Comment author: Kaj_Sotala 15 January 2014 06:50:10PM 1 point [-]

Okay. But that still makes it sound like there would almost never be actual real-life cases where you could clearly say that the person exhibited circular preferences? At least I can't think of any real-life scenario that would be an example of the way you define "bad" circular preferences.

Comment author: ChristianKl 16 January 2014 02:59:17PM 2 points [-]

I think there are plenty of cases where people prefer sitting in front of their computer today over going to the fitness studio today, while preferring going to the fitness studio tomorrow over sitting in front of their computer tomorrow.

Changing a near frame to a far frame changes preferences. I know that's not an exact example of a Dutch Book, but it illustrates the principle that framing matters.

I don't think it's hard to get people into a laboratory and offer them different food choices to get a case where a person prefers A to B, C to A, and B to C.

I think it's difficult to find real-life cases that fit a general model because we don't use the idea as a phenomenal primitive, and therefore don't perceive those situations as general situations where people are acting normally, but see those situations as exceptions where people are being weird.

Comment author: asr 29 January 2014 01:57:12AM *  0 points [-]

I feel like it happens to me in practice routinely. I see options A, B, C and D and I keep oscillating between them. I am not indifferent; I perceive pairwise differences but can't find a global optimum. This can happen in commonplace situations, e.g., when choosing between brands of pasta sauce or somesuch. And I'll spend several minutes agonizing before finally picking one.

I had the impression this happened to a lot of people.

Comment author: RichardKennaway 29 January 2014 07:24:33AM 1 point [-]

That looks like noisy comparisons being made on near indistinguishable things. (Life tip: if they're too difficult to distinguish, it hardly matters which you choose. Just pick one already.)

Comment author: Douglas_Knight 29 January 2014 01:05:45AM 0 points [-]

I can't find it anymore, but years ago I found on LW a recording of an interview with someone who had exhibited circular preferences in an experiment.

Comment author: Alejandro1 15 January 2014 10:50:42PM *  0 points [-]

The Allais paradox is close to being one such example, though I don't know if it can be called "real-life". There may be marketing schemes that exploit the same biases.

A philosophical case where I feel my naive preferences are circular is torture vs. dust specks. As I said here:

I prefer N years of torture for X people to N years minus 1 second of torture for 1000X people, and any time of torture for X people over the same time of very slightly less painful torture for 1000X people, and yet I prefer a very slight momentary pain for any number of people, however large, to 50 years of torture for one person.

If I ever reverse the latter preference, it will be because I will have been convinced by theoretical/abstract considerations that non transitive preferences are bad (and because I trust the other preferences in the cycle more), but I don't think I will ever introspect it as a direct preference by itself.

Comment author: newerspeak 15 January 2014 02:45:54AM *  5 points [-]

What are your best arguments against the reality/validity/usefulness of IQ?

Improbable or unorthodox claims are welcome; appeals that would limit testing or research even if IQ's validity is established are not.

Comment author: Eugine_Nier 15 January 2014 05:21:39AM 2 points [-]

IQ, or intelligence as commonly understood, is a poor proxy for rationality. In many cases it simply makes people better at rationalizing beliefs they acquired irrationally.

Comment author: pragmatist 15 January 2014 04:41:25AM *  2 points [-]

These are not my arguments, since I haven't thought about the issue enough. However, the anthropologist Scott Atran, in response to the latest Edge annual question, "What Scientific Idea is Ready for Retirement?", answered "IQ". Here's his response:

There is no reason to believe, and much reason not to believe, that the measure of a so-called "Intelligence Quotient" in any way reflects some basic cognitive capacity, or "natural kind" of the human mind. The domain-general measure of IQ is not motivated by any recent discovery of cognitive or developmental psychology. It thoroughly confounds domain-specific abilities—distinct mental capacities for, say, geometrical and spatial reasoning about shapes and positions, mechanical reasoning about mass and motion, taxonomic reasoning about biological kinds, social reasoning about other people's beliefs and desires, and so on—which are the only sorts of cognitive abilities for which an evolutionary account seems plausible in terms of natural selection for task-specific competencies.

Nowhere in the animal or plant kingdoms does there ever appear to have been natural selection for a task-general adaptation. An overall measure of intelligence or mental competence is akin to an overall measure for "the body," taking no special account of the various and specific bodily organs and functions, such as hearts, lungs, stomach, circulation, respiration, digestion and so on. A doctor or biologist presented with a single measure for "Body Quotient" (BQ) wouldn't be able to make much of it.

IQ is a general measure of socially acceptable categorization and reasoning skills. IQ tests were designed in behaviorism's heyday, when there was little interest in cognitive structure. The scoring system was tooled to generate a normal distribution of scores with a mean of 100 and a standard deviation of 15.

In other societies, a normal distribution of some general measure of social intelligence might look very different, in that some "normal" members of our society could well produce a score that is a standard deviation from "normal" members of another society on that other society's test. For example, in forced-choice tasks East Asian students (China, Korea, Japan) tend to favor field-dependent perception over object-salient perception, thematic reasoning over taxonomic reasoning, and exemplar-based categorization over rule-based categorization.

American students generally prefer the opposite. On tests that measure these various categorization and reasoning skills, East Asians average higher on their preferences and Americans average higher on theirs. There is nothing particularly revealing about these different distributions other than that they reflect some underlying socio-cultural differences.

There is a long history of acrimonious debate over which, if any, aspects of IQ are heritable. The most compelling studies concern twins raised apart and adoptions. Twin studies rarely have large sample populations. Moreover, they often involve twins separated at birth because a parent dies or cannot afford to support both, and one is given over to be raised by relatives, friends or neighbors. This disallows ruling out the effects of social environment and upbringing in producing convergence among the twins. The chief problem with adoption studies is that the mere fact of adoption reliably increases IQ, regardless of any correlation between the IQs of the children and those of their biological parents. Nobody has the slightest causal account of how or why genes, singly or in combination, might affect IQ. I don't think it's because the problem is too hard, but because IQ is a specious rather than natural kind.

Comment author: Douglas_Knight 15 January 2014 03:32:18PM *  5 points [-]

Which of reality, validity, and usefulness is this an argument against? All three? None?

Added: I don't know what it would mean for IQ to be "real." Maybe this is an argument that IQ is not real. Maybe it is an argument that IQ is not ontologically fundamental. But it seems to me little different from arguing that total body weight, BMI, or digit length ratio are not "real"; or even arguing that temperature is not "real," either the temperature of the body or the temperature of an ideal gas. The BQ sentence seems to assert that this kind of unreality implies that IQ is not useful, but I'd hardly call that an argument.

Comment author: pragmatist 17 January 2014 04:26:15AM *  1 point [-]

I tend to interpret "Is X real?" more or less as "Is X a part of the best predictive theory of the relevant domain?" This doesn't require an object/property to be ontologically fundamental, since our best (all things considered) theories of macroscopic domains include reference to macroscopic (non-fundamental) properties.

According to this standard, Atran is arguing that IQ is not real, I think. Temperature would be real (as far as we know), but maybe BMI wouldn't? I don't know enough about the relevant science to make that judgment.

Anyway, given my preferred pragmatist way of thinking about ontology, there isn't much difference between the reality, validity and usefulness of a concept.

Comment author: Douglas_Knight 17 January 2014 05:13:15AM 2 points [-]

I tend to interpret "Is X real?" more or less as "Is X a part of the best predictive theory of the relevant domain?"

It seems excessive to me to define real as a superlative. Isn't it enough to be part of some good predictive theory? Shalizi explicitly takes this position, but it seems insane to me. He very clearly says that he rejects IQ because he thinks that there is a better model. It's not that he complains that people are failing to adopt a better model, but that they are failing to develop a better model. To the extent that Atran means anything, he appears to mean the same thing.

I think the difference between usefulness and validity is that usefulness is a cost-benefit analysis, considering the cost of using the model in a useful domain.

Comment author: pragmatist 17 January 2014 05:42:21AM *  0 points [-]

Lorentz ether theory is a good predictive theory, but I don't want to say that ether is real. In general, if there's a better theory currently available that doesn't include property X, I'd say we're justified in rejecting the reality of X.

I do agree that if there's no better theory currently available, it's a bit weird to say "I reject the reality of X because I'm sure we're going to come up with a better theory at some point." Working with what you have now is good epistemic practice in general. But it is possible that your best current theory is so bad at making predictions that you have no reason to place any substantive degree of confidence in its ontology. In that case, I think it's probably a good idea to withhold ontological commitment until a better theory comes along.

Again, I don't know enough about IQ research to judge which, if any, of these scenarios holds in that field.

Comment author: ChristianKl 16 January 2014 02:48:36PM 1 point [-]

What do arguments against "reality" mean?

Arguing against misconceptions of what people believe about IQ? In general I consider arguments against reality pointless. Asking whether IQ is real is the wrong question. It makes much more sense to ask what IQ scores mean.

Comment author: SuspiciousTitForTat 15 January 2014 02:30:26AM 1 point [-]

Society, through survival in the survival-of-the-fittest sense, stimulates people to be of service: to be interesting, useful, effective, and even altruistic.

I suspect, and would like to know your opinion, that we are, for that social and traditional reason, biased against a life of personal hedonic exploration, even if for some particular kinds of minds that means, literally: reading internet comics, downloading movies and multiplayer games for free, exercising near your home, having a minimal number of friends and relationships, masturbating frequently, and eating unhealthily for as long as the cash lasts.

So two questions, do you think we are biased against these things, and do you think doing this is a problem?

Comment author: DanielLC 16 January 2014 01:04:59AM 1 point [-]

What do you mean by biased? Is there a difference between being biased towards something and desiring to do it?

Comment author: Viliam_Bur 16 January 2014 09:47:29AM 0 points [-]

For example, a bias could be if your prediction of how much you will enjoy X is systematically smaller than how much you actually do enjoy X when you are doing it.

Comment author: DanielLC 17 January 2014 01:51:17AM 0 points [-]

So what you're asking is if people are good at maximizing their own happiness?

We are not. Our happiness is set up to make sure we maximize inclusive genetic fitness. Rather than fixing a bias, evolution can simply account for it. For example, the joy of sex does not compare with the discomfort of pregnancy, but due to time discounting, it's enough to make women want to have sex.

As for what would maximize happiness, I'm not an expert. You'd need to ask a psychologist. I'm given to understand that doing things that at first appear to make you happy will tend to reset your hedonic setpoint and have little effect. The obvious conclusion from that is that no matter what you do, your happiness will be the same, but I'm pretty sure that's not right either. People can change how generally happy they are.

I am in favor of happiness, so all else being equal, I'd prefer it if people were more successful at making themselves happy.

Comment author: somervta 16 January 2014 04:44:23AM 0 points [-]

what do you mean by 'personal hedonic exploration'? The things you list don't sound very exploratory...

Comment author: CronoDAS 14 January 2014 08:30:29AM 1 point [-]

I have tremendous trouble with hangnails. My cuticles start peeling a little bit, usually near the center of the base of my nail, and then either I remove the peeled piece (by pulling or clipping) or it starts getting bigger and I have to cut it off anyway. That leaves a small hole in my cuticle, the edges of which start to wear away and peel more, which makes me cut away more. This goes on until my fingertips are a big mess, often involving bleeding and bandages. What should I do with my damaged cuticles, and how do I stop this cycle from starting in the first place?

Comment author: dougclow 14 January 2014 01:52:43PM 2 points [-]

To repair hangnails: Nail cream or nail oil. I had no idea these products existed, but they do, and they are designed specifically to deal with this problem, and do a very good job IME. Regular application for a few days fixes my problems.

To prevent it: Keep your hands protected outside (gloves). Minimise exposure of your hands to things that will strip water or oil from them (e.g. detergent, soap, solvents, nail varnish, nail varnish remover), and when you can't avoid those, use moisturiser afterwards to replace the lost oil.

(Explanation: Splitting/peeling nails is usually due to insufficient oil or, more rarely, moisture. I've heard some people take a paleo line that we didn't need gloves and moisturiser and nail oil in the ancestral environment. Maybe, but we didn't wash our hands with detergent multiple times a day then either.)

Comment author: CronoDAS 16 January 2014 06:59:17AM 1 point [-]

It's not the nail itself, it's the skin around the nail...

Comment author: dougclow 17 January 2014 10:00:10AM 1 point [-]

Yes - that's the part I too have trouble with, and that these products and practices help. They also help the nail itself, but fewer people tend to have that problem.

In my explanation I should've said "Splitting/peeling nails, and troubles with the skin around them, are usually due to insufficient oil ...", sorry.

There's no reason why you should trust a random Internet person like me with health advice. But think cost/expected benefit. If your hangnails are anything like as painful and distracting as mine were, trying out a tube of nail cream, moisturiser, and a pair of gloves for a week is a small cost compared to even an outside chance that it'll help. (Unless the use of such products causes big problems for your self image.)

Comment author: CronoDAS 19 January 2014 04:16:23AM 1 point [-]

I'll see if I can find any nail cream at my local supermarket, then. How often should I apply it?

There's no reason why you should trust a random Internet person like me with health advice.

I've seen similar advice on various web pages after I did a Google search on the problem, too. Which means that it's many random Internet people, which is slightly more trustworthy. ;)

Comment author: dougclow 15 February 2014 02:48:20PM 0 points [-]

:)

I got mine in a large pharmacist, in case you're still looking.

How often should I apply it?

I'd be guided by the instructions on the product and your common sense.

For me, a single application is usually enough these days - so long as I've been able to leave it on for ages and not have to wash my hands. The first time I used it, when my fingernails had got very bad, it took about three or four applications over a week. Then ordinary hand moisturiser and wearing gloves outside is enough for maintenance. Then I get careless and forget and my fingernails start getting bad again and the cycle repeats! But I'm getting better at noticing, so the cycles are getting shallower, and I've not actually had to use the nail cream at all so far this winter. (Although it hasn't been a very cold one where I am.)

(Almost a month late, sorry.)

Comment author: ciphergoth 17 January 2014 06:25:18PM 1 point [-]

I would take a recommendation from Doug as strong evidence that something is a good idea, FWIW.

Comment author: Chrysophylax 14 January 2014 11:56:53AM -1 points [-]

Nail polish base coat over the cuticle might work. Personally I just try not to pick at them. I imagine you can buy base coat at the nearest pharmaceuticals store, but asking a beautician for advice is probably a good idea; presumably there is some way that people who paint their nails prevent hangnails from spoiling the effect.

Comment author: Halfwitz 14 January 2014 01:38:50AM *  5 points [-]

How much does a genius cost? MIRI seems intent on hiring a team of geniuses. I’m curious about what the payroll would look like. One of the conditions of Thiel’s donations was that no one employed by MIRI can make more than one hundred thousand dollars a year. Is this high enough? One of the reasons I ask is that I just read a story about how Google pays an extremely talented programmer over 3 million dollars per year - doesn't MIRI also need extremely talented programmers? Do they expect the most talented to be more likely to accept a lower salary for a good cause?

Comment author: Chrysophylax 14 January 2014 12:03:18PM -1 points [-]

Eliezer once tried to auction a day of his time but I can't find it on ebay by Googling.

On an unrelated note, the top Google result for "eliezer yudkowsky " (note the space) is "eliezer yudkowsky okcupid". "eliezer yudkowsky harry potter" is ninth, while HPMOR, LessWrong, CFAR and MIRI don't make the top ten.

Comment author: kalium 15 January 2014 02:13:35AM 0 points [-]

I suspect more of the price comes from his reputation than his intelligence.

Comment author: drethelin 14 January 2014 10:14:23PM 1 point [-]

I believe Eliezer started the bidding at something like 4000 dollars.

Comment author: DanArmak 18 January 2014 02:58:53PM 1 point [-]

But where did it end?

Comment author: drethelin 19 January 2014 09:28:53PM 2 points [-]

There were no bids

Comment author: ChrisHallquist 15 January 2014 06:51:31AM 2 points [-]

Highly variable with skills, experience, and how badly they want the job. I bet there are some brilliant adjunct professors out there effectively making minimum wage because they really wanted to be professors. OTOH, I bet that google programmer isn't just being paid for talent, but specific skills and experience.

Comment author: D_Alex 03 February 2014 05:32:17AM 1 point [-]

From some WSJ article:

The setting of Einstein's initial salary at Princeton illustrates his humility and attitude toward wealth. According to "Albert Einstein: Creator & Rebel" by Banesh Hoffmann, (1972), the 1932 negotiations went as follows: "[Abraham] Flexner invited [Einstein] to name his own salary. A few days later Einstein wrote to suggest what, in view of his needs and . . . fame, he thought was a reasonable figure. Flexner was dismayed. . . . He could not possibly recruit outstanding American scholars at such a salary. . . . To Flexner, though perhaps not to Einstein, it was unthinkable [that other scholars' salaries would exceed Einstein's.] This being explained, Einstein reluctantly consented to a much higher figure, and he left the detailed negotiations to his wife."

The reasonable figure that Einstein suggested was the modest sum of $3,000 [about $46,800 in today's dollars]. Flexner upped it to $10,000 and offered Einstein an annual pension of $7,500, which he refused as "too generous," so it was reduced to $6,000. When the Institute hired a mathematician at an annual salary of $15,000, with an annual pension of $8,000, Einstein's compensation was increased to those amounts.

Comment author: DanArmak 18 January 2014 03:04:59PM 1 point [-]

Many such geniuses (top intellectual performers in fields where they can out-perform the median by several orders of magnitude) choose their work not just on the basis of payment, but what they work on, where, how, and with whom (preferring the company of other top performers).

If MIRI were to compete with Google at hiring programmers, I expect money would be important but not overwhelmingly so. Google lets you work with many other top people in your field, develop and use cool new tech, and have big resources for your projects, and it provides many non-monetary workplace benefits. MIRI lets you contribute to existential risk reduction, work with rationalists, etc.

Comment author: ChristianKl 14 January 2014 11:47:01AM 3 points [-]

Do they expect the most talented to be more likely to accept a lower salary for a good cause?

Yes. Anyone with the necessary mindset of thinking that AI is the most important issue in the world will accept a lower salary than what's possible elsewhere in the market.

I don't know whether MIRI has an interest in hiring people who don't have that moral framework.

Comment author: Dan_Weinand 14 January 2014 06:34:57AM 3 points [-]

Two notes: First, the term "genius" is difficult to define. Someone may be a "genius" at understanding the sociology of sub-Saharan African tribes, but this skill will obviously command a much lower market value compared to someone who is a "genius" as a chief executive officer of a large company. A more precise definition of genius will narrow the range of costs per year.

Second, and related to the first, MIRI is (to the extent of my knowledge) currently focusing on mathematics and formal logic research rather than programming. This makes recruiting a team of "geniuses" much cheaper. While skilled mathematicians can attract quite strong salaries, highly skilled programmers can demand significantly more. It seems the most common competing job for MIRI's researchers would be that of a mathematics professor (which has a median salary of ~$88,000). Based on this, MIRI could likely hire high-quality mathematicians while offering them relatively competitive salaries.

Comment author: Torello 13 January 2014 11:28:29PM 3 points [-]

Doesn't cryonics (and subsequent rebooting of a person) seem obviously too difficult? People can't keep cars running indefinitely, wouldn't keeping a particular consciousness running be much harder?

I hinted at this in another discussion and got downvoted, but it seems obvious to me that the brain is the most complex machine around, so wouldn't it be tough to fix? Or does it all hinge on the "foom" idea where every problem is essentially trivial?

Comment author: ChristianKl 14 January 2014 11:49:01AM 1 point [-]

There are vintage cars that seem to have no problem running "indefinitely", provided you fix parts here and there.

Comment author: RomeoStevens 14 January 2014 01:13:55AM 1 point [-]

Error checking on solid-state silicon is much easier than error checking on neurons.

Comment author: VAuroch 16 January 2014 06:45:22AM -1 points [-]

We know a lot more about solid state silicon than neurons. When we understand neurons as well as we currently do solid state silicon, I see no reason why error checking on them should be harder than error checking on silicon is now.

Comment author: Calvin 13 January 2014 11:33:15PM *  2 points [-]

Most of the explanations found on cryonics sites do indeed seem to base their arguments on the hopeful claim that, given the nanotechnology and science of the future, every problem connected to (as you say) rebooting would become essentially trivial.

Comment author: Luke_A_Somers 15 January 2014 02:26:15PM 0 points [-]

Too difficult for whom? Us, now? Obviously. Later? Well, how much progress are you willing to allow for 'too difficult' to become 'just doable'?

Comment author: diegocaleiro 13 January 2014 06:16:45PM 2 points [-]

When non-utilitarian rationalists consider big life changes, it seems to me that they don't do it based on how happy it will make them. Why?

Utilitarians could say they are trying to maximize the World's something.

But non-utilitarians, like I used to be, and like most here still are, are just... doing it like everyone else does it! "Oh, that seems like a cool change, I'll do it! Yay!" Then two weeks later that particular thing has none of the coolness effect it had before, but they are stuck with the decision for years... (for decisions like jobs, partners, quitting smoking, big travels - big decisions, not ice-cream-flavour stuff)

So, why don't rationalists use data driven happiness research, and reasoning in the happiness spectrum, to decide their stuff?

Comment author: pragmatist 15 January 2014 04:36:27AM 1 point [-]

When non-utilitarian rationalists consider big life changes, it seems to me that they don't do it based on how happy it will make them. Why?

"Non-utilitarian" doesn't equate to "ethical egoist". I'm not a utilitarian, but I still think my big life decisions are subject to ethical constraints beyond what will make me happy. It's just that the constraint isn't always (or even usually) the maximization of some aggregate utility function.

Comment author: Dahlen 14 January 2014 12:13:17AM 6 points [-]

When non-utilitarian rationalists consider big life changes, it seems to me that they don't do it based on how happy it will make them. Why?

I don't know the extent to which this applies to other people, but for me (a non-utilitarian) it does, so here's my data point which may or may not give you some insight into how other non-utilitarians judge these things.

I can't really say I value my own happiness much. Contentment / peace of mind (=/= happiness!) and meaningfulness are more like what I aim for; happiness is too fleeting, too momentary to seek it out all the time. I'm also naturally gloomy, and overt displays of cheerfulness just don't hold much appeal for me, in an aesthetic sense. (They get me thinking of those fake ad people and their fake smiles. Nobody can look that happy all the time without getting paid for it!) There simply are more important things in life than my own happiness; that one can be sacrificed, if need be, for the sake of a higher value. I suppose it's just like those utilitarians you're talking about who are "trying to maximize the world's something" rather than their own pleasure, only we don't think of it in a quantitative way.

But non-utilitarians, like I used to be, and like most here still are, are just... doing it like everyone else does it! "Oh, that seems like a cool change, I'll do it! Yay!" Then two weeks later that particular thing has none of the coolness effect it had before, but they are stuck with the decision for years... (for decisions like jobs, partners, quitting smoking, big travels - big decisions, not ice-cream-flavour stuff)

Well... that's a rather unflattering way of putting it. You don't have to compute utilities in order for your decision-making process to look a wee little more elaborate than that.

Comment author: ChristianKl 13 January 2014 10:11:26PM 1 point [-]

I don't think the predictive power of models built from data-driven happiness research is very high. I wouldn't ignore the research completely, but there's nothing rational about using a model just because it's data-based if nobody has shown that the model is useful for prediction in the relevant domain.

Comment author: cata 13 January 2014 09:01:08PM *  2 points [-]

I know a lot of LW-ish people in the Bay Area and I see them explicitly thinking carefully about a lot of big life changes (e.g. moving, relationships, jobs, what habits to have) in just the way you recommended. I don't know if it has something to do with utilitarianism or not.

I'm personally more inclined to think in that way than I was a few years ago, and I think it's mostly because of the social effects of hanging out with & looking up to a bunch of other people who do so.

Comment author: [deleted] 13 January 2014 04:04:56PM *  3 points [-]

On the Neil Degrasse Tyson Q&A on reddit, someone asked: "Since time slows relative to the speed of light, does this mean that photons are essentially not moving through time at all?"

Tyson responded "yes. Precisely. Which means ----- are you seated? Photons have no ticking time at all, which means, as far as they are concerned, they are absorbed the instant they are emitted, even if the distance traveled is across the universe itself."

Is this true? I find it confusing. Does this mean that a photon emitted at location A at t0 is absorbed at location B at t0, such that it's at two places at once? In what sense does the photon 'travel' then? Or is the thought that the distance traveled, as well as the time, goes to zero?

Comment author: DanielLC 13 January 2014 11:01:29PM 1 point [-]

You can't build a clock with a photon.

You can't build a clock with an electron either. You can build one with a muon though, since it will decay after some interval. It's not very accurate, but it's something.

In general, you cannot build a clock moving at light speed. You could build a clock with two photons. Measure the time by how close they are together. But if you look at the center of mass of this clock, it moves slower than light. If it didn't, the photons would have to move parallel to each other, but then they can't be moving away from each other, so you can't measure time.

Comment author: [deleted] 14 January 2014 02:53:42AM *  1 point [-]

I'm not sure what the significance of building a clock is...but then, I'm not sure I understand what clocks are. Anyway, isn't 'you can't build a clock on a photon' just what Tyson meant by 'Photons have no ticking time at all'?

Comment author: DanielLC 14 January 2014 03:44:09AM 1 point [-]

Anyway, isn't 'you can't build a clock on a photon' just what Tyson meant by 'Photons have no ticking time at all'?

Yes. I meant that he meant that.

Comment author: gjm 13 January 2014 04:31:59PM 4 points [-]

Not quite either of those.

The first thing to say is that "at t0" means different things to different observers. Observers moving in different ways experience time differently and, e.g., count different sets of spacetime points as simultaneous.

There is a relativistic notion of "interval" which generalizes the conventional notions of distance and time-interval between two points of spacetime. It's actually more convenient to work with the square of the interval. Let's call this I.

If you pick two points that are spatially separated but "simultaneous" according to some observer, then I>0 and sqrt(I) is the shortest possible distance between those points for an observer who sees them as simultaneous. The separation between the points is said to be "spacelike". Nothing that happens at one of these points can influence what happens at the other; they're "too far away in space and too close in time" for anything to get between them.

If you pick two points that are "in the same place but at different times" for some observer, then I<0 and sqrt(-I) is the minimum time that such an observer can experience between visiting them. The separation between the points is said to be "timelike". An influence can propagate, slower than the speed of light, from one to the other. They're "too far away in time and too close in space" for any observer to see them as simultaneous.

And, finally, exactly on the edge between these you have the case where I=0. That means that light can travel from one of the spacetime points to the other. In this case, an observer travelling slower than light can get from one to the other, but can do so arbitrarily quickly (from their point of view) by travelling very fast; and while no observer can see the two points as simultaneous, you can get arbitrarily close to that by (again) travelling very fast.

Light, of course, only ever travels at the speed of light (you might have heard something different about light travelling through a medium such as glass, but ignore that), which means that it travels along paths where I=0 everywhere. To an (impossible) observer sitting on a photon, no time ever passes; every spacetime point the photon passes through is simultaneous.

So: does the distance as well as the time go to 0? Not quite. Neither distance nor time makes sense on its own in a relativistic universe. The thing that does make sense is kinda-sorta a bit like "distance minus time" (and more like sqrt(distance-squared minus time-squared)), and that is 0 for any two points in spacetime that are visited by the same photon.

(Pedantic notes: 1. There are two possible sign conventions for the square of the interval. You can say that I>0 for spacelike separations, or say that I>0 for timelike separations. I arbitrarily chose the first of these. 2. There may be multiple paths that light can take between two spacetime points. They need not actually have the same "length" (i.e., interval). Strictly, "interval" is defined only locally; then, for a particular path, you can integrate it up to get the overall interval. 3. In the case of light propagating through a medium other than vacuum, what actually happens involves electrons as well as photons and it isn't just a matter of a photon going from A to B. Whenever a photon goes from A to B it does it, by whatever path it does, at the speed of light.)
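
For concreteness, here's a small Python sketch of the interval computation described above, in units where c = 1 and with the sign convention chosen here (I > 0 for spacelike separations); the sample events are made up:

```python
# Squared interval I between two spacetime events, units where c = 1.
# Sign convention as above: I > 0 spacelike, I < 0 timelike, I = 0 lightlike.

def interval_squared(dt, dx, dy=0.0, dz=0.0):
    """I = (spatial separation)^2 - (time separation)^2."""
    return dx**2 + dy**2 + dz**2 - dt**2

def classify(dt, dx):
    I = interval_squared(dt, dx)
    if I > 0:
        return "spacelike: no influence can pass between the events"
    if I < 0:
        return "timelike: a slower-than-light influence can pass"
    return "lightlike: I = 0, exactly a photon's path"

print(classify(dt=1.0, dx=2.0))  # spacelike
print(classify(dt=2.0, dx=1.0))  # timelike
print(classify(dt=1.0, dx=1.0))  # lightlike
```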

Comment author: [deleted] 14 January 2014 03:07:20AM 1 point [-]

Thanks, that was very helpful, especially the explanation of timelike and spacelike relations.

Comment author: pragmatist 14 January 2014 12:17:47PM *  6 points [-]

Other people have explained this pretty well already, but here's a non-rigorous heuristic that might help. What follows is not technically precise, but I think it captures an important and helpful intuition.

In relativity, space and time are replaced by a single four-dimensional space-time. Instead of thinking of things moving through space and moving through time separately, think of them as moving through space-time. And it turns out that every single (non-accelerated) object travels through space-time at the exact same rate, call it c.

Now, when you construct a frame of reference, you're essentially separating out space and time artificially. Consequently, you're also separating an object's motion through space-time into motion through space and motion through time. Since every object moves through space-time at the same rate, when we separate out spatial and temporal motion, the faster the object travels through space the slower it will be traveling through time. The total speed, adding up speed through space and speed through time, has to equal the constant c.

So an object at rest in a particular frame of reference has all its motion along the temporal axis, and no motion at all along the spatial axes. It's traveling through time at speed c and it isn't traveling through space at all. If this object starts moving, then some of the temporal motion is converted to spatial motion. Its speed through space increases, and its speed through time decreases correspondingly, so that the motion through space-time as a whole remains constant at c. This is the source of time dilation in relativity (as seen in the twin paradox) - moving objects move through time more slowly than stationary objects, or to put it another way, time flows slower for moving objects.

Of course, the limit of this is when the object's entire motion through space-time is directed along the spatial axes, and none of it is directed along the temporal axes. In this case, the object will move through space at c, which turns out to be the speed of light, and it won't move through time at all. Time will stand still for the object. This is what's going on with photons.

From this point of view, there's nothing all that weird about a photon's motion. From the space-time perspective, which after all is the fundamental perspective in relativity, it is moving pretty much exactly like any other object. It's only our weird habit of treating space and time as extremely different that makes the entirely spatial motion of a photon seem so bizarre.
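
A rough numeric illustration of that picture (a sketch only: it just rearranges the standard time-dilation formula, with c = 1 and arbitrary sample speeds):

```python
import math

# One way to make the slogan precise: define the rate of motion
# through time as sqrt(c^2 - v^2), so that speed through space and
# speed through time always combine to c. Units where c = 1.
c = 1.0

def speed_through_time(v):
    return math.sqrt(c**2 - v**2)

for v in [0.0, 0.5, 0.9, 0.999, 1.0]:
    t_speed = speed_through_time(v)
    total = math.hypot(v, t_speed)  # always comes out to c
    print(f"space speed = {v:5.3f}, time speed = {t_speed:5.3f}, combined = {total:.3f}")
```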

Comment author: [deleted] 15 January 2014 03:03:26AM 1 point [-]

That is helpful, and interesting, though I think I remain a bit confused about the idea of 'moving through time' and especially 'moving through time quickly/slowly'. Does this imply some sort of meta-time, in which we can measure the speed at which one travels through time?

And I think I still have my original question: if a photon travels through space at c, and therefore doesn't travel through time at all, is the photon at its starting and its final position at the same moment? If so, in what sense did it travel through space at all?

Comment author: pragmatist 15 January 2014 04:05:31AM *  2 points [-]

Does this imply some sort of meta-time, in which we can measure the speed at which one travels through time?

No it doesn't. Remember, in relativity, time is relative to a frame of reference. So when I talk about a moving object traveling slowly through time, I'm not relativizing its time to some meta-time, I'm relativizing time as measured by that object (say by a clock carried by the object) to time as measured by me (someone who is stationary in the relevant frame of reference). So an object moving slowly through time (relative to my frame of reference) is simply an object whose clock ticks appear to me to be more widely spaced than my clock ticks. In the limit, if a photon could carry a clock, there would appear to me to be an infinite amount of time between its ticks.

I will admit that I was using a bit of expository license when I talked about all objects "moving through space-time" at the constant rate c. While one can make sense of moving through space and moving through time, moving through space-time doesn't exactly make sense. You can replace it with this slightly less attractive paraphrase, if you like: "If you add up a non-accelerating object's velocity through space and its (appropriately defined) rate of motion through time, for any inertial frame of reference, you will get a constant."

And I think I still have my original question: if a photon travels through space at c, and therefore doesn't travel through time at all, is the photon at its starting and its final position at the same moment? If so, in what sense did it travel through space at all?

Again, it's important to realize there are many different "time" parameters in relativity, one for each differently moving object. Also, whether two events are simultaneous is relative to a frame of reference.

Relative to my time parameter (the parameter for the frame in which I am at rest), the photon is moving through space, and it takes some amount of (my) time to get from point A to point B. Relative to its own time parameter, though, the photon is at point A and point B (and every other point on its path) simultaneously. Since I'll never travel as fast as a photon, it's kind of pointless for me to use its frame of reference. I should use a frame adapted to my state of motion, according to which the photon does indeed travel in non-zero time from place to place.

Again, this is all pretty non-technical and not entirely precise, but I think it's good enough to get an intuitive sense of what's going on. If you're interested in developing a more technical understanding without having to trudge through a mathy textbook, I recommend John Norton's Einstein for Everyone, especially chapters 10-12. One significant simplification I have been employing is talking about a photon's frame of reference. There is actually no such thing. One can't construct an ordinary frame of reference adapted to a photon's motion (partly because there is no meaningful distinction between space and time for a photon).

Comment author: Alejandro1 15 January 2014 03:24:52AM 3 points [-]

[Is] the photon at its starting and its final position at the same moment?

At the same moment with respect to whom? That is the question one must always ask in relativity.

The answer is: no, emission and arrival do not occur at the same moment with respect to any actual reference frame. However, as we consider an abstract sequence of reference frames that move faster and faster approaching speed c in the same direction as the photon, we find that the time between the emission and the reception is shorter and shorter.

Comment author: lmm 13 January 2014 11:05:52PM *  4 points [-]

Does this mean that a photon emitted at location A at t0 is absorbed at location B at t0, such that it's at two places at once?

In the photon's own subjective experience? Yes. (Not that that's possible, so this statement might not make sense). But as another commenter said, certainly the limit of this statement is true: as your speed moving from point A to point B approaches the speed of light, the subjective time you experience between the time when you're at A and the time when you're at B approaches 0. And the distance does indeed shrink, due to the Lorentz length contraction.

In what sense does the photon 'travel' then?

It travels in the sense that an external observer observes it in different places at different times. For a subjective observer on the photon... I don't know. No time passes, and the universe shrinks to a flat plane. Maybe the takeaway here is just that observers can't reach the speed of light.

Comment author: Anatoly_Vorobey 13 January 2014 09:27:17PM 6 points [-]

There are no photons. There, you see? Problem solved.

(no, the author of the article is not a crank; he's a Nobel physicist, and everything he says about the laws of physics is mainstream)

Comment author: satt 14 January 2014 10:04:22PM 1 point [-]

There are no photons. There, you see? Problem solved.

Problem evaded. Banning a word fails to resolve the underlying physical question. Substitute "wavepackets of light" for "photons"; what then?

Comment author: Anatoly_Vorobey 14 January 2014 10:22:12PM 2 points [-]

I know, I was joking. And it was a good opportunity to link to this (genuinely interesting) paper.

... well, mostly joking. There's a kernel of truth there. "There are no photons" says more than just banning a word. "Wavepackets of light" don't exist either. There's just the electromagnetic field, its intensity changes with time, and the change propagates in space. Looking at it like this may help understand the other responses to the question (which are all correct).

When you think of a photon as a particle flying in space, it's hard to shake off the feeling that you somehow ought to be able to attach yourself to it and come along for the ride, or to imagine how the particle itself "feels" about its existence, how its inner time passes. And then the answer that for a photon, time doesn't pass at all, feels weird and counter-intuitive. If you tell yourself there's no particle, just a bunch of numbers everywhere in space (expressing the EM field) and a slight change in those numbers travels down the line, it may be easier to process. A change is not an object to strap yourself to. It doesn't have "inner time".

Comment author: satt 15 January 2014 12:59:15AM 2 points [-]

I feel I should let this go, and yet...

"Wavepackets of light" don't exist either.

But we can make them! On demand, even.

There's just the electromagnetic field, its intensity changes with time, and the change propagates in space.

By this argument, ocean waves don't exist either. There's only the sea, its height changes with time, and the change propagates in space.

Comment author: Douglas_Knight 15 January 2014 03:21:00PM 2 points [-]

By this argument, ocean waves don't exist either. There's only the sea, its height changes with time, and the change propagates in space.

You say that as a reductio ad absurdum, but it is good for some purposes. Anatoly didn't claim that one should deny photons for all purposes, but only for the purpose of unasking the original question.

Comment author: satt 16 January 2014 11:07:02PM 0 points [-]

Anatoly didn't claim that one should deny photons for all purposes, but only for the purpose of unasking the original question.

In this case, unasking the original question is basically an evasion, though, isn't it?

Denying photons may enable you to unask hen's literal question, or the unnamed Reddit poster's literal question, but it doesn't address the underlying physical question they're driving at: "if observer P travels a distance x at constant speed v in observer Q's rest frame, does the elapsed time in P's rest frame during that journey vanish in the limit where v tends to c?"

Comment author: Douglas_Knight 16 January 2014 11:31:20PM 0 points [-]

I reject the claim that your rephrasing is the "real" question being asked. By rephrasing the question, you are rejecting it just as much as Anatoly. I think it is more accurate to say that you evade the question, while he is up front about rejecting it.

In fact, I think your answer is better and probably it is generally better to rephrase problematic questions to answerable questions before explaining that they are problematic, but the latter is part of a complete answer and I think Anatoly is correct in how he addresses it.

Comment author: satt 17 January 2014 12:23:16AM -1 points [-]

I reject the claim that your rephrasing is the "real" question being asked.

That multiple different people automatically treated hen's question as if it were my rephrasing backs me up on this one, I reckon.

By rephrasing the question, you are rejecting it just as much as Anatoly. I think it is more accurate to say that you evade the question, while he is up front about rejecting it.

Rephrasing a question can be the first step to confronting it head-on rather than rejecting it. If a tourist, looking for the nearest train station, wandered up to me and asked, "where station is the?", and I rearranged their question to the parseable "where is the station?" and answered that, I wouldn't say I rejected or evaded their query.

Comment author: Alejandro1 13 January 2014 04:35:42PM 1 point [-]

Assume there are observers at A and B, sitting at rest relative to each other. The distance between them as seen by them is X. Their watches are synchronized. Alice, sitting at A, emits a particle when her watch says t0; Bob, sitting at B, receives it when his watch says t1. Define T = t1-t0. The speed of the particle is V = X/T.

If the particle is massive, then V is always smaller than c (the speed of light). We can imagine attaching a clock to the particle and starting it when it is emitted. When Bob receives it, the clock's time would read a time t smaller than T, given by the equation:

t = T (1 - V^2/c^2)^(1/2) (this is the Lorentz factor equation mentioned by Plasmon).

As the speed V of the particle gets closer and closer to c, you can see that the time t that has passed "for the particle" gets closer and closer to 0. One cannot attach a clock to a photon, so the statement that "photons are not moving through time" is somewhat metaphoric, and its real meaning is the limiting statement I just mentioned. The photon is not "at two places at once" from the point of view of any physical observer, be it Alice or Bob (for whom the travel took a time T = X/c) or any other moving with a speed smaller than c (for whom the time taken may be different but is never 0).
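
A quick numerical illustration of this formula, as a minimal Python sketch (the one-year journey time T is made up purely for illustration):

from math import sqrt

c = 1.0  # work in units where the speed of light is 1
T = 1.0  # journey time in Alice and Bob's frame, in years

for V in (0.5, 0.9, 0.99, 0.9999):
    t = T * sqrt(1 - V**2 / c**2)  # time elapsed "for the particle"
    print(f"V = {V}c -> t = {t:.4f} years")

# Output:
# V = 0.5c -> t = 0.8660 years
# V = 0.9c -> t = 0.4359 years
# V = 0.99c -> t = 0.1411 years
# V = 0.9999c -> t = 0.0141 years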

Comment author: Plasmon 13 January 2014 04:16:40PM *  4 points [-]

The Lorentz factor diverges when the speed approaches c. Because of length contraction and time dilation, both the distance and the time will appear to be 0, from the "point of view of the photon".

(the photon is "in 2 places at once" only from the point of view of the photon, and it doesn't think these places are different, after all they are in the same place! This among other things is why the notion of an observer traveling at c, rather than close to c, is problematic)
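
(In symbols, for concreteness: the Lorentz factor is gamma = (1 - V^2/c^2)^(-1/2), which diverges as V approaches c; the contracted distance X/gamma and the dilated time T/gamma both go to 0 in that limit.)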

Comment author: Luke_A_Somers 15 January 2014 02:30:54PM 0 points [-]

Getting this property for electromagnetic waves was one of the main things that led Einstein to develop Special Relativity: he looked at waves and thought, "If we do a Galilean transform so that light is standing still, the resulting field is an invalid electrostatic field."

Comment author: Risto_Saarelma 13 January 2014 09:44:33AM 12 points [-]

Should I not be using my real name?

Comment author: ChristianKl 13 January 2014 11:31:33AM *  17 points [-]

Do you want to have a career at a conservative institution such as a bank, or a career in politics? If so, it's probably a bad idea to create so much attack surface by using your real name.

Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay to you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.

If you meet people in real life, they might already know you from your online commentary, and you don't have to start by introducing yourself.

It's really a question of whether you think strangers are more likely to hurt or help you.

Comment author: Viliam_Bur 13 January 2014 02:59:18PM 14 points [-]

Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay to you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.

I think the best long-term strategy would be to invent a different name and use that other name consistently, even in real life, with everyone except the government. Of course your family and some close friends would know your real name, but you would tell them that you prefer to be called by the other name, especially in public.

So, you have one identity, you make it famous, and everyone knows you. Only when you want to be anonymous do you use your real name, and the advantage is that you have official papers for it, so your employer will likely not notice your other identity. You just have to be careful never to use your real name together with your fake name.

Unless your first name is unusual, you can probably re-use it, since that is what most people will call you anyway; so if you meet people who know your true name and people who know your fake name at the same time, the fact that you use two names will not be exposed.

Comment author: Curiouskid 13 January 2014 08:49:13PM 9 points [-]

This seems to be what Gwern has done.

Comment author: Viliam_Bur 14 January 2014 11:07:42AM *  3 points [-]

Exactly! He is such a good example that it is easy not to even notice that he is one.

There is no "Gwern has an identity he is trying to hide" thought running in my mind when I think about him (unlike with Yvain). It's just "Gwern is Gwern", nothing more. Instead of a link pointing to the darkness, there is simply no link there. It's not like I am trying to respect his privacy; I feel free to do anything I want and yet his privacy remains safe. (I mean, maybe if someone tried hard... but there is nothing reminding people that they could.) It's like an invisible fortress.

But if instead he called himself Arthur Gwernach (abbreviated to Gwern), that would be even better.

Comment author: pianoforte611 17 January 2014 02:52:19PM 0 points [-]

Computers work by performing a sequence of computations, one at a time: parallelization can cut down the time for repetitive tasks such as linear algebra, but hits diminishing returns very quickly. This is very different from the way the brain works: the brain is highly parallel. Is there any reason to think that our current techniques for making algorithms are powerful enough to produce "intelligence", whatever that means?
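
The diminishing-returns claim can be made concrete with Amdahl's law (the standard way to quantify it, though it isn't named above); a minimal Python sketch:

def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: overall speedup when only part of a task parallelizes.
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Even with 95% of the work parallelizable, speedup saturates near 20x:
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 2 1.9, 8 5.9, 64 15.4, 1024 19.6

No matter how many processors you add, the serial 5% caps the speedup at 20x.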

Comment author: DanArmak 18 January 2014 03:15:50PM 0 points [-]

All biological organisms, considered as signalling or information-processing networks, are massively parallel: huge numbers of similar cells with slightly different states signalling one another. It's not surprising that the biologically evolved brain works the same way. A Turing-machine-like sequential computer powerful/fast enough for general intelligence would be far less likely to evolve.

So the fact that human intelligence is slow and parallel isn't evidence for thinking you can't implement intelligence as a fast serial algorithm. It's only evidence that the design is likely to be different from that of human brains.

It's likely true that we don't have the algorithmic (or other mathematical) techniques yet to make general intelligence. But that doesn't seem to me to be evidence that such algorithms would be qualitatively different from what we do have. We could just as easily be a few specific algorithmic inventions away from a general intelligence implementation.

Finally, as far as sheer scale goes, we're on track to achieve rough computational parity with a human brain in a single multi-processor cluster within IIRC something like a decade.

Comment author: pianoforte611 18 January 2014 03:35:08PM *  0 points [-]

I'm not trying to play burden-of-proof tennis here, but surely the fact that the only "intelligence" that we know of is implemented in a massively parallel way should give you pause before assuming that it can be done serially. Unless of course the kind of AI that humans create is nothing like the human mind, in which case my question is irrelevant.

But that doesn't seem to me to be evidence that such algorithms would be qualitatively different from what we do have.

But we already know that the existing algorithms (in the brain) are qualitatively different from computer programs. I'm not an expert, so apologies for any mistakes, but the brain is not massively parallel in the way that computers are. A parallel piece of software can funnel a repetitive task into different processors (like the same algorithm for each value of a vector). But parallelism is a built-in feature of how the brain works: neurons and clusters of neurons perform computations semi-independently of each other, yet are still coordinated together in a dynamic way. The question is whether algorithms performing similar functions could be implemented serially. Why do you think that they can be?

Regarding computational parity: sure I never said that would be the issue.

Comment author: Locaha 18 January 2014 04:20:25PM 0 points [-]

There is no such thing as qualitatively different algorithms. Anything that a parallel computer can do, a fast enough serial computer also can do.
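
As an illustration of that claim, here is a minimal Python sketch (the names and the toy update rule are made up): a serial loop reproduces one "parallel" update step of a network exactly, by double-buffering the state so that every unit reads the old values of its neighbours:

def parallel_step(state, neighbours, update):
    # Double-buffer: every unit reads the old state and writes the new one,
    # exactly as if all units had updated simultaneously.
    new_state = list(state)
    for i in range(len(state)):  # serial loop over the "processors"
        inputs = [state[j] for j in neighbours[i]]
        new_state[i] = update(state[i], inputs)
    return new_state

# Toy example: each unit takes the majority vote of its neighbours.
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
update = lambda own, ins: int(sum(ins) > len(ins) / 2)
print(parallel_step([1, 0, 1], neighbours, update))  # [0, 1, 0]

The serial version is slower by roughly a factor of the number of units, but it computes exactly the same result; the difference is quantitative, not qualitative.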

Comment author: DanArmak 18 January 2014 04:29:18PM 0 points [-]

the fact that the only "intelligence" that we know of is implemented in a massively parallel way should give you pause before assuming that it can be done serially.

An optimization process (evolution) tried and succeeded at producing massively-parallel biological intelligence.

No optimization process has yet tried and failed to produce serial-processing based intelligence. Humans have been trying for very little time, and our serial computers may be barely fast enough, or may only become fast enough some years from now.

The fact that parallel intelligence could be created is not evidence that other kinds of intelligence can't be created. Talking about "the only intelligence we know of" ignores the fact that no process has ever tried to create a serial intelligence, and so of course none was created.

Unless of course the kind of AI that humans create is nothing like the human mind

That's quite possible.

The question is whether algorithms performing similar functions could be implemented serially. Why do you think that they can be?

All algorithms can be implemented on our Turing-complete computers. The question is what algorithms we can successfully design.

Comment author: pianoforte611 18 January 2014 04:32:35PM 0 points [-]

Why do you think that intelligence can be implemented serially?

Comment author: DanArmak 18 January 2014 08:08:54PM 1 point [-]

What exactly do you mean by 'serially'? Any parallel algorithm can be implemented on a serial computer. And we do have parallel computer architectures (multicore/multicpu/cluster) that we can use for speedups, but that's purely an optimization issue.

Comment author: blacktrance 15 January 2014 08:39:23PM 0 points [-]

Average utilitarianism seems more plausible than total utilitarianism, as it avoids the repugnant conclusion. But what do average utilitarians have to say about animal welfare? Suppose a chicken's maximum capacity for pleasure/preference satisfaction is lower than a human's. Does this mean that creating maximally happy chickens could be less moral than creating non-maximally happy humans?

Comment author: DanielLC 16 January 2014 12:53:51AM 0 points [-]

My intuition is that chickens are less sentient, and that is sort of like thinking slower. Perhaps a year of a chicken's life is equivalent to a day of a human's. A day of a chicken's life adds less to the numerator than a day of a human's, but it also adds less to the denominator.

Comment author: Dan_Weinand 16 January 2014 07:13:53AM 1 point [-]

Maybe I'm way off base here, but it seems like average utilitarianism leads to a disturbing possibility itself: that one super happy person is considered a superior outcome to 1,000,000,000,000 pretty darn happy people. Please explain how, if at all, I'm misinterpreting average utilitarianism.

Comment author: DanielLC 17 January 2014 01:53:35AM 0 points [-]

I think you just have different intuitions than average utilitarians. I have talked to someone who saw no reason why having a higher population is good in and of itself.

I am somewhat swayed by an anthropic argument. If you live in the first universe, you'll be super happy. If you live in the second, you'll be pretty darn happy. Thus, the first universe is better.

Comment author: DanArmak 18 January 2014 12:32:54PM 0 points [-]

On the other hand, you often need to consider that you're less likely to live in one universe than in another. For instance, if you could make 10% of the population vastly happier by killing the other 90%, you need to factor in the 10% chance of survival.

Comment author: DanielLC 19 January 2014 03:23:36AM *  0 points [-]

I don't buy into that theory of identity. The way the universe works, observer-moments are arranged in lines. There's no reason this is necessary in principle. It could be a web where minds split and merge, or a bunch of Boltzmann brains that appear and vanish after a nanosecond. You are just a random one of the observer-moments. And you have to be one that actually exists, so there's a 100% chance of survival.

If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.

Comment author: DanArmak 19 January 2014 11:04:17AM 0 points [-]

You are just a random one of the observer-moments.

I don't think the word "you" is doing any work in that sentence.

Personal identity may not exist as an ontological feature at the low level of physical reality, but it does exist at the high level of our experience, and I think it's meaningful to talk about identities (lines of observer-moments) which may die (the line ends).

If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.

I'm not sure I understand what you mean (I don't endorse average utilitarianism in any case). Do you mean that I might want to maximize the average of the utilities of my possible time-lines (due to imperfect knowledge), weighted by the probability of those time-lines? Isn't that just maximizing expected utility?

Comment author: DanielLC 19 January 2014 10:10:23PM 0 points [-]

Personal identity may not exist as an ontological feature at the low level of physical reality, but it does exist at the high level of our experience, and I think it's meaningful to talk about identities (lines of observer-moments) which may die (the line ends).

I don't think that's relevant in this context. You are a random observer. You live.

I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters. But if you just think that you're not going to have as much total happiness because you don't live as long, then either you're fundamentally mistaken or the argument I just gave is.

I'm not sure I understand what you mean

If "you" are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn't help.

weighted by the probability of those time-lines?

If you're saying that it's more likely to be a person who has a longer life, then I guess our "different" views on identity probably are just semantics, and you end up with the form of average utilitarianism I was originally suggesting.

Comment author: DanArmak 20 January 2014 09:15:57AM 0 points [-]

You are a random observer.

That's very different from saying "you are a random observer-moment" as you did before.

I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters.

I consider it intrinsically important to have a personal future. If I am now a specific observer - I've already observed my present - then I can drastically narrow down my anticipated future observations. I don't expect to be any future observer existing in the universe (or even near me) with equal probability; I expect to be one of the possible future observers who have me in their observer-line past. This seems necessary to accept induction and to reason at all.

If "you" are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn't help.

But in the actual universe, when making decisions that influence the future of the universe, I do not treat myself as a random person; I know which person I am. I know about the Rawlsian veil, but I don't think we should have decision theories that don't allow us to optimize the utility of observers similar to myself (or belonging to some other class), rather than of all observers in the universe. We should be allowed to say that even if the universe is full of paperclippers who outnumber us, we can just decide to ignore their utilities and still have a consistent utilitarian system.

(Also, it would be very hard to define a commensurable 'utility function' for all 'observers', rather than just for all humans and similar intelligences. And your measure function across observers - does a lizard have as many observer-moments as a human? - may capture this intuition anyway.)

I'm not sure this is in disagreement with you. So I still feel confused about something, but it may just be a misunderstanding of your particular phrasing or something.

If you're saying that it's more likely to be a person who has a longer life,

I didn't intend that. I think I should taboo the verb "to be" in "to be a person", and instead talk about decision theories which produce optimal behavior - and then in some situations you may reason like that.

Comment author: DanielLC 20 January 2014 07:58:45PM 0 points [-]

That's very different from saying "you are a random observer-moment" as you did before.

I meant observer-moment. That's what I think of when I think of the word "observer", so it's easy for me to make that mistake.

If I am now a specific observer - I've already observed my present - then I can drastically narrow down my anticipated future observations.

If present!you anticipates something, it makes life easy for future!you. It's useful. I don't see how it applies to anthropics, though. Yous aren't in a different reference class from other people. Even if they were, it can't just be future!yous that are one reference class. That would mean that whether or not two yous are in the same reference class depends on the point of reference. First!you would say they all have the same reference class. Last!you would say he's his own reference class.

I do not treat myself as a random person; I know which person I am.

I think you do if you use UDT or TDT.

Comment author: [deleted] 15 January 2014 08:33:50PM *  0 points [-]

Don't raw utilitarians mind being killed by somebody who thinks they suffer too much?

Comment author: DanArmak 18 January 2014 03:07:06PM 2 points [-]

Of course they mind, since they disagree and think that someone is wrong! If they don't disagree, either they've killed themselves already or it becomes an assisted suicide scenario.

Comment author: [deleted] 18 January 2014 06:32:28PM 0 points [-]

Yeah, right, thanks.

Comment author: DanielLC 16 January 2014 12:55:18AM -1 points [-]

Why would we care what someone else thinks?

As for the case where I thought I suffered too much and wouldn't do much to help anyone else out: just because I have an explicit preference to die doesn't mean that I don't have instincts that resist it.

Comment author: [deleted] 17 January 2014 05:17:44PM 0 points [-]

Because, as far as I understand, if a utilitarian thinks that killing you will have positive expected utility, he ought to kill you.

doesn't mean that I don't have instincts that resist it

So, if you were completely rational, you would kill yourself without hesitation in this situation, right?

Just in case, I didn't downvote you.