Comment author: [deleted] 16 February 2014 09:11:05PM 0 points

as a counter force to arrogance

Does that even work? I'm thinking that an arrogant person will generally shrug off the mortality thing and go on with being arrogant, barring some near-death experience.

as a force to act now

Or at least "this decade" rather than "some day". But death seems like a steep cost for this benefit. Is there another way to get it? Like, if we've got immortal people anyway, we're going to want to have a retirement equivalent, but it won't be a matter of working forty years and taking the rest of your life off. What if we had a system whereby people took ten years off work after every thirty or so, with a guaranteed salary during that time that's more than sufficient for living? Then you would have a specific timeframe in which you are expected to relax, take long vacations, knock off a life goal or two, that sort of thing.

That requires reworking social security / state pensions and probably requires a lot more wealth in general to enact. But we don't currently have a cure for death, so there's time to work out how to deal with a lack of death and enact those policies.

In response to comment by [deleted] on A defense of Senexism (Deathism)
Comment author: jazmt 17 February 2014 01:39:58AM 0 points

We are all arrogant to some degree or another; knowledge of our mortality helps keep it in check. What would the world look like with an unrestrained god complex?

Taking 10 years off after 30 doesn't seem to solve the psychological issue. In today's world, as we get older we start noticing the weakness of our bodies, which pushes us to act, since "if not now, when?"

Unless we solve the various cognitive biases we suffer from, extreme longevity seems like a mixed blessing at best, and it seems to me that it would cause more problems than it solves.

I agree that these arguments don't decide the issue, but the counterargument of letting people choose doesn't seem effective to me. Also, arguments about how we would be superbeings who are totally rational may be applicable to some post-human existence, but would not help the argument that longevity research should be pursued today (since, e.g., there would likely be wars over who gets to use it, which might kill even more people; as we see in the world today, the problem with world hunger and disease is not primarily one of lack of technological or economic ability but rather one of sociopolitical institutions).

In response to comment by jazmt on White Lies
Comment author: Alicorn 16 February 2014 10:06:02PM 3 points

That's a very long paragraph; I'm going to do my best, but some things may have been lost in the wall of text.

I understand the difference between terminal and instrumental values, but your conclusion doesn't follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of "large" is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying.

a deontological obligation to maximize utility

AAAAAAAAAAAH

you should be prudent in achieving your deontological obligations

It is prudent to be prudent in achieving your deontological obligations. Putting "should" in that sentence flirts with equivocation.

won't your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations

I think it's possible to act completely morally acceptably according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it's not impossible.

In response to comment by Alicorn on White Lies
Comment author: jazmt 17 February 2014 01:18:18AM 0 points

Thank you, I think I understand this now.

To make sure I understand you correctly, are these correct conclusions from what you have said?
a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent).
b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow.
c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future).
d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different.

I still don't understand your view of utilitarian consequentialism: if 'maximizing utility' isn't a deontological obligation emanating from personhood or the like, where does it come from?

Comment author: jazmt 16 February 2014 07:04:35PM 1 point

To those who think that death should be a choice: what about the benefits of knowing that we are mortal, which death by choice doesn't allow for? E.g. as a counter force to arrogance and as a force to act now, and, as we age, a push to start reevaluating our priorities. In other words, the benefits while we live of knowing that we are mortal may outweigh the benefit of immortality. I suspect these concerns have been dealt with on this site, so if they have, feel free to link me to an appropriate post instead of writing a new response.

In response to comment by jazmt on White Lies
Comment author: Alicorn 16 February 2014 06:30:03AM 2 points

What is the difference between saying that there is an obligation to tell the truth, or honesty being a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation?

The first thing says you must not lie. The second thing says you must not lie because it signifies or causes defects in your character. The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie. The systems really don't fuse this prettily unless you badly misunderstand at least two of them, I'm afraid. (They can cooperate at different levels and human agents can switch around between implementing each of them, but on a theoretical level I don't think this works.)

I think I am misunderstanding something in your position, since it seems to me that you don't disagree with consequentialism in that we need to calculate, but rather in what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous (such as not lying, not stealing, not being destructive, etc.)).

Absolutely not. Did you read Deontology for Consequentialists?

I still don't know what you mean by "emerge from the self", but if I understand the class of thing you're pointing out with the suicide example, I don't think I have any of those.

In response to comment by Alicorn on White Lies
Comment author: jazmt 16 February 2014 06:25:21PM 0 points

Yes, I read that post. (Thank you for putting in all this time clarifying your view.)

I don't think you understood my question, since "The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie" is not viewing 'not lying' as a terminal value but rather as an instrumental value. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post). But if that is the case, must I act in a situation so as not to be forced to lie? For example, let's say you made a promise to someone not to get fired in your first week at work, and if the boss knows that you cheered for a certain team he will fire you. Would you say that you shouldn't watch that game, since you will be forced either to lie to the boss or to break your promise of keeping your job? (Please fix any loopholes you notice, since this is only meant for illustration.) If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and therefore you must act to maximize that, whereas you are arguing that there are other deontological values; but you would agree that you should be prudent in achieving your deontological obligations. (We can put virtue ethics to the side if you want, but won't your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations?)

Comment author: Oscar_Cunningham 14 February 2014 06:18:59PM 2 points

I agree, but the first VNM axiom doesn't: totality of the preference ordering.

VNM agents are still allowed to be indifferent.

I like Robin Hanson's post about this. Or there's this quote from Russell and Norvig:

Refusing to act is like refusing to allow time to pass

Comment author: jazmt 16 February 2014 01:10:29AM 1 point

Why isn't saying "I don't know" a reasonable approach to the issue when one's knowledge is vague enough to be useless as knowledge (and could only be made useful in a bizarre thought experiment)? Just because one could theoretically bet on something doesn't mean one is in a position to bet. (For example: I don't know how to cure a disease, so I will go to the doctor. I don't know what that person's name is (even though I know it isn't "Xpchtl Vaaaaaarax"), so I should ask someone. I don't know how life began. I don't know how many apples are on the tree outside (even though I know it isn't 100 million).)

In response to comment by jazmt on White Lies
Comment author: Alicorn 14 February 2014 12:37:20AM 3 points

Is this only a linguistic argument about what to call morality?

You could re-name everything, but if you renamed my deontological rules "fleeb", I would go on considering fleeb to be ontologically distinct in important ways from things that are not fleeb. I'm pretty sure it's not just linguistic.

Is there a reason you prefer to limit the domain of morality?

Because there's already a perfectly good vocabulary for the ontologically distinct non-fleeb things that people are motivated to act towards - "prudence", "axiology".

Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?

Unassailable priority. People start looking at very large numbers and nodding to themselves and deciding that these very large numbers mean that if they take a thought experiment as a given they have to commit atrocities.

Also, could you clarify the idea of obligations: are there any obligations which don't emanate from the rights of another person?

Yes; I have a secondary rule which for lack of better terminology I call "the principle of needless destruction". It states that you shouldn't go around wrecking stuff for no reason or insufficient reason, with the exact thresholds as yet undefined.

Are there any obligations which emerge inherently from a person's humanity and are therefore not waivable?

"Humanity" is the wrong word; I apply my ethics across the board to all persons regardless of species. I'm not sure I understand the question even if I substitute "personhood".

In response to comment by Alicorn on White Lies
Comment author: jazmt 16 February 2014 12:45:58AM -1 points

Let's take truth telling as an example. What is the difference between saying that there is an obligation to tell the truth, or honesty being a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation? Won't the different frameworks be mutually supportive, since obligation will create a terminal value, virtue ethics will show how to incorporate that into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialist to attain and which is useful in living up to our deontological obligations. And justice is a virtue which emanates from the obligations not to steal and not to harm other people, and therefore we must consider the consequences of our actions so that we don't end up in a situation where we will act unjustly.

I think I am misunderstanding something in your position, since it seems to me that you don't disagree with consequentialism in that we need to calculate, but rather in what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous (such as not lying, not stealing, not being destructive, etc.)).

By obligations which emerge from a person's personhood which are not waivable, I mean that they emerge from the self and not in relation to another's rights, and therefore cannot be waived. To take an example (which I know you do not consider an obligation, but which will serve to illustrate the class, since many people have this belief): a person has an obligation to live out their life as a result of their personhood, and is therefore not allowed to commit suicide, since that would be unjust to the self (or nature, or god, or whatever).

In response to comment by MugaSofer on White Lies
Comment author: Alicorn 12 February 2014 05:52:55PM 2 points

You've said that you will do nothing, rather than violate a right in order to prevent other rights being violated. Yet you also say that people attempting to violate rights waive their rights not to be stopped. Is this rule designed for the purpose of allowing you to violate people's rights in order to protect others? That seems unfair to people in situations where there's no clearly identifiable moustache-twirling villain.

I wish everyone in this thread would be more careful about using the word "right". If you are trying to violate somebody's rights, you don't have "a right not to be stopped". You have your perfectly normal complement of rights, and some of them are getting in the way of protecting someone else's rights, so, since you're the active party, your (contextually relevant) rights are suspended. They remain in effect out of that context (if you are coming at me with a knife I may violently prevent you from being a threat to me; I may not then take your wallet and run off cackling; I may not, ten years later, visit you in prison and inform you that your mother is dead when she is not; etc.).

You have also said that people can waive any of their rights - for example, people waive their right not to have sex in order to have sex, and people waive their right not to be murdered in order to commit suicide. Doesn't this deny the existence of rape within marriage?

That's a good question, but the answer is no. A marriage does not constitute a promise to be permanently sexually available. You could opt to issue standing permission, and I gather this was customary and expected in historical marriages, but you can revoke it at any time; your rights are yours and you may assert them at will. I don't object to people granting each other standing permission to do things and sticking with it if that's how they prefer to conduct themselves, but morally speaking the option to refuse remains open.

Finally, you mention that some actions which do not violate rights are nonetheless "being a dick", and you will act to prevent and punish these acts in order to discourage them. Doesn't this imply that there are additional aspects to morality not contained by "rights"?

No. There's morality, and then there's all the many things that are not morality. Consequentialists (theoretically, anyway) assign value to everything and add it all up according to the same arithmetic - with whatever epicycles they need not to rob banks and kidnap medical test subjects - but that's not what I'm doing. Morality limits behavior in certain basic ways. You can be a huge dick and technically not do anything morally wrong. (And people can get back at you all kinds of ways, and not technically do anything morally wrong! It's not a fun way to live and I don't really recommend it.)

Do you act as a Standard-LessWrong-Consequentialist-Utilitarian™ with regards to Not Being A Dick?

No. Actually, you could probably call me sort of virtuist with respect to dickishness. I am sort of Standard-LessWrong-Consequentialist-Utilitarian™ with respect to prudence, which is a whole 'nother thing.

In response to comment by Alicorn on White Lies
Comment author: jazmt 14 February 2014 12:22:02AM 1 point

"No. There's morality, and then there's all the many things that are not morality."

Is this only a linguistic argument about what to call morality? With, e.g., virtue ethics claiming that all areas of life are part of morality (since ethics is about human excellence), and your claim being that ethics only has to do with obligations and rights? Is there a reason you prefer to limit the domain of morality? Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?

Also, could you clarify the idea of obligations: are there any obligations which don't emanate from the rights of another person? Are there any obligations which emerge inherently from a person's humanity and are therefore not waivable?

Comment author: Dr_Manhattan 13 January 2014 09:50:19PM 1 point

Hi Anatoly,

Initially it was a shock to my wife, but I took things very slowly as far as dropping practices. This helped a lot and basically I do whatever I want now (3.5 years later). Also transferred my kids to a good public school out of yeshiva. My wife remains nominally religious, it might take another 10 years :)

My kids don't speak Russian - my wife is American-born. I prefer English myself, so I'm not "unhappy" about them not speaking Russian in particular, although I'd prefer them to be bilingual in general. They read a bit of Hebrew.

I'm happy to discuss my HFA kid via PM.

Comment author: jazmt 06 February 2014 05:18:27AM 0 points

Is your wife still teaching your kids religion? How do you work out conflicts with your wife over religious issues (I assume she insists on a kosher kitchen, wants the kids to learn Jewish values, etc.)?

Comment author: Gvaerg 11 November 2013 09:19:38AM 1 point

From what I know, Chang & Keisler is a bit dated and can create a wrong perspective on what model theorists are researching nowadays. Maybe you should also look at a modern textbook, like the ones from Hodges, Marker or Poizat.

Comment author: jazmt 04 February 2014 04:36:38AM 0 points

Which of the three would you recommend? Does anyone know why MIRI recommends Chang and Keisler if it is somewhat outdated?

Comment author: Kaj_Sotala 31 January 2014 08:50:33AM 4 points

In case So8res wants to try this, I'd be quite curious to see the bullet points.

Comment author: jazmt 31 January 2014 07:35:55PM 1 point

me too
