Comment author: mattnewport 24 April 2009 08:30:46AM 0 points [-]

I'm interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more 'objectively' moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all 'defecting' (the prisoner's dilemma is too simple to be a really representative model but is a useful analogy).

The extent and nature of that minimal framework is an open question and is what I'm interested in establishing.

Comment author: AndySimpson 24 April 2009 08:57:27AM 0 points [-]

Peaceful coexistence is not something I object to. Nor does anything oblige agents to perfectly align their values; each is free to choose. I strongly endorse people with wildly different values cooperating in areas of common interest: I'm firmly in Anton LaVey's corner on civil liberties, for instance. It should be recognized, though, that some value systems are clearly more wrong than others, because some people receive poor information and others reason poorly, through akrasia or inability. Anton LaVey was not trying hard enough. I think the question is worth asking, because it is the basis of building the minimal framework of rules from each person's judgement: How are we supposed to choose values?

Comment author: mattnewport 24 April 2009 07:35:09AM 0 points [-]

I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn't we try to pin down and either discard or accept some version of "purpose," as a sort of first instrumental rationality?

Why do you think it needs to be confronted? I know there are many things that I want (though some of them may be mutually exclusive when closely examined) and that there are many similarities between the things that I want and the things that other humans want. Sometimes we can cooperate and both benefit; in other cases our wants conflict. Most problems in the world seem to arise from conflicting goals, either internally or between different people. I'm primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts. I have no desire to change my goals except to the extent that they are mutually exclusive and there is a clear path to a more self-consistent set of goals.

There's little to discuss if you don't, because "everything is permitted."

To the extent that we share a common evolutionary history, our goals as humans overlap enough that cooperation is beneficial more often than not. Even where goals conflict, there is mutual benefit in agreeing on rules for conflict resolution such that not everything is permitted. It is in our collective interest not to permit murder, not because murder is 'wrong' in some abstract sense but simply because most of us can usually agree that we prefer to live in a society where murder is forbidden, even at the cost of giving up the 'freedom' to murder at will. That equilibrium can break down, and I'm interested in ways to robustly maintain the 'good' equilibrium rather than the 'bad' equilibrium that has existed at certain times and in certain places in history. I don't, however, feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.

Comment author: AndySimpson 24 April 2009 08:38:15AM 2 points [-]

Why do you think it needs to be confronted? ... I don't however feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.

I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?

Most problems in the world seem to arise from conflicting goals, either internally or between different people. I'm primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts.

Again, this is the ultimately important part. Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want. Further, we discipline ourselves so that our goals are clear and consistent. All I'm saying is that you may want to look into the basis of your own goals and systematize them to enhance clarity.

Comment author: knb 24 April 2009 06:16:23AM *  1 point [-]

First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild.

Interesting. I'm not certain, but I think this isn't quite right. In theory, the westerners would just be sending their money to desperately poor people, so aggregate demand wouldn't necessarily decline; it would move around. Consumption doesn't really create wealth. Of course, rational utilitarian westerners would recognize the transfer costs and also wouldn't completely neglect their own happiness.

All that said, I'd be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.

Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.

If you have no regard for yourself then pursue pure altruism. Leave yourself just enough that you can keep producing more wealth for others. Study Mother Teresa.

If you have no regard for others, then a policy of selfishness is for you. Carefully plan to maximize your total future well-being. Leave just enough for others that you aren't outed as a sociopath. Study Anton LaVey.

If you have equal regard for the happiness of yourself and others, pursue utilitarianism. Study Rawls or John Stuart Mill.

Most people aren't really any of the above. I, like most people, am somewhere between LaVey and Mill. Of course defending utilitarianism sounds better than justifying egoism, so we get more of that.

Comment author: AndySimpson 24 April 2009 08:11:29AM *  3 points [-]

In theory, the westerners would just be sending their money to desperately poor people.

I'm not an economist, but I think you could model that as a kind of demand. And I don't think I stipulated to there being a transfer of wealth.

Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.

For me, the interesting question is how one goes about choosing "terminal values." I refuse to believe that it is arbitrary or that all paths are of equal validity. I will contend without hesitation that John Stuart Mill was a better mind, a better rationalist, and a better man than Anton LaVey. My own thinking on these lines leads me to the conclusion of an "objective" morality, that is to say one with expressible boundaries and one that can be applied consistently to different agents. How do you choose your terminal values?

Comment author: mattnewport 24 April 2009 05:53:03AM *  0 points [-]

I don't think objectivity is an important feature of ethics. I'm not sure there's such a thing as a rationalist ethics. Being rational is about optimally achieving your goals. Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent.

I gave a rough exposition of what I see as a possible rationalist ethics in this comment but it's incomplete. If I ever develop a better explanation I might make a top level post.

Comment author: AndySimpson 24 April 2009 06:46:43AM *  0 points [-]

Ok, here is what I don't agree with:

Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent.

I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn't we try to pin down and either discard or accept some version of "purpose," as a sort of first instrumental rationality?

I mention objectivity because I don't think you can have any useful ethics without some static measure of comparability, some goal, however loose, that each person can pursue. There's little to discuss if you don't, because "everything is permitted." That said, I think ethics has to understand each person's competence to self-govern. Your utility function is important to everyone, but nobody knows how to maximize your utility function better than you. Usually. Ethics also has to bend to reality, so the more "important" thing isn't agreement on theoretical questions, but cooperation towards mutually-agreed goals. So I'm in substantial agreement with:

Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.

And I would enjoy thoroughly a post on this topic.

Comment author: mattnewport 24 April 2009 05:11:14AM 1 point [-]

I'd agree that signing up for cryonics and being a traditional utilitarian (valuing all human life equally) aren't really compatible. I'm not a utilitarian so that's not my problem with cryonics but it does seem to be hard to reconcile the two positions. It's hard to reconcile any western lifestyle with traditional utilitarianism though so if that's your main concern with cryonics perhaps you need to reconsider your ethics rather than worry about cryonics.

Comment author: AndySimpson 24 April 2009 05:39:54AM 0 points [-]

It's hard to reconcile any western lifestyle with traditional utilitarianism though so if that's your main concern with cryonics perhaps you need to reconsider your ethics rather than worry about cryonics.

One of the beauties of utilitarianism is that its ethics can adapt to different circumstances without losing objectivity. I don't think every "western lifestyle" is necessarily reprobate under utilitarianism. First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild. We can't all afford to be Gandhi. The rub is trying to avoid being a part of really harmful, unsustainable things like commercial ocean fishing or low fuel-efficiency cars without causing an ethically greater amount of inconvenience or economic harm.

All that said, I'd be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.

Comment author: AndySimpson 24 April 2009 05:00:15AM 3 points [-]

This may be a naïve question, but could someone make or link me to a good case for cryonics?

I know there's a fair probability that we could each be revived in the distant future if we sign up for cryonics, and that is worth the price of admission, but that always struck me as a misallocation of resources. Wouldn't it be better, for the time being, if we dispersed all the resources used on cryonics to worthwhile causes like iodized salt, clean drinking water, or childhood immunization, and instead gave up our organs for donation after death? Isn't the cryonics thing one big fuzzy, or at least a luxury?

Comment author: AndySimpson 24 April 2009 04:35:48AM 1 point [-]

Why a 0.3 chance? Is that totally arbitrary? Also, it seems like a "boo" button would quickly become a means for people to indulge in inappropriate down-voting and feel insulated from responsibility for the outcome. It would also be a tempting false compromise between actually down-voting and doing nothing. Usually, one or the other is the right choice.

Comment author: Mulciber 23 April 2009 01:05:01AM 2 points [-]

Those of you excited about this: aside from the presumed difficulty of implementing it, would it be even better if there were an option to actually vote -0.3 on a post, instead of voting -1 with 30% probability? And would it be even more of an improvement if you could choose to vote anywhere in the [-1, 1] range, so that you could mark something -0.7 or +0.25?

Those suggestions probably seem like an exaggeration, but I really do think we're all getting too worked up over the minutiae of the karma system. This isn't a game. These numbers aren't our high scores. It feels like there's too much temptation to regard them that way, and further complexity in the system will only increase that.
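(For what it's worth, the two schemes being compared here are equivalent in expectation: voting -1 with 30% probability has the same expected effect on a score as voting -0.3 directly, just with added variance. A minimal sketch, assuming a hypothetical `probabilistic_vote` helper rather than any actual karma implementation:)

```python
import random

def probabilistic_vote(weight, rng):
    """Cast an integer vote whose expected value equals a fractional weight.

    weight = -0.3 means: vote -1 with probability 0.3, otherwise 0.
    weight = +0.25 means: vote +1 with probability 0.25, otherwise 0.
    """
    sign = 1 if weight > 0 else -1
    return sign if rng.random() < abs(weight) else 0

rng = random.Random(42)
n = 100_000
# Average of many probabilistic -0.3 votes converges on the
# fractional vote -0.3; individual ballots remain whole numbers.
average = sum(probabilistic_vote(-0.3, rng) for _ in range(n)) / n
print(average)
```

So the fractional-vote proposal changes only the variance of a post's score, not its expected value, which is arguably why the choice between them is a question of interface rather than of outcome.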

Comment author: AndySimpson 24 April 2009 04:22:36AM 1 point [-]

I really do think we're all getting too worked up over the minutiae of the karma system.

Agreed, but:

This isn't a game.

We must admit that, to a great extent, it is. We are all attempting to make ourselves appear more useful to the community, and karma is the only quantitative way to tell if we're making progress. Like so many things, it feels trivializing, but it is there for a purpose.

In response to Escaping Your Past
Comment author: AndySimpson 23 April 2009 01:04:45AM *  2 points [-]

An important, so-often-useful distinction. This reminds me of the Buddhist notion of fetters. Fetters are personal features that impair your attainment of enlightenment and bind you to suffering. You can cast them off, but in order to do so, you have to cut the crap and practice doing without them, with the full knowledge that it may take many lifetimes to free yourself. It is not sufficient to announce your adhesion to the creed of enlightenment. The only things that make you do better are the things that make you do better. Everything else is window-dressing, or at best a means to that end.

On another note...

I feel bad blogging about rationality, given that I'm so horribly, ludicrously bad at it. I'm also horribly, ludicrously bad at writing.

Is that hyperbolic self-effacement I detect?

In response to Atheist or Agnostic?
Comment author: AndySimpson 21 April 2009 11:30:59PM *  3 points [-]

I used to be worried about this, too. Then I found this beautifully concise term that resolves the whole question and ends semantic arguments over this arbitrary, imaginary distinction: agnostic atheist. This correctly describes me and I think it describes most other people who would call themselves agnostic or atheist. I encourage you to spread the term, and, when it's necessary or convenient, collapse the term into what you mean: atheist, which signifies only a lack of positive theism.

Also, Bertrand Russell explored this question thoroughly in his essay, "Am I an Atheist or an Agnostic?" I commend it as well for anyone who is confused about how to identify themselves.

On a side-quibble, I'm also careful about saying I'm "an atheist," with the article. I'm not "an" atheist in the same way a methodist is a methodist: my atheism doesn't mean I'm part of a discrete association of people. I don't go to atheist non-church with my fellow atheists on my unholy day. Think of how odd and even offensive it would seem, for instance, if we said each person with blue eyes was "a blue-eyed." Why? Socially, we would falsely be tagging him or her as merely a part of a greater faction of blue-eyed people. This is how nouns work in English: we have a set of social assumptions about "a doctor", but no such assumptions about "someone trained in medicine."

So "I am atheist" or, if you must, "I am agnostically atheist," work well.
