
Comment author: [deleted] 28 December 2014 04:31:01PM 0 points [-]

The universe is a big, big place. It also fragments into causally isolated regions relatively fast due to accelerating expansion. There will probably be many intelligences out there that even our most distant descendants will never meet.

Ultimately, though, you're making assumptions about the prior distribution of intelligent life that aren't warranted with a sample size of 1.

Comment author: lavalamp 04 January 2015 04:36:54PM 1 point [-]

An extremely low prior probability of life is an early great filter.

Comment author: lavalamp 29 October 2014 08:04:18PM 30 points [-]

Done. Thank you for running these.

Comment author: joaolkf 23 August 2014 03:23:05AM 0 points [-]

I have been saying this for quite some time. I regret not posting it first. It would be nice to have a more formal proof of all of this with utility functions, deontics and whatnot. If you are up for it, let me know: I could help, give feedback, or we could work together. Perhaps someone else has done it already. It has always struck me as pretty obvious, but this is the first time I've seen it stated like this.

Comment author: lavalamp 24 August 2014 12:01:05AM 0 points [-]

Check out the previous discussion Luke linked to: http://lesswrong.com/lw/c45/almost_every_moral_theory_can_be_represented_by_a/

It seems there's some question about whether you can phrase deontological rules consequentially; that would need to be settled to make this more formal. My first thought is that the formal version would say something along the lines of "you can achieve an outcome that differs by only X%, using a translation function that takes rules and spits out a utility function which is only polynomially larger." It's not clear to me how to define a domain in such a way as to allow you to compute that X%.
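
To make the intended translation a bit more concrete, here is a minimal Python sketch of a function that takes rules and spits out a utility function. The rule representation, the per-violation penalty, and all the names are assumptions made up for illustration, not anything established above.

```python
# Hypothetical sketch: translate deontological rules into a utility function.
# The rule/history representation and the penalty scheme are illustrative
# assumptions, not anything specified in this thread.

from typing import Callable, List

# A "rule" inspects a single action description and returns True if it is violated.
Rule = Callable[[str], bool]
# A "utility function" scores a world history, here just a list of action descriptions.
UtilityFunction = Callable[[List[str]], float]

def rules_to_utility(rules: List[Rule], penalty: float = 1e6) -> UtilityFunction:
    """Build a utility function that charges a large fixed penalty per rule violation."""
    def utility(history: List[str]) -> float:
        violations = sum(1 for action in history for rule in rules if rule(action))
        return -penalty * violations if violations else 0.0
    return utility

# Usage: a deontologist who abhors murder, expressed consequentially.
no_murder: Rule = lambda action: "murder" in action
u = rules_to_utility([no_murder])
print(u(["tell the truth", "murder"]))  # -1000000.0 (one violation)
print(u(["tell the truth"]))            # 0.0 (no violations)
```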

...unfortunately, as much as I would like to see people discuss the moral landscape instead of the best way to describe it, I have very little time lately. :/

Comment author: DanielLC 13 August 2014 11:51:45PM 8 points [-]

In principle, you can construct a utility function that represents a deontologist who abhors murder: you assign a large negative value to any outcome in which the deontologist commits murder. But it's kludgy. If a consequentialist talks about murder being bad, they mean that it's bad if anybody does it.

It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.

Instead, recognize that some ethical systems are better for some tasks.

If you choose your ethical system based on how it fulfils a task, you are already a consequentialist. Deontology and virtue ethics don't care about getting things done.

Comment author: lavalamp 18 August 2014 06:32:57PM 0 points [-]

(Sorry for slow response. Super busy IRL.)

If a consequentialist talks about murder being bad, they mean that it's bad if anybody does it.

Not necessarily. I'm not saying it makes much sense, but it's possible to construct a utility function that values agent X not having performed action Y, but doesn't care if agent Z performs the same action.
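
A minimal sketch of such an agent-relative utility function, in Python; the agent names, the action label, and the penalty value are all hypothetical placeholders.

```python
# Hypothetical sketch of an agent-relative utility function: it penalizes a
# history only when agent X performs the forbidden action Y, and ignores the
# same action performed by anyone else. All names are invented placeholders.

from typing import List, Tuple

Event = Tuple[str, str]  # (agent, action)

def agent_relative_utility(history: List[Event],
                           agent_x: str = "X",
                           forbidden_action: str = "Y") -> float:
    """Return a large negative value iff agent_x ever performs forbidden_action."""
    violated = any(agent == agent_x and action == forbidden_action
                   for agent, action in history)
    return -1e6 if violated else 0.0

print(agent_relative_utility([("X", "Y")]))  # -1000000.0: X doing Y is penalized
print(agent_relative_utility([("Z", "Y")]))  # 0.0: Z doing the same thing is ignored
```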

It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.

a) After reading Luke's link below, I'm still not certain if what I've said about them being (approximately) isomorphic is correct... b) Assuming my isomorphism claim is true enough, I'd claim that the "meaning" carried by your preferred ethical framework is just framing.

That is, (a) imagine that there's a fixed moral landscape. (b) Imagine there are three transcriptions of it, one in each framework. (c) Imagine agents would all agree on the moral landscape, but (d) in practice differ on the transcription they prefer. We can then pessimistically ascribe this difference to the agents preferring to make certain classes of moral problems difficult to think about (i.e., shoving them under the rug).

Deontology and virtue ethics don't care about getting things done.

I maintain that this is incorrect. The framework of virtue ethics could easily include the item "it is virtuous to be the sort of person who gets things done." And "Make things happen, or else" could be a deontological rule. (Just because most examples of these moral frameworks are lame doesn't mean the problem lies with the framework rather than the implementation.)

Comment author: blacktrance 14 August 2014 09:10:43PM 1 point [-]

You can take a set of object-level answers and construct a variety of ethical systems that produce those answers, but it still matters which ethical system you use because your justification for those answers would be different, and because while the systems may agree on those answers, they may diverge on answers outside the initial set.

Comment author: lavalamp 14 August 2014 10:57:57PM 0 points [-]

If indeed the frameworks are isomorphic, then this is just another case of humans allowing their judgment to be affected by an issue's framing. Which demonstrates only that there is a bug in human brains.

Comment author: NancyLebovitz 14 August 2014 01:20:35AM 2 points [-]

Can deontology and/or virtue ethics include "keep track of the effects of your actions, and if the results are going badly wrong, rethink your rules"?

Comment author: lavalamp 14 August 2014 10:35:46PM 0 points [-]

I think so. I know they're commonly implemented without that feedback loop, but I don't see why that would be a necessary "feature".

Comment author: Vulture 14 August 2014 12:38:04PM 1 point [-]

I don't believe the isomorphism holds under the (imo reasonable) assumption that rulesets and utility functions must be of finite length, correct?

Comment author: lavalamp 14 August 2014 09:57:55PM 0 points [-]

Which is why I said "in the limit". But I think, if it is true that one can make reasonably close approximations in any framework, that's enough for the point to hold.

Comment author: lukeprog 14 August 2014 12:05:45AM 5 points [-]
Comment author: lavalamp 14 August 2014 06:44:07AM 0 points [-]

Hm, thanks.

Comment author: shminux 13 August 2014 11:16:01PM *  0 points [-]

On isomorphism: every version of utilitarianism I know of leads to a repugnant conclusion of one kind or another, or even to several. I don't think that deontology and virtue ethics are nearly as susceptible. In other words, you cannot construct a utilitarian equivalent of an ethical system which is against suffering (without explicitly minimizing some negative utility) but does not prefer torture to dust specks.

EDIT: see the link in lukeprog's comment for the limits of consequentialization.

Comment author: lavalamp 13 August 2014 11:35:04PM 0 points [-]

Are you saying that some consequentialist systems don't even have deontological approximations?

It seems like rules of the form "Don't torture... unless by doing the torture you can prevent an even worse thing", together with a checklist for comparing badness, would do the job... so I'm not convinced?
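
As a toy illustration of that kind of exception-bearing rule, here is a small hypothetical sketch in which a stand-in "badness" comparison plays the role of the checklist; the numbers and names are assumptions, not anything from the discussion.

```python
# Toy sketch of an exception-bearing deontological rule: torture is forbidden
# unless a supplied "badness" comparison (the checklist) judges the alternative
# to be even worse. The badness scores are stand-in assumptions.

def torture_permitted(torture_badness: float, alternative_badness: float) -> bool:
    """Apply the rule "don't torture, unless the alternative is even worse"."""
    return alternative_badness > torture_badness

print(torture_permitted(torture_badness=100.0, alternative_badness=10.0))  # False
print(torture_permitted(torture_badness=100.0, alternative_badness=1e9))   # True
```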

Ethical frameworks are isomorphic

6 lavalamp 13 August 2014 10:39PM

I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this was gibberish.

I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).

Similarly, for any given deontological system, one can construct a set of virtues which will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder").

The opposite is also true. Given a virtue ethics system, one can construct deontological rules which will cause the same things to happen. And given deontological rules, it's easy to get a consequentialist system by predicting what the rules will cause to happen and then calling that your desired outcome.
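
As a rough illustration of these translations (and not a claim about how any framework must be represented), here is a toy Python sketch that treats rules, virtues, and outcomes as plain data; every name and representation in it is an assumption made for the example.

```python
# Toy sketch of the translations described above, with rules, virtues, and
# outcomes represented as plain strings. Purely illustrative assumptions.

from typing import Callable, List

Rule = str     # e.g. "don't murder"
Virtue = str   # e.g. "it is virtuous to be the sort of person who doesn't murder"

def rules_to_virtues(rules: List[Rule]) -> List[Virtue]:
    """Deontology -> virtue ethics: each rule becomes a character trait."""
    return [f"it is virtuous to be the sort of person who obeys '{rule}'"
            for rule in rules]

def virtues_to_rules(virtues: List[Virtue]) -> List[Rule]:
    """Virtue ethics -> deontology: each virtue becomes a rule to embody it."""
    return [f"act as someone for whom '{virtue}' holds" for virtue in virtues]

def rules_to_goal(rules: List[Rule],
                  predict: Callable[[List[Rule]], str]) -> str:
    """Deontology -> consequentialism: predict what following the rules causes,
    then call that outcome the desired one."""
    return predict(rules)

rules = ["don't murder", "keep promises"]
print(rules_to_virtues(rules)[0])
print(virtues_to_rules(rules_to_virtues(rules))[0])
print(rules_to_goal(rules, predict=lambda rs: f"a world in which everyone follows {rs}"))
```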

Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.

Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.

Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves the question of "what rules?", virtue ethics leaves the question of "what virtues?", and consequentialism leaves the question of "what outcome?". The hard part of ethics is answering those questions.

(ducks before accusations of misusing "isomorphic")
