Ethical frameworks are isomorphic
I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this is gibberish.
I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).
Similarly, for any given deontological system, one can construct a set of virtues which will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder").
The reverse mappings also work. Given a virtue-ethics system, one can construct deontological rules that will cause the same behavior. And given deontological rules, it's easy to get a consequentialist system: predict what following the rules will cause to happen, and call that your desired outcome.
Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.
Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.
Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves the question of "what rules?", virtue ethics leaves the question of "what virtues?", and consequentialism leaves the question of "what outcome?". The hard part of ethics is answering those questions.
(ducks before accusations of misusing "isomorphic")
AI Challenge: Ants
Aichallenge.org has started their third AI contest this year: Ants.
The AI Challenge is all about creating artificial intelligence, whether you are a beginning programmer or an expert. ... [Y]ou will create a computer program (in any language) that controls a colony of ants which fight against other colonies for domination. ... The current phase of the contest will end December 18th at 11:59pm EST. At that time submissions will be closed. Shortly thereafter the final tournament will be started. ... Upon completion the contest winner will be announced and all results will be publically available.
Ants is a multi-player strategy game set on a plot of dirt, with water for obstacles and food that drops randomly. Each player has one or more hills where ants spawn. The objective is to seek and destroy the most enemy ant hills while defending your own. Players must also gather food to spawn more ants; however, if all of a player's hills are destroyed, they can't spawn any more ants.
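To give a rough sense of what a bot does each turn, here is a minimal sketch of a greedy strategy: send every ant one step toward the nearest visible food. This is not the official starter-kit API; the coordinate representation and the names step_toward and choose_moves are invented for illustration, and a real entry would speak the contest's turn-based input/output protocol instead.

```python
# Hypothetical greedy "nearest food" policy for an Ants-like bot.
# NOTE: these names and data structures are made up for illustration;
# real contest bots communicate via aichallenge.org's text protocol.

def step_toward(ant, target):
    """Return a one-square move (dx, dy) that closes the gap to target."""
    ax, ay = ant
    tx, ty = target
    dx = (tx > ax) - (tx < ax)   # -1, 0, or +1
    dy = (ty > ay) - (ty < ay)
    # Move along one axis per turn, preferring the axis with the larger gap.
    return (dx, 0) if abs(tx - ax) >= abs(ty - ay) else (0, dy)

def choose_moves(my_ants, food):
    """Send each ant one step toward its nearest piece of food."""
    moves = {}
    for ant in my_ants:
        if not food:
            break
        nearest = min(food, key=lambda f: abs(f[0] - ant[0]) + abs(f[1] - ant[1]))
        moves[ant] = step_toward(ant, nearest)
    return moves

if __name__ == "__main__":
    print(choose_moves(my_ants=[(2, 3), (10, 10)], food=[(5, 3), (9, 12)]))
    # -> {(2, 3): (1, 0), (10, 10): (0, 1)}
```

A real entry would of course also need pathfinding around water, hill defense, and combat logic, but a food-gathering loop like the one above is the core of a first working bot.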
I mentioned this in the open thread, and there was a discussion about possibly making one or more "official" LessWrong teams. D_Alex has offered a motivational prize. If this interests you, please discuss in the comments!
First, they must be convinced to play the game
I recall seeing, in one of the AI-boxing discussion threads, a comment to the effect that the first step for EY in getting out was to convince the other party to play the game at all.
It has since occurred to me that this applies to a lot of my interactions. Many people who know me IRL, and who know of a belief of mine that they disagree with and do not want to be convinced of, adopt the strategy of not talking with me about it at all. For me to convince one of these people of something, first I have to convince them to talk about it at all.
(Note: I don't think this is because I'm an unpleasant person to converse with. The excuses given are along the lines of "I never win an argument with you" and "you've studied it a lot more than I have, so it's an unfair discussion". I don't think I'm claiming anything too outlandish here; average humans are really bad at putting rational arguments together.)
I suppose the general form is: in order to convince someone of a sufficiently alien (to them) P, first you must convince them to seriously think about P. This rule may need to be applied recursively (e.g., "seriously think about P" may require one or more LW rationality techniques).
As a practical example, my parents are very religious. I'd like to convince them to sign up for cryonics. I haven't (yet) come up with an approach that I expect to have a non-negligible chance of success. But the realization that the first goalpost along the way is to get them to seriously engage in the conversation at all simplifies the search space. (Deconversion and training in LW rationality would, of course, have the best chance of success, but it still has a high chance of failing, and I judge that a failure would probably have a large negative impact on my relationship with my parents in their remaining years. That's why I'd like to convince them of just this one thing.)
I realize that this is a fairly obvious point (an application of it, raising the sanity waterline, is the point behind this entire site!), but I haven't seen it explicitly noted as a general pattern, and now that I have noted it, I see it everywhere; hence this post.
Cryonic suspension where?
I want to sign up for cryonic suspension. I haven't done so yet because I haven't been able to decide which organization to use. I'm not expecting you guys to choose for me, but it would be very helpful if those of you who are signed up (or will sign up) would say which organization you went with and why.
I've found the following three organizations. Did I miss any that I should be considering?
I'm in the US Midwest, if that makes a difference. My goal is to maximize the chance of this working; my sub-goal is to spend the least amount of money. I'm not old yet, so I expect to be able to get funding through life insurance.