Happiness and Children
There is a lot of research on this topic. I know later studies contradict earlier ones, but I'm not sure whom to believe here. Wondering if anyone can help.
In case it isn't clear, the basic question is the effect of having children on happiness.
Asking about polyamory in Melbourne
What communities are there where one can find polyamorous dating in Melbourne? I've decided to give it a try, primarily because my poor social skills mean I should go for the highest possible chance of success rather than anything else for the time being. To the extent that I have options, I'm requesting the best choice for somebody with good academic skills but very poor social skills and Asperger's Syndrome.
Noting that I consider success very unlikely under my circumstances- but given this is LessWrong, we should all be aware that sometimes it is worth attempting something unlikely, depending on the potential payoff. At the very least, a lower acceptance threshold means that in the worst-case scenario it serves as metaphorical training wheels for monogamous dating.
Skepticism about Probability
I've raised arguments for philosophical scepticism before, which have mostly been met with a Popper-esque counterargument: even if we don't know anything with certainty, we can have legitimate knowledge of probabilities.
The problem with this, however, is how you answer a sceptic about the notion that probability has any correlation with reality. Probability depends upon the axioms of probability- how are said axioms to be justified? It can't be by definition, or probability has no correlation to reality.
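For concreteness, the axioms in question are standardly taken to be Kolmogorov's- a minimal sketch, for a probability function P over a space of events:
(1) P(E) >= 0 for every event E (non-negativity);
(2) P(Omega) = 1, where Omega is the set of all possible outcomes (normalisation);
(3) P(E1 or E2) = P(E1) + P(E2) whenever E1 and E2 are mutually exclusive (additivity).
The sceptical question is then why a function satisfying these three rules should track anything about reality.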
Historical/Rationalistic Assessment Question
Assume a highly rational actor with as much knowledge of the world as they could realistically have. Roughly when is the 'turning point' in history after which they should be able to clearly realise that Western democracy is superior to European-style monarchy from the perspective of human welfare?
Clarification: By 'superior', I mean 'overall superior'- that a variant of Western democracy is a better sort of system to start from when trying to design an ideal system for a country than a European-style monarchy is.
Criticisms of the Metaethics
I'll admit that I'm using the LessWrong board to try to figure out flaws in my own philosophical ideas. I should also make a disclaimer that I do not dispute the usefulness of Eliezer's ideas for the purposes of building a Friendly AI.
My criticisms are designed for other purposes- namely, to show that, contrary to what I am led to believe most of this site believes, Eliezer's metaethics does not work for solving ethical dilemmas except as a set of arbitrary rules, and is in no way the stand-out best choice compared to any other self-consistent deontological or consequentialist system.
I'll also admit that I have something of a bias, for those looking in- I find it an interesting intellectual challenge to look through philosophies and find weak points in them, so I may have been over-eager to find a flaw that doesn't exist. I have been attempting to find an appropriate flaw for some time, as some of my posts may have foreshadowed.
Finally, I will note that I am attempting to confine my attacks to Eliezer's ethics, despite its connections to Eliezer's epistemology.
---------------------------------------
1: My Basic Argument
Typically, people ask two things of ethics- a reason to be ethical in the first place, and a way to resolve ethical dilemmas. Eliezer gets around the former by, effectively, appealing to the fact that people want to be moral even if there is no universally compelling argument.
The problem with Eliezer's metaethics is based around what I call the A-case, after the character I invented for it when I first thought up this idea. A has two options- Option 1 is the best choice from a consequentialist perspective, and A is smart enough to figure that out. However, following Option 1 would make A feel very guilty for some reason (which A cannot overcome merely by thinking about it), whereas Option 2 would feel morally right on an emotive level.
This, of course, implies that A is not greatly influenced by consequentialism- but that's quite plausible. Perhaps you have to be irrational to be an intelligent non-consequentialist, but an irrational non-consequentialist smart enough to perform a utility calculation as a theoretical exercise is plausible.
How can we say that the right thing for A to do is Option 1, in such a way as to be both rational and in any way convincing to A? Given the premises, it is likely that any possible argument will be rejected by A in a manner such that you can't claim A is being irrational.
This can also be used against any particular deontological code- in fact more effectively, due to greater plausibility- by substituting it for consequentialism in the scenario and claiming that according to said code Option 1 is A's moral duty. You can define 'should' all you like, but A is using a different definition of 'should' (not part of the opening scenario, but a safe inference except for a few unusual philosophers). You are talking about two different things.
-----------------------------------------------------------------
2: Addressing Counterarguments
i:
It could be argued that A has a rightness function which, on reflection, will lead A to embrace consequentialism as best for humanity as a whole. This is, however, not necessarily correct- to use an extreme case, what if A is being asked to kill A's own innocent lover, or her own baby? ("Her" because the intuition is likely much stronger that way.) Some people in A's position have such rightness functions- it is easily possible that A does not.
In addition, a follower of LessWrong morality in its standard form faces a dilemma here. If you say that A is still morally obliged to kill her own baby, then Eliezer's own arguments can be turned against you- one still pulls a child off the train tracks regardless of any 'objective' right. If you say she isn't, you've conceded the case.
A deontological theory is either founded on intuitions or it is not. If not, Hume's is-ought distinction refutes it. If it is, then it faces similar dilemmas in scenarios like this. Intuitions, however, do not add up to a logically consistent philosophy- "moral luck" (the idea that a person can be more or less morally responsible based on factors outside their control) feels like an oxymoron at first, but many intuitions depend on it.
ii:
One possible counterargument is that A wants to do things in the world, and merely following A's feelings turns A into a morality pump, taking actions which don't make sense. However, there are several problems with this.
i- A's actions probably make sense from the perspective of "Make A feel morally justified". A can't self-modify (at least not directly), after all.
ii- Depending on the strength of the emotions, A does not necessarily care even if A is aware of the inconsistencies in their actions. There are plenty of possible cases- a person dealing with those with whom they have close emotional ties, biases related to race or physical attractiveness, condemning large numbers of innocents to death, etc.
iii:
A final counterargument would be that the way to solve this is through a Coherentist-style Reflective Equilibrium. Even if Coherentism is not epistemically true, by treating intuitions as if it were true and following Coherentist philosophy, the result could feel satisfying. The problem is- what if it doesn't? If a person's emotions are strong enough, no amount of Reflective Equilibrium can override them.
If you take an emotivist position, however, you face the problem that Emotivism has no solution when feelings contradict each other.
------------------------------------------------------------------
3: Conclusions
My contention here is that we have a serious problem. The concept of right and wrong is like the concept of personal identity- merely something to be abolished for a more accurate view of what exists. It can be replaced with "Wants" (for people who have not a unified moral system but merely various feelings), "Moralities" (systematic moral codes which are internally coherent), and "Pseudo-Moralities"- with no objective morality, even in the Yudkowskyite sense, existing.
A delusion of morality exists in most human minds, of course- just as a delusion of personal identity exists in most if not all human minds. "Moralities" can still exist in terms of groups of entities who all want similar things or agree on basic moral rules, which can be taken to their logical conclusions.
Why can that not lead to morality? It can, but if you accept a morality on that basis, it implies that rational argument (as opposed to emotional argument, which is a different matter) is in many cases entirely impossible between humans with different moralities, just as it is with aliens.
This leaves two types of rational argument possible about ethical questions:
-Demonstrating that a person would want something different if they knew all the facts- whether facts such as "God doesn't exist", facts such as "This action won't have the consequences you think it will", or facts about the human psyche.
-Showing a person's Morality has internal inconsistencies, which in most people will mean they discard it. (With mere moral Wants this is more debatable)
Arguably it also leads to a third- demonstrating to a person that they do not really want what they think they want. However, this is a philosophical can of worms which I don't want to open (metaphorically speaking), both because it is highly complicated (I can think of plenty of arguments against the possibility of such a demonstration, even if I am not so convinced they are true as to assert them) and because solving it does not contribute much to the main issue.
Even on that basis, however, Eliezer's morality cannot work. In any scenario where an individual B:
i- Acts against Eliezer's moral code
ii- Feels morally right about doing so, and would have felt guilty for following Eliezer's ideas
Then B can argue against somebody trying to use Eliezer's ideas against them by pointing out that, regardless of any Objective Morality, Eliezer still has a good case for dragging children off train tracks.
I will not delve into what proportion of humans can be said to make up a single Morality by virtue of having basically similar premises and intuitions. Although there are reasons to doubt it is as large as you'd think (take the A-case), I'm not sure whether it would work.
In conclusion- there is no Universally Compelling Argument amongst humans, or even amongst rational humans.
Requesting advice- A Philosophy Idea
I'm not sure about this, but presenting it anyway for scrutiny.
I was thinking that it doesn't matter if a concept is undefined, or even cannot be defined: if, hypothetically speaking, said concept could exist without any ambiguity within it, then it is still a tenable concept. If this is true, the implication would be that it knocks down Quine's argument against the analytic-synthetic distinction.
Your thoughts, LessWrong?
Requesting clarification- On the Metaethics
My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue.
There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Eliezer's Metaethics sequence- as far as I can tell, a deontologist could agree with just about everything in the Sequences.
Said deontologist would argue that, to the extent that a universal human morality can exist through generalised moral instincts, said instincts tend to be deontological (as supported by scientific studies- a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, whom they could accuse of wanting a consequentialist system and ignoring the moral instincts at the basis of their own speculations.
I'm not completely sure about this, but if I have indeed misunderstood, the misunderstanding seems important enough to deserve clearing up.
LessWrong Philosophy and Personal Identity
Although Eliezer has dealt with personal identity questions (in terms of ruling out the body theory), he has not actually, as far as I know, "solved" the problem of personal identity as it is understood in philosophy. Nor, as far as I know, has any thinker broadly in the same school of thought (Robin Hanson, Yvain, etc.).
Why do I think it worth solving? One- LessWrong has a tradition of trying to solve all of philosophy by thinking better than philosophers do. Even when I don't agree with the result, it is often enlightening. Two- what counts as 'the same person' could easily have significant implications for a large number of ethical dilemmas, and thus for LessWrongian ethics.
Three- most importantly of all, the correct theory has practical implications for cryonics. I don't know enough to assert any theory as actually true, but if, say, Identity as Continuity of Form rather than of Matter were the true theory, it would mean that preserving only the mental data would not be enough. What kind of preservation is necessary also varies somewhat- the difference in requirements between a Continuity of Consciousness and a Continuity of Psyche theory, for example, should be obvious.
I'm curious what people here think. What is the correct answer? No-self theory? Psyche theory? Derek Parfit's theory in some manner? Or if there is a correct way to dissolve the question, what is that correct way?
Greatest Philosopher in History
Since LessWrong is a major congregation point for certain philosophical ideas, and because people here tend to be more objective (in the sense of not being self-deluded) than elsewhere, I thought I'd ask people's views.
To be clear, by "Greatest Philosopher" I am referring not to the most correct philosopher in human history, but to the one who deserves the most credit for advancing human philosophy towards being more true.
Off the top of my head, a prime candidate would be Hume- amongst other things, he rejected the idea of a soul, realised to a much greater extent than his predecessors the limits of human knowledge, and opposed the idea that reason is somehow an objective force that can set priorities independently of emotions.
Aristotle deserves considerable credit relative to his time, but doesn't make the list because- although it wasn't his fault- his ideas were dogmatically accepted and later held back both science and philosophy.
Your thoughts?
Requesting advice: Doing Epistemology Right (Warning: Abstract mainstream Philosophy herein)
I have naturally read the material here, but am still not sure how to act on three questions.
1: I've been arguing out the question of Foundationalism vs. Coherentism vs. other similarly basic methods of justifying knowledge (e.g. Infinitism, Pragmatism). The discussion left off with two problems for Foundationalism.
a: The Evil Demon argument, particularly the problem of memory. When following any piece of reasoning, an Evil Demon could theoretically fool my reason into thinking it had reasoned correctly when it hadn't, or fool my memory into thinking I'd previously reasoned properly using reasoning I'd never actually done. Since a Foundationalist either is a weak Foundationalist (and runs into severe problems) or must discard all but self-evident and incorrigible assumptions (of which memory is not one), I'm stuffed.
(Then again, it has been argued that if a Coherentist were deceived by an evil demon, they could be deceived into thinking data coheres when it doesn't. Since their belief rests upon the assumption that their beliefs cohere, should they not discard it if they can't know whether it coheres or not? The 'seems to cohere' formulation has its own problems.)
b: Even if that's discarded, there is still the problem of how Strong Foundationalist beliefs are justified within a Strong Foundationalist system. Strong Foundationalism is itself neither self-evident nor incorrigible, after all.
I know myself well enough to know I have an unusually strong (even for a non-rationalist) irrational, emotive bias in favour of Foundationalism, and even I am beginning to suspect I've lost the argument (though some people arguing on my side would disagree). Just to confirm, though- have I lost? And either way, what should I do now?
2: What should I say on the question of skepticism (on which so far I've technically said nothing)? If I remember correctly, Eliezer has spoken of philosophy as being about how to act in the world, but I'm arguing with somebody who maintains as an axiom that the purpose of philosophy is to find truth, whether useful or useless, in whatever area is under discussion.
3: Finally, how do I speak intelligently on the Contextualist vs. Invariantist problem? I can see in outline that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.