Comment author: Ghatanathoah 02 January 2013 11:34:02PM *  2 points [-]

I thought of this dilemma while trying to sleep, and afterwards found it impossible to fall asleep because I couldn't stop thinking about it. For the rest of the day I had trouble doing anything because I couldn't stop worrying about it.

I think the problem might be that most people seem to feel safe when discussing these sorts of dilemmas: they're thinking about them in Far Mode and just consider them interesting intellectual toys. I used to be like that, but in the past couple of years something has changed. Now when I consider a dilemma I feel like I'm in actual danger; I feel the sort of mental anguish you'd feel if you actually had to make that choice in real life. I feel as if I had actually been offered the Lifespan Dilemma and really do have to choose whether to accept it or not.

I wouldn't worry about the Lifespan Dilemma affecting most people this way. My family has a history of Obsessive Compulsive Disorder, and I'm starting to suspect that I've developed the purely obsessional variety. In particular, my freakouts match the "religiosity" type of POOCD, except that since I'm an atheist I worry about philosophical and scientific problems rather than religious ones. Other things I've freaked out about include:

-Population ethics

-Metaethics

-That maybe various things I enjoy doing are actually as valueless as paperclipping or cheesecaking.

-That maybe I secretly have simple values and want to be wireheaded, even though I know I don't want to be.

-Malthusian brain emulators

These freakouts are always about some big abstract philosophical issue; they are never about anything in my normal day-to-day life. Generally I obsess about one of these things for a few days until I reach some sort of resolution about it. Then I behave normally for a few weeks until I find something new to freak out over. It's very frustrating because I have a very high happiness set point when I'm not in one of these funks.

Comment author: crap 03 January 2013 11:33:01AM *  2 points [-]

Look. Simple utilitarianism doesn't have to be correct. It looks like a wrong idea to me. Often, when reasoning informally, people confabulate wrong formal-sounding things that loosely match their intuitions, and then declare them normative.

Is a library of copies of one book worth the same to you as a library of distinct books? Is a library of books by a single author worth as much? Does variety ever truly count for nothing? There's no reason why u("AB") should be equal to u("A")+u("B"). People pick + because they are bad at math, or perhaps bad at knowing when they are being bad at math. edit: When you try to math-ize your morality, poor knowledge of math serves as Orwellian newspeak: it defines the way you think. It is hard to choose the correct function even if there were one, and years of practice on too-simple problems make wrong functions pop into your head.
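A minimal sketch of the non-additivity point (my own illustration, not from the comment above; the square-root aggregation is an arbitrary assumption chosen only to model diminishing returns on duplicates):

```python
# Toy illustration: a library's value need not be additive over books.
# Assumption (hypothetical): extra copies of the same title add value with
# diminishing returns, while new distinct titles add full value.
from collections import Counter
from math import sqrt

def library_value(books):
    """Sum a diminishing-returns score over each distinct title's copy count."""
    counts = Counter(books)
    return sum(sqrt(n) for n in counts.values())

print(library_value(["A", "B"]))   # 2.0   -- two distinct titles
print(library_value(["A", "A"]))   # ~1.41 -- two copies of one title
print(2 * library_value(["A"]))    # 2.0   -- naive addition overvalues the copies
```

Under this assumed function, u("AB") > u("AA") even though simple addition would score them identically, which is the sense in which u("AB") need not equal u("A")+u("B").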

Comment author: simplicio 02 January 2013 10:48:32PM 2 points [-]

Sure. The point is that "A->B; A, therefore B" is necessarily valid.

Unlike, say, "the risk of something happening is proportional to the number of times I've heard it mentioned."

Calling logic a set of heuristics dissolves a useful semantic distinction between normatively correct reasoning and mere rules of thumb, even if you can put the two on a spectrum.

Comment author: crap 02 January 2013 11:10:26PM *  2 points [-]

Ohh, I agree. I just don't think that there is a corresponding neurological distinction. (Original quote was about evolution).

Comment author: simplicio 02 January 2013 10:38:57PM 1 point [-]

A heuristic is a "rule of thumb," used because it is computationally cheap for a human brain and returns the right answer most of the time.

Analytical thinking uses heuristics, but is distinctive in ALSO using propositional logic, probabilistic reasoning, and mathematics - in other words, exceptionless, normatively correct modes of reasoning (insofar as they are done well) that explicitly state their assumptions and "show the work." So there is a real qualitative difference.

Comment author: crap 02 January 2013 10:42:58PM 2 points [-]

Propositional logic is made of many very simple steps, though.

Comment author: James_Miller 01 January 2013 07:23:05PM *  9 points [-]

Possibly, but Stanovich thinks that most heuristics were basically given to us by evolution, and that rather than choosing among heuristics, what we do is decide whether to (use them and spend little energy on thinking) or (not use them and spend a lot of energy on thinking).

Comment author: crap 02 January 2013 12:01:58PM *  3 points [-]

What is analytical thinking, but a sequence of steps of heuristics well vetted not to lead to contradictions?

Comment author: p4wnc6 28 December 2012 06:00:50PM 1 point [-]

I guess one could just expand the question like so:

1) Avoiding combat where I cause harm or death is the first priority, so if I have to go to jail or shoot myself in the foot to avoid it, so be it and if it comes to that, it's what I'll do. This is priority number one.

2) I can do things to improve my odds of never needing to face the situation described in (1) and to the extent that the behaviors are expedient (in a cost-benefit tradeoff sense) to do in my life, I'd like to do them now to help improve odds of (1)-avoidance later. Note that this in no way conflicts with being a genuine pacifist. It's just common sense. Yes, I'll avoid combat in costly ways if I have to. But I'd also be stupid to not even explore less costly ways to invest in combat-avoidance that could be better for me.

3) To the extent that (2) is true, I'd like to examine certain options, like donating to charities that assist with legal issues in conscientious objection, or which extend mental illness help to affected veterans, for their efficacy. There is still a cost to these things and given my conscientious objection preferences, I ought to weigh that cost.

I appreciate your willingness to engage me on the actual point of my question, rather than solely looking at the signal faker aspect like other commenters. But I still think there's much to discuss here.

Comment author: crap 28 December 2012 07:05:26PM 0 points [-]

Replied in PM.

Comment author: crap 28 December 2012 05:19:41PM *  1 point [-]

If worst comes to worst, refuse to sign any papers whatsoever and you'll go to prison for a few years. Or shoot yourself in the foot by accident, which flips the burden of proof. It's called non-violent resistance. I don't think the US would allow any other form of objection (edit: besides e.g. being Amish). There are two types of conscription: total-war conscription to win an important war where you have a lot to lose, which would go nuclear within the first hour, and the majority enslaving a minority, the only type of conscription possible in the US.

In response to comment by crap on META: Deletion policy
Comment author: handoflixue 27 December 2012 07:38:28PM 0 points [-]

If that were true, LessWrong would have such an INCREDIBLY HUGE advantage over most every major religion. LessWrong hasn't managed to raise armies and invade sovereign nations yet, after all.

Thinking in those terms, it makes me strongly suspect anyone turned away by a single bad post is engaging in some VERY motivated cognition, and probably would not have stayed long. (A high noise:signal ratio, on the other hand, would be genuinely damaging)

Comment author: crap 27 December 2012 10:26:37PM 1 point [-]

No one here felt distraught with religion? Not even a little? :)

Comment author: DanArmak 27 December 2012 02:16:09PM 2 points [-]

Different people can disagree about pretty much any moral question. Any one person's morality may be stable enough not to arrive at both A and ~A, but since the result still depends most of all on that person's upbringing and culturally endorsed beliefs, morality is not very useful as logic. (Of course it is useful as morality: our brains are built that way.)

Comment author: crap 27 December 2012 02:39:52PM 1 point [-]

The difference in values is a little overstated, I think. Practically, there's little difference in what people say they'd do in the Milgram experiment, but a huge difference in what they actually do.

Comment author: Wei_Dai 26 December 2012 06:32:23PM 12 points [-]

It seems to me the occasional crazy idea posted here wouldn't reflect that badly on CFAR and SIAI, if they had a policy of "LW is an open forum and we're not responsible for other people's posts", especially if the bad ideas are heavily voted down and argued against, with the authors often apologizing and withdrawing their own posts.

Comment author: crap 27 December 2012 10:32:52AM 1 point [-]

A crazy idea reflects badly on the ideology that spawned the crazy idea.

In response to comment by [deleted] on Morality Isn't Logical
Comment author: Wei_Dai 27 December 2012 01:23:15AM 2 points [-]

I should have given some examples of the kind of moral reasoning I'm referring to.

Comment author: crap 27 December 2012 09:48:32AM *  1 point [-]

The 1st link is ambiguity aversion.

Morality is commonly taken to describe what one will actually do when trading off private gains against other people's losses. See this as an example of moral judgement. Suppose Roberts is smarter. He will quickly see that he can donate 10% to charity, and it'll take longer for him to reason about the value of the cash that was not given to him (reasoning that may stop him from pressing the button), so there will be a transient during which he pushes the button, unless he somehow suppresses actions during transients. It's an open-ended problem, 'unlike logic', because consequences are difficult to evaluate.

edit: been in a hurry.
