Comment author: the-citizen 03 November 2014 04:25:46AM *  0 points [-]

Thanks for the group selection link. Unfortunately I'd have to say, to the best of my non-expert judgement, that the current trend in the field disagrees somewhat with Eliezer in this regard. The 60s group selection was definitely overstated and problematic, but quite a few biologists feel this resulted in the idea being ruled out entirely, in a kind of overreaction to the original mistakes. Even Dawkins, who's traditionally dismissed group selection, acknowledged it may play more of a role than he previously thought. So it's been refined and is making a bit of a comeback, despite opposition. Of course, only a few point to it as the central explanation for altruism, but the result of my own investigation makes me think the biological component of altruism is best explained by a mixed model of group selection, kin selection and reciprocation. We also haven't really got a reliable map of the nature/nurture split in altruism either, so I suspect the field will "evolve" further.

I've read the values argument. I acknowledge that no one is claiming the truth is BAD exactly, but my suggestion here is that unless we deliberately and explicitly weigh it into our thought process, even when it has no apparent utility, we run into unforeseeable errors that compound upon each other without our awareness of them doing so. Crudely put, lazy approaches to the truth come unstuck, but we never realise it. I take it my post has failed to communicate that aspect of the argument clearly? :-(

Oh, I'll add that we agree in most regards on the topic.

Comment author: Jackercrack 03 November 2014 10:44:11AM -1 points [-]

Really? I was not aware of that trend in the field, maybe I should look into it.

Well, at least I understand you now.

Comment author: Lumifer 02 November 2014 07:59:52PM *  1 point [-]

What is lacking is evidence that this particular government actually achieves those aims.

Which "this particular government"? I don't think I'm advocating any specific government. May I point you here?

Your belief must be falsifiable

My preferences neither are nor need to be falsifiable.

why do you believe what you believe?

Why do I believe what?

Comment author: Jackercrack 03 November 2014 12:41:37AM -1 points [-]

That large government is worse than small government.

Comment author: Lumifer 02 November 2014 12:25:49AM *  1 point [-]

Which particular theory? You asked why I want to reduce the power of the government and what that means. I tried to answer to the best of my ability, but there is no falsifiable theory about my values. They are what they are.

Comment author: Jackercrack 02 November 2014 09:27:30AM 1 point [-]

A theory of government is not a terminal value, it is an instrumental one. You believe that that particular form of government will make people happy/autonomous/free/healthy/whatever your value system is. What is lacking is evidence that this particular government actually achieves those aims. It's a reasonable a priori argument, but so are dozens of other arguments for other governments. We need to distinguish which reality we are actually living in. By what metric can your goals be measured, and where would you expect them to be highest? Are there countries/states trying this, and what is the effect? Are there countries doing the exact opposite, and what would you expect the result of that to be? Your belief must be falsifiable or else it is permeable to flour and meaningless. Stage a full crisis of faith if you have to. No retreating into a separate magisterium: why do you believe what you believe?

Comment author: Lumifer 01 November 2014 10:42:43PM 1 point [-]

I believe you were talking about optimal levels of power when compared to growth?

Not at all. I was talking about optimal levels of power from the point of view of my system of values.

Comment author: Jackercrack 01 November 2014 10:54:05PM 0 points [-]

Right, well would you please continue? I believe the question that started all this off was how do you know said theory corresponds to reality.

Comment author: Lumifer 01 November 2014 10:41:33PM 1 point [-]

Huh? Neuroscientists know my terminal values better than I do because they studied brains?

Sorry, that's nonsense.

Comment author: Jackercrack 01 November 2014 10:52:36PM -2 points [-]

Not yours specifically, but the general average across humanity. lukeprog wrote up a good summary of the factors correlated with happiness, which you've probably read, as well as an attempt to discern the causes. Not that happiness is the be-all and end-all of terminal values, but it certainly shows how little the average person knows about what they would actually be happy with vs what they think they'd be happy with. I believe that small sub-sequence on the science of winning at life contains far more than the average person knows on the subject, or else people wouldn't give such terrible advice.

Comment author: Lumifer 01 November 2014 08:14:35PM *  1 point [-]

That's true, it seems in England and Wales the number of police officers dropped by about 10% since the peak of 2009 (source).

Comment author: Jackercrack 01 November 2014 09:53:11PM -1 points [-]

Right, it's time we got back on track. Now that we're using the same definition of power, we've come to the conclusion that a reduction in tax revenues can reduce the physical projection of power, but is unlikely to remove the laws that determine the maximum level of power legally allowed to be projected.

I believe you were talking about optimal levels of power when compared to growth?

Comment author: Lumifer 01 November 2014 07:37:51PM 1 point [-]

Many people do not know their own terminal values.

Is there an implication that someone or something does know? That strikes me as awfully paternalistic.

Comment author: Jackercrack 01 November 2014 09:50:01PM -2 points [-]

It's a statement of fact, not a political agenda. Neuroscientists know more about people's brains than normal people do, as a result of spending years and decades studying the subject.

Comment author: the-citizen 01 November 2014 03:07:23PM *  0 points [-]

Well, I've done Map & Territory and have skimmed through random selections of other things. Pretty early days, I know! So far I've not run into anything particularly objectionable to me or conflicting with any of the decent philosophy I've read. My main concern is this "truth as incidental" idea. I just posted on this topic: http://lesswrong.com/lw/l6z/the_truth_and_instrumental_rationality/

Comment author: Jackercrack 01 November 2014 03:46:34PM -1 points [-]

Ah, I think you may have gotten the wrong idea when I said truth was incidental: that a thing is incidental does not stop it from being useful and a good idea, it is just not a goal in and of itself. Fortunately, no one here is actually suggesting active self-deception as a viable strategy. I would suggest reading Terminal Values and Instrumental Values. Truth-seeking is an instrumental value, in that it is useful for reaching whatever your actual terminal goals are. So far as I can tell, we actually agree on the subject for all relevant purposes.

You may also want to read The Tragedy of Group Selectionism.

Comment author: skeptical_lurker 31 October 2014 01:12:37PM 4 points [-]

I shall be attending (90% confidence).

Anyway, I looked through the questions, and, well, please take this as constructive criticism, but I'd have no idea about the truth of most of those statements, and they mostly seem to be fairly dry statistics. I dunno what the people who actually attended the last meeting thought, but I'd suggest maybe something more like a geeky pub-quiz with probability estimates?

Comment author: Jackercrack 01 November 2014 11:16:04AM 2 points [-]

Statements can still be used for calibration even if you don't know the answer, but it's always more fun if you have at least an inkling of the answer. It's always good to add more fun to things like this, so any chance I could convince you to bring along some of the type of questions you think would be good?
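The point about using statements for calibration even without knowing the answers can be made concrete with a scoring rule. Below is a minimal sketch using the Brier score; the sample answers are entirely hypothetical, but they show how a well-calibrated "I don't know" (stating 50%) scores better than confident wrong answers.

```python
# Minimal calibration-scoring sketch: each answer is a stated probability
# that the statement is true, scored with the Brier rule (squared error
# against the 0/1 outcome). Lower scores are better.
def brier(prob_true: float, outcome: bool) -> float:
    """Squared error between the stated probability and the 0/1 outcome."""
    return (prob_true - (1.0 if outcome else 0.0)) ** 2

# Hypothetical answers: (stated probability the statement is true, truth)
answers = [(0.9, True), (0.5, False), (0.8, False)]

score = sum(brier(p, o) for p, o in answers) / len(answers)
print(f"mean Brier score: {score:.3f}")  # prints 0.300
```

A guesser who honestly says 0.5 on every question scores 0.25 per question, so the dry-statistics questions still measure calibration, just not knowledge.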

Comment author: the-citizen 01 November 2014 05:44:33AM 0 points [-]

Thanks for the interesting comments. I've not been on LW for long, and so far I'm being selective about which sequences I'm reading. I'll see how that works out (or will I? lol).

I think my concern with the truthiness part of what you say is that it assumes we can accurately predict the consequences of a non-truth belief decision. I think that's rarely the case. We are rarely given personal corrective evidence, though, because it's the nature of a self-deception that we're oblivious to having screwed up. Applying a general rule of truthiness is a far more effective approach imo.

Comment author: Jackercrack 01 November 2014 10:17:03AM 0 points [-]

Agreed, a general rule of truthiness is definitely a very effective approach, probably the most effective one, especially once you've started down the path. So far as I can tell, stopping halfway through is... risky in a way that never having started is not. I only recently finished the sequences myself (apart from the last half of QM). At the time of starting I thought it was essentially the age-old trade-off between knowledge and happy ignorance, but at some point in reading the stuff I hit critical mass, and now I'm starting to see how I could use knowledge to have more happiness than if I was ignorant, which I wasn't expecting at all. Which sequences are you starting with?

By the way, I just noticed I screwed up on the survey results: I read the standard deviation as the range. IQ should be mean 138.2 with SD 13.6, implying 95% are above 111 and 99% above 103.5. It changes my first argument a little, but I think the main core is still sound.
