Because a larger government takes more of my money, because it limits me in certain areas where I would prefer not to be limited, and because it has scarier and more probable failure modes.
It finally makes sense: you're looking at it from a personal point of view. Consider it from the view of the average wellbeing of the entire populace. Zoom out to consider the entire country, the full system of which the government is just a small part. A larger government has more probable failure modes, but a small one simply outsources its failure modes to companies and extremely rich individuals. Power abhors a vacuum.
You and I are not large enough or typical enough for considerations about our optimality to enter into the running of a country. People are essentially unchanging; the average level of humanity rises only slowly. The only realistic way to improve their lot is to change the situation in which their decisions are made. The structure of the system they flow through is too important to be left to market forces and random chance. I don't care much if it inconveniences me, so long as on average the lot of humanity is improved.
Edit: I fully expect you to disagree with me, but at least that's one mystery solved.
What is lacking is evidence that this particular government actually achieves those aims.
Which "this particular government"? I don't think I'm advocating any specific government. May I point you here?
Your belief must be falsifiable
My preferences neither are nor need to be falsifiable.
why do you believe what you believe?
Why do I believe what?
That large government is worse than small government.
Which particular theory? You asked why I want to reduce the power of the government and what that means. I tried to answer to the best of my ability, but there is no falsifiable theory about my values. They are what they are.
A theory of government is not a terminal value, it is an instrumental one. You believe that that particular way of government will make people happy/autonomous/free/healthy/whatever your value system is. What is lacking is evidence that this particular government actually achieves those aims. It's a reasonable a priori argument, but so are dozens of other arguments for other governments. We need to distinguish which reality we are actually living in. By what metric can your goals be measured, and where would you expect them to be highest? Are there countries/states trying this, and what is the effect? Are there countries doing the exact opposite, and what would you expect to be the result of that? Your belief must be falsifiable, or else it is untethered from reality and meaningless. Stage a full crisis of faith if you have to. No retreating into a separate magisterium: why do you believe what you believe?
I believe you were talking about optimal levels of power when compared to growth?
Not at all. I was talking about optimal levels of power from the point of view of my system of values.
Right, well would you please continue? I believe the question that started all this off was how do you know said theory corresponds to reality.
Huh? Neuroscientists know my terminal values better than I do because they studied brains?
Sorry, that's nonsense.
Not yours specifically, but the general average across humanity. lukeprog wrote up a good summary of the factors correlated with happiness, which you've probably read, as well as an attempt to discern the causes. Not that happiness is the be-all and end-all of terminal values, but it certainly shows how little the average person knows about what they would actually be happy with vs what they think they'd be happy with. I believe that small sub-sequence on the science of winning at life contains far more than the average person knows on the subject, or else people wouldn't give such terrible advice.
Right, it's time we got back on track. Now that we're using the same definition of power, we've come to the conclusion that a reduction in tax revenues can reduce the physical projection of power but is unlikely to remove the laws that determine the maximum level of power that can legally be projected.
I believe you were talking about optimal levels of power when compared to growth?
Many people do not know their own terminal values.
Is there an implication that someone or something does know? That strikes me as awfully paternalistic.
It's a statement of fact, not a political agenda. Neuroscientists know more about people's brains than normal people do, as a result of spending years and decades studying the subject.
Well, I've done Map & Territory and have skimmed through random selections of other things. Pretty early days, I know! So far I've not run into anything particularly objectionable to me or conflicting with any of the decent philosophy I've read. My main concern is this "truth as incidental" thing. I just posted on this topic: http://lesswrong.com/lw/l6z/the_truth_and_instrumental_rationality/
Ah, I think you may have gotten the wrong idea when I said truth was incidental: that a thing is incidental does not stop it from being useful and a good idea, it is just not a goal in and of itself. Fortunately, no-one here is actually suggesting active self-deception as a viable strategy. I would suggest reading Terminal Values and Instrumental Values. Truth-seeking is an instrumental value, in that it is useful for reaching the terminal values of whatever your actual goals are. So far as I can tell, we actually agree on the subject for all relevant purposes.
You may also want to read the tragedy of group selectionism.
I shall be attending (90% confidence).
Anyway, I looked through the questions, and, well, please take this as constructive criticism, but I'd have no idea about the truth of most of those statements, and they mostly seem to be fairly dry statistics. I dunno what the people who actually attended the last meeting thought, but I'd suggest maybe something more like geeky pub-quiz with probability estimates?
Statements can still be used for calibration even if you don't know the answer, but it's always more fun if you have at least an inkling of the answer. It's always good to add more fun to things like this, so any chance I could convince you to bring along some of the type of questions you think would be good?
Thanks for the group selection link. Unfortunately, I'd have to say, to the best of my non-expert judgement, that current trends in the field disagree somewhat with Eliezer in this regard. The 60s group selection was definitely overstated and problematic, but quite a few biologists feel that this resulted in the idea being ruled out entirely, in a kind of overreaction to the original mistakes. Even Dawkins, who has traditionally dismissed group selection, acknowledged it may play more of a role than he previously thought. So it's been refined and is making a bit of a comeback, despite opposition. Of course, only a few point to it as the central explanation for altruism, but the result of my own investigation makes me think that the biological component of altruism is best explained by a mixed model of group selection, kin selection and reciprocation. We additionally haven't really got a reliable map of the nature/nurture balance of altruism either, so I suspect the field will "evolve" further.
I've read the values argument. I acknowledge that no one is claiming the truth is BAD exactly, but my suggestion here is that unless we deliberately and explicitly weigh it into our thought process, even when it has no apparent utility, we run into unforeseeable errors that compound upon each other without our awareness of them doing so. Crudely put, lazy approaches to the truth come unstuck, but we never realise it. I take it my post has failed to communicate that aspect of the argument clearly? :-(
Oh, I should add that I agree we agree in most regards on the topic.
Really? I was not aware of that trend in the field, maybe I should look into it.
Well, at least I understand you now.