smoofra

What about all the angst people had over things like irrational numbers, infinitesimals, non-smooth functions, infinite cardinalities, and non-Euclidean geometries?

I think what you're saying about needing some way to change our minds is a good point, though. And I certainly wouldn't say that every single object-level belief I hold is more secure than every meta-level belief. I'll even grant you that for certain decisions, like how to set public health policy, some sort of QALY-based, shut-up-and-calculate approach is the right way to go.

But I don't think that's the way to change our minds about something like how we deal with homosexuality, on either a descriptive or a normative level. Nobody read Bentham and said, "You know what, guys, I don't think being gay actually costs any utils! I guess it's fine." And if anyone had, it would have been bad moral epistemology.

If you put yourself in the mind of an average Victorian, "don't be gay" sits very securely in your web of belief. It's bolstered by what you think about virtue, religion, deontology, and even health, and what you think about those things is more or less consistent with and confirmed by what you think about everything else. It's like moral-epistemic PageRank: the "don't be gay" node has strongly weighted edges from the strongest cluster of nodes in your belief system, and they all point at each other. Compared to those nodes, meta-level stuff like utilitarianism lives in a distant and unimportant backwater of the graph. If anything, an arrow from utilitarianism to "being gay is OK" looks to you like a reason not to take utilitarianism too seriously.

In order to change your mind about homosexuality, you need to change your mind about everything; you need to move all that moral PageRank to totally different regions of the graph. And picking one meta-theory to rule them all and assigning it a massive weight seems like a crazily reckless way to do that. If you do that, you're basically saying you prioritize meta-ethical consistency over all the object-level things you actually care about. It seems to me the only sane way to update is to slowly alter the object-level stuff as you learn new facts or discover inconsistencies in what you value, while trying to maintain as much reflective consistency as you can.
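To make the analogy concrete, here's a toy version of that graph. The node names, edge weights, and graph shape are all made up purely for illustration, not a model of anyone's actual beliefs; the point is just that the mutually reinforcing cluster ends up with nearly all the rank, while the lone arrow out of "utilitarianism" carries almost no weight:

```python
# Toy weighted PageRank over an invented "Victorian belief graph".
# All node names and edge weights are illustrative assumptions.
import numpy as np

nodes = ["virtue", "religion", "deontology", "health",
         "don't be gay", "utilitarianism", "being gay is OK"]
n = len(nodes)
core = [0, 1, 2, 3]            # the mutually reinforcing cluster

A = np.zeros((n, n))           # A[i, j] = weight of edge i -> j
for i in core:
    for j in core:
        if i != j:
            A[i, j] = 1.0      # cluster members all point at each other
    A[i, 4] = 1.0              # ...and all endorse "don't be gay"
A[4, core] = 0.25              # which points back, weakly
A[5, 6] = 1.0                  # utilitarianism's lone outgoing arrow

# Row-normalize into a transition matrix; dangling rows jump uniformly.
rs = A.sum(axis=1, keepdims=True)
P = np.where(rs > 0, A / np.where(rs == 0, 1.0, rs), 1.0 / n)

# Standard PageRank power iteration with damping factor d.
d = 0.85
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * (P.T @ rank)

for name, r in sorted(zip(nodes, rank), key=lambda t: -t[1]):
    print(f"{name:16s} {r:.3f}")
```

Run it and "don't be gay" comes out near the top alongside the cluster, while "utilitarianism" and "being gay is OK" sit at the floor set by the damping term: an endorsement is only as heavy as the rank of the node making it.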

PS. I guess I kind of made it sound like I believe the Whig theory of moral history, where modern Western values are the clear and true scion of Victorian values, and if we could just tell the Victorians what we know and walk them through the arguments, we could convince them that we're right, even by their own standards. I'm undecided on that, and I'll admit it might be the case that we just fundamentally disagree on values, and that "moral progress" is a random walk. Or not. Or it's a mix. I have no idea.

smoofra

I think you've pretty much stated the exact opposite of my own moral-epistemological worldview.

I don't like the analogy with physics. Physical theories get tested against external reality in a way that makes them fundamentally different from ethical theories.

If you want to analogize between ethics and science, I want to compare it to the foundations of mathematics. So utilitarianism isn't relativity, it's ZFC. Even though ZFC proves PA is a consistent and true theory of the natural numbers, it's a huge mistake for a human to base their trust in PA on that!

There is almost no argument or evidence that could convince me to put more trust in ZFC than I put in PA. I don't think I'm wrong.
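To spell out the asymmetry I'm leaning on (these are the standard textbook facts, stated in LaTeX for precision):

```latex
% ZFC proves the consistency of PA, because the \mathbb{N} constructed
% inside ZFC is a model of the PA axioms:
\[ \mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}) \]
% But by Goedel's second incompleteness theorem, no consistent theory
% strong enough to encode arithmetic proves its own consistency:
\[ \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}), \qquad
   \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC}) \]
```

So ZFC's endorsement of PA can only transfer whatever trust you already place in ZFC; it can't give PA more credibility than ZFC itself has, which is why basing your trust in PA on that proof gets the order backwards.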

I trust low-energy moral conclusions more than I will ever trust abstract metaethical foundational theories. I think it is a mistake to look for low-complexity foundations and reason from them. I think the best we can do is seek reflective equilibrium.

Now, that being said, I don't think it's wrong to study abstract metaethical theories, to ask what their consequences are, and even to believe them a little bit. The analogy with math still holds here. We study the heck out of ZFC. We even believe it more than a little at this point. But we don't believe it more than we believe the intermediate value theorem.

PS: I also don't think "shut up and calculate" is something you can actually do under utilitarianism. There are good utilitarian arguments for obeying deontological rules and for being virtuous, and pretty much every ethical debate anyone has ever had can be rephrased as a debate about which terms belong in the utility function and what the most effective way to maximize it is.

smoofra

I haven't. I'll see if I can show up for the next one.

smoofra

This was also the part of Dalliard's critique I found most convincing. Shalizi's argument seems to be a refutation of a straw man.

smoofra

One thing Dalliard mentions is that the g factors derived from different studies are "statistically indistinguishable". What's the technical content of that statement?
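If I had to guess at the cash value, it's something like a factor-congruence statistic: extract the g-loadings of the same subtests from each battery and check that the loading vectors are nearly identical, e.g. via Tucker's congruence coefficient, where values above roughly 0.95 are conventionally read as "the same factor". A minimal sketch with invented numbers (the six-subtest setup and the loadings are assumptions, not anything from Dalliard):

```python
# Tucker's congruence coefficient between two factor-loading vectors:
#   phi(x, y) = sum(x*y) / sqrt(sum(x^2) * sum(y^2))
# i.e. the cosine between the vectors, without mean-centering.
import numpy as np

def congruence(x: np.ndarray, y: np.ndarray) -> float:
    """Tucker's congruence coefficient of two loading vectors."""
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Hypothetical g-loadings of the same 6 subtests in two studies.
study_a = np.array([0.81, 0.74, 0.68, 0.72, 0.59, 0.77])
study_b = np.array([0.78, 0.71, 0.70, 0.69, 0.62, 0.75])

print(f"congruence = {congruence(study_a, study_b):.4f}")
```

Is that the kind of thing meant, or is there a sharper statistical test behind the claim?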

smoofra

Thanks for the link.

Not that I feel particularly qualified to judge, but I'd say Dalliard has a way better argument. I wonder if Shalizi has written a response.

smoofra

Wow, that's a neat service.

smoofra

It looks like we may have enough people interested in Probability Theory, though I doubt we all live in the same city. I live near DC.

Depending on how many people are interested/where they live, it might make sense to meet over video chat instead.

smoofra

So you're assuming that it will want to prove the soundness of any successors, even though it can't even prove the soundness of itself? But it can believe in its own soundness in a Bayesian sense without being able to prove it; there is not (as far as I know) any Gödelian obstacle to that. I guess that was your point in the first place.
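To spell out the obstacle I mean (these are the standard theorems; reading the situation through them is my gloss):

```latex
% Goedel's second incompleteness theorem: a consistent theory T that
% is strong enough to encode arithmetic cannot prove Con(T):
\[ T \nvdash \mathrm{Con}(T) \]
% Loeb's theorem: T proves "if \varphi is provable then \varphi"
% only when T already proves \varphi outright:
\[ T \vdash \bigl(\mathrm{Prov}_T(\ulcorner\varphi\urcorner)
        \rightarrow \varphi\bigr)
   \;\Longrightarrow\; T \vdash \varphi \]
```

Neither theorem says anything about probability assignments, so an agent that merely assigns high credence to its own soundness, rather than trying to prove it, isn't directly blocked. That's the loophole I'm gesturing at.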
