The malaria thing seems like the load-bearing part of the post, so I'd really like to know the details. The GiveWell website currently says:
It costs between $3,000 and $8,000 to save a life in countries where GiveWell currently supports AMF to deliver ITN campaigns.
Should I strongly doubt that, and if so, why?
I mean, consider a trick like replacing axioms {A, B} with {A or B, A implies B, B implies A}. Of course it's what you call an "obvious substitution": it requires only a small amount of Boolean reasoning. But showing that NOR and NAND can express each other also requires only a small amount of Boolean reasoning! To my intuition there doesn't seem to be any clear line between these cases.
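To make "a small amount of Boolean reasoning" concrete, here's that equivalence checked in Lean 4 (my own illustration; A and B are arbitrary placeholder propositions):

```lean
-- {A, B} and {A ∨ B, A → B, B → A} prove each other by plain propositional logic.
example (A B : Prop) : (A ∧ B) ↔ (A ∨ B) ∧ (A → B) ∧ (B → A) := by
  constructor
  · -- From A and B, each of the three replacement axioms follows immediately.
    intro h
    exact ⟨Or.inl h.1, fun _ => h.2, fun _ => h.1⟩
  · -- Conversely: case-split on A ∨ B and use whichever implication applies.
    intro h
    cases h.1 with
    | inl ha => exact ⟨ha, h.2.1 ha⟩
    | inr hb => exact ⟨h.2.2 hb, hb⟩
```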
Then I guess you need to quantify "intuitively see as non-trivially different". For example, take any axiom A in PA, and any theorem T that's provable in PA. Then A can be replaced by a pair of axioms: 1) T, 2) "T implies A". Is that nontrivial enough? And there's an unlimited number of obfuscatory tricks like that, which can be applied in sequence. Enough to confuse your intuition when you're looking at the final result.
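Spelled out, with $\Gamma$ standing for the rest of PA's axioms (my notation, just to make the step explicit):

$$\Gamma \cup \{T,\ T \to A\} \vdash A \quad \text{(modus ponens on the two new axioms)},$$
$$\Gamma \cup \{A\} \vdash T \quad \text{(T is provable in PA)}, \qquad \Gamma \cup \{A\} \vdash T \to A \quad \text{(trivially, since A is available)}.$$

Each set derives the other's axioms, so the two systems prove exactly the same theorems.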
If your question is whether an axiom of PA can be replaced by an equivalent statement that then proves the old axiom as a theorem, then the answer is yes, and in a very boring way. Every mathematical statement has tons of interchangeable equivalent forms, like adding "and 1=1" to it. Then the new version proves the old version and all that jazz.
If your question is whether we should believe in PA more because it can arise from many different sets of axioms, then I'm not sure it's meaningful. By the previous point, of course PA can arise from tons of different sets of axioms; but also, why should we care about "believing" in PA? We care only whether PA's axioms imply this or that theorem, and that's an objective question independent of any belief.
If your question is whether we can have a worldview independent of any assumptions at all, the answer is that we can't. The toy example of math shows it clearly: if you have no axioms, you can't prove any theorems. You have latitude in choosing axioms, but you can't dispense with them completely.
I agree with the point about acknowledging enmity in general; I'm not shy about doing so myself. But the post didn't convince me that Greenpeace in particular is my enemy. For that I'd need more detailed arguments.
I mean, do you guys, like, know why Greenpeace is against some of these market solutions? I didn't know either, but in five minutes of googling I was able to find some arguments. Here's an example argument: in the world there are poor countries and rich countries. Poor countries are not always ruled in their people's best interest; and rich countries and corporations don't always act in poor countries' best interest, either. So, what would happen if a rich country paid a dictator of a poor country a billion dollars to irrevocably mess up the poor country's environment? What would happen? Huh?
Maybe in more than five minutes you could find other arguments too. Anyway, fast-tracking your readers straight to "Greenpeace is your enemy" doesn't feel right.
Because I'm not indifferent between "I get 1 utility and Bob gets 0" and "I get 0 utility and Bob gets 1". I'm bargaining with Bob to choose a specific point on that segment, maybe {0.5,0.5}.
If there are multiple tangent lines at the point, then there's a range of possible weight ratios, and the AIs will agree to merge at any of them because they lead to the same point anyway. So there's no need for coinflips in this case.
I was thinking that the need for coinflips arises if the frontier has a flat region. For example let's say the frontier contains a straight line segment AB, and the AIs have negotiated a merge that leads to some particular point C on that segment. (For example this happens if the AIs are in an ultimatum game situation, where each side's gain is the other's loss, so the frontier is a straight line and they're bargaining to pick one particular point on that line.) Then they can merge into the following AI: "upon waking up, first make a weighted coin toss with weights according to where C lies on AB; then either become an EUM agent that forever optimizes toward A, or become an EUM agent that forever optimizes toward B, according to the coin result." According to both AIs the expected utility of that is exactly the same as aiming toward point C, so they'll agree to the merge.
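In symbols (my notation): treat A, B, C as points in the two AIs' joint utility space and let p be where C sits along the segment, so $C = (1-p)A + pB$. Then for each AI $i$,

$$\mathbb{E}[u_i] \;=\; (1-p)\,A_i + p\,B_i \;=\; C_i,$$

so the coin-tossing successor is worth exactly as much to each of them as an agent that heads straight for C.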
But yeah, it's even more subtle than that: for example if the segment AB doesn't end with corners but with smooth arcs, then there's no way to make an EUM agent optimizing toward A or B in particular. Then there needs to be a limit procedure, I guess.
Well, we can see that corporations owned by everyone (public utilities) mostly don't behave as sociopathically. They have other pathologies, but not so much this one. So, because everything is a matter of degree, I would assume that making ownership more distributed does make corporations less nasty. And the obvious explanation is that if you're high above almost all people, that in itself makes you behave sociopathically. Power disparity between people is a kind of evil-in-itself, or a cause of so many evils that it might as well be. So I stand by the view that "no billionaires" is reasonable.
Yeah, I agree. There are many theories of what makes art good, but I think almost everyone would agree that it's not about ticking boxes ("layered", etc). My current view is that making art is about making something that excites you. The problem is that it's hard to find something exciting when so much stuff has already been done by other people, including your younger self. And the best sign is when you make something and you like it, but you don't know why you like it; that means it's worth doing more of it.