I am what I like to call a "Greedy Progressive", inasmuch as my liberal instincts are not based in the guilt theory that a lot of conservatives and some liberals associate with liberalism, but in an implicit assumption that others doing well helps my life get better. Indeed, past a certain point, helping others improves my quality of life more immediately than spending that money on myself or my family would, though exactly where that point lies is subject to argument.
However, fundamentally the point is that I am not a progressive because I'm a sweet guy, but because I get a return on the investment. This implies three obvious things:
A) That just as improving others' lives also improves mine, improving my life also improves the lives of others in society. I am no less worthy of living in comfort than someone in Africa.
B) That although it helps me to help someone in Africa, it may very well help me more to help someone here, who in turn helps someone else slightly further from my sphere of influence, and so on. Since this is not about me being a sweet guy, the question of whom I help depends on my (perception of) return on investment.
C) Once I get below a certain point, the highest return on investment of X money for Y personal happiness is me. And since I am in fact as important as anyone else, I give myself explicit permission to act on that: I quit giving to my local public radio station and the ACLU when I drop below that point, and start again when I get above it. The same goes for every other charity in existence.
And that's where I dislike the article. It assumes my happiness is in fact less important than the happiness of those I could help. So in point of fact, no, there is a definite limit to what I will sacrifice for random strangers, precisely because my happiness is no less important than theirs.
The problem is that the "least convenient world" seems to involve a premise that would, in and of itself, be unverifiable.
The best example is the Pascal's wager issue - Omega tells me with absolute certainty that it's either one specific version of God (not, for instance, Odin, but the God of Catholicism), or no God at all.
But if I'm not willing to believe in an omniscient deity called God, then taking it back a step and saying "But we know it's either/or, because the omniscient de . . . errr . . . Omega tells you so" is just redefining an omniscient deity.
Well, if I don't believe in assuming God exists without proof, I can happily not assume Omega exists without proof. And proof is verifiably impossible, because all I can establish is that Omega is smarter than me.
Since I won't assume anything based only on the fact that someone is smarter than me - which is all I know about Omega - then no, the fact that Omega says any of this stuff and states it by fiat isn't going to convince me.
If Omega is that damn smart, it can go to the effort of proving its statements.
Jonnan
Post-script: Which suddenly explains to me why I would pick the million-dollar box and leave the $1000 alone. Because that's win-win - either I get the million, or I prove Omega is in fact not omniscient. He might be smarter than me (almost certainly is - the memory on this bio-computer I'm running needs upgrading something fierce, and the underlying operating system was last patched 30,000 years ago or so), but I can't prove his omniscience, I can only debunk it, and the only way to do that is to take the million.
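The win-win reasoning above can be sketched as a payoff table, assuming the standard Newcomb setup (which the comment doesn't spell out): Omega puts $1,000,000 in the opaque box only if it predicted you would take just that box, and the transparent box always holds $1,000. The function name and setup here are illustrative, not from the original.

```python
# Sketch of the Newcomb payoff table, under the standard setup (an assumption):
# - opaque box holds $1,000,000 iff Omega predicted "one-box"
# - transparent box always holds $1,000

def payoff(choice, omega_predicted):
    """Return the dollar payoff for a choice given Omega's prediction."""
    opaque = 1_000_000 if omega_predicted == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if choice == "two-box" else 0)

# The comment's argument: one-boxing is "win-win" --
# either Omega predicted correctly and you get the million,
# or Omega predicted wrongly and you've debunked its omniscience.
for predicted in ("one-box", "two-box"):
    outcome = payoff("one-box", predicted)
    debunked = predicted != "one-box"  # a wrong prediction disproves omniscience
    print(f"Omega predicted {predicted}: payoff ${outcome:,}, "
          f"{'Omega debunked' if debunked else 'Omega vindicated'}")
```

Either branch of the loop delivers something the commenter values: a million dollars, or evidence against Omega's omniscience. Two-boxing can never produce the second outcome in as clean a form, since a $1,000 payoff is consistent with Omega being right.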