Dagon

Just this guy, you know?

Comments

How valuable is money-in-market?
Dagon · 1d

"Anything worth doing is worth doing for money"
   -- Gordon Gekko, Wall Street.  Also many non-profit directors and paid employees.

IMO, there is more similarity than difference between for-profit and non-profit corporate structures.  There are fairly significant tax differences, and very different takeover and board-selection mechanisms.  There is a pretty significant difference in how they describe their mission.  But there's a strong similarity in the constraint that they can't spend more than they receive, and that they need to pay people for most of their difficult or thought-requiring work.  At the whole-economy level, both styles have their place, and it's probably reasonable that non-profit is a tiny fraction of the whole thing.  Adam Smith wasn't wrong.

At a personal decision level, this doesn't generalize very well.  It'll depend on WHICH non-profits you're comparing to WHICH for-profit orgs, and on WHICH goals/values you're using to judge "effective".  For many things you want to do, there just aren't many options in either realm for how to spend your time and money.

If there ARE effective for-profit and non-profit options for some of your goals, I tend to put more trust in the market discipline of for-profit structures - if they stop being effective, they get replaced.  Non-profits, especially those funded by long-term donations and endowments, have more leeway to be ineffective while still continuing on.  There I am generalizing again.  Don't worry about that for personal decisions - look into the actual specifics of what you want to do and how different organizations do it.

Zach Stein-Perlman's Shortform
Dagon · 1d

You're right that it's not the only useful model or lever.  I don't think you're right that it shouldn't be a large focus for long-term, large-scale changes.  The shift from inconceivable to inevitable takes a lot of time and gradual changes in underlying beliefs, and the Overton window is a pretty useful model for societal-expectation shifts.

1a3orn's Shortform
Dagon · 1d

There are no rationalists in an ideological disagreement.

[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
Dagon · 3d

I don’t think EA is a trademarked or protected term (I could be wrong).  I’m definitely the wrong person to decide what qualifies.

For myself, I do give a lot of support to local (mostly city and state) short-term (less than a decade, say) causes.  It’s entirely up to each of us how to split our efforts among all the changes we try to make to our future lightcone.

Dollars in political giving are less fungible than you might think
Dagon · 4d

> dollar in my DAF is approximately as good as a dollar in my normal bank account for making the world a better place.

Well, one of them reduced your tax burden when you deposited it.  The comparison should be $DAF ~= $(cash - taxes).
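
To make that comparison concrete, here is a minimal sketch in Python, assuming a single flat marginal tax rate; the 37% figure and the helper name are purely illustrative, not numbers from the post.

```python
# Minimal sketch of the $DAF ~= $(cash - taxes) comparison, assuming one
# flat marginal tax rate.  The 37% figure is purely illustrative.
MARGINAL_TAX_RATE = 0.37  # hypothetical marginal rate for the donor

def cash_equivalent(daf_dollars: float, rate: float = MARGINAL_TAX_RATE) -> float:
    """Spendable cash given up to get `daf_dollars` into the DAF,
    i.e. the deposit minus the tax deduction it generated."""
    return daf_dollars * (1.0 - rate)

# A dollar already in the DAF only "cost" about 63 cents of ordinary cash,
# so comparing $1 of DAF money to $1 of bank-account money overstates its cost.
print(cash_equivalent(1.0))        # -> 0.63
print(cash_equivalent(10_000.0))   # -> 6300.0
```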

Also, there's some controversy about whether political spending is actually altruistic.  I tend to lean toward being restrictive in my giving - not even most registered charitable organizations make my cut for making the world better, and almost no political causes.  

I will not sign up for cryonics
Dagon · 4d

All-or-nothing, black-or-white thinking does not serve well for most decisions.  Integrating value per unit of time is a much better expected-value methodology.

> What difference does it make whether I die in 60 years or in 10,000? In the end, I’ll still be dead.

What difference does it make whether you die this afternoon or in 60 years?  If life has value to you, then longer life (at an acceptable quality level) is more valuable.
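
One way to make the integrating-value-over-time idea concrete is a toy expected-value calculation; this is only a sketch, and the survival probabilities and quality scores in it are made-up assumptions, not anything from the post.

```python
# Toy version of the "integral of value per time unit" framing:
# expected value of a life = sum over future years of
# P(still alive that year) * quality of that year.
# All survival and quality numbers below are invented for illustration.

def expected_life_value(years: int, p_survive_each_year: float, quality: float) -> float:
    """Expected total value of up to `years` more years of life, with a flat
    per-year survival probability and a flat per-year quality score."""
    return sum(p_survive_each_year ** t * quality for t in range(1, years + 1))

die_this_afternoon = 0.0
sixty_more_years = expected_life_value(60, 0.99, 1.0)
ten_thousand_more_years = expected_life_value(10_000, 0.99, 1.0)

# All three scenarios end in death, but if each year lived has positive value,
# the totals differ - which is the whole difference it makes.
print(die_this_afternoon, round(sixty_more_years, 1), round(ten_thousand_more_years, 1))
```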

[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
Dagon · 8d

[Note: I'm not a utilitarian, but I strive to be effective, and I'm somewhat altruistic.  I don't speak for any movement or group.]

> Effective Altruism strives for the maximization of objective value, not emotional sentiment.

What?  There's no such thing as objective value.  EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I'd argue CAN not be) objectively chosen.

Shortform
Dagon · 8d

I don't see much disagreement.  My comment was intended to generalize, not to contradict.   Other comments seem like refinements or clarifications, rather than a rejection of the underlying thesis.

One could quibble about categorization of people into "bad" and "nice", but anything more specific gets a lot less punchy. 

Shortform
Dagon · 9d

Put another way: everyone underestimates variance.  

How To Vastly Increase Your Charitable Impact
Dagon · 11d

Implicit in this argument is that the path of human culture and the long-term impact of your philanthropy is sub-exponential.  Why would that be so?  If there's no way to donate NOW to things that will bloom and increase in impact over time, why would you expect that to be different in 50 years?  If you prioritize much larger impact when you're dead over immediate impact during your life, you should find causes that match your profile.  
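
To spell out that implicit assumption, here is a toy sketch comparing "invest and give in 50 years" with "give now to something that compounds in impact"; the 5% and 7% rates, the 50-year horizon, and the function names are hypothetical illustrations, not numbers from the post.

```python
# Toy compounding comparison: investing to give later only beats giving now
# if invested money compounds faster than the impact of money given now.
# Both growth rates here are illustrative assumptions.

def give_later(principal: float, market_rate: float, years: int) -> float:
    """Nominal amount available to donate after investing `principal` for `years`."""
    return principal * (1.0 + market_rate) ** years

def give_now(principal: float, impact_growth_rate: float, years: int) -> float:
    """Eventual impact of donating `principal` now to a cause whose effect
    'blooms' and compounds at its own rate."""
    return principal * (1.0 + impact_growth_rate) ** years

principal, horizon = 10_000.0, 50
print(round(give_later(principal, 0.05, horizon)))  # waiting wins only if the market rate is higher
print(round(give_now(principal, 0.07, horizon)))    # a cause that compounds faster than the market wins
```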

Posts

Dagon's Shortform · 2 karma · 6y · 92 comments
Moral realism - basic Q [Question] · 8 karma · 3mo · 12 comments
What epsilon do you subtract from "certainty" in your own probability estimates? [Question] · 13 karma · 1y · 6 comments
Should LW suggest standard metaprompts? [Question] · 4 karma · 1y · 6 comments
What causes a decision theory to be used? [Question] · 8 karma · 2y · 2 comments
Adversarial (SEO) GPT training data? [Question] · 2 karma · 3y · 0 comments
{M|Im|Am}oral Mazes - any large-scale counterexamples? [Question] · 23 karma · 3y · 4 comments
Does a LLM have a utility function? [Question] · 17 karma · 3y · 11 comments
Is there a worked example of Georgian taxes? [Question] · 8 karma · 3y · 12 comments
Believable near-term AI disaster · 8 karma · 4y · 3 comments
Laurie Anderson talks · 1 karma · 4y · 0 comments