This is beta-version-level thought. It isn't surprising that it still has a few rough spots or places where I haven't noticed that I need to explain one thing for another to make sense.
Sure. I don't mean to come on too forcefully.
Function, as I'm intending to talk about it, isn't something you fulfill; it's an ability you have: the ability to achieve the goals you're interested in achieving. Those goals vary not just from person to person but also over time, and with whether or not they've been achieved. Also, people typically have more than one goal at any given time.
So help and harm aren't the only things in your decision-making, there are also goals. What is the relation between the two? Why can't the help-harm framework be subsumed under "goals"? This question is especially salient if "goals" is just going to be the box where you throw in everything relevant to decision making and ethics that doesn't fit with your concepts of help and harm.
Also, something that might make where I'm coming from clearer: when I was using "pain" before, I meant it generally, not just in a physical sense. So I read these examples about BDSM, or needing pain to avoid early death, as cases of one kind of pain leading to another kind of pleasure or preventing another kind of pain. I'll use the word "suffering" in the future to make this clearer. This might make the claim that pain is inherently harmful seem more plausible to you.
When it comes to accomplishing goals, it works best to consider an individual plus their possessions (including abstract things like knowledge or reputation or, to a degree, relationships) as a single unit. [...] (This is based on the most coherent definition of 'ownership' that I've been able to come up with, and I'm aware that the definition is unusual; discussion is welcome.)
So all my belongings share my goals? It seems pretty bizarre to attribute goals to any inanimate object, much less give them the same goals their owner has. It would also be really strange if the fundamentals of decision-making involved a particular and contingent socio-political construction (i.e. ownership and property). It also seems to me that possessing a reputation and possessing a car are two totally different things, and that the fact that we use the same verb "have" for someone's reputation or knowledge is almost certainly a linguistic accident (maybe someone who speaks more languages can confirm this). So yes, I'd like to read a development of this notion of ownership if you want to provide one.
I also don't believe that there's any moral or objective correctness or incorrectness in the act of achieving, failing at, or discarding a goal.
A world in which 90% of goals were achieved wouldn't be better than a world in which only 10% were achieved? Is a world where there is greater functionality better than a world in which there is less? We might have to step back further and see if we even agree on what morality is.
So help and harm aren't the only things in your decision-making, there are also goals. What is the relation between the two?
'Helpful' and 'harmful' are words that describe the effect of an action or circumstance on a person(+their stuff)'s ability to achieve a goal.
Why can't the help-harm framework be subsumed under "goals"?
This question doesn't make sense to me - 'help and harm' and 'goals' are two very different kinds of things, and I don't see how they could be lumped together into one concept.
Tyler Cowen argues in a TED talk (~15 min) that stories pervade our mental lives. He thinks they are a major source of cognitive biases and, on the margin, we should be more suspicious of them - especially simple stories. Here's an interesting quote about the meta-level: