The point of the essay is to describe the context that would make one want a hyperphone, so that

- one can be motivated by the possibility of a hyperphone, and
- one could get ahold of the criteria that would direct developing a good hyperphone.
The phrase "the ability to branch in conversations" doesn't do either of those.
Quoting another comment I made:
Make a hyperphone. A majority of my alignment research conversations would be enhanced by having a hyperphone, to a degree somewhere between a lot and extremely; and this is heavily weighted on the most hopeworthy conversations. (Also sometimes when I explain what a hyperphone is well enough for the other person to get it, and then we have a complex conversation, they agree that it would be good. But very small N, like 3 to 5.)
It's a makeshift stop-gradient. It feels less like I'm writing to LessWrong if I'm not publishing it immediately; and although LW is sadly the best place on the internet that I'm aware of, it's very much not, in aggregate, a gradient I want. Sometimes I write posts intended for LW and publish them immediately.
I'm not sure I understand your question at all, sorry. I'll say my interpretation and then answer that. You might be asking:
Is the point of the essay summed up by saying: '"Thing=Nexus" is not mechanistic/physicalist, but it's still useful; in general, explanations can be non-mechanistic etc. but still be useful, perhaps by giving a functional definition of something'?
My answer is no, that doesn't sum up the essay. The essay makes these claims:
I did fail to list "functional" in my list of "foundational directions", so thanks for bringing it up. What I say about foundational directions would also apply to "functional".
Hm, ok, thanks. I don't think I fully understand+believe your claims. For one thing, I would guess that many people do think and act, under the title "Buddhism", as if they believe that desire is the cause of suffering.
If I instead said "Clinging/Striving is the cause of [painful wheel-spinning in pursuit of something missing]", is that any closer? (This doesn't really fit what I'm seeing in the Wiki pages.) I would also say that decompiling clinging/striving in order to avoid [painful wheel-spinning in pursuit of something missing] is tantamount to nihilism. (But maybe to learn what you're offering I'd have to do more than just glance at the Wiki pages.)
As you can see, the failures lie on a spectrum, and they're model-dependent to boot.
And we can go further and say that the failures lie in a high-dimensional space, and that the apparent tradeoff is more a matter of finding the directions in which to pull the rope sideways. Propagating constraints between concepts and propositions is a way to go that seems hopeworthy to me. One wants to notice commonalities in how each of one's plans is doomed, and then address the common blockers / missing ideas. In other words, recurse to the "abstract" as much as is called for, even if you get really abstract; but treat [abstracting more than what you can directly see/feel as being demanded by your thinking] as a risky investment with opportunity cost.
That seems dependent on it being difficult to scale the specific skill that went into putting together the experience at the good restaurant. Things that are more scalable, like small consumer products, can be selected to be especially good trades (the bad ones don't get popular and inexpensive).