In such cases, it more often than not seems to me that the arguer has arrived at their conclusion through intuition, and is now attempting to work back to defensible arguments, even though those arguments are not ones that would have convinced them if they didn't first have the intuition.

Indeed, even knowing that in general I'm not a very jealous person, I was surprised at my own reaction to this thread: I upvoted a far greater proportion of the comments here than I usually do. I guess I'm more compersive than I thought!

There's a specific failure-mode related to this that I'm sure a lot of LW has encountered: for some reason, most people lose 10 "agency points" around their computers. This chart could basically be summarized as "just try being an agent for a minute, sheesh."

I wonder if there's something about the way people initially encounter computers that biases them against trying to apply their natural level of agency? Maybe, to coin an isomorphism, an "NPC death spiral"? It doesn't quite seem to be learned helplessness, since they still know the problem can be solved, and work toward solving it; they just think solving the problem absolutely requires delegating it to a Real Agent.

A continuum is still a somewhat-unclear metric for agency, since it suggests agency is a static property.

I'd suggest modelling a sentience as a colony of basic Agents, each striving toward a particular utility-function primitive. (Pop psychology sometimes calls these "drives" or "instincts.") These basic Agents sometimes work together, like people do, toward common goals, or override one another in pursuit of competing goals.

Agency, then, is a bit like magnetism--it's a property that arises from your Agent-colony when you've got them all pointing the same way; when "enough of you" wants some particular outcome that there's no confusion about what else you could/should be doing instead. In effect, it allows your collection of basic Agents to be abstracted as a single large Agent with its own clear (though necessarily more complex) goals.
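None of the code below is from the comment itself; it's just a toy sketch of the magnetism analogy, under my own assumptions (each basic Agent reduced to a "pull" vector, and the colony's agency measured as the length of the average unit pull, which is 1 when every drive points the same way and shrinks as the pulls cancel):

```python
# Toy illustration (my own framing): a mind as a colony of drives, with
# "agency" measured the way physicists measure magnetization: alignment.
import numpy as np

rng = np.random.default_rng(0)

def agency(drive_vectors):
    """Length of the mean unit 'pull': 1.0 when every drive points the
    same way, small when the pulls mostly cancel out."""
    units = drive_vectors / np.linalg.norm(drive_vectors, axis=1, keepdims=True)
    return np.linalg.norm(units.mean(axis=0))

scattered = rng.normal(size=(20, 3))            # drives pointing every which way
aligned = scattered + 10 * np.array([1, 0, 0])  # the same drives, all biased toward one goal

print(f"scattered colony: {agency(scattered):.2f}")  # small: competing drives, akrasia
print(f"aligned colony:   {agency(aligned):.2f}")    # near 1: behaves like one big Agent
```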

This seems to suggest that modelling people (who may be agents) as non-agents has only positive consequences. I would point out one negative consequence, which I'm sure anyone who has watched some schlock sci-fi is familiar with: you will believe someone who tells you that you are caught in a time-loop only if you already model them as an agent. Substitute anything else sufficiently mind-blowing and urgent, of course.

Since only PCs can save the world (nobody else bothers trying, after all), nobody will believe you are currently carrying the world on your shoulders if they think you're an NPC. This seems dangerous somehow.

I note that this suggests that an AI that was as smart as an average human, but also only as agenty as an average human, would still seem like a rather dumb computer program (it might be able to solve your problems, but it would suffer akrasia just like you would in doing so). The cyberpunk ideal of the mobile exoself AI-agent, Getting Things Done for you without supervision, would actually require something far beyond the equivalent of an average human to be considered "competent" at its job.

Not wanting to give anything away, I would remind you that what we have seen of Harry so far in the story was intended to resemble the persona of an 18-year-old Eliezer. Whatever Harry has done so far that you would consider to be "Beyond The Impossible", take measure of Eliezer's own life before and after a particular critical event. I would suggest that everything Harry has wrought until this moment has been the work of a child with no greater goal--and that, whatever supporting beams of the setting you feel are currently impervious to being knocked down, well, they haven't yet had a motivated rationalist give them even a moment of attention.

I mean, it's not like Harry can't extract a perfect copy of Hermione's material information-theoretic mass (both body and mind) using some combination of a fully-dissected Time-Turner, a Pensieve containing complete braindumps of everyone else she's ever interacted with, a computer cluster manipulating the Mirror of Erised into flipping through alternate timelines to explore Hermione's reactions to various hypotheticals, and various other devices strewn about the HP continuum. He might end up with a new baby Hermione (who has Hermione's utility function and memories) whom he has to raise into being Hermione again, but just because something doesn't instantly restore her doesn't mean it isn't worth doing. Or he might end up with a "real" copy of Hermione running in his head, which he'll then allow to manifest as a parallel-alter, using illusion charms along with the same mental hardware he uses for Occlumency.

In fact, he could have probably done either of those things before, completely lacking in the motivation he has now. With it? I have no idea what will happen. A narrative Singularity-event, one might say.

Would you want to give the reader closure for the arc of a character who is, as the protagonist states, going to be coming back to life?

Personally, this reminds me more than anything of Crono's death in Chrono Trigger. Nobody mourns him--mourning is something to do when you don't have control over space and time and the absolute resolve to harness that control. And so the audience, also, doesn't get a break to stop and think about the death. They just hurl themselves, and their avatar, face-first into solving it.

Why not? Sure, you might start to recurse and distract yourself if you try to picture the process as a series of iterative steps, just as you would when building any other kind of infinite data structure—but that's what declarative data structure definitions were made for. :)

Instead of actually trying to construct each new label as you experience it, simply picture the sum total of your current attention as a digraph. Then, when you experience something, you add a label to the graph (pointing to the "real" experience, which isn't as easily visualized as the label—I picture objects in a scripting language's object space holding references to raw C structs here). When you label the label itself, you simply attach a new label ('labelling') which points to the previous label, but also points to itself (a reflexive edge). This would be such a regular occurrence in the graph that it would be easier to just visualize such label nodes as being definitionally attached to root labels, and thus able to be left out of any mental diagram, in the same way hydrogen atoms are left out of diagrams of organic molecules.
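To make that picture concrete, here's a minimal Python sketch of the digraph (the class names and structure are my own illustration, not part of the practice being described): experiences are opaque objects, labels are nodes holding references to what they point at, and the 'labelling' meta-label gets both an edge to the label it covers and a reflexive edge to itself.

```python
# Minimal sketch (my own names): labels as nodes in an attention digraph,
# where edges are just held references.

class Experience:
    """Stands in for the raw, hard-to-visualize experience (the 'C struct')."""
    def __init__(self, description):
        self.description = description

class Label:
    """A node in the attention digraph; its outgoing edges are held references."""
    def __init__(self, name, *targets):
        self.name = name
        self.targets = list(targets)

# Labelling an experience:
itch = Experience("itch on left forearm")
itch_label = Label("itching", itch)

# Labelling the act of labelling: points at the previous label *and* at itself.
meta = Label("labelling", itch_label)
meta.targets.append(meta)  # the reflexive edge

# Every root label acquires one of these by default, so it can be left out of
# the mental diagram, like implicit hydrogens in skeletal formulas.
assert itch_label in meta.targets and meta in meta.targets
```

Declared this way, the would-be infinite regress of labelling-the-labelling collapses into a single self-loop, which is roughly what the quip about declarative definitions above was gesturing at.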

Actually, that brings up an interesting point—is the labelling process suggested here inherently subvocally-auditory? Can we visualize icons representing our experiences rather than subvocalizing words representing them, or does switching from Linear to Gestalt change the effect this practice has on executive function?

In the sociological "let's all decide what norms to enforce" sense, sure, a lack of "morality" won't kill anyone. But in the more speculative-fictional "let's all decide how to self-modify our utility functions" sense, throwing away our actual morality—the set of things we do or do not cringe about doing—in ourselves, or in our descendants, is a very real possibility, and (to some people) a horrible idea to be fought with all one's might.

What I find unexpected about this is that libertarians (the free-will kind) tend to think in the second sense by default, because they assume that their free will gives them absolute control over their utility function: if they manage to argue away their morality, then, by gum, they'll stop cringing! It seems you first have to guide people into realizing that they can't just consciously change what they instinctively cringe about before they'll accept any argument about what they should be consciously scorning.
