Comment author: Eugine_Nier 17 January 2014 03:16:21AM 3 points [-]

So steelmanning the cold fusion crackpot's argument may have brought you to firmly believe in cold fusion; that's fine, as long as you don't forget that the crackpot still believes in the right conclusion for the wrong reasons (the weak form of the argument), and is as such still a crackpot.

Of course, if you're finding that someone seems to repeatedly arrive at the right conclusion for "the wrong reasons" you should take this as evidence that said reasons are better than you thought.

Comment author: derefr 17 January 2014 05:58:53PM 6 points [-]

In such cases, it more often than not seems to me that the arguer has arrived at their conclusion through intuition, and is now working back toward defensible arguments, even though those arguments wouldn't themselves convince them if they didn't already have the intuition.

Comment author: Omid 14 August 2013 04:07:37PM *  12 points [-]

I just realized that, after reading all of these things, I was happy for each person's achievement and not the slightest bit jealous. I'm not sure why this is, but it's a good thing.

Comment author: derefr 22 October 2013 02:35:29AM 1 point [-]

Indeed, even knowing that in general I'm not a very jealous person, I was surprised at my own reaction to this thread: I upvoted a far greater proportion of the comments here than I usually do. I guess I'm more compersive than I thought!

Comment author: kalium 26 August 2013 03:30:41AM *  5 points [-]

#1 grates for me. If a friend comes to me in tears more than a couple of times demanding that I fix their bicycle/grades/relationship/emotional problems, I will no longer consider them a friend. If you ask politely I'll try to get you on the right track (here's the tool you need and here's how to use it/this is how to sign up for tutoring/whatever), but doing much more than that is treating the asker as less than an agent themself. Going to your friend in tears before even trying to come up with a solution yourself is not a good behavior to encourage (I've been on both sides of this, and it's not good for anyone).

Don't confuse reliability and responsibility with being a sucker.

Comment author: derefr 30 August 2013 08:06:13AM *  6 points [-]

There's a specific failure-mode related to this that I'm sure a lot of LW has encountered: for some reason, most people lose 10 "agency points" around their computers. This chart could basically be summarized as "just try being an agent for a minute sheesh."

I wonder if there's something about the way people initially encounter computers that biases them against trying to apply their natural level of agency? Maybe, to coin an isomorphism, an "NPC death spiral"? It doesn't quite seem to be learned helplessness, since they still know the problem can be solved, and work toward solving it; they just think solving the problem absolutely requires delegating it to a Real Agent.

Comment author: Lumifer 27 August 2013 05:45:20PM 4 points [-]

Hm, interesting. I have some terminological confusion to battle through here.

My mind associates "agent" with either Bond/MiB creatures or game theory and economics. The distinction you're drawing I would describe as active and passive. "Agenty"/PC people are the active ones, they make things happen, they shape the narrative, they are internally driven to change their environment. By contrast the "complex-system"/NPC people are the passive ones, they react to events, they go with the flow, the circumstances around them drive their behavior.

I don't think of active and passive as two kinds of people. I think of them as two endpoints on an axis with most people being somewhere in the middle. It's a characteristic of a person, a dimension of her personality.

Comment author: derefr 30 August 2013 07:50:27AM *  0 points [-]

A continuum is still a somewhat-unclear metric for agency, since it suggests agency is a static property.

I'd suggest modelling a sentience as a colony of basic Agents, each striving toward a particular utility-function primitive. (Pop psychology sometimes calls these "drives" or "instincts.") These basic Agents sometimes work together, like people do, toward common goals; or override one-another for competing goals.

Agency, then, is a bit like magnetism--it's a property that arises from your Agent-colony when you've got them all pointing the same way; when "enough of you" wants some particular outcome that there's no confusion about what else you could/should be doing instead. In effect, it allows your collection of basic Agents to be abstracted as a single large Agent with its own clear (though necessarily more complex) goals.
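One toy way to render the colony model above in code (all names invented for illustration): each basic Agent votes for its preferred outcome, and the colony only abstracts into a single large Agent with a clear goal when enough sub-agents align.

```python
# Toy sketch of the "colony of basic Agents" model: each sub-agent
# pushes toward one utility-function primitive; "agency" emerges
# only when enough of the colony points the same way.

from collections import Counter

class SubAgent:
    def __init__(self, drive, preferred_outcome):
        self.drive = drive                      # e.g. "hunger", "ambition"
        self.preferred_outcome = preferred_outcome

def effective_goal(colony, threshold=0.6):
    """Return the colony's unified goal if enough sub-agents agree, else None."""
    votes = Counter(a.preferred_outcome for a in colony)
    outcome, count = votes.most_common(1)[0]
    return outcome if count / len(colony) >= threshold else None

# Conflicting drives: no consensus, so no coherent agency.
colony = [
    SubAgent("hunger", "eat"),
    SubAgent("curiosity", "read"),
    SubAgent("ambition", "work"),
]
assert effective_goal(colony) is None

# Aligned drives: the colony abstracts into one Agent with a clear goal.
aligned = [SubAgent(d, "finish-project") for d in ("ambition", "fear", "pride")]
assert effective_goal(aligned) == "finish-project"
```

The threshold stands in for the "enough of you" condition; it isn't part of the original comment, just one way to make the alignment criterion concrete.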

Comment author: Brillyant 27 August 2013 09:43:38PM 2 points [-]

I suspect all people, including me, are NPC meat-computers running firmware/software that provides the persistent, conscious illusion of PC-ness (or agenty-ness). Some people are more advanced computers and, therefore, seem more agenty... but all are computers nonetheless.

Modeling people this way (as very complex NPCs), as some have pointed out in the comments, seems to be a rather effective means of limiting the experience of anger and frustration... or at least making anger and frustration seem irrational, thereby causing them (at least in my experience) to lose their appeal (catharsis, or whatever) over time. It has worked that way for me.

...

I'm curious... and perhaps someone (smarter than I) can help enlighten me...

How is a discussion of free will different from (or similar to) the PC vs. NPC distinction?

Comment author: derefr 30 August 2013 07:39:16AM *  1 point [-]

This seems to suggest that modelling people (who may be agents) as non-agents has only positive consequences. I would point out one negative consequence, which I'm sure anyone who has watched some schlock sci-fi is familiar with: you will only believe someone when they tell you you are caught in a time-loop if you already model them as an agent. Substitute anything else sufficiently mind-blowing and urgent, of course.

Since only PCs can save the world (nobody else bothers trying, after all), then nobody will believe you are currently carrying the world on your shoulders if they think you're an NPC. This seems dangerous somehow.

Comment author: Eliezer_Yudkowsky 25 August 2013 07:14:37PM 7 points [-]

PCs are also systems; they're just systems with a stronger heroic responsibility drive. On the other hand, when you successfully do things and I couldn't predict exactly how you would do them, I have no choice but to model you as an 'intelligence'. But that's, well... really rare.

Comment author: derefr 30 August 2013 07:23:52AM *  0 points [-]

I note that this suggests that an AI that was as smart as an average human, but also as agenty as an average human, would still seem like a rather dumb computer program (it might be able to solve your problems, but it would suffer akrasia just like you would in doing so.) The cyberpunk ideal of the mobile exoself AI-agent, Getting Things Done for you without supervision, would actually require something far beyond an average human to be considered "competent" at its job.

Comment author: Ritalin 30 June 2013 05:01:05PM *  0 points [-]

I do not believe, for one second, that Harry Potter is going to pull that off in his first year at Hogwarts. Or ever. I'd find it easier to believe that he beat Entropy than that he figured out how to bring back the dead.

While it's true that he's already done things Beyond The Impossible before, all the rules of the setting seem to indicate that Death Is Final. Even his dreams of "immortality for everyone" seemed to be about stopping people from dying, not bringing them back.

Comment author: derefr 01 July 2013 02:11:35AM *  9 points [-]

Not wanting to give anything away, I would remind you that what we have seen of Harry so far in the story was intended to resemble the persona of an 18-year-old Eliezer. Whatever Harry has done so far that you would consider to be "Beyond The Impossible", take measure of Eliezer's own life before and after a particular critical event. I would suggest that everything Harry has wrought until this moment has been the work of a child with no greater goal--and that, whatever supporting beams of the setting you feel are currently impervious to being knocked down, well, they haven't even had a motivated rationalist give them even a moment of attention, yet.

I mean, it's not like Harry can't extract a perfect copy of Hermione's material information-theoretic mass (both body and mind) using some combination of a fully-dissected time-turner, a pensieve containing complete braindumps of everyone else she's ever interacted with, a computer cluster manipulating the mirror of Erised into flipping through alternate timelines to explore Hermione's reactions to various hypotheticals, and various other devices strewn about the HP continuum. He might end up with a new baby Hermione (who has Hermione's utility function and memories) whom he has to raise into being Hermione again, but just because something doesn't instantly restore her doesn't mean it isn't worth doing. Or he might end up with a "real" copy of Hermione running in his head, which he'll then allow to manifest as a parallel-alter, using illusion charms along with the same mental hardware he uses for occlumency.

In fact, he could have probably done either of those things before, completely lacking in the motivation he has now. With it? I have no idea what will happen. A narrative Singularity-event, one might say.

Comment author: Ritalin 30 June 2013 02:32:03AM *  2 points [-]

Well, forgive me for overstating my point in a state of emotional frustration, anguish, anger, disappointment, and just plain loathing. No, it is technically not correct to call this Fridge Stuffing. Nevertheless, the fact is that my willing suspension of disbelief is broken, and that I find that my anger is directed at you rather than at the Universe or the Rules or Fate or whatever forces make the death of a beloved main character acceptable. My brain rejects this. I've never, in my life, until now, felt like declaring a piece of fiction DisContinuity, but this is exactly how I'm feeling now. If she had died in Azkaban or from a Kiss or from a Malfoy-funded assassination, that would have perhaps felt better. But the lamest warmup boss of the canon? Offscreen? And making Harry arrive just too late? Not minutes too late, mind you, but right after the troll grabbed and crushed her?

What, would just a few paragraphs of seeing the fight from her perspective have hurt? A sense of closure, perhaps, at least on her side?

Comment author: derefr 30 June 2013 12:53:07PM *  6 points [-]

Would you want to give the reader closure for the arc of a character who is, as the protagonist states, going to be coming back to life?

Personally, this reminds me more than anything of Crono's death in Chrono Trigger. Nobody mourns him--mourning is something to do when you don't have control over space and time and the absolute resolve to harness that control. And so the audience, also, doesn't get a break to stop and think about the death. They just hurl themselves, and their avatar, face-first into solving it.

Comment author: Jonathan_Graehl 06 May 2011 08:54:55PM 3 points [-]

WARNING: never label 'labeling'!

:)

Comment author: derefr 07 May 2011 09:39:50AM *  0 points [-]

Why not? Sure, you might start to recurse and distract yourself if you try to picture the process as a series of iterative steps, just as building any other kind of infinite data structure would—but that's what declarative data structure definitions were made for. :)

Instead of actually trying to construct each new label as you experience it, simply picture the sum total of your current attention as a digraph. Then, when you experience something, you add a label to the graph (pointing to the "real" experience, which isn't as easily visualized as the label—I picture objects in a scripting language's object space holding references to raw C structs here.) When you label the label itself, you simply attach a new label ('labelling') which points to the previous label, but also points to itself (a reflexive edge.) This would be such a regular occurrence in the graph that it would be easier to just visualize such label-nodes as being definitionally attached to root labels, and thus able to be left out of any mental diagram, the same way hydrogen is left out of diagrams of organic molecules.
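The digraph description above can be sketched directly (a minimal illustration; the node names are invented):

```python
# Sketch of the attention digraph: nodes are experiences or labels,
# edges point from a label to the thing it labels. A meta-label
# ("labelling") points to the previous label AND to itself, giving
# the reflexive edge described above.

class Node:
    def __init__(self, name):
        self.name = name
        self.edges = []  # nodes this node points to

    def point_to(self, *targets):
        self.edges.extend(targets)

# The raw experience (stands in for the hard-to-visualize "real" thing,
# like a raw C struct behind a scripting-language object).
experience = Node("raw-experience")

# First-order label pointing at the experience.
label = Node("anger")
label.point_to(experience)

# Labelling the label: points to the previous label and to itself.
meta = Node("labelling")
meta.point_to(label, meta)

assert experience in label.edges
assert label in meta.edges and meta in meta.edges  # the reflexive edge
```

Since every label-node would acquire such a 'labelling' companion, the code makes it clear why one would treat the reflexive node as implicit, the way hydrogen is left implicit in organic-chemistry diagrams.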

Actually, that brings up an interesting point—is the labelling process suggested here inherently subvocally-auditory? Can we visualize icons representing our experiences rather than subvocalizing words representing them, or does switching from Linear to Gestalt change the effect this practice has on executive function?

Comment author: Amanojack 02 May 2011 02:14:49AM 35 points [-]

There is some ineffable something in those who are distinctly uncooperative with requests to define morality or otherwise have a rational discussion on the matter, both here and on all forums where I've discussed morality, and I think you've hit on what that something is. It is the fear of nihilism, the fear that without their moral compass they might suddenly want to do evil, deplorable things, because those things would suddenly be A-okay.

What they don't see, in my opinion, is that it is their very dread at such a possibility that is really what is keeping them from doing those things. "Morality" provides no additional protection; it merely serves as after-the-fact justification of the sentiments that were already there.

We don't cringe at the thought of stealing from old ladies because it's wrong, but rather we call it wrong to steal from old ladies because we cringe at the thought.

Comment author: derefr 02 May 2011 04:32:26AM *  2 points [-]

In the sociological "let's all decide what norms to enforce" sense, sure, a lack of "morality" won't kill anyone. But in the more speculative-fictional "let's all decide how to self-modify our utility functions" sense, throwing away our actual morality—the set of things we do or do not cringe about doing—in ourselves, or in our descendants, is a very real possibility, and (to some people) a horrible idea to be fought with all one's might.

What I find unexpected about this is that libertarians (the free-will kind) tend to think in the second sense by default, because they assume that their free will gives them absolute control over their utility function, so if they manage to argue away their morality, then, by gum, they'll stop cringing! It seems you first have to guide people into realizing that they can't just consciously change what they instinctively cringe about, before they'll accept any argument about what they should be consciously scorning.
