wgd comments on Value Loading - Less Wrong

Post author: ryjm 23 October 2012 04:47AM




Comment author: wgd 23 October 2012 04:57:14AM 11 points

Maybe "value loading" is a term most people here can be expected to know, but I feel like this post would really be improved by ~1 paragraph of introduction explaining what's being accomplished and what the motivation is.

As it is, even the text parts make me feel like I'm trying to decipher an extremely information-dense equation.

Comment author: RichardKennaway 23 October 2012 11:22:33AM 4 points

Maybe "value loading" is a term most people here can be expected to know

It's the first time I've seen the term, and the second it has appeared at all on LessWrong.

It may be more current among "people who are on every mailing list, read every LW post, or are in the Bay Area and have regular conversations with [the SI]" (from its original mention on LW).

Comment author: Stuart_Armstrong 23 October 2012 02:16:42PM 4 points

It's more an FHI term than a SI/LessWrong term.

It's often called "indirect normativity": a strategy in which, instead of directly encoding the goal for an AI (or moral agent), we specify a certain way of "learning what to value / inferring human values," so that the AI can then deduce human values (and then implement them).

Comment author: Manfred 23 October 2012 08:28:31PM 2 points

Ah, so it means the same thing as "value learning?" For some reason when I read "value loading" I thought of, like, overloading a function :D "I want cake, and that desire is also a carnal lust for BEES!"

Comment author: DaFranker 23 October 2012 08:57:32PM 0 points

What helped me was thinking of it in terms of: "Oh, like 'reading' human preferences as if they were an XML config file that the program loads at runtime."
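DaFranker's analogy can be sketched in code. This is only an illustration of the "preferences as a config file loaded at runtime" picture, not anyone's actual proposal; the file format, names, and weights here are all invented for the example.

```python
# Minimal sketch of the analogy (all names and weights hypothetical):
# the program "loads" values at runtime the way it would load an XML config.
import xml.etree.ElementTree as ET

PREFS_XML = """
<preferences>
  <value name="honesty" weight="0.9"/>
  <value name="cake" weight="0.4"/>
</preferences>
"""

def load_values(xml_text):
    """Parse preference names and weights from an XML blob,
    as if reading them from a config file at startup."""
    root = ET.fromstring(xml_text)
    return {v.get("name"): float(v.get("weight")) for v in root.findall("value")}

values = load_values(PREFS_XML)
print(values)  # {'honesty': 0.9, 'cake': 0.4}
```

The point of the analogy is just that the values are data read in at runtime rather than logic hard-coded into the program itself.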