# eli_sennesh comments on Siren worlds and the perils of over-optimised search - Less Wrong

07 April 2014 11:00AM

Comment author: [deleted] 07 April 2014 04:09:07PM *  0 points [-]

Thanks. My machine-learning course last semester didn't properly emphasize the formal definition of overfitting, or perhaps I just didn't study it hard enough.

What I do want to think about here is: is there a mathematical way to talk about what happens when a learning algorithm finds the wrong correlative or causative link among several different possible links between the data set and the target function? Such maths would be extremely helpful for advancing the probabilistic value-learning approach to FAI, as they would give us a way to talk about how we can interact with an agent's beliefs about utility functions while also minimizing the chance/degree of wireheading.
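As a toy sketch of that failure mode (everything here is invented for illustration): a learner that simply picks whichever feature best fits the training labels can latch onto a spurious correlate rather than the causal one, and the mistake only shows up once the correlation breaks:

```python
# Toy illustration: a learner that selects the single feature with the
# best training accuracy can latch onto a spurious correlate instead of
# the causal one. All data and feature names are invented.

train = [
    # (causal_feature, spurious_feature, label)
    (1, 1, 1),
    (1, 1, 1),
    (0, 0, 0),
    (0, 0, 0),
    (1, 1, 1),
    (0, 1, 1),  # causal feature is wrong here; the spurious one still "works"
]

def accuracy(feature_index, data):
    return sum(row[feature_index] == row[2] for row in data) / len(data)

# The learner greedily selects the feature with the highest training accuracy.
best = max([0, 1], key=lambda i: accuracy(i, train))
print(best)  # 1: the spurious feature fits the training set perfectly

# At deployment the correlation breaks, and the learned rule fails badly.
test = [(1, 0, 1), (0, 1, 0)]
print(accuracy(best, test))  # 0.0, while the causal feature would score 1.0
```

The point being: nothing inside the training data distinguishes "the link we meant" from "a link that happens to hold over every sample we provided".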

Comment author: 07 April 2014 07:25:28PM 0 points [-]

is there a mathematical way to talk about what happens when a learning algorithm finds the wrong correlative or causative link among several different possible links between the data set and the target function?

That would be useful! A short search gives "bias" as the closest term, which isn't very helpful.

Comment author: [deleted] 08 April 2014 03:29:54PM *  2 points [-]

Unfortunately "bias" in statistics is completely unrelated to what we're aiming for here.

In ugly, muddy words: we give the value-learning algorithm some sample of observations or world-states labelled as "good", and possibly some as "bad", where "good versus bad" might be any kind of indicator value (boolean, reinforcement score, whatever). The physical correlates of our having given the algorithm a sample are, by construction, present in every single sample; but we want the algorithm to learn the underlying causal structure of why those correlates occurred (that is, to model our intentions as a VNM utility function) rather than learn the physical correlates themselves (because learning the latter leads to the agent wireheading itself).

Here's a thought: how would we build a learning algorithm that treats its samples/input as evidence of an optimization process occurring and attempts to learn the goal of that optimization process? Since physical correlates like reward buttons don't actually behave as optimization processes themselves, this would ferret out the intentionality exhibited by the value-learner's operator from the mere physical effects of that intentionality (provided we first conjecture that human intentions behave detectably like optimization).

Has that whole "optimization process" and "intentional stance" bit from the LW Sequences been formalized enough for a learning treatment?
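A minimal sketch of what "treating input as evidence of an optimization process" might look like, assuming the operator is approximately (softmax-)rational over an enumerable set of candidate goals; all names and numbers here are invented, and this is a toy, not a worked-out proposal:

```python
import math

# Infer which goal an agent is pursuing from its observed actions, assuming
# the agent picks actions softmax-optimally with respect to its true goal.
# Candidate goals, actions, and values are invented for illustration.

goals = {
    "press_button": {"press": 1.0, "help_human": 0.0, "idle": 0.0},
    "help_human":   {"press": 0.0, "help_human": 1.0, "idle": 0.2},
}

def action_likelihood(goal, action, beta=3.0):
    """P(action | goal) under a softmax-rational agent with rationality beta."""
    values = goals[goal]
    z = sum(math.exp(beta * v) for v in values.values())
    return math.exp(beta * values[action]) / z

def posterior(observed_actions):
    """Posterior over candidate goals given observed actions (uniform prior)."""
    post = {g: 1.0 for g in goals}
    for a in observed_actions:
        for g in goals:
            post[g] *= action_likelihood(g, a)
    total = sum(post.values())
    return {g: p / total for g, p in post.items()}

# An agent observed helping humans is inferred to value helping humans,
# not pressing its own reward button.
print(posterior(["help_human", "help_human", "idle"]))
```

The wireheading-relevant feature is that the reward button shows up here only as one *candidate explanation* of the behaviour, to be weighed against others, rather than as the target itself.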

Comment author: 09 April 2014 06:08:43AM *  2 points [-]

http://www.fungible.com/respect/index.html looks to be very related. The idea there is: "Observe someone's actions. Assume they are trying to accomplish something. Work out what they are trying to accomplish." Which seems to be what you are talking about.

Comment author: [deleted] 09 April 2014 08:08:05AM 0 points [-]

That looks very similar to what I was writing about, though I've tried to be rather more formal/mathematical about it instead of coming up with ad-hoc notions of "human", "behavior", "perception", "belief", etc. I would want the learning algorithm to have uncertain/probabilistic beliefs about the learned utility function, and if I was going to reason about individual human minds I would rather just model those minds directly (as done in Indirect Normativity).

Comment author: 08 April 2014 05:55:49PM 0 points [-]

Comment author: [deleted] 08 April 2014 06:22:20PM 0 points [-]

The most obvious weakness is that such an algorithm could easily detect optimization processes that are acting *on* us, rather than us ourselves (or, if you believe such processes exist, you should believe the algorithm might mistakenly locate them instead of us).

Comment author: 16 May 2014 10:33:19AM 1 point [-]

I've been thinking about this, and I haven't found any immediately useful way of using your idea, but I'll keep it in the back of my mind... We haven't found a good way of identifying agency in the abstract sense ("was cosmic phenomenon X caused by an agent, and if so, which one?" kind of stuff), so this might be a useful simpler problem...

Comment author: [deleted] 16 May 2014 02:35:27PM 1 point [-]

Upon further research, it turns out that preference learning is a field within machine learning, so we can actually try to address this at a much more formal level. That would also get us another benefit: supervised learning algorithms don't wirehead.
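For instance, here is a minimal pairwise preference learner in the Bradley-Terry style; the items and preference data are invented for illustration:

```python
import math

# Fit per-item scores from pairwise "A is preferred to B" labels with a
# Bradley-Terry model, trained by gradient ascent on the log-likelihood.
# Items and preference pairs are invented for illustration.

prefs = [  # (preferred, dispreferred)
    ("honesty", "marshmallows"),
    ("honesty", "marshmallows"),
    ("sharing", "marshmallows"),
    ("honesty", "sharing"),
]

items = {"honesty", "sharing", "marshmallows"}
score = {i: 0.0 for i in items}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(500):
    for a, b in prefs:
        p = sigmoid(score[a] - score[b])  # modelled P(a preferred over b)
        score[a] += 0.1 * (1 - p)         # push the winner's score up
        score[b] -= 0.1 * (1 - p)         # and the loser's score down

ranked = sorted(items, key=score.get, reverse=True)
print(ranked)  # honesty ranked above sharing, both above marshmallows
```

Note that the learner only ever sees labels we supply; it has no channel by which seizing control of its own inputs scores any better than fitting them.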

Notably, this fits with our intuition that morality must be "taught" (i.e., via labelled data) to actual human children, lest they simply decide that the Good and the Right consist of eating a whole lot of marshmallows.

And if we put that together with a conservatism heuristic for acting under moral uncertainty (say: optimize for expectedly moral expected utility, requiring higher moral certainty before taking more extreme actions), we might start to make some headway on constructing utility functions that mathematically reflect what their operators actually intend for them to do.
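A toy sketch of such a heuristic, treating "moral certainty" as low disagreement across a credence-weighted set of candidate utility functions (all names and numbers are invented):

```python
# Score each action by its expected utility averaged over a credence
# distribution on candidate utility functions, with an optional variance
# penalty as the conservatism knob. Everything here is invented.

credences = {"u_hedonic": 0.6, "u_deontic": 0.4}

utilities = {
    "u_hedonic": {"safe_act": 0.5, "extreme_act": 2.0},
    "u_deontic": {"safe_act": 0.6, "extreme_act": -1.0},
}

def expected_moral_value(action, risk_aversion=0.0):
    vals = {u: utilities[u][action] for u in credences}
    mean = sum(credences[u] * vals[u] for u in credences)
    var = sum(credences[u] * (vals[u] - mean) ** 2 for u in credences)
    return mean - risk_aversion * var

for action in ["safe_act", "extreme_act"]:
    print(action, expected_moral_value(action, risk_aversion=1.0))
```

With `risk_aversion=0` the extreme act wins on raw expectation, but the candidate utility functions disagree sharply about it, so any positive variance penalty flips the choice to the safe act: high-stakes actions demand more moral certainty.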

I also have an idea written down in my notebook, which I've been refining, that sort of extends from what Luke had written down here. Would it be worth a post?

Comment deleted 08 April 2014 03:47:37PM [-]
Comment author: [deleted] 08 April 2014 03:50:12PM *  -1 points [-]

Keywords? I've looked through Wikipedia and the table of contents from my ML textbook, but I haven't found the right term to research yet. "Learn a causal structure from the data and model the part of it that appears to narrow the future" would in fact be how to build a value-learner, but... yeah.

EDIT: One of my profs from undergrad published a paper last year about causal-structure learning. The question is how useful it is for universal AI applications. Joshua Tenenbaum tackled it from the cog-sci angle in 2011, but again, I'm not sure how to transfer it over to the UAI angle. I was searching for "learning causal structure from data" -- herp, derp.
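As a toy illustration of the kind of test those structure-learning methods build on (constraint-based learners like the PC algorithm use conditional-independence checks; the generative process below is invented): for a chain X -> Y -> Z, X and Z are dependent marginally but independent given Y:

```python
import random

# Sample from a noisy chain X -> Y -> Z, then check that X and Z are
# marginally dependent but (approximately) independent conditional on Y.
# The generative process and noise levels are invented for illustration.

random.seed(0)

def sample():
    x = random.random() < 0.5
    y = x if random.random() < 0.9 else not x   # Y noisily copies X
    z = y if random.random() < 0.9 else not y   # Z noisily copies Y
    return x, y, z

data = [sample() for _ in range(20000)]

def dependence_x_z(given_y=None):
    """|P(z|x) - P(z|not x)|, optionally within a fixed stratum of Y."""
    def cond(x_val):
        rows = [r for r in data
                if r[0] == x_val and (given_y is None or r[1] == given_y)]
        return sum(r[2] for r in rows) / len(rows)
    return abs(cond(True) - cond(False))

print(dependence_x_z())              # large: X and Z are marginally dependent
print(dependence_x_z(given_y=True))  # near zero: independent given Y
```

That asymmetry is what lets a learner reject "X causes Z directly" in favour of the mediated structure, which is the flavour of inference a value-learner would need to run on its own reward signal.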

Comment author: 08 April 2014 04:26:42PM 0 points [-]