DanielLC comments on [SEQ RERUN] Nonperson Predicates - Less Wrong

1 Post author: MinibearRex 13 January 2013 09:50AM




Comment author: anotherblackhat 13 January 2013 10:02:46PM 0 points

Consider the intuitively simpler problem of "is something a universal Turing machine?" Consider further this list of things that are capable of being a universal Turing machine:

  • Computers.
  • Conway's game of life.
  • Elementary cellular automata.
  • Lots of Nand gates.

Even a sufficiently complex shopping list might qualify. And it's even worse: knowing that A doesn't have personhood and that B doesn't have personhood doesn't let us conclude that A+B doesn't have personhood. A single transistor isn't a computer, but 3510 transistors might be a 6502. If we want to be 100% safe, we have to rule out anything we can't analyze, which means we pretty much have to rule out everything. We might as well make the function always return 1.
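To make the "elementary cellular automata" entry concrete: Rule 110, an elementary cellular automaton whose entire update rule fits in one byte, has been proven Turing-complete. A minimal sketch of its update step (the variable names and the fixed-zero boundary handling here are my own choices, not from any particular implementation):

```python
# Rule 110: each cell's next state is determined by the 3-cell neighborhood
# (left, self, right), looked up as a bit of the number 110 (0b01101110).
RULE = 110

def step(cells):
    """Apply one Rule 110 update to a row of 0/1 cells, with zero boundaries."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # Encode the neighborhood as a 3-bit number 0..7.
        neighborhood = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        # The rule number's bits are the truth table for all 8 neighborhoods.
        out.append((RULE >> neighborhood) & 1)
    return out

row = [0, 0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(4):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

That something this small can in principle compute anything a computer can is exactly why "is it a universal Turing machine?" resists any simple structural test.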

OK, as bad as that sounds, it just means we shouldn't work too hard on solving the problem perfectly, because we know we'll never be able to do so in a meaningful way. But perhaps we can solve the problem imperfectly. SpamAssassin faces a very similar kind of problem: "how can we tell if a message is spam?" The technique it uses is conceptually simple: pick a test that some messages pass and some fail. Run the test on a corpus of messages classified as spam and a corpus classified as non-spam, and use the results to assign a probability that a message is spam if it passes the test. In addition to the obvious advantage of "I can see how to do that for a non-person predicate test", such a test could also give a score for "has some person-like properties". Thus we can meaningfully approach the problem of A + B being a person even though A and B aren't by themselves.
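The corpus-based scoring idea can be written out as a toy naive-Bayes filter (this is an illustration of the general technique, not SpamAssassin's actual implementation; the corpora here are made up, and each "test" is simply "does the message contain word w?"):

```python
from collections import Counter

def train(spam_docs, ham_docs):
    """Estimate P(word present | spam) and P(word present | ham)
    from two labeled corpora, with Laplace smoothing."""
    spam_counts = Counter(w for d in spam_docs for w in set(d.split()))
    ham_counts = Counter(w for d in ham_docs for w in set(d.split()))
    vocab = set(spam_counts) | set(ham_counts)
    n_spam, n_ham = len(spam_docs), len(ham_docs)
    return {w: ((spam_counts[w] + 1) / (n_spam + 2),
                (ham_counts[w] + 1) / (n_ham + 2)) for w in vocab}

def spam_probability(message, model, prior=0.5):
    """Combine each word-test's result into P(spam | message) via Bayes."""
    p_spam, p_ham = prior, 1 - prior
    for w in set(message.split()):
        if w in model:
            pw_spam, pw_ham = model[w]
            p_spam *= pw_spam
            p_ham *= pw_ham
    return p_spam / (p_spam + p_ham)

model = train(spam_docs=["buy cheap pills", "cheap offer now"],
              ham_docs=["meeting at noon", "lunch offer at noon"])
print(spam_probability("cheap pills now", model))   # close to 1
print(spam_probability("meeting at noon", model))   # close to 0
```

The point of the sketch is the shape of the approach: nothing in it requires the tests to be individually decisive, and the combined score is exactly the kind of graded "person-like properties" measure described above.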

What kind of tests can we run? Beats me, but presumably we'll have something before we can make an AI by design.

One problem with this approach is that it could be wrong. It might even be very wrong. Also, training the predicate function might be an evil process - that is, training may involve purposely creating things that pass.

Comment author: DanielLC 13 January 2013 11:01:48PM 2 points

The problem with training isn't purposely creating things that pass. It's purposely creating things that don't. In order to figure out what doesn't pass, we need a predicate function. Once we've figured out how to find things that won't pass, we've already found the answer.

Comment author: anotherblackhat 15 January 2013 09:39:27PM 0 points

That doesn't follow. Consider:

I claim a rock is a non-person.
I expect you accept that statement, and that you therefore have a non-person predicate function; yet I also expect you haven't found the answer.

I accept that in order to classify something, we need to be able to classify it.

I'm suggesting there might be a function that classifies some things incorrectly, and is still useful.