Comment author: wedrifid 30 March 2010 09:07:24PM 0 points [-]

Good point, I hadn't even thought of that implication.

Comment author: kim0 30 March 2010 09:24:52PM -2 points [-]

You all are quite good at picking up the implications, which means my post worked.

Comment author: Matt_Simpson 29 March 2010 09:22:24PM *  4 points [-]

A better methodology would have been to use piecewise (or "hockey-stick") regression, which assumes the data breaks into two sections (typically one sloping downwards and one sloping upwards), tries to find the right breakpoint, and fits a separate linear regression on each side such that the two lines meet at the break. (I almost called this "The case of the missing hockey stick", but thought that would give the answer away.)
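A minimal sketch of that idea in Python (assuming numpy; the hinge-basis trick and the grid search over breakpoints are one standard way to do it, not necessarily what the paper's authors would have used):

```python
import numpy as np

def fit_hockey_stick(x, y):
    """Fit y = b0 + b1*x + b2*max(0, x - c): two lines that meet at
    breakpoint c.  c is chosen by grid search over the interior data
    points, keeping the fit with the lowest squared error."""
    best = None
    for c in np.unique(x)[1:-1]:          # candidate breakpoints
        hinge = np.maximum(0.0, x - c)
        X = np.column_stack([np.ones_like(x), x, hinge])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best[1], best[2]               # breakpoint, coefficients

# Toy U-shaped data: slope -1 up to x = 5, slope +2 afterwards
x = np.arange(0.0, 11.0)
y = np.where(x < 5, 10 - x, 5 + 2 * (x - 5))
c, beta = fit_hockey_stick(x, y)          # recovers the break at x = 5
```

The hinge column max(0, x - c) is what forces the two segments to join at the break instead of being two disconnected regressions.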

An even better methodology would be to allow for higher-order terms in the regression model. Adding squared terms, the model would look like this:

y = b0 + b1*x + b2*x^2 + error

or, with two explanatory variables,

y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + error

This would allow for those nice-looking curves you were talking about. And it can be combined with logistic regression. Really, regression is very flexible; there's no excuse for what they did.
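Fitting such a model is still just ordinary least squares on an expanded design matrix. A small sketch (assuming numpy; the data here is synthetic, generated from known coefficients just to show they are recovered):

```python
import numpy as np

# Quadratic regression: y = b0 + b1*x + b2*x^2 + noise, fit by OLS.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 1.0 + 0.5 * x + 2.0 * x**2 + rng.normal(0, 0.1, x.size)

X = np.column_stack([np.ones_like(x), x, x**2])   # add the squared term
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta ≈ [1.0, 0.5, 2.0], the coefficients the data was generated from
```

The model stays "linear" in the statistical sense because it is linear in the coefficients, even though the fitted curve is a parabola.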

Also, the scientists could have done a little model checking. If what Phil says about the U/J-shaped response curve is true, the first-order model would have been rejected by some sensible model selection criterion (AIC, BIC, stepwise selection, a lack-of-fit F test, etc.).
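To illustrate with AIC (a sketch, assuming numpy; the `aic` helper is my own, using the Gaussian AIC up to an additive constant, and the U-shaped data is synthetic):

```python
import numpy as np

def aic(y, yhat, k):
    """Gaussian AIC up to an additive constant: n*log(SSE/n) + 2k."""
    n = y.size
    sse = np.sum((y - yhat) ** 2)
    return n * np.log(sse / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 40)
y = x**2 + rng.normal(0, 0.2, x.size)             # U-shaped response

X1 = np.column_stack([np.ones_like(x), x])        # first-order model
X2 = np.column_stack([np.ones_like(x), x, x**2])  # with the squared term
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

# The U-shape leaves the first-order model with huge residuals, so its
# AIC is far worse despite having one fewer coefficient.
aic1, aic2 = aic(y, X1 @ b1, 2), aic(y, X2 @ b2, 3)
```

Any of the other criteria mentioned (BIC, a lack-of-fit F test) would tell the same story on data like this.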

Related side note: in my grad stat classes, "linear regression" usually includes things like my example above - i.e., linear functions of the (potentially transformed) explanatory variables, including higher-order terms. Is this different from how the term is widely used?

unrelated side note: is there a way to type pretty math in the comments?

Follow-up question: are scientists outside the field of statistics really this dumb when it comes to statistics? It seems like they see their standard methods (e.g., regression) as black boxes that take data as input and output answers. Maybe my impression is skewed by the examples popping up here on LW.

Comment author: kim0 30 March 2010 09:54:54AM 1 point [-]

Yes, quadratic regression is often better. The problem is that the number of coefficients to fit in the model grows with the square of the number of variables, which goes against Ockham's razor. This is precisely the problem I am working on these days, though in the context of the oil industry.
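That growth can be made concrete: a full polynomial model in p variables with all monomials up to degree d has C(p+d, d) coefficients, so the full quadratic model (linear terms, squares, and pairwise interactions) grows with the square of p. A small sketch (the helper name is my own):

```python
from math import comb

def n_coeffs(p, order):
    """Number of coefficients in a full polynomial regression of the
    given order in p variables (all monomials up to that degree,
    including the intercept)."""
    return comb(p + order, order)

# Linear vs full quadratic model as the variable count grows:
for p in (2, 5, 10, 20):
    print(p, n_coeffs(p, 1), n_coeffs(p, 2))
# e.g. at p = 10: 11 coefficients for the linear model, 66 for the
# quadratic one; at p = 20 it is 21 vs 231.
```

This is why quadratic models are usually paired with some form of regularisation or term selection once p gets large.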

Comment author: kim0 30 March 2010 09:33:56AM 3 points [-]

Thank you for a very nice article.

Comment author: kim0 18 January 2010 11:39:30PM -2 points [-]

The real dichotomy here is "maximising the evaluation function" versus "maximising the probability of a positive evaluation function".

In paperclip making, or better, the game of Othello/Reversi, there are choices like this:

80% chance of winning 60-0, versus 90% chance of winning 33-31.

The first maximises the winning margin, and is similar to a paperclip maker consuming the entire universe. The second maximises the probability of succeeding, and is similar to a paperclip maker avoiding being annihilated by aliens or other unknown forces.
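The arithmetic behind the dichotomy, as a toy calculation (scoring a loss as margin 0 is my simplification, just to keep the comparison in one number):

```python
# The two Othello-style gambles from the comment, scored two ways.
choices = {
    "A": (0.80, 60),   # 80% chance of winning 60-0  (margin 60)
    "B": (0.90, 2),    # 90% chance of winning 33-31 (margin 2)
}

for name, (p_win, margin) in choices.items():
    expected_margin = p_win * margin   # treats a loss as margin 0
    print(name, "win prob:", p_win, "expected margin:", expected_margin)

# A maximises expected margin (48 vs 1.8); B maximises the probability
# of winning at all (0.90 vs 0.80) - two different objectives.
```

Which choice is "rational" depends entirely on which of the two objectives the agent is actually optimising.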

Mathematically, the first is similar to finding the shortest program in Kolmogorov Complexity, while the second is similar to integrating over programs.

So, friendly AI is surely of the second kind, while insane AI is of the first kind.

In response to comment by kim0 on Advice for AI makers
Comment author: kim0 19 January 2010 08:29:24AM 2 points [-]

I guess you who down-voted me felt quite rational when doing so.

And this is precisely the reason I seldom post here, and only read a few posters that I know are rational from their own work on the net, not from what they write here:

There are too many fake rationalists here. The absence of any real arguments either way about my article above is evidence of this.

My Othello/Reversi example above was easy to understand, and it concerns a very central problem in AI systems, so it should be of interest to real rationalists interested in AI. Instead there has been only negative reaction, from people who I guess have not even made a decent game-playing AI but nevertheless have strong opinions on how such systems must be.

So, for getting intelligent rational arguments on AI, this community is useless, as opposed to Yudkowsky, Schmidhuber, Hansen, Tyler, etc., who have shown on their own sites that they have something to contribute.

To get real results in AI and rationality, I do my own math and science.

In response to comment by kim0 on Advice for AI makers
Comment author: orthonormal 18 January 2010 09:02:22PM 6 points [-]

That is something we worry about from time to time, but in this case I think the downvotes are justified. Tim Tyler has been repeating a particular form of techno-optimism for quite a while, which is fine; it's good to have contrarians around.

However, in the current thread, I don't think he's taking the critique seriously enough. It's been pointed out that he's essentially searching for reasons that even a Paperclipper would preserve everything of value to us, rather than just putting himself in Clippy's place and really asking for the most efficient way to maximize paperclips. (In particular, preserving the fine details of a civilization, let alone actual minds from it, is really too wasteful if your goal is to be prepared for a wide array of possible alien species.)

I feel (and apparently, so do others) that he's just replying with more arguments of the same kind as the ones we generally criticize, rather than finding other types of arguments or providing a case why anthropomorphic optimism doesn't apply here.

In any case, thanks for the laugh line:

You went over some people's heads.

My analysis of Tim Tyler in this thread isn't very positive, but his replies seem quite clear to me; I'm frustrated on the meta-level rather than the object-level.


Comment author: wedrifid 18 January 2010 12:27:23AM 3 points [-]

Did you make some huge transgression that I missed that is causing people to get together and downvote your comments?

Not really, just lots of little ones involving the misuse of almost valid ideas. They get distracting.

Comment author: kim0 18 January 2010 08:39:06PM -1 points [-]

You got voted down because you were rational. You went over some people's heads.

These are popularity points, not rationality points.

Comment author: kim0 28 September 2009 07:04:36AM 1 point [-]

I have an Othello/Reversi playing program.

I tried making it better by applying probabilistic statistics to the game tree, quite like anthropic reasoning. It then became quite bad at playing.

Ordinary minimax with alpha-beta pruning did very well.

Game algorithms that ignore the density of states in the game tree, and only focus on minimaxing, do much better. This is a close analogy to Eliezer's experience trees, and therefore a hint that the anthropic reasoning here has some kind of error.
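For reference, plain minimax with alpha-beta pruning is only a few lines. This is a generic sketch over a toy tree of nested lists, not the Othello program itself:

```python
# Minimax with alpha-beta pruning over a toy game tree given as nested
# lists, where leaves are static evaluations from the mover's side.
def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):       # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # cut-off: opponent avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree))                   # minimax value of this tree: 5
```

Note that pruning never changes the value returned; it only skips branches that provably cannot affect the result, which is exactly why the "density of states" in pruned regions is irrelevant to it.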

Kim0

Comment author: kim0 17 May 2009 09:14:28PM -1 points [-]

Interesting, but too verbose.

The author is clearly not aware of the value of the K.I.S.S. principle, or Ockham's razor, in this context.

Comment author: Annoyance 11 May 2009 08:09:15PM -1 points [-]

The rules of Go are perfectly clear. It's the consequences of those rules that we have a great deal of trouble understanding.

Or that you do, at least.

Comment author: kim0 12 May 2009 06:00:51AM *  0 points [-]

You are wrong. Here are some links showing that Go is not perfectly clear:

Introduction:

Discussion of a lot of problems with scoring:

Some concrete positional examples:

Comment author: loqi 10 May 2009 09:03:57PM 3 points [-]

The problem is that Go is actually not a game, while people believe that it is.

Massive semantic confusion. Just because the word "Go" is used to denote a family of games and game-like activities doesn't mean there can't be concrete realizations of the concept that capture most or all of its interesting qualities. Concluding that the game has "no true core" and giving it up, merely because its label is too broad for your taste, strikes me as very confused thinking.

Comment author: kim0 11 May 2009 03:58:53AM -1 points [-]

Giving it up is rational thinking, because there is no "it" there when the label is too broad.

In Bayesian inference, it is equivalent to P( A | B v C v D v ...), which is somewhat like underfitting. The space of possibilities becomes too large for it to be possible to find a good move. In games, it is precisely the unclear parts of the game space that are interesting to the losing side, because it is most likely that better moves will be found there. But when it is not even possible to analyze those parts, then true optimal play regresses to quarrelling about them, which is precisely what the Japanese tradition has done for at least a few hundred years.

I have played enough Go to know that the concrete rules can make the endgame very different. The usual practice is to pretend it is not so, and stop the game before the endgame starts.

So Go is riddled with quarrels and pretense. Not a game in practice. More like politics, or Zen.

Optimal playing strategies in games can be very different from what people believe them to be, as exemplified by the program Eurisko, which won the Traveller TCS championship with very unconventional fleets. I strongly suspect that a similar thing will happen for true Go games.

I might have found a variation of minimax that can tackle Go, but to use it, it MUST be possible to evaluate a Go position, at least in principle. So I will probably go for the Tromp-Taylor rules, if I get the time to do this. And perhaps the Japanese rules of Robert Jasiek.
