Comment author:BT_Uytya
09 February 2012 09:31:42PM
*
-1 points
[-]

I believe Eliezer's point wasn't that any algorithm should be de-randomized. His point was that it is impossible to extract useful work from noise; useful work is lawful. It is more a philosophical statement than practical advice. To admire the beauty of entropy is to worship your own ignorance, and so on.

Putting aside computational complexity and space-time tradeoffs, you can easily see it: if something can, in principle, be done with the help of a random number generator, then the same result can be obtained without using a random number generator.
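One concrete way to see this: a "randomized" computation driven by a seeded pseudorandom generator is already a fully deterministic one. A minimal sketch (the function name and the Monte Carlo example are mine, purely for illustration):

```python
import random

def noisy_estimate(seed):
    """Monte Carlo estimate of pi: count random points landing
    inside the unit quarter-circle."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(100_000))
    return 4 * inside / 100_000

# The "noise" is a deterministic function of the seed: rerunning with
# the same seed reproduces the result exactly, so no true randomness
# was ever needed to obtain it.
assert noisy_estimate(42) == noisy_estimate(42)
```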

Randomness is a void; optimization processes close that gap with ordered patterns. If you don't know how to fill in that hole, it means you don't have enough information about the problem; it doesn't mean that entropy is a magic key to it.

And yes, in practice the cost of obtaining that information may render the derandomized algorithm useless; for example, selecting the true median (which can be found in O(n) time) as the pivot in quicksort will slow it down.
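A sketch of that tradeoff (the function names are mine, not from the thread): both pivot rules produce identical sorted output, but the deterministic median-of-medians pivot, which guarantees a good split, pays a noticeably larger constant factor than a random pivot.

```python
import random

def median_of_medians(xs):
    """Deterministic selection of a guaranteed-good pivot in O(n):
    recursively take the median of the medians of groups of 5."""
    if len(xs) <= 5:
        return sorted(xs)[len(xs) // 2]
    medians = [sorted(xs[i:i + 5])[len(xs[i:i + 5]) // 2]
               for i in range(0, len(xs), 5)]
    return median_of_medians(medians)

def quicksort(xs, pivot_fn):
    """Quicksort parameterized by the pivot-selection strategy."""
    if len(xs) <= 1:
        return xs
    p = pivot_fn(xs)
    lo = [x for x in xs if x < p]
    eq = [x for x in xs if x == p]
    hi = [x for x in xs if x > p]
    return quicksort(lo, pivot_fn) + eq + quicksort(hi, pivot_fn)

# Randomized and derandomized pivots give the same result; only the
# running-time constant differs.
data = random.sample(range(1000), 200)
assert quicksort(data, random.choice) == quicksort(data, median_of_medians) == sorted(data)
```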

Comment author:Bugmaster
09 February 2012 11:08:07PM
0 points
[-]

Upon reading the article, I had the same impression as jsteinhardt. I thought that Eliezer was, in fact, giving practical advice: "if your algorithm performs better with noise, figure out what's wrong with it and remove the noise". This advice is good in some cases, but impractical in others, just as you said in your closing paragraph.

I think the reason why I read Eliezer's statement the way I did is that I don't really see the point in making "philosophical statements" about AI algorithms. These algorithms exist to efficiently solve real problems, not to score points in abstract debates.

Comment author:BT_Uytya
12 February 2012 05:54:55PM
0 points
[-]

"Philosophical statements" about AI algorithms are useful: not for algorithms themselves, but for AI researchers.

An AI researcher shouldn't be mistaken about the Mysterious Power Of The Entropy. The "We don't understand human intelligence, hence if I wish to create an artificial one, I must not understand it either" attitude is wrong.

So if I were to summarize this post, it would be something like: "Noise can't be inherently good; if entropy magically solves your problem, it means that there is some more lawful, non-magical way to solve it."

I think it is part of a more general principle: "Never leave behind parts that work when you haven't the slightest idea why they work."

Comment author:Bugmaster
14 February 2012 03:52:41AM
1 point
[-]

An AI researcher shouldn't be mistaken about the Mysterious Power Of The Entropy

I am having trouble picturing a real AI researcher who believes in the "Mysterious Power Of The Entropy". Maybe I simply lack sufficient imagination. Still, that sounds like something a philosopher might believe in, not an AI researcher who actually implements (or designs) real AI algorithms.

if entropy magically solves your problem, it means that there is some more lawful, non-magical way to solve it.

I guess it depends on what you mean by "magical". There are tons of stochastic algorithms out there that rely on noise explicitly. Sure, there technically does exist a "lawful non-magical way" to solve the problem these algorithms are solving stochastically, but usually it's completely infeasible due to the amount of time it would take to run. Thus, one has no choice but to use the noisy algorithms.
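A standard example of such a noise-reliant method is simulated annealing. The sketch below (the toy cost function and all parameters are invented for illustration) uses random moves and random acceptance of worse states to escape local minima, where exhaustively enumerating a realistic search space would be infeasible:

```python
import math
import random

random.seed(0)  # seeded for reproducibility

def simulated_annealing(cost, start, neighbor, steps=20000, t0=1.0):
    """Stochastic search: noise (random moves, random acceptance of
    uphill steps) lets the search escape local minima."""
    x, best = start, start
    for step in range(1, steps + 1):
        t = t0 / step                  # simple cooling schedule
        y = neighbor(x)
        delta = cost(y) - cost(x)
        # Accept a worse move with probability exp(-delta / t):
        # this is exactly where the noise does real work.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
    return best

# Toy problem: minimize a bumpy 1-D function with many local minima.
cost = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
sol = simulated_annealing(cost, start=0.0,
                          neighbor=lambda x: x + random.uniform(-0.5, 0.5))
```

The returned solution is never worse than the starting point, since only improvements update `best`; the random acceptance rule is what lets the walk leave the basin it started in.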

Never leave behind parts that work when you haven't the slightest idea why they work

Again, this depends on how strictly you want to interpret the rule. For example, multi-layer neural networks work very well, but they are often criticized for not being transparent. You can train the network to recognize handwriting (just for example), but you can't readily explain what all the weights mean once you've trained it.
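As a toy illustration of that opacity (the architecture and hyperparameters here are my own, not from the comment), a small network trained on XOR ends up with weights that compute the right answer yet explain nothing:

```python
import math
import random

random.seed(0)
# Sigmoid with a clipped argument for numerical safety.
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-max(min(z, 30.0), -30.0)))

# XOR truth table: the function the network should learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 8  # hidden units
w1 = [[random.gauss(0, 1) for _ in range(H)] for _ in range(2)]
b1 = [0.0] * H
w2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(x[0] * w1[0][j] + x[1] * w1[1][j] + b1[j]) for j in range(H)]
    return h, sigmoid(sum(h[j] * w2[j] for j in range(H)) + b2)

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(5000):                  # plain per-sample gradient descent
    for x, t in data:
        h, out = forward(x)
        d_out = (out - t) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= d_out * h[j]
            w1[0][j] -= d_h * x[0]
            w1[1][j] -= d_h * x[1]
            b1[j] -= d_h
        b2 -= d_out
loss_after = mse()

# The trained network now maps inputs toward XOR outputs, but printing
# w1 and w2 yields only an opaque wall of numbers, not an explanation.
```

The part that "works" is exactly the part you cannot read off: the learned weights carry no human-legible account of why they classify correctly.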
