Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Manuel_Moertelmaier 13 November 2008 08:19:41AM 1 point

@ comingstorm: Quasi-Monte Carlo integration often outperforms plain Monte Carlo for low-dimensional problems with "smooth" integrands. This is, however, not yet rigorously understood. (The proven performance bounds for medium dimensionality appear to be extremely loose.)

Besides, Monte Carlo doesn't require randomness in the "Kolmogorov complexity == length" sense, only in the "passes statistical randomness tests" sense. Eliezer has, as far as I can see, not discussed the various definitions of randomness.
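To illustrate the point about quasi-Monte Carlo on smooth low-dimensional integrands, here is a minimal, self-contained sketch (my own, not from the comment): it estimates a smooth 1-D integral with pseudorandom points versus a van der Corput low-discrepancy sequence. The function names and the choice of integrand are illustrative assumptions.

```python
import math
import random

def van_der_corput(i, base=2):
    """i-th element of the van der Corput low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def f(x):
    # A smooth 1-D integrand; its exact integral over [0, 1] is e - 1.
    return math.exp(x)

n = 1024
random.seed(0)

# Plain Monte Carlo: average f at n pseudorandom points.
mc_est = sum(f(random.random()) for _ in range(n)) / n

# Quasi-Monte Carlo: average f at n low-discrepancy points.
qmc_est = sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

exact = math.e - 1
mc_err = abs(mc_est - exact)
qmc_err = abs(qmc_est - exact)
```

For smooth integrands in low dimension the QMC error typically shrinks roughly like O(log(n)/n), against the O(n^-1/2) of plain Monte Carlo, so `qmc_err` should come out much smaller than `mc_err` here.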

Comment author: Manuel_Moertelmaier 28 October 2008 10:01:22PM 0 points

http://www.google.com/search?hl=en&q=tigers+climb+trees

On a more serious note, you may be interested in Marcus Hutter's 2007 paper "The Loss Rank Principle for Model Selection". It's about modeling, not about action selection, but there's a loss function involved, so there's a pragmatist viewpoint here, too.

In response to The Level Above Mine
Comment author: Manuel_Moertelmaier 26 September 2008 09:59:09AM 17 points

In a few years, you will be as embarrassed by these posts as you are today by your former claims of being an Algernon, or that a logical paradox would make an AI go gaga, the tMoL argumentation you mentioned in recent days, the Workarounds for the Laws of Physics, Love and Life Just Before the Singularity, and so on and so forth. Ask yourself: Will I have to delete this, too?

And the person who told you to go to college was probably well-meaning, and not too far from the truth. Was it Ben Goertzel?

In response to Magical Categories
Comment author: Manuel_Moertelmaier 25 August 2008 05:51:41AM 0 points

In contrast to Eliezer, I think it's (remotely) possible to train an AI to reliably recognize the human mind states underlying expressions of happiness. But this would still not imply that the machine's primary, innate emotion is unconditional love for all humans. The machines would merely be addicted to watching happy humans.

Personally, I'd rather not be an object of some quirky fetishism.

Monty Python, of course, realized this long ago:

http://www.youtube.com/watch?v=HoRY3ZjiNLU
http://www.youtube.com/watch?v=JTMXtJvFV6E

In response to Invisible Frameworks
Comment author: Manuel_Moertelmaier 22 August 2008 06:05:26AM 0 points

I strongly second Marcello here. When you wrote in CFAI that "The fact that a subgoal is convergent [...] doesn't lend the subgoal magical powers in any specific goal system", that about settled the matter in a single sentence. Why the long, "lay audience" posts now, eight years later?