SilasBarta comments on The Anthropic Trilemma - Less Wrong

Post author: Eliezer_Yudkowsky 27 September 2009 01:47AM




Comment author: SilasBarta 29 September 2009 04:19:22PM 0 points

Unless I'm misunderstanding UDT, isn't speed another issue? An FAI must know what's likely to be happening in the near future in order to prioritize its computational resources so they're handling the most likely problems. You wouldn't want it churning through the implications of the Loch Ness monster being real while a mega-asteroid is headed for the earth.

Comment author: Eliezer_Yudkowsky 29 September 2009 05:33:07PM 2 points

Wei Dai should not be worrying about matters of mere efficiency at this point. First we need to know what to compute via a fast approximation.

(There are all sorts of exceptions to this principle, and they mostly have to do with "efficient" choices of representation that affect the underlying epistemology. You can view a Bayesian network as efficiently compressing a raw probability distribution, but it can also be seen as committing to an ontology that includes primitive causality.)
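The compression half of that aside can be made concrete with a standard parameter count (my toy example, not from the thread): a full joint distribution over n binary variables needs 2**n − 1 free parameters, while a Bayesian network needs only one conditional table per node, sized by its parents. The rain/sprinkler structure below is a hypothetical illustration.

```python
# Toy illustration: a Bayesian network as a compressed joint distribution.
# Hypothetical structure: Rain -> WetGrass <- Sprinkler (all binary).
parents = {
    "Rain": [],
    "Sprinkler": [],
    "WetGrass": ["Rain", "Sprinkler"],
}

n = len(parents)
# Full joint table over n binary variables: 2**n - 1 free parameters.
full_joint_params = 2**n - 1
# Network: each node needs 2**len(parents) conditional probabilities.
network_params = sum(2**len(p) for p in parents.values())

print(full_joint_params, network_params)  # prints "7 6"
```

The saving is tiny at n = 3 but grows exponentially with n for sparse graphs, and — as the comment notes — the factorization is not a neutral compression: choosing which edges exist commits you to an ontology with primitive causal structure.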

Comment author: SilasBarta 29 September 2009 11:33:25PM 1 point

> Wei Dai should not be worrying about matters of mere efficiency at this point. First we need to know what to compute via a fast approximation.

But that path is not viable here. If UDT claims to make decisions independently of any anticipation, then it seems it can only be optimal on average over all the possibilities it's prepared to compute an output for. By a No Free Lunch argument, that means it must sacrifice optimality in this particular world-state, even given infinite computing time, so having a quick approximation doesn't help.

If an AI running UDT is just as prepared to find Nessie as to find out how to stop the incoming asteroid, it will be inferior to a program designed just to find out how to stop asteroids. Expand the Nessie possibility to improbable world-states, and the asteroid possibility to probable ones, and you see the problem.
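The tradeoff being claimed here can be sketched numerically (my toy model, not anything from the thread, and all the numbers are made up): an agent that divides a fixed compute budget uniformly over every hypothesis it is prepared to answer for does worse in the probable world than an agent that allocates in proportion to its prior.

```python
# Toy model: a fixed compute budget split across hypotheses.
# Payoff in the probable (asteroid) world grows with effort spent on it;
# linear utility is assumed purely for simplicity.
def utility(effort_on_asteroid: float) -> float:
    return effort_on_asteroid

budget = 1.0
hypotheses = ["asteroid", "nessie"]
prior = {"asteroid": 0.99, "nessie": 0.01}

# Uniform allocation: equally prepared for every hypothesis.
uniform_effort = budget / len(hypotheses)        # 0.5 on the asteroid

# Prior-weighted allocation: effort proportional to probability.
weighted_effort = budget * prior["asteroid"]     # 0.99 on the asteroid

print(utility(uniform_effort), utility(weighted_effort))  # prints "0.5 0.99"
```

Whether this is actually a problem for UDT depends on whether its output policy is itself allowed to condition resource allocation on the prior over world-states, which is the point under dispute in the thread.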

Though I freely admit I may be completely lost on this.