wedrifid comments on Don't be Pathologically Mugged! - Less Wrong

4 Post author: Psy-Kosh 28 August 2009 09:47PM


Comment author: wedrifid 29 August 2009 03:41:09AM 5 points [-]

A decision algorithm that would tend to win in this contrived situation would tend to lose in regular situations, right?

Yes. There is No Free Lunch: for every possible algorithm it is possible to construct a problem on which that algorithm fares poorly. An algorithm optimized for a problem that pays off in utility for being irrational will tend to lose in regular situations. Likewise, decisions made from pathological priors will tend to lose, and that includes having inaccurate priors about the likely behaviour of the superintelligence you are playing against.
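The "for every algorithm there is a problem where it fares poorly" point can be made concrete with a toy construction. This is my own minimal sketch, not anything from the thread: for any deterministic binary predictor, an adversary can build a sequence that falsifies every one of its predictions. The predictor and function names are hypothetical.

```python
# For any deterministic predictor of binary sequences, an adversarial
# environment can do the opposite of whatever the predictor says,
# producing a sequence on which it is always wrong.

def majority_predictor(history):
    """Predict the bit seen most often so far (ties -> 0)."""
    return 1 if sum(history) * 2 > len(history) else 0

def adversarial_sequence(predictor, length):
    """Build a sequence that falsifies every prediction the algorithm makes."""
    seq = []
    for _ in range(length):
        guess = predictor(seq)
        seq.append(1 - guess)  # the environment does the opposite
    return seq

seq = adversarial_sequence(majority_predictor, 20)
correct = sum(majority_predictor(seq[:i]) == seq[i] for i in range(len(seq)))
print(correct)  # 0: the predictor never scores on its nemesis sequence
```

The same construction works against any predictor you substitute in, which is the sense in which no algorithm wins everywhere.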

Comment author: SilasBarta 29 August 2009 09:32:55AM 6 points [-]

Right, which is why I say it's misguided to search for a truly general intelligence; what you want instead is an intelligence with priors slanted toward this universe, not one that has to iterate through every hypothesis shorter than itself.

Making a machine that's optimal across all universe algorithms means making it very suboptimal for this universe.
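The trade-off described above can be sketched numerically. Assume, as an illustration of my own, a Solomonoff-style prior that weights each hypothesis by 2^(-description length): it concentrates belief on short hypotheses, which helps in an Occamian universe and hurts in an anti-Occamian one, and the reversed weighting has the opposite profile. The hypothesis names and bit lengths are stand-ins, not real programs.

```python
# Toy priors over three hypothetical hypotheses with given description
# lengths (in bits). An Occam prior favours short descriptions; its
# mirror image favours long ones. Neither is optimal in both kinds of
# universe, illustrating the comment's point.

hypotheses = {"h_short": 2, "h_medium": 10, "h_long": 40}

def occam_prior(lengths):
    """Weight each hypothesis by 2**(-length), then normalize."""
    weights = {h: 2.0 ** -l for h, l in lengths.items()}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

def anti_occam_prior(lengths):
    """Reversed weighting: longer descriptions get more mass."""
    longest = max(lengths.values())
    weights = {h: 2.0 ** -(longest - l) for h, l in lengths.items()}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

print(occam_prior(hypotheses))       # mass concentrated on h_short
print(anti_occam_prior(hypotheses))  # mass concentrated on h_long
```

An agent committed to either weighting is thereby suboptimal in the universe that rewards the other, which is one way of reading "optimal across all universe algorithms means very suboptimal for this universe."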

Comment author: wedrifid 29 August 2009 01:04:17PM 1 point [-]

That's true. At least I think it is. I can't imagine what a general intelligence that could handle this universe and an anti-Occamian one optimally would look like.

Comment author: Vladimir_Nesov 29 August 2009 09:16:32AM *  -1 points [-]

What is an "inaccurate prior"? A prior that is not posterior enough, that is, a state of knowledge based on too little information or evidence? The phrase has frequentist connotations.

Comment author: wedrifid 29 August 2009 12:59:56PM 1 point [-]

Good point, Vladimir. What phrase would I use to convey not just having too little evidence, but having evidence that happens to be concentrated in a really inconvenient way? Perhaps I'll just go with 'bad priors'. Such as the sort of prior distribution you would have after drawing three red balls out of a jar without replacement, knowing that the five balls left are each red or blue, but having no clue that you've just drawn the only three reds. Not so much lacking evidence as having evidence that is pathological, improbable, or inconvenient.
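The jar example can be worked through exactly. I'm filling in specifics the comment leaves open: assume an 8-ball jar (3 drawn, 5 remaining) and a uniform prior over the number of red balls. After honestly updating on three red draws without replacement, the posterior predictive says the next ball is probably red, even though in the scenario described the five remaining balls are all blue.

```python
from fractions import Fraction

N, DRAWN = 8, 3  # assumed jar size; 3 balls drawn, all red

def likelihood_three_reds(r):
    """P(first three draws are all red | jar contains r red balls)."""
    p = Fraction(1)
    for i in range(DRAWN):
        if r - i <= 0:
            return Fraction(0)
        p *= Fraction(r - i, N - i)
    return p

# Uniform prior over the number of reds, 0..8, updated on the evidence.
prior = {r: Fraction(1, N + 1) for r in range(N + 1)}
unnorm = {r: prior[r] * likelihood_three_reds(r) for r in prior}
z = sum(unnorm.values())
posterior = {r: w / z for r, w in unnorm.items()}

# Posterior predictive: probability the 4th draw is red.
p_next_red = sum(posterior[r] * Fraction(r - DRAWN, N - DRAWN)
                 for r in range(DRAWN, N + 1))
print(p_next_red)  # 4/5, yet the true jar held only the 3 reds already drawn
```

The update is perfectly rational given the evidence; the evidence itself is just concentrated in an inconvenient way, which is the sense of 'bad priors' above.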