Technoguyrob comments on My Take on a Decision Theory - Less Wrong

Post author: ygert 09 July 2013 10:46AM




Comment author: Viliam_Bur 09 July 2013 01:55:30PM *  5 points

Here is an example:

Omega: Here are the rules, make your choice.
Decision Theory: makes the choice.
Omega: Actually, I lied. You get the opposite of what I told you, so now you have lost.

Obviously, among decision theories that assume Omega never lies, the better decision theory gets the worse result in this situation.

Even worse example:

Omega: We have two boxes and... oh, I realized I don't like your face. You lose.

For each decision theory there can be an Omega that dislikes that specific theory, and then that theory does not win.
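The adversarial construction can be made concrete in a toy model. This is only a sketch: the theory names below are placeholders, not implementations of any actual decision theory, and `make_adversarial_omega` is a hypothetical helper.

```python
def make_adversarial_omega(target_theory):
    """Return an Omega that punishes exactly one decision theory."""
    def omega(theory):
        # Omega inspects which theory it is facing and pays accordingly.
        return 0 if theory == target_theory else 1
    return omega

theories = ["CDT", "EDT", "UDT"]  # placeholder labels

# For every theory there exists an Omega against which it scores worst,
# so no single theory wins against all possible Omegas.
for t in theories:
    omega = make_adversarial_omega(t)
    payoffs = {th: omega(th) for th in theories}
    assert payoffs[t] == 0
    assert all(payoffs[other] == 1 for other in theories if other != t)
```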

So, does the No Free Lunch-like theorem predict only these results, or something stronger?

Comment author: Technoguyrob 09 July 2013 04:14:29PM *  1 point

This reminds me of the non-existence of a perfect encryption algorithm: an encryption algorithm is a bijective map S -> S, where S is the set of finite strings over a given alphabet. By the pigeonhole principle, the image of the strings of length at most n cannot lie within the strings of length at most n-1, so either no string gets compressed (reduced in length) or some strings become longer after compression.
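The counting argument can be checked directly on a small alphabet. This is a sketch under the assumption that the bijection is just a random permutation of a finite set of strings; any bijection of this set onto itself behaves the same way.

```python
from itertools import product
import random

# All binary strings of length <= 3 (15 strings, including the empty one).
strings = ["".join(bits) for n in range(4) for bits in product("01", repeat=n)]

# An arbitrary bijection from this set to itself: a random permutation.
random.seed(0)
shuffled = strings[:]
random.shuffle(shuffled)
bijection = dict(zip(strings, shuffled))

# Pigeonhole: the 8 strings of length 3 cannot all map into the 7 strings
# of length <= 2, so at least one string fails to get shorter.
not_compressed = [s for s in strings if len(bijection[s]) >= len(s)]
assert len(not_compressed) >= 1
```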

Comment author: Technoguyrob 09 July 2013 04:29:25PM *  1 point

I think this can be solved in practice by exploiting the assumption that only a very sparse subset of all such strings will ever be fed to our encryption algorithm when it is embedded physically. If we low-dimensionally parametrize hash functions of the form above, we can store the parameters for choosing a suitable hash function alongside the encrypted text. Our algorithm then only produces compressed strings of greater length if we try to encrypt more than some constant fraction of all possible strings of length <= n, with n fixed (namely, when we saturate the suitable choices of parameters). If this constant is anywhere within a few orders of magnitude of 1, the algorithm is always compressive in physical practice, by the finiteness of matter: we will never have enough physical bits to represent that fraction of strings simultaneously.

Maybe a similar argument can be made for Omega? If Omega must be made of matter, we can always pick a decision theory given the finiteness of actual Omegas as implemented in physics. Of course, if Omega is allowed to lie, there may be no algorithm for choosing the optimal decision theory unless we can see Omega's source code, even though a good choice exists.

Comment author: ThisSpaceAvailable 15 July 2013 12:43:35AM 0 points

I don't understand what you're saying here. You mean you pick a hash function, and then store both the hashed value and the parameters for the hash function? But hash functions are lossy.

Comment author: ThisSpaceAvailable 15 July 2013 12:39:25AM 0 points

Why does perfection in an encryption algorithm require compression? Did you mean to say "compression algorithm"?

Some notes on compression algorithms: If one defines a compression algorithm as a bijective map on all strings, and a decompression algorithm as the inverse of a compression algorithm, then by this definition every decompression algorithm is also a compression algorithm. Suppose we apply a compression algorithm to a string, then apply the algorithm to the result, and so on; call this the orbit of the string. Every orbit is either finite or infinite. A finite orbit eventually comes back to the original string. The lengths of the strings in an orbit cannot be strictly monotonically decreasing, and if they are constant, the orbit must be finite. In an infinite orbit, the lengths of the strings tend to infinity. So every compression algorithm either simply cycles between some strings or eventually makes things bigger.
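On a finite set of strings the orbit picture can be verified directly. This is a sketch under the assumption that we restrict the bijective "compression" map to binary strings of length at most 3 and model it as a random permutation; on such a finite set every orbit is a cycle that returns to its starting string.

```python
from itertools import product
import random

# A "compression algorithm" restricted to binary strings of length <= 3:
# a bijection of this finite set onto itself (a random permutation).
strings = ["".join(bits) for n in range(4) for bits in product("01", repeat=n)]
random.seed(1)
shuffled = strings[:]
random.shuffle(shuffled)
f = dict(zip(strings, shuffled))

def orbit(s):
    """Apply f repeatedly, collecting strings until we return to s."""
    seq = [s]
    t = f[s]
    while t != s:
        seq.append(t)
        t = f[t]
    return seq

# Every orbit is finite: applying f once more to its last element
# closes the cycle back to the original string.
for s in strings:
    o = orbit(s)
    assert f[o[-1]] == s
```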

Comment author: Technoguyrob 15 July 2013 06:39:33PM 0 points

Yes, thank you, I meant compression algorithm.

Comment author: David_Gerard 09 July 2013 08:15:00PM -1 points

That's the precise NFL I had in mind.

Comment author: Luke_A_Somers 10 July 2013 07:32:35PM *  1 point

I don't see how this is comparable. It's not like you're shuffling around bad decisions and good decisions into an optimal arrangement. You just try to make better decisions, and the bad decisions can go take a hike.

(edit: basically, what Wedifrid said at 2:25)

Comment author: David_Gerard 10 July 2013 07:41:19PM *  0 points

Yeah, as I noted above. It's reasonably obvious I'm not actually any sort of mathematician :-)