Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: So8res 16 April 2014 04:57:13PM *  5 points [-]

Incorrect -- your implementation itself also affects the environment via more than your chosen output channels. (Your brain can be scanned, etc.) If you define waste heat, neural patterns, and so on as "output channels" then sure, we can say you only interact via I/O (although the line between I and O is fuzzy enough and your control over the O is small enough that I'd personally object to the distinction).

However, AIXI is not an agent that communicates with the environment only via I/O in this way: if you insist on using the I/O model then I point out that AIXI neglects crucial I/O channels (such as its source code).

until I see the actual math

In fact, Botworld is a tool that directly lets us see where AIXI falls short. (To see the 'actual math', simply construct the game described below with an AIXItl running in the left robot.)

Consider a two-cell Botworld game containing two robots, each in a different cell. The left robot is running an AIXI, and the left square is your home square. There are three timesteps. The right square contains a robot which acts as follows:

1. If there are no other robots in the square, Pass.
2. If another robot just entered the square, Pass.
3. If another robot has been in the square for a single turn, Pass.
4. If another robot has been in the square for two turns, inspect its code. If it is exactly the smallest Turing machine which never takes any action, move Left.
5. In all other cases, Pass.

Imagine, further, that your robot (on the left) holds no items, and that the robot on the right holds a very valuable item. (Therefore, you want the right robot to be in your home square at the end of the game.) The only way to get that large reward is to move right and then rewrite yourself into the smallest Turing machine which never takes any action.

Now, consider the AIXI running on the left robot. It quickly discovers that the Turing machine which receives the highest reward acts as follows:

1. Move right.
2. Rewrite self into the smallest Turing machine which never takes any action.

The AIXI then, according to the AIXI specification, does the output of the Turing machine it has found. But the AIXI's code is as follows:

1. Look for good Turing machines.
2. When you've found one, do its output.

Thus, what the AIXI will do is this: it will move right, then it will do nothing for the rest of time. But while the AIXI is simulating the Turing machine that rewrites itself into a stupid machine, the AIXI itself has not eliminated the AIXI code. The AIXI's code is simulating the Turing machine and doing what it would have done, but the code itself is not the "do nothing ever" code that the second robot was looking for -- so the AIXI fails to get the reward.

The AIXI's problem is that it assumes that if it acts like the best Turing machine it found, then it will do as well as that Turing machine. This assumption is true when the AIXI only interacts with the environment over I/O channels, but it is not true in the real world (where, e.g., we can inspect the AIXI's code).
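The failure mode above can be sketched as a toy simulation. This is not real Botworld or AIXItl; the robot names, the string encoding of "code", and the inspection rule are all hypothetical simplifications chosen only to illustrate the argument:

```python
# Toy sketch of the two-cell Botworld game described above.
# The right robot inspects the visitor's *source code* after two turns,
# and only moves Left if that code is literally the do-nothing machine.

DO_NOTHING = "do_nothing"  # stands in for "smallest TM that never acts"

def right_robot_moves_left(visitor_code_at_inspection):
    """Right robot's rule: move Left iff the visitor's code is exactly
    the do-nothing machine when inspected (after two turns)."""
    return visitor_code_at_inspection == DO_NOTHING

def run_self_rewriter():
    """A plain agent that moves right, then rewrites its own code."""
    code = "self_rewriter"
    # t=1: move right; t=2: rewrite self into the do-nothing machine
    code = DO_NOTHING
    # t=3: the right robot inspects the agent's actual code
    return right_robot_moves_left(code)

def run_aixi_imitator():
    """An AIXI-like agent that simulates the best TM it found and copies
    that TM's outputs -- but its own code stays the AIXI search loop."""
    code = "aixi_search_loop"
    # t=1: outputs "move right" (copying the simulated TM)
    # t=2: the simulated TM rewrites *itself*, so AIXI outputs nothing;
    #      AIXI's own code is still the search loop, not DO_NOTHING
    # t=3: inspection sees the AIXI search loop, not the do-nothing code
    return right_robot_moves_left(code)

print(run_self_rewriter())   # True: inspection passes, reward obtained
print(run_aixi_imitator())   # False: inspection fails, no reward
```

The point the sketch makes concrete: the two agents emit identical outputs on every timestep, yet only the agent whose implementation actually changes gets the reward, because the environment reads the implementation directly rather than only the I/O stream.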

Comment author: khafra 21 April 2014 05:18:20PM 0 points [-]

If you define waste heat, neural patterns, and so on as "output channels" then sure, we can say you only interact via I/O (although the line between I and O is fuzzy enough and your control over the O is small enough that I'd personally object to the distinction).

Also, even with perfect control of your own cognition, you would be restricted to a small subset of possible output strings. Outputting bits on multiple channels, each of which is dependent on the others, constrains you considerably; although I'm not sure whether the effect is lesser or greater than having output as a side effect of computation.

As I mentioned in a different context, it reminds me of UDT, or of the 2048 game: Every choice controls multiple actions.

Comment author: raisin 21 April 2014 03:54:08PM 0 points [-]

Any guides on how to do that?

Comment author: khafra 21 April 2014 04:15:36PM 2 points [-]

Rejection Therapy is focused in that direction.

Comment author: khafra 16 April 2014 01:09:55PM 2 points [-]

Yet another possible failure mode for naive anthropic reasoning.

Comment author: khafra 09 April 2014 02:14:29PM 6 points [-]

Since one big problem with neural nets is their lack of analyzability, this geometric approach to deep learning neural networks seems likely to be useful.

In response to Polling Thread
Comment author: Gunnar_Zarncke 07 April 2014 03:12:51PM 0 points [-]

Do you practice some form of vegetarianism or other diet or consumption restriction, and if yes, which?

What are your reasons for following this practice? Please answer only if you do follow a specific restriction. Do not use this poll to state your reasons for not following such restrictions. If you are interested in that, please post a separate poll.

This practice improves my personal health

This practice benefits the overall population health

This practice improves human working/living conditions

This practice avoids harm to animals (and if applicable plants)

This practice makes it possible to feed more humans, reducing hunger and starvation

This practice helps preserve the environment/use it sustainably

This practice has economic benefits

This practice is encouraged in my social circles

This practice has other benefits (please elaborate in the comments)


Comment author: khafra 08 April 2014 11:18:33AM 1 point [-]

My "other" vote is alternate-day fasting, which I've been doing all year. Not sure if that's what you're looking for, but I feel like it's a dietary restriction, and benefits my health.

Comment author: wuncidunci 03 April 2014 08:26:53AM 6 points [-]

A video of the whole talk is available here.

Comment author: khafra 03 April 2014 01:53:38PM 4 points [-]

And his textbook on the new univalent foundations of mathematics in homotopy type theory is here.

Comment author: Douglas_Knight 29 March 2014 09:56:15PM 2 points [-]

Homotopy type theory differs from ZFC in two ways. One way is that it, like ordinary type theory, is constructive and ZFC is not. The other is that it is based in homotopy theory. It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in slides you link to are about homotopy theory.

Tegmark is quite explicit that he has no measure and thus no prior. Switching foundations doesn't help.

Comment author: khafra 31 March 2014 11:24:30AM *  0 points [-]

It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in slides you link to are about homotopy theory.

I found a textbook after reading the slides, which may be clearer. After reading the book's introduction -- or even the short text blurb on the site -- I really don't think their mathematical aspirations are limited to homotopy theory:

Homotopy type theory offers a new “univalent” foundation of mathematics, in which a central role is played by Voevodsky’s univalence axiom and higher inductive types. The present book is intended as a first systematic exposition of the basics of univalent foundations, and a collection of examples of this new style of reasoning.

Comment author: asr 28 March 2014 05:32:42PM 1 point [-]

the implied prior

Which implied prior? My understanding is that the problem with Multiverse theories is that we don't have a way to assign probability measures to the different possible universes, and therefore we cannot formulate an unambiguous prior distribution.

Comment author: khafra 28 March 2014 06:12:40PM -1 points [-]

Well, I don't really do math; but the way I understand it, computable universe theory suggests Solomonoff's universal prior, while the ZFC-based mathematical universe theory -- being a superset of the computable -- suggests a broader prior, and thus weirder anthropic expectations. Unless you need to be computable to be a conscious observer, in which case we're back to Solomonoff induction.
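For reference, Solomonoff's universal prior weights each (prefix-free) program p in proportion to 2^-|p|, so shorter hypotheses dominate. A minimal sketch of that weighting (the program bit-strings here are arbitrary examples, not real encodings of universes):

```python
# Hedged illustration of the 2**-length weighting behind Solomonoff's
# universal prior: each program's weight halves with every extra bit.

def universal_prior_weight(program_bits: str) -> float:
    """Unnormalized prior weight of a program given as a bit-string."""
    return 2.0 ** -len(program_bits)

# Three hypothetical prefix-free programs of increasing length:
weights = {p: universal_prior_weight(p) for p in ["0", "10", "110"]}
print(weights)  # the shortest program gets the most prior mass
```

The anthropic worry in the comment is that once you leave the computable programs (as a ZFC-based Level IV multiverse does), there is no analogous length-based weighting to sum over, which is why the "larger prior" is ill-defined.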

Comment author: khafra 28 March 2014 04:43:26PM 1 point [-]

Apparently, founding mathematics on Homotopy Type Theory instead of ZFC makes automated proof checking much simpler and more elegant. Has anybody tried reformulating Max Tegmark's Level IV Multiverse using Homotopy Type Theory instead of sets to see if the implied prior fits our anthropic observations better?

Comment author: khafra 27 March 2014 11:43:09AM 1 point [-]
