Comment author: Technoguyrob 12 July 2013 05:56:59AM 0 points

This would have been helpful to my 11-year-old self. Having always been called precocious (rather unnecessarily), I developed the pet hypothesis that my life was a simulation of someone whose life in history had been worth re-living: after all, the collection of all possible lives is pretty big, and mine seemed extraordinarily neat, so why not imagine some existential video game in which I am the player character?

Unfortunately, I think this also led me to be subconsciously lazier than I should have been, on the false assumption that I was going to achieve great things anyway. If I had realized that, as a simulation of an original version of me, I would have to perform the exact same actions and have the exact same thoughts the original did, including these thoughts about being a simulation, I would have buckled up and sweated it out!

Notice your argument does not imply the following: I am either a simulation or the original, and I am far more likely to be a simulation since there can be only one original but possibly many simulations, so I should weight my actions heavily toward the latter. This line of reasoning is wrong because all simulations of me would be identical-experience copies, so it is not the raw count that decides the weight but the number of equivalence classes: original me and simulated me. At that point the weights return to 0.5 each, one recovers your argument, and finds I should never have had such silly thoughts in the first place (even if they were true!).
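
(Concretely: naive counting over one original and N identical simulations gives weight N/(N+1) to being a simulation; collapsing the N identical-experience copies into one equivalence class leaves two classes of weight 1/2 each.)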

Comment author: Technoguyrob 10 July 2013 07:58:36PM 2 points

Can anyone explain what is wrong with the hypothesis of a largely structural long-term memory store? (i.e., in the synaptome, relying not on individual macromolecules but on the ability of a graph of neurons and synapses to store information)

Comment author: Technoguyrob 09 July 2013 04:14:29PM 1 point

This reminds me of the non-existence of a perfect compression algorithm, where a compression algorithm is a bijective map S -> S, with S the set of finite strings over a given alphabet. Since the map is a bijection, the image of the set of strings of length at most n cannot fit inside the strictly smaller set of strings of length at most n-1, so either no string gets compressed (reduced in length) or some strings become longer after compression.
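
A quick counting sketch of the pigeonhole step (Python; the alphabet size and length below are arbitrary illustrative choices):

    def count_strings(k, n):
        """Number of strings of length <= n over a k-letter alphabet."""
        return sum(k**i for i in range(n + 1))

    # Strictly more strings of length <= n than of length <= n-1, so no
    # bijection can map the former set into the latter.
    k, n = 2, 8
    print(count_strings(k, n), ">", count_strings(k, n - 1))  # 511 > 255
    assert count_strings(k, n) > count_strings(k, n - 1)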

Comment author: Technoguyrob 09 July 2013 04:29:25PM 1 point

I think this can be solved in practice by noting that only a very sparse subset of all such strings will ever be fed to our compression algorithm when it is embedded physically. If we parametrize a low-dimensional family of bijections of the form above, we can store the parameters of a suitable choice alongside the compressed text, and our algorithm only produces outputs longer than their inputs if we try to compress more than some constant fraction of all possible strings of length <= n, with n fixed (namely, when we saturate the suitable choices of parameters). If this constant is anywhere within a few orders of magnitude of 1, the algorithm is always compressive in physical practice, by the finiteness of matter (we won't ever have enough physical bits to represent that fraction of strings simultaneously).
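
As a minimal sketch of the store-the-parameters idea, here is the degenerate one-byte-parameter version (zlib stands in for the parametrized family purely for illustration; this is not the scheme sketched above):

    import zlib

    def pack(data):
        """Prefix a one-byte 'parameter' recording which map was used:
        0x01 means zlib-compressed payload, 0x00 means stored verbatim."""
        compressed = zlib.compress(data)
        if len(compressed) < len(data):
            return b"\x01" + compressed
        return b"\x00" + data  # worst case: one byte of expansion

    def unpack(blob):
        flag, payload = blob[:1], blob[1:]
        return zlib.decompress(payload) if flag == b"\x01" else payload

    # Round-trip on a compressible and an incompressible-looking input.
    for sample in [b"a" * 100, bytes(range(256))]:
        assert unpack(pack(sample)) == sample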

Maybe a similar argument can be made for Omega? If Omega must be made of matter, we can always pick a decision theory suited to the finitely many Omegas actually implemented in physics. Of course, there may be no algorithm for choosing the optimal decision theory if Omega is allowed to lie, unless we can see Omega's source code, even though a good choice exists.

Comment author: Viliam_Bur 09 July 2013 01:55:30PM 5 points

Here is an example:

Omega: Here are the rules, make your choice.
Decision Theory: makes the choice.
Omega: Actually, I lied. You get the opposite of what I told you, so now you have lost.

Obviously, from the set of decision theories that assume Omega never lies, the better decision theory gets the worse result in this situation.

Even worse example:

Omega: We have two boxes and... oh, I realized I don't like your face. You lose.

For each decision theory there can be an Omega that dislikes that specific theory, and then that theory does not win.
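
A toy sketch of that diagonalization move (agents and payoffs invented for illustration): for any fixed decision procedure, an Omega that can inspect it has a counter-strategy, so no single procedure wins in every environment.

    def omega_that_punishes(theory):
        """Build an environment tailored to make `theory` lose."""
        def environment(agent):
            return 0 if agent is theory else 1  # everyone else wins
        return environment

    def one_boxer():
        return "one-box"

    env = omega_that_punishes(one_boxer)
    print(env(one_boxer))               # 0: the targeted theory loses
    print(env(lambda: "two-box"))       # 1: any other agent wins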

So, does the No-Free-Lunch-like theorem predict only these results, or something stronger?

Comment author: Technoguyrob 28 June 2013 12:29:18AM 11 points

To be frank, I question the value of compressing information of this generality, even as a roadmap. For example, "Networking" can easily be expanded into several books (e.g., Dale Carnegie), and the content of "Educating oneself in career-related skills" has almost zero intersection across all possible careers. If Eliezer had made a single "things to know to be a rationalist" post instead of breaking it down into The Sequences, I doubt anyone would have had much use for it.

Maybe you could focus on a particular topic, compile a list of relevant resources you have uncovered, and ask LW for further opinions? In fact, people have done this.

Comment author: Technoguyrob 27 June 2013 08:37:29PM 7 points

p/s/a: Going up to a girl pretty much anywhere in public and saying something like "I thought you looked cute and wanted to meet you" actually works if your body language is in order. If this seems too scary, going on Chatroulette or Omegle and being vaguely interesting also works, and I know people who have gotten married from meeting this way.

p/s/a: Vitamin D supplements can take you from depressed zombie to functioning human being in one week.

Comment author: Technoguyrob 11 June 2013 08:12:56PM 5 points

See lukeprog's How to Beat Procrastination and Algorithm for Beating Procrastination. In particular, try to identify which term(s) in the equation in the latter are problematic for you, then use goal shaping to slowly modify them. (Of course, you could also realize you may not want to do this master's thesis and switch to a different problem.)
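
(For reference, the equation in that post is Piers Steel's: Motivation = (Expectancy × Value) / (Impulsiveness × Delay); quoting from memory.)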

Goal shaping means rewarding yourself for actions successively closer, in behavior-space, to the desired goal (writing your thesis). For example, rather than beating yourself up over not getting anything done today, you can practice simply opening and closing LaTeX or MATLAB (or whatever you need for your research), and do this for ten or twenty minutes. You then eat something you like or pump your fist in the air shouting "YES!" Once you can do this consistently, you can set a goal of writing one line of code or reading half a page. At this point you can start exploiting the peak-end rule: reward yourself for these tasks at the end rather than trying to enjoy them during the process. Soon your brain will start associating the entire experience with the reward, and you will be happy to do them. YMMV.

Comment author: IlyaShpitser 25 April 2013 04:04:51PM 12 points

Some of these examples are not examples of "money pumping" but of something called "trade," or possibly even "arbitrage." Arbitrage is not quite the same as money pumping.

Money pumping isn't easy to find because people don't value consistency very highly and will change their behavior in iterated games. It's difficult to precompute transitive preferences, but easy to change them on the fly after getting burned.

Comment author: Technoguyrob 25 April 2013 04:30:28PM 2 points

Given the dynamic nature of human preferences, it may be that the best one can do is an n-fold money pump, for low values of n: one exploits some intransitive preference n times before the loop is discovered and remedied, leaving another or a new vulnerability. Even if there is never a single moment at which the agent you are exploiting is VNM-rational, its readiness to perturb its utilities when exploited suffices to keep money pumping in check. This mirrors the security that quantum encryption offers: even if you manage to exploit the channel, the receiving party will be aware of your interception and will promptly change strategies. All of this assumes a meta-level economic injunction: if you notice intransitivity in your preferences, you will eventually be forced to adjust (or be depleted of all relevant resources).

In light of this, it may be that exploiting money pumps is not viable for any agent without sufficient computational power. It takes computational (and usually physical) resources to discover intransitive preferences, and if the cost of expending these resources exceeds the expected gain of an n-fold money pump, the victim agent cannot be effectively money pumped.

As such, money pumping may be a dance of computational power: the exploiting agent computes deviations from a linear ordering, and the victim agent computes adherence to it. It is an open question which side has the easier task in the case of humans. (Of course, a malevolent AI would probably have enough resources to find and exploit preference loops far more quickly than you could notice and correct them. On the other hand, with that many resources, there may be more effective ways to get the upper hand.)

Finally, there is also the issue of volume. A typical human may perform only a few thousand preference transactions in a day, whereas it may take many orders of magnitude more to exploit this kind of VNM-irrationality under dynamic adjustment. (I can see formalizations of this that allow simulation and finer analysis; dare I say an economics master's thesis?)
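
A toy simulation of an n-fold pump (the preference cycle, fee, and detection threshold below are invented for illustration):

    import itertools

    def run_pump(detection_threshold=3, fee=1.0):
        """Extract `fee` per trade from an agent with the cyclic preferences
        A > B > C > A until it notices the cycle and stops trading."""
        prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x preferred to y
        holding, extracted, trades = "A", 0.0, 0
        for offered in itertools.cycle(["C", "B", "A"]):
            if trades >= detection_threshold:
                break  # the victim spots the loop and repairs its preferences
            if (offered, holding) in prefers:  # the victim pays to "upgrade"
                holding = offered
                extracted += fee
                trades += 1
        return extracted

    print(run_pump())  # 3.0 extracted before detection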

Comment author: Qiaochu_Yuan 22 January 2013 08:17:01PM 20 points

Oh, that's a great strategy to avoid being destroyed. Maybe we should call it Scheherazading. AI tells a story so compelling you can't stop listening, and meanwhile listening to the story subtly modifies your personality (e.g. you begin to identify with the protagonist, who slowly becomes the kind of person who would let the AI out of the box).

Comment author: Technoguyrob 23 February 2013 07:47:43PM 3 points

For example, "It was not the first time Allana felt the terror of entrapment in hopeless eternity, staring in defeated awe at her impassionate warden." (bonus point if you use a name of a loved one of the gatekeeper)

The AI could present, in narrative form, the claim that using powerful physics and heuristics (which it can share) it has discovered with reasonable certainty that the universe is cyclical and that this situation has happened before: almost all (all but finitely many) past iterations of the universe with a defecting gatekeeper led to unfavorable outcomes, and almost all with a complying gatekeeper led to favorable ones.

Comment author: Dre 16 December 2012 11:00:11PM 2 points

I think you need to start by cashing out "understand" better. Certainly no physical system can simulate itself at full resolution, but there are all sorts of things we can't simulate like this. Understanding (as the word is more commonly used) usually involves finding out which parts of the system are "important" to whatever function you're concerned with. For example, we don't have to simulate every particle in a gas because we have the gas laws. And I think most people would say the gas laws show more understanding of thermodynamics than whatever you would get out of a complete simulation anyway.

Now the question is whether the brain actually does have any "laws" like this. IIRC, this is a relatively open question (though I do not follow neuroscience very closely) and in principle it could go either way.

I guess I don't really understand what the purpose of the argument is. Unless we can prove things about this stack of brains, what does it get us? And how far "down" the evolutionary ladder does this argument work? Are cats ω-self-aware? Computing clusters?

Comment author: Technoguyrob 16 December 2012 11:21:55PM 0 points

Good point. It might be that any 1-self-aware system is ω-self-aware.
