
Comment author: Stuart_Armstrong 20 August 2014 10:42:26AM 0 points

Humanity might have a chance against a non-generally-intelligent paperclip maximizer, but probably less of a chance against a horde of different maximizers.

That is very unclear, and people's politics seems a good predictor of their opinions in "competing intelligences" scenarios, meaning that nobody really has a clue.

Comment author: Pentashagon 21 August 2014 02:59:33AM 0 points

My intuition is that a single narrowly focused specialized intelligence might have enough flaws to be tricked or outmaneuvered by humanity. For example, if an agent wanted to maximize paperclip production but was average or poor at optimizing mining, exploration, and research, it could be cornered and destroyed before it discovered nanotechnology or space travel and spread, out of control, to asteroids and other planets. Multiple competing intelligences would explore more avenues of optimization, making coordination against them much more difficult and likely interfering with many separate aspects of any coordinated human plan.

Comment author: Pentashagon 20 August 2014 05:48:40AM 1 point

If there is only specialized intelligence, then what would one call an intelligence that specializes in creating other specialized intelligences? Such an intelligence might be even more dangerous than a general intelligence or some other specialized intelligence if, for instance, it's really good at making lots of different X-maximizers (each of which is more efficient than a general intelligence) and terrible at deciding which Xs it should choose. Humanity might have a chance against a non-generally-intelligent paperclip maximizer, but probably less of a chance against a horde of different maximizers.

Comment author: VAuroch 10 August 2014 08:50:14PM 1 point

latest and greatest axioms

The standard Zermelo-Fraenkel axioms have lasted a century with only minor modifications -- none of which altered what was provable -- and there weren't many false starts before that. There is argument over whether to include the axiom of choice, but as mentioned the formal methods of program construction naturally use constructivist mathematics, which doesn't use the axiom of choice anyhow.

mathematics is literally built upon the ruins of old axioms that didn't quite rule out all known contradictions

This blatantly contradicts the history of axiomatic mathematics, which is only about two centuries old and which has standardized on the ZF axioms for half of that. That you claim this calls into question your knowledge about mathematics generally.

Additionally, machines are only probabilistically correct. FAI will probably need to treat its own implementation as a probabilistic formal system.

If there's anything modern computer science is good at, it's getting guaranteed performance within specified bounds out of unreliable probabilistic systems.

When absolute guarantees are impossible, there are abundant methods for guaranteeing correct outcomes up to arbitrarily high thresholds, and it's quite silly to dismiss such a guarantee as "technically probabilistic". You could, for example, set the threshold at the probability that a given baryon undergoes radioactive decay (half-life: 10^32 years or greater), the probability that all the atoms in your pants suddenly jump, in unison, three feet to the left, or some other extremely improbable event.
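
For instance, a randomized check whose single-round error is bounded can be repeated until the overall failure probability falls below any threshold like the ones above. A quick sketch (the 1/4 figure is the standard per-round error bound for a Miller-Rabin primality test; the function name is just illustrative):

    import math

    def runs_needed(per_run_error, target_error):
        """Smallest k such that per_run_error**k <= target_error."""
        return math.ceil(math.log(target_error) / math.log(per_run_error))

    # Push a Miller-Rabin-style check (error <= 1/4 per independent round)
    # below the ~10^-32 scale mentioned above:
    print(runs_needed(0.25, 1e-32))  # -> 54 rounds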

Comment author: Pentashagon 14 August 2014 03:34:35PM 0 points

The standard Zermelo-Fraenkel axioms have lasted a century with only minor modifications -- none of which altered what was provable -- and there weren't many false starts before that. There is argument over whether to include the axiom of choice, but as mentioned the formal methods of program construction naturally use constructivist mathematics, which doesn't use the axiom of choice anyhow.

Is there a formal method for deciding whether or not to include the axiom of choice? As I understand it, three of the ZF axioms are independent of the rest, and all are independent of choice. How would an AGI choose which independent axioms to accept? An AGI could be built to only ever accept a fixed list of axioms, but that would make it inflexible if, for example, further discoveries offered evidence that choice is useful.

This blatantly contradicts the history of axiomatic mathematics, which is only about two centuries old and which has standardized on the ZF axioms for half of that. That you claim this calls into question your knowledge about mathematics generally.

You are correct; I don't have formal mathematical training beyond college, and I pursue formal mathematics out of personal interest, so I welcome corrections. As I understand it, geometry was axiomatic for much longer, and the discovery of non-Euclidean geometries required separating the original axioms to describe different geometries. Is there a way to formally decide now whether or not a similar adjustment may be required for the axioms of ZF(C)? The problem, as I see it, is that formal mathematics is just string manipulation, and the choice of which allowed manipulations are useful depends on how the world really is. ZF is useful because its language maps very well onto the real world, but, as an example, unifying general relativity and quantum mechanics has been difficult. Unless it's formally decidable whether ZF is sufficient for a unified theory, it seems to me that an AGI needs some method for changing its accepted axioms based on probabilistic evidence, while avoiding the acceptance of useless or inconsistent independent axioms.

Comment author: VAuroch 07 August 2014 03:32:48AM 1 point

If formal methods are only giving you probabilistic evidence, you aren't using appropriate formal methods. There are systems designed to make 1-to-1 correspondences between code and proof (the method I'm familiar with has an intermediate language and maps every subroutine and step of logic to an expression in that intermediate language), and this could be used to make the code an airtight proof that, for example, the utility function will only evolve in specified ways and will stay within known limits. This does put limits on how the program can be written, and lesser limits on how the proof can be constructed (it is hard to incorporate nonconstructive mathematics), but every assumption underlying the safety claim can be proved correct. (And when I say every assumption, I include proof that the compiler is sound and will produce a correct program or none at all, proof that each component of the intermediate language reflects the corresponding proof step, etc.)
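
As a toy illustration of that code-proof correspondence (a minimal sketch, assuming Lean 4 with Mathlib; the clamping function and the +/-100 bound are made-up stand-ins, not anyone's actual utility code), here is a function together with a machine-checked proof that its output always stays within fixed limits:

    -- Toy example: a "utility update" that is proved, not just tested,
    -- to stay within [-100, 100] for every possible input.
    def clampUtility (u : Int) : Int :=
      max (-100) (min 100 u)

    theorem clampUtility_bounded (u : Int) :
        -100 ≤ clampUtility u ∧ clampUtility u ≤ 100 := by
      unfold clampUtility
      exact ⟨le_max_left _ _, max_le (by norm_num) (min_le_left _ _)⟩

The same idea, scaled up through an intermediate language and a verified compiler, is what lets the assumptions in a safety argument be discharged by a proof checker rather than by testing.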

Comment author: Pentashagon 10 August 2014 06:34:28PM 1 point

We only have probabilistic evidence that any formal method is correct. So far we haven't found contradictions implied by the latest and greatest axioms, but mathematics is literally built upon the ruins of old axioms that didn't quite rule out all known contradictions. FAI needs to be able to re-axiomatize its mathematics when inconsistencies are found in the same way that human mathematicians have, while being implemented in a subset of the same mathematics.

Additionally, machines are only probabilistically correct. FAI will probably need to treat its own implementation as a probabilistic formal system.

Comment author: CellBioGuy 02 August 2014 07:39:40PM 2 points

Additionally, if the history of life on Earth should show you anything, it's that nothing ever 'wins'.

Comment author: Pentashagon 07 August 2014 05:56:33AM 0 points

Even bacteria? The specific genome that caused the Black Death is potentially extinct, but Yersinia pestis is still around. Divine agents of Moloch if I ever saw them.

Comment author: shminux 29 July 2014 10:02:21PM 1 point

OK, I have thought about it some more. The issue is how accurately one can evaluate the probabilities. If the best you can do is, say, 1%, then you are forced to count even the potentially very unlikely possibilities at 1% odds. The accuracy of the probability estimates would depend on something like the depth of your Solomonoff induction engine. If you are confronted with a Pascal's mugger and your induction engine returns "the string required to model the mugger as honest and capable of carrying out the threat is longer than the longest algorithm I can process", you are either forced to use the probability corresponding to the longest string, or to discard the hypothesis outright. What I am saying is that the latter is better than the former.
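
A minimal sketch of the two options being described (the function and parameter names are made up for illustration):

    def hypothesis_prior(description_bits, max_bits, discard_if_too_long=True):
        """Prior weight 2^-L for a hypothesis whose shortest model is L bits.

        If the model is longer than the induction engine can process, either
        discard the hypothesis (weight 0) or clamp it to the floor probability
        2^-max_bits.  (A real implementation would work with log-probabilities
        to avoid underflow.)
        """
        if description_bits <= max_bits:
            return 2.0 ** -description_bits
        return 0.0 if discard_if_too_long else 2.0 ** -max_bits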

Comment author: Pentashagon 05 August 2014 05:47:55AM *  0 points

If you are confronted with a Pascal's mugger and your induction engine returns "the string required to model the mugger as honest and capable of carrying out the threat is longer than the longest algorithm I can process", you are either forced to use the probability corresponding to the longest string, or to discard the hypothesis outright.

The primary problem with Pascal's Mugging is that the Mugging string is short and easy to evaluate. 3^^^3 is a big number; it implies a very low probability, but not necessarily 1/3^^^3; so just how outrageous can a mugging be without being discounted for low probability? That least-likely-but-still-manageable mugging will still get you. If you're allowed to reason about descriptions of utility, and not just shut up and multiply to evaluate the utility of simulated worlds, then in the worst case you have to worry about the Mugger that offers you BusyBeaver(N) utility, where 2^-N is the lowest probability that you can process. BusyBeaver(N) is well-defined although uncomputable, and it is at least as large as any other function describable in N bits. Unfortunately, that means BusyBeaver(N) * 2^-N > C for some N-bit constant C, or in other words EU(programs-of-length-N) is O(BusyBeaver(N)). It doesn't matter what the mugger offers, or whether you mug yourself. Any N-bit utility calculation program has expected utility O(BB(N)) because it might yield BB(N) utility.
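
In symbols (my notation, just restating the worst case above): if 2^-N is the smallest probability the agent assigns and an N-bit claim promises BB(N) utility, then

\[
\mathbb{E}[U \mid \text{claim}] \;\ge\; 2^{-N}\, BB(N),
\qquad\text{and}\qquad
2^{-N}\, BB(N) \;>\; f(N) \ \text{ for every computable } f \text{ and all sufficiently large } N,
\]

since 2^N f(N) is computable whenever f is, while BB eventually outgrows every computable function. So the BB(N) term swamps any computable payoff, and EU(programs-of-length-N) = O(BB(N)).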

The best not-strictly-bounded-utility solution I have against this is discounting the probability of programs as a function of their running time as well as their length. Let 1/R be the probability that any given step of a process will cause it to completely fail, as opposed to halting with output or never halting. Solomonoff induction can then be redefined as the sum, over programs P producing an output S in N steps, of 2^-Length(P) * ((R - 1) / R)^N. It is possible to compute a prior probability with error less than B, for any sequence S and finite R, by enumerating all programs shorter than log_2(1/B) bits that halt in fewer than ~R/B steps. All un-enumerated programs have cumulative probability less than B of generating S.
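
Written out (my notation), the modified prior is roughly

\[
P(S) \;=\; \sum_{P \,:\, P \text{ outputs } S \text{ in } N_P \text{ steps}} 2^{-\ell(P)} \left(\frac{R-1}{R}\right)^{N_P},
\]

and the approximation scheme enumerates only the programs with \( \ell(P) < \log_2(1/B) \) that halt within roughly \( R/B \) steps, treating everything omitted as contributing less than B in total.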

For Pascal's Mugging it suffices to determine B based on the number of steps required to, e.g., simulate 3^^^3 humans. 3^^^3 ~= R/B, so either the prior probability of the Mugger being honest is infinitesimal, or it is infinitesimally unlikely that the universe will last fewer than the minimum 3^^^3 Planck units necessary to implement the mugging. Given some evidence about the expected lifetime of the universe, the mugging can be rejected.
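
Concretely (same notation as above), a computation that needs at least 3^^^3 steps picks up a survival factor of about

\[
\left(\frac{R-1}{R}\right)^{3\uparrow\uparrow\uparrow 3} \;\approx\; e^{-\,3\uparrow\uparrow\uparrow 3 \,/\, R},
\]

which is negligible unless R itself is on the order of 3^^^3, i.e. unless the agent already expects its environment to survive that many steps.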

The biggest advantage of this method over a fixed bounded utility function is that R is parameterized by the agent's evidence about its environment, and can change with time. The longer a computer successfully runs an algorithm, the larger the expected value of R.

Comment author: Mark_Friedenbach 15 July 2014 04:49:52AM 2 points

Being asleep is not being unconscious (in this sense). I don't know about you, but I have dreams. And even when I'm not dreaming, I seem to be aware of what is going on in my vicinity. Of course I typically don't remember what happened, but if I was woken up I might remember the last few moments, briefly. Lack of memory of what happens when I'm asleep is due to a lack of memory formation during that period, not a lack of consciousness.

Comment author: Pentashagon 02 August 2014 05:46:57AM 1 point

The experience of sleep paralysis suggests to me that there are at least two components to sleep, paralysis and suppression of consciousness, and one can have one, both, or neither. With both, one is asleep in the typical fashion. With suppression of consciousness only, one might have involuntary movements or, in extreme cases, sleepwalking. With paralysis only, one has sleep paralysis, which is apparently an unpleasant remembered experience. With neither, one awakens normally. The responses made by sleeping people (sleepwalkers and sleep-talkers especially) suggest to me that their consciousness is at least reduced in the sleep state. If it were only memory formation that was suppressed during sleep, I would expect to see sleepwalkers acting conscious but not remembering it; instead they appear to act irrationally and respond at best semi-consciously to their environment.

Comment author: Pentashagon 02 August 2014 05:21:43AM 0 points

MMEU makes some sense in a world with death. When there's a lower bound past which negative utility doesn't mean you're just having a bad time, but that you're dead and can never recover, it makes sense to raise the minimum expected utility at least above the threshold of death, and preferably as far above it as possible.

If you take an MMEU approach to utilitarianism (not MMEU over a single VNM utility function, but maximizing the minimum expected VNM utility of every individual), it answers the torture-vs-specks question with specks, only accepts Pascal's muggings that threaten negative utility, won't reduce most people's utility to achieve the repugnant conclusion or to feed utility monsters, won't take the garden path in the lifespan dilemma (this last also applies to individual VNM utility functions), and so on. In short, it sounds like most people's intuitive reaction to those dilemmas.
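
A toy sketch of that decision rule (illustrative names and numbers, not a standard formulation):

    def maximin_choice(actions, people, expected_utility):
        """Pick the action maximizing the minimum expected utility over people."""
        return max(actions, key=lambda a: min(expected_utility(a, p) for p in people))

    # Torture-vs-specks style toy: "torture" gives one person -1e9 and everyone
    # else 0; "specks" gives everyone -1.  Maximin picks "specks", whereas summing
    # utilities would pick "torture" once the population exceeds 1e9.
    people = range(1_000_000)

    def expected_utility(action, person):
        if action == "torture":
            return -1e9 if person == 0 else 0.0
        return -1.0  # "specks"

    print(maximin_choice(["torture", "specks"], people, expected_utility))  # -> specks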

Comment author: shminux 28 July 2014 08:54:41PM 0 points

Selection bias: when you are presented with a specific mugging scenario, you ought to realize that there are many more extremely unlikely scenarios where the payoff is just as high, so selecting just one of them to act on is suboptimal.

As for the level at which to stop calculating, bounded computational power is a good heuristic. But I suspect that there is a better way to detect the cutoff (an infrared cutoff, as it is known in physics). If you plot the number of choices against their (log) probability, then once you go low enough the number of choices explodes. My guess is that for reasonably low probabilities you would get an exponential increase in the number of outcomes, but for very low probabilities the growth in the number of outcomes becomes super-exponential. This is, of course, speculation; I would love to see some calculation or modeling, but this is what my intuition tells me.

Comment author: Pentashagon 29 July 2014 03:29:12AM 4 points

for very low probabilities the growth in the number of outcomes becomes super-exponential

There can't be more than 2^n outcomes each with probability 2^-n.

Comment author: shminux 28 July 2014 09:04:01PM 0 points

If you use a bounded utility function, it will inevitably be saturated by unlikely but high-utility possibilities, rendering it useless.

Comment author: Pentashagon 29 July 2014 03:09:47AM 3 points

For any possible world W, |P(W) * BoundedUtility(W)| <= |P(W) * UnboundedUtility(W)|, and the bounded term vanishes as P(W) goes to zero; it is the unbounded utility function, not the bounded one, that gets saturated by unlikely high-utility possibilities.
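
Making the limit explicit (my notation): if the bounded utility satisfies \( |U_{\mathrm{bounded}}| \le M \), then

\[
\lim_{P(W)\to 0} \big| P(W)\, U_{\mathrm{bounded}}(W) \big| \;=\; 0,
\qquad\text{whereas}\qquad
\big| P(W)\, U_{\mathrm{unbounded}}(W) \big| \;\text{need not go to } 0,
\]

so arbitrarily unlikely worlds contribute vanishing expected utility under a bounded utility function.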
