jacob_cannell comments on Dreams of AIXI - Less Wrong

-1 Post author: jacob_cannell 30 August 2010 10:15PM


Comment author: jacob_cannell 31 August 2010 09:10:06PM *  1 point [-]

I feel much the same about this post as I did about Roko's Final Post.

So from searching around, it looks like Roko was cosmically censored or something on this site. I don't know if that's supposed to be a warning (if you keep up this train of thought, you too will be censored) or just an observation - but again, I wasn't here, so I don't know much of anything about Roko or his posts.

In the present day, when robot probes have been to most of the planets and we know them as beautiful but uninhabited landscapes, it may be hard to enter into the mindset of earlier centuries. Earthbound minds, knowing only the one planet, and seeing it to be inhabited, naturally thought of other worlds as inhabited too

  1. We have sent robot probes to only a handful of locations in our solar system - a far cry from "most of the planets", unless you think the rest of the galaxy is a facade (and yes, I realize you probably meant the solar system, but still). And the jury is still out on Mars: it may have had simple life in the past; we don't have enough observational data yet. There may also be life on Europa or Titan. I'm not holding my breath, but it's worth mentioning.

  2. Beware hindsight bias. When we had limited observational data, it was very reasonable, given what we knew then, to suppose that other worlds were similar to our own. And if you seriously want to argue the principle of anthropocentric uniqueness (that Earth is a rare, unique gem by every statistical measure) against the principle of mediocrity, the evidence for the latter is quite strong.

Without more observational data, we simply do not know the prior probability for life. But lacking detailed data, we should assume we are a random sample from some unknown distribution.

We used to think we were at the center of the galaxy; in fact we sit within the middle 95% interval. We used to think our system was unique in having planets; we now know planets are typical. Our system is not especially old or young, and so on. By every measure we can currently take with the data we have, our system is average.

So you can say that life arises to civilization on average in only one system in a trillion, but at the moment it is extremely difficult to make any serious case for that, and the limited evidence strongly suggests otherwise. Based on our knowledge of our solar system, we see life arising on 1 body out of a few dozen, with some small probability of that being 2 or 3 out of a few dozen (Mars, Europa, and Titan are still open questions).
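One standard way to put a number on "a random sample from an unknown distribution" is Laplace's rule of succession - my gloss, the comment doesn't invoke it by name. With a uniform prior over the unknown rate, observing life on 1 of roughly 30 surveyed bodies gives a posterior mean of about 6%. A minimal sketch:

```python
def rule_of_succession(successes, trials):
    """Posterior mean probability of success under a uniform prior
    over the unknown underlying rate (Laplace's rule of succession)."""
    return (successes + 1) / (trials + 2)

# Life observed on 1 body out of ~30 surveyed in the solar system
# (the "1 out of a few dozen" figure from the comment above).
p_life = rule_of_succession(1, 30)   # 2/32 = 0.0625
```

Nothing like a trillion-to-one rarity falls out of the observed sample; that would have to come from a strong prior, not from the data.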

But I would say that the shocking knowledge specific to our own time, that supplied the canvas on which a cosmology like this can be painted, is the realization that the matter of the universe could be used technologically, on a cosmic scale.

Actually, no: I do not find the cosmic-scale computer scenarios of Stross, Moravec, et al. to be realistic. I find them about as realistic as our descendants dismantling the universe to build Babbage difference engines or giant steam clocks. But that analogy isn't very telling.

If you look at what physics tells you about the fundamentals of computation, you can derive surprisingly powerful invariant predictions about future evolution with knowledge of just a few simple principles:

  1. maximum data storage capacity is proportional to mass
  2. maximum computational throughput is proportional to energy. With quantum computing, throughput for probabilistic algorithms also scales exponentially with the mass - roughly O(E * 2^m). This is, of course, insane, but apparently a fact of nature (if quantum computing actually works).
  3. maximum efficiency (in several senses: algorithmic efficiency, intelligence as the ability to make effective use of data, and transmission overhead) is inversely proportional to size (radius, volume) - a direct consequence of the speed of light
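The three limits can be put into rough numbers. The specific bounds used here - the Bekenstein bound for storage, Bremermann's limit for throughput, and light-crossing time for latency - are my choice of formalization; the comment states the principles only qualitatively:

```python
# Rough numbers for the three physical limits on computation (SI units).
c    = 2.998e8     # speed of light, m/s
h    = 6.626e-34   # Planck constant, J*s
hbar = 1.055e-34   # reduced Planck constant, J*s
ln2  = 0.6931

m = 1.0            # a 1 kg computer...
r = 0.1            # ...packed into a 10 cm radius

# 1. Storage: Bekenstein bound on bits storable in mass m within radius r.
E = m * c**2
bekenstein_bits = 2 * 3.14159 * r * E / (hbar * c * ln2)   # ~2.6e42 bits

# 2. Throughput: Bremermann's limit, ~m*c^2/h operations per second.
bremermann_ops = E / h                                     # ~1.4e50 ops/s

# 3. Latency: light-crossing time grows linearly with size.
latency_1kg   = r / c          # ~0.3 ns across the 10 cm machine
latency_earth = 6.371e6 / c    # ~21 ms across an Earth radius
```

The latency line is the one doing the argumentative work below: a planet-sized mind pays a ~10^7-fold communication penalty over a desk-sized one.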

So armed with this knowledge, you can determine a priori that future computational hyperintelligences are highly unlikely ever to reach planetary size. They will be small, possibly even collapsing into singularities or exotic matter in their final form. They will necessarily have to get smaller to become more efficient and more intelligent. This isn't something one has a choice about: big is slow and dumb; small is fast and smart.

Very roughly, I expect that a full-blown runaway Singularity on Earth may end up capturing a big chunk of the available solar energy (although perhaps less than the biosphere captures, as fusion or more exotic potentials exist), but it would only ever need a small fraction of Earth's mass: probably less than humans currently use. And from thermodynamics, we know maximum efficiency is reached operating in the range of Earth's ambient temperature, and that would be something of a speed constraint.
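The thermodynamic constraint can be made concrete with Landauer's bound - again my gloss, the comment doesn't name it. Each irreversible bit erasure at temperature T costs at least kT ln 2 of heat, which caps the erasure rate of any computer running on Earth's intercepted sunlight at ambient temperature:

```python
from math import log

k = 1.381e-23               # Boltzmann constant, J/K
T_ambient = 300.0           # rough Earth-ambient temperature, K
solar_intercept = 1.74e17   # approx. total solar power intercepted by Earth, W

# Landauer's bound: minimum heat dissipated per irreversible bit erasure.
e_per_bit = k * T_ambient * log(2)                 # ~2.9e-21 J

# Ceiling on irreversible erasures per second for a Singularity that
# captured every watt of Earth's incident sunlight at 300 K.
max_erasures_per_s = solar_intercept / e_per_bit   # ~6e37 erasures/s
```

Large, but finite - a "speed constraint" in exactly the sense the paragraph above describes.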

It simply is the postulate that simulation does not create things.

Make no mistake, it certainly does, and this is just a matter of fact - unless one wants to argue definitions.

The computer you are using right now was created first in an approximate simulation in a mammalian cortex, which was later promoted to approximate simulations in computer models, until eventually it was simulated in a very detailed near molecular/quantum level simulation, and then emulated (perfect simulation) through numerous physical prototypes.

Literally everything around you was created through simulation in some form. You can't create anything without simulation - thought itself is a form of simulation.

Simulations of consciousness do not create consciousness, simulations of universes do not create subjectively inhabited universes.

If you are hard set against computationalism, it's probably not worth my energy to get into it (I assumed it as a given), but just to show my perspective a little:

Simulations of consciousness will create consciousness when we succeed in creating AGIs that are as intelligent as humans and objectively indistinguishable from them. At the moment we don't understand our own brain and its mechanisms of intelligence in enough detail to simulate them, and we don't yet have enough computational power to discover those mechanisms through brute evolutionary search. But that will change pretty soon.

Keep in mind that your consciousness - the essence of your intelligence - is itself a simulation, nothing more, nothing less.

Just enumerating all programs of length n requires memory resources exponential in n;

Not at all. It requires space of only N, plus whatever each program uses at runtime. You are thinking of time resources - those do scale exponentially with N. But no hyperintelligence will use pure AIXI; they will use universal hierarchical approximations (the mammalian cortex already does something like this), which have fantastically better scaling. But hold that thought, because your next line of argument brings us (indirectly) to an important agreement.
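The space/time distinction is easy to see in code - a minimal sketch, enumerating bitstring "programs" lazily so that only one length-n candidate exists in memory at a time, even though exhausting them all takes O(2^n) steps:

```python
import itertools

def programs_of_length(n):
    """Lazily yield every bitstring 'program' of length n.
    Exhausting the generator takes O(2^n) time, but memory in use
    at any moment is only O(n): one candidate string at a time."""
    for bits in itertools.product('01', repeat=n):
        yield ''.join(bits)

progs = list(programs_of_length(3))   # 2^3 = 8 programs, '000' .. '111'
```

Collecting them into a list (as above) is what costs exponential space; the enumeration itself does not.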

actually executing them in turn, according to the AIXI algorithm, will be even more computationally intensive. The number of operations which can be executed in our future light-cone is actually not that big, when we start looking at such exponentials of exponentials. This sort of universe isn't even big enough to simulate all possible stars.

Perfect optimal deterministic intelligence (absolute deterministic 100% future knowledge of everything) requires a computer with at least as much mass as the system you want to simulate, and AIXI provides an exponential-time brute-force algorithm for finding the ultimate minimal program that perfectly simulates said system. That program will essentially be the ultimate theory of physics. But you only need to find that program once; forever after, you can in theory simulate anything in linear time with a big enough quantum computer.
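A toy version of that brute-force search, using arithmetic expressions over a five-token alphabet as stand-in "programs" (my illustration, not anything in the comment): find the shortest token string that evaluates to a target. Time grows exponentially with program length, and the result is the target's "minimal program" relative to this tiny language:

```python
import itertools

def shortest_program(target, tokens=('1', '2', '3', '+', '*'), max_len=7):
    """Brute-force search for the shortest token string that evaluates
    to `target`. Exponential time in program length -- the same shape
    as minimal-program search, in miniature."""
    for length in range(1, max_len + 1):
        for prog in itertools.product(tokens, repeat=length):
            src = ''.join(prog)
            try:
                if eval(src) == target:   # safe here: tiny fixed alphabet
                    return src
            except SyntaxError:
                continue  # not every token string parses
    return None

best = shortest_program(100)   # a 6-token expression, e.g. '1+33*3'
```

Each extra token multiplies the search space by the alphabet size, which is why this only works "once" per system even in principle.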

But you can only approach that ultimate, so if you want absolutely 100% accurate knowledge of how a physical system will evolve, you need to build the physical system itself. We already know this and use it throughout engineering.

First we create things in approximate simulations inside our mammalian cortices, and we create and discard a vast number of potential ideas, the best of which we simulate in ever more detail in computers, until eventually we actually physically create them and test those samples.

I think this is a very strong further argument that future hyperintelligences will not go around turning all of the universe into computronium. Not only would that be unnecessary and inefficient, it would destroy valuable information: they will want to preserve as much of the interesting stuff in the galaxy as possible.

But they will probably convert little chunks of dead matter here and there into hyperintelligences, and use those to run countless approximate simulations (that is to say, hyperthought) of the interesting stuff they find, such as worlds with life.

Comment author: ciphergoth 31 August 2010 09:11:48PM 4 points [-]

Roko wasn't censored, he deleted everything he'd ever posted. I've independently confirmed this via contact with him outside LW.

Comment author: wedrifid 01 September 2010 10:26:24AM 3 points [-]

Roko was censored and publicly abused in and about one post but he deleted everything else himself. (That would have taken hours of real time unless he created some sort of automaton. I tried just browsing through my posts for the last few months and it took ages!)

Comment author: timtyler 31 August 2010 09:24:35PM *  2 points [-]

Actually lots of people were censored - several of my comments were removed from the public record, for example - and others were totally deleted.

Comment author: ciphergoth 31 August 2010 09:32:24PM 2 points [-]

Hmm, I didn't ask whether he'd ever had a comment deleted; what I'm confident of is that the root-and-branch removal of all his work was his own doing.

Comment author: timtyler 31 August 2010 10:00:33PM 0 points [-]

That's what he says here.

Comment deleted 01 September 2010 06:21:11AM [-]
Comment author: FAWS 01 September 2010 06:58:57AM 1 point [-]

If you absolutely have to summarize the forbidden topic at least rot13 it and preface it with an appropriate warning.

Comment deleted 01 September 2010 07:30:45AM [-]
Comment author: FAWS 01 September 2010 08:11:00AM *  0 points [-]

Memetic hazard warning. Decode first part first.

ROT13: Ebxb ceboyrz. Qb abg ernq hayrff lbh nyernql xabj nobhg gur Ebxb ceboyrz. Vs lbh qb xabj nqqvgvbany qnatre vf rkgerzryl hayvxryl.

ROT13: Nal cbffvoyr nethzrag sbe jul gur qnatre vf erny vf nyfb fbzrguvat gung znxrf gur qnatre zber erny. Zbfg crbcyr nera'g va qnatre va gur svefg cynpr orpnhfr gur qnatre qbrfa'g znxr frafr gb gurz. Fhccylvat gurz jvgu nethzragf jbhyq or pbhagrecebqhpgvir orpnhfr rvgure gurl fgvyy qba'g haqrefgnaq naq lbh evfx gurz fcernqvat gur nethzragf (va gung ertneq lbh unir cebira lbhefrys gur irel ynfg crefba nalbar fubhyq fhccyl jvgu rivqrapr!) be gurl ner zbirq vagb n pynff bs crbcyr zber va qnatre.

Zbfg crbcyr gb jubz vg znxrf frafr fgvyy nera'g va qnatre orpnhfr gurl nera'g [prafberq1], gubhtu gurl zvtug fhssre cflpubybtvpny qnzntr. Naq hasbeghangryl [prafberq1] vf jung vg'f tbvat gb gnxr gb pbaivapr crbcyr yvxr lbh. [Boivbhf cebgrpgvba] cebonoyl jbexf, ohg vf vssl vs lbh bayl guvax bs vg va erfcbafr gb gur fpranevb. Gurer ner zber qhovbhf bgure cebgrpgvbaf, ohg vs [boivbhf] snvyf gurl cebonoyl qb nf jryy. Vg'f cerggl zhpu vzcbffvoyr gb fnl jurgure nalbar [prafberq1] jub snvyf [boivbhf] npghnyyl vf va erny qnatre, ohg cflpubybtvpny qnzntr frrzf cerggl zhpu hanibvqnoyr ng gung cbvag naq gurer vf ab tbbq ernfba gb gnxr gung evfx, gurer vf ab cbffvoyr tbbq bhgpbzr erfhygvat sebz vg!! Naq ab, nethzragf qvffbyivat gur ceboyrz nera'g yvxryl, zbfg bs gurz ner tbvat gb rvgure zvff gur cbvag be or jrnxre guna [boivbhf].

Comment author: Mitchell_Porter 01 September 2010 08:26:24AM 1 point [-]

Let's just remind ourselves of a few real things first. About 100,000 people were crushed to death or buried alive in Haiti a few months ago. The same thing has happened to millions of others throughout history. It will happen again; it could happen to me or to you if we're in the wrong place at the wrong time. This is something that is terrible and true. For a person with the bad luck to be in a specifically vulnerable state of mind, thinking about this could become an obsession that destroys their life. Are we going to avoid all discussion of the vulnerabilities of the human body for this reason - in order to avoid the psychological effects on a few individuals?

On this website we regularly talk about the possibility of the destruction of the world and the extinction of the human species. We even talk about scenarios straight out of the Hell of traditional religious superstition - being tortured forever by a superior being. I don't see any moves to censor direct discussion of such possibilities. But it is being proposed that we censor discussion of various arcane and outlandish scenarios which are supposed to make someone obsessed with those possibilities in an unhealthy way. This is not a consistent attitude.

Comment author: jhuffman 09 December 2011 10:12:24PM 1 point [-]

I'm suspicious that this entire [Forbidden Topic] is a (fairly deep) marketing ploy.

Comment author: FAWS 01 September 2010 08:57:42AM *  0 points [-]

Are we going to avoid all discussion of the vulnerabilities of the human body for this reason - in order to avoid the psychological effects on a few individuals?

Imagine this was an OCD self-help board, and there was a special spot on the body that, if fussed with extensively, could cause excruciating pain for some people - and some OCD sufferers just couldn't resist fussing with that spot after learning where it is.

Some members of the board dispute the existence of the spot and openly mention some very general information about it that has previously been leaked, even when asked not to. They aren't going to be convinced by any argument that doesn't include enough information to find the spot (which many members will then be unable to resist pursuing), and might not be convinced even then if they aren't among the vulnerable - so they might spread knowledge of the spot.

The ones who know about the location think science currently has nothing more to learn from it and include at least one relevant expert. The chance of the spot causing any danger without knowledge about it is effectively zero.

Non-OCDs are unlikely to be in danger, but knowledge would lower the status of OCDs severely.

Comment author: Mitchell_Porter 01 September 2010 09:03:35AM 0 points [-]

If anyone actually thinks this is a problem for them, write to me and I will explain how [redacted] can make it go away.

Comment author: nick012000 05 September 2010 06:29:59AM 0 points [-]

Are you seriously suggesting he created some sort of basilisk hack or something? That seems rather dubious to me; what exactly was it that he came up with?

By the way, I doubt it'll seriously alter my belief structures; I already believe an eternity of torture in Hell is better than ceasing to exist (though of course an eternity of happiness is much better), so I could totally see a Friendly AI coming to the same conclusion.

Comment author: Mitchell_Porter 05 September 2010 08:25:42AM 6 points [-]

I already believe an eternity [of] torture in Hell is better than ceasing to exist

The idea that literally anything is better than dying is a piece of psychological falseness that I've run across before. Strange7 here says it, the blogger Hopefully Anonymous says it, no doubt thousands of people throughout history also said it. It's the will to live triumphing over the will to truth.

Any professional torturer could make you choose death fairly quickly. I'm thinking of how North Korea tortured the crew of the American spy ship USS Pueblo. As I recall, one elementary torture involved being struck across the face with a metal bar or maybe a heavy block of wood, hard enough to knock out teeth, and I remember reading of how one crewman shook with fear as their captors prepared to strike him again. I may have the details wrong but that's irrelevant. If you are forced to experience something really unbearable often enough, eventually you will want to die, just to make it stop.

Comment author: timtyler 05 September 2010 09:17:01AM *  0 points [-]

Evolved creatures should rarely want to die. There are a few circumstances: if they can give their resources to their offspring, and that's the only way to do it - some spiders do this by letting their offspring eat them, and there's the praying mantis - or if they are infected with a plague that will kill everyone they meet. But that's hardly a common occurrence.

Torture would not normally be expected to be enough - the creatures should normally still feel the ecstasy of being alive, and prefer that to dying. While there's life there's hope.

Comment author: nick012000 05 September 2010 08:46:56AM *  0 points [-]

I doubt that. In my utility function as it is now, both eternal torture and ceasing to exist are at negative infinity, but the negative infinity of ceasing to exist is to that of eternal torture as the set of real numbers is to the set of integers.

Of course, that's all beside the point of my original question.

Comment author: timtyler 05 September 2010 09:20:48AM *  -1 points [-]

The ones who know about the location think science currently has nothing more to learn from it and include at least one relevant expert.

Many others disagree with them.

The chance of the spot causing any danger without knowledge about it is effectively zero.

That seems inaccurate in this case - it seems to me a perfectly reasonable thing for people to discuss - in the course of trying to find ways of mitigating the problem.

Comment author: Desrtopa 05 February 2011 05:25:38AM -1 points [-]

I don't know the content of the basilisk (I've heard that it's not a useful thing to know, in addition to being potentially stress-inducing, so I do not want to want to know it), so I'm not in much of a position to critique its similarity to knowledge of events like the Haiti earthquake. But given that we don't have the capacity to shelter anyone from the knowledge of tragedy and human fragility, or of eternal torment such as that proposed by religious traditions, failing to censor such concepts is not a sign of inconsistency.