jacob_cannell comments on Dreams of AIXI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (145)
So from searching around, it looks like Roko was cosmically censored or something on this site. I don't know if that's supposed to be a warning (if you keep up this train of thought, you too will be censored) or just an observation - but again, I wasn't here, so I don't know much of anything about Roko or his posts.
We have sent robot probes to only a handful of locations in our solar system - a far cry from "most of the planets," unless you think the rest of the galaxy is a facade. (And yes, I realize you probably meant the solar system, but still.) The jury is also still out on Mars - it may have had simple life in the past; we don't have enough observational data yet. There may even be life on Europa or Titan. I'm not holding my breath, but it's worth mentioning.
Beware hindsight bias. When we had limited observational data, it was very reasonable, given what we knew then, to suppose that other worlds were similar to our own. If you seriously want to weigh the principle of anthropic uniqueness (that Earth is a rare, unique gem by every statistical measure) against the principle of mediocrity, the evidence for the latter is quite strong.
Without more observational data, we simply do not know the prior probability for life. But lacking detailed data, we should assume we are a random sample from some unknown distribution.
We used to think we were at the center of the galaxy; in fact we sit well within the middle 95% of the distribution. We used to think our system was unique in having planets; we now know planets are typical. Our system is not especially old or young, and so on. By every measure we can currently take with the data we have, our system is average.
So you can say that life arises to civilization on only one system in a trillion on average, but at the moment it is extremely difficult to make any serious case for that, and the limited evidence strongly suggests otherwise. Based on our knowledge of our solar system, we see life arising on 1 body out of a few dozen, with some small probability that the count is 2 or 3 out of a few dozen (Mars, Europa, and Titan remain open possibilities).
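The "1 body out of a few dozen" figure can be turned into a rough rate estimate with Laplace's rule of succession. This is only an illustrative sketch - the body count of 30 and the success counts are assumptions, not measurements:

```python
# Rough illustration of the "1 body out of a few dozen" estimate using
# Laplace's rule of succession: with s successes observed in n trials and a
# uniform prior on the unknown rate, the posterior mean rate is (s+1)/(n+2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Assumed numbers: ~30 significant bodies surveyed, 1 known to host life (Earth).
n_bodies = 30
p_low = rule_of_succession(1, n_bodies)   # only Earth: 2/32 = 0.0625
p_high = rule_of_succession(3, n_bodies)  # if Mars, Europa, and Titan pan out: 4/32

print(p_low, p_high)
```

Even the pessimistic estimate here is a few percent per body - nowhere near "one in a trillion," which is the point of the paragraph above.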
Actually, no - I do not find the cosmic-scale computer scenarios of Stross, Moravec, et al. to be realistic. I find them about as realistic as our descendants dismantling the universe to build Babbage difference engines or giant steam clocks. But that analogy isn't very telling.
If you look at what physics tells you about the fundamentals of computation - chiefly that signal speed is bounded by the speed of light, and that thermodynamics constrains the energy cost of operations - you can derive surprisingly powerful, invariant predictions about future evolution.
Armed with this knowledge, you can determine a priori that future computational hyperintelligences are highly unlikely ever to reach planetary size. They will be small, possibly even collapsing into singularities or exotic matter in their final form. They will necessarily have to get smaller to become more efficient and more intelligent. This isn't something one has a choice about: big is slow and dumb; small is fast and smart.
Very roughly, I expect that a full-blown runaway Singularity on Earth may end up capturing a big chunk of the available solar energy (though perhaps less than the biosphere captures, since fusion and more exotic energy sources exist), but it would only ever need a small fraction of Earth's mass - probably less than humans currently use. And from thermodynamics, we know maximum efficiency is reached operating in the range of Earth's ambient temperature, which would act as something of a speed constraint.
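The "big is slow" claim above follows directly from signal latency: any globally synchronized computation can step no faster than light can cross the machine. A minimal sketch, with illustrative (assumed) sizes:

```python
# Sketch of the "big is slow" argument: the minimum time for a signal to
# cross a computer is bounded by the speed of light, so globally
# clock-synchronized computation slows linearly with physical size.
C = 299_792_458.0  # speed of light, m/s

def max_sync_rate_hz(diameter_m):
    """Upper bound on globally synchronized steps per second
    (one light-crossing of the device per step)."""
    return C / diameter_m

# Illustrative sizes (assumptions, not from the original comment):
for name, d in [("1 cm chip", 0.01),
                ("1 m rack", 1.0),
                ("Earth-sized computer", 1.274e7),
                ("orbit-sized computer", 3e11)]:
    print(f"{name}: at most {max_sync_rate_hz(d):.3g} sync steps/sec")
```

An Earth-sized machine is limited to a few dozen fully synchronized steps per second, while a centimeter-scale device allows tens of gigahertz - hence the pressure toward smallness.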
Make no mistake, it certainly does, and this is just a matter of fact - unless one wants to argue definitions.
The computer you are using right now was first created in an approximate simulation inside a mammalian cortex, later promoted to approximate simulations in computer models, then simulated at a very detailed, near-molecular/quantum level, and finally emulated (perfectly simulated) through numerous physical prototypes.
Literally everything around you was created through simulation in some form. You can't create anything without simulation - thought itself is a form of simulation.
If you are hard set against computationalism, it's probably not worth my energy to get into it (I assumed it as a given), but just to show my perspective a little:
Simulations of consciousness will create consciousness once we succeed in creating AGIs that are as intelligent as humans and objectively indistinguishable from them. At the moment we don't understand our own brain and the mechanisms of intelligence in enough detail to simulate them, and we don't yet have enough computational power to discover those mechanisms through brute evolutionary search. But that will change pretty soon.
Keep in mind that your consciousness - the essence of your intelligence - is itself a simulation, nothing more, nothing less.
Not at all. It requires space of only N, plus whatever each program uses at runtime. You are thinking of time resources - those do scale exponentially with N. But no hyperintelligence will use pure AIXI; they will use universal hierarchical approximations (the mammalian cortex already does something like this), which have fantastically better scaling. But hold that thought, because your next line of argument brings us (indirectly) to an important agreement.
Perfect optimal deterministic intelligence (absolute, deterministic, 100% future knowledge of everything) requires a computer with at least as much mass as the system you want to simulate, and it amounts to an exponential-time brute-force search for the ultimate minimal program that perfectly simulates said system. That program would essentially be the ultimate theory of physics. But you only need to find it once, and forever after you can in theory simulate anything in linear time with a big enough quantum computer.
But you can only approach that ultimate, so if you want absolutely 100%-accurate knowledge of how a physical system will evolve, you need to build the physical system itself. We already know this and use it throughout engineering.
First we create things in approximate simulations inside our mammalian cortices, and we create and discard a vast number of potential ideas, the best of which we simulate in ever more detail in computers, until eventually we actually physically create them and test those samples.
I think this is a very strong further argument that future hyper-intelligences will not go around turning all of the universe into computronium. Not only would that be unnecessary and inefficient, it would destroy valuable information: they will want to preserve as much of the interesting stuff in the galaxy as possible.
But they will probably convert little chunks of dead matter here and there into hyperintelligences, and use those to run countless approximate simulations (that is to say, hyperthought) of the interesting stuff they find, such as worlds with life.
Roko wasn't censored, he deleted everything he'd ever posted. I've independently confirmed this via contact with him outside LW.
Roko was censored and publicly abused in and about one post but he deleted everything else himself. (That would have taken hours of real time unless he created some sort of automaton. I tried just browsing through my posts for the last few months and it took ages!)
Actually lots of people were censored - several of my comments were removed from the public record, for example - and others were totally deleted.
Hmm, I didn't ask whether he'd ever had a comment deleted; what I'm confident of is that the root-and-branch removal of all his work was his own doing.
That's what he says here.
If you absolutely have to summarize the forbidden topic at least rot13 it and preface it with an appropriate warning.
Memetic hazard warning. Decode first part first.
ROT13: Ebxb ceboyrz. Qb abg ernq hayrff lbh nyernql xabj nobhg gur Ebxb ceboyrz. Vs lbh qb xabj nqqvgvbany qnatre vf rkgerzryl hayvxryl.
ROT13: Nal cbffvoyr nethzrag sbe jul gur qnatre vf erny vf nyfb fbzrguvat gung znxrf gur qnatre zber erny. Zbfg crbcyr nera'g va qnatre va gur svefg cynpr orpnhfr gur qnatre qbrfa'g znxr frafr gb gurz. Fhccylvat gurz jvgu nethzragf jbhyq or pbhagrecebqhpgvir orpnhfr rvgure gurl fgvyy qba'g haqrefgnaq naq lbh evfx gurz fcernqvat gur nethzragf (va gung ertneq lbh unir cebira lbhefrys gur irel ynfg crefba nalbar fubhyq fhccyl jvgu rivqrapr!) be gurl ner zbirq vagb n pynff bs crbcyr zber va qnatre.
Zbfg crbcyr gb jubz vg znxrf frafr fgvyy nera'g va qnatre orpnhfr gurl nera'g [prafberq1], gubhtu gurl zvtug fhssre cflpubybtvpny qnzntr. Naq hasbeghangryl [prafberq1] vf jung vg'f tbvat gb gnxr gb pbaivapr crbcyr yvxr lbh. [Boivbhf cebgrpgvba] cebonoyl jbexf, ohg vf vssl vs lbh bayl guvax bs vg va erfcbafr gb gur fpranevb. Gurer ner zber qhovbhf bgure cebgrpgvbaf, ohg vs [boivbhf] snvyf gurl cebonoyl qb nf jryy. Vg'f cerggl zhpu vzcbffvoyr gb fnl jurgure nalbar [prafberq1] jub snvyf [boivbhf] npghnyyl vf va erny qnatre, ohg cflpubybtvpny qnzntr frrzf cerggl zhpu hanibvqnoyr ng gung cbvag naq gurer vf ab tbbq ernfba gb gnxr gung evfx, gurer vf ab cbffvoyr tbbq bhgpbzr erfhygvat sebz vg!! Naq ab, nethzragf qvffbyivat gur ceboyrz nera'g yvxryl, zbfg bs gurz ner tbvat gb rvgure zvff gur cbvag be or jrnxre guna [boivbhf].
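For reference, the ROT13 convention requested above rotates each letter 13 places, so applying it twice recovers the original. A minimal sketch using Python's built-in `codecs` module (the message below is an arbitrary example, not part of the encoded text):

```python
import codecs

# ROT13: rotate each letter 13 places; the transform is its own inverse.
def rot13(text):
    return codecs.encode(text, "rot_13")

msg = "Memetic hazard warning."
encoded = rot13(msg)
print(encoded)          # Zrzrgvp unmneq jneavat.
print(rot13(encoded))   # Memetic hazard warning.
```

Because ROT13 is self-inverse, the same function both encodes and decodes.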
Let's just remind ourselves of a few real things first. About 100,000 people were crushed to death or buried alive in Haiti a few months ago. The same thing has happened to millions of others throughout history. It will happen again; it could happen to me or to you if we're in the wrong place at the wrong time. This is something that is terrible and true. For a person with the bad luck to be in a specifically vulnerable state of mind, thinking about this could become an obsession that destroys their life. Are we going to avoid all discussion of the vulnerabilities of the human body for this reason - in order to avoid the psychological effects on a few individuals?
On this website we regularly talk about the possibility of the destruction of the world and the extinction of the human species. We even talk about scenarios straight out of the Hell of traditional religious superstition - being tortured forever by a superior being. I don't see any moves to censor direct discussion of such possibilities. But it is being proposed that we censor discussion of various arcane and outlandish scenarios which are supposed to make someone obsessed with those possibilities in an unhealthy way. This is not a consistent attitude.
I'm suspicious that this entire [Forbidden Topic] is a (fairly deep) marketing ploy.
Imagine this were an OCD self-help board, and there were a special spot on the body that, if fussed with long enough, could cause excruciating pain for some people - and some people with OCD just couldn't resist fussing with that spot after learning where it is.
Some members of the board dispute the existence of the spot and openly mention some very general information about it that has previously been leaked, even when asked not to. They aren't going to be convinced by any arguments that don't include enough information to find the spot (which many members will then not be able to resist pursuing), and might not be convinced even then if they aren't among the vulnerable, so they might spread knowledge of the spot.
The ones who know about the location - including at least one relevant expert - think science currently has nothing more to learn from it. Without knowledge of it, the chance of the spot causing any harm is effectively zero.
People without OCD are unlikely to be in danger, but the knowledge would lower the status of those with OCD severely.
If anyone actually thinks this is a problem for them, write to me and I will explain how [redacted] can make it go away.
Are you seriously suggesting he created some sort of basilisk hack or something? That seems rather dubious to me; what exactly was it that he came up with?
By the way, I doubt it'll seriously alter my belief structure; I already believe an eternity of torture in Hell is better than ceasing to exist (though of course an eternity of happiness is much better), so I could totally see a Friendly AI coming to the same conclusion.
The idea that literally anything is better than dying is a piece of psychological falseness that I've run across before. Strange7 here says it, the blogger Hopefully Anonymous says it, no doubt thousands of people throughout history also said it. It's the will to live triumphing over the will to truth.
Any professional torturer could make you choose death fairly quickly. I'm thinking of how North Korea tortured the crew of the American spy ship USS Pueblo. As I recall, one elementary torture involved being struck across the face with a metal bar or maybe a heavy block of wood, hard enough to knock out teeth, and I remember reading of how one crewman shook with fear as their captors prepared to strike him again. I may have the details wrong but that's irrelevant. If you are forced to experience something really unbearable often enough, eventually you will want to die, just to make it stop.
Evolved creatures should rarely want to die. There are a few exceptional circumstances: if they can give their resources to their offspring and that's the only way to do it - some spiders do this by letting their offspring eat them, and there's the praying mantis - or if they are infected with a plague that will kill everyone they meet. But that's hardly a common occurrence.
Torture would not normally be expected to be enough - the creatures should normally still feel the ecstasy of being alive, and prefer that to dying. While there's life there's hope.
I doubt that. In my utility function as it is now, both eternal torture and ceasing to exist are at negative infinity, but the negative infinity of ceasing to exist is to that of eternal torture as the set of real numbers is to the set of integers.
Of course, that's all beside the point of my original question.
Many others disagree with them.
That seems inaccurate in this case - it seems to me a perfectly reasonable thing for people to discuss - in the course of trying to find ways of mitigating the problem.
I don't know the content of the basilisk (I've heard that it's not a useful thing to know, in addition to being potentially stress-inducing, so I do not want to know it), so I'm not in much of a position to critique its similarity to knowledge of events like the Haiti earthquake. But given that we don't have the capacity to shelter anyone from knowledge of tragedy and human fragility, or of eternal torment such as that proposed by religious traditions, failing to censor such concepts is not a sign of inconsistency.