jacob_cannell comments on Theists are wrong; is theism? - Less Wrong

5 Post author: Will_Newsome 20 January 2011 12:18AM


Comment author: jacob_cannell 26 January 2011 09:00:52AM *  1 point [-]

Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.

I don't understand why human-mind equivalents are special in this regard. This seems very anthropocentric, but I could certainly be misinterpreting what you said.

I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.

Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations.

Cheaper, but not necessarily more efficient.

Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.

A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.

But of course a world with just one mind is not an accurate simulation, so now you need to populate it with a huge number of pseudo-minds which are functionally indistinguishable from the perspective of our sole real observer but somehow use much less computational resources.

Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped with a single large spike near the scale separation point.

The jumping point is where the pseudo-mind becomes an actual conscious observer in its own right.

The rationale for this cost model and the scale separation point can be derived from what we know about simulating computers.

It seems unlikely to me that my life is directed well enough to achieve interesting goals or answer interesting questions that a superintelligence might pose, but it seems even more unlikely that simulating 6 billion humans, in the particular way they appear (to me) to exist is an efficient way to answer most questions either.

Perhaps not your life in particular, but human life on earth today?

Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.

Earth seems too banal and languorous to be the one in N that have been chosen for the purpose of simulation, especially if the basement universe has a different physics.

The basement reality is highly unlikely to have different physics. The vast majority of simulations we create today are based on approximations of currently understood physics, and I don't expect this to ever change - simulations have utility for their simulators.

so maybe there's a "real" milky way that's currently running 10^18 planet-scale sims. Even that doesn't seem like a big enough number to convince me that we are likely to be one of those.

I'm a little confused about the 10^18 number.

From what I recall, at the limits of computation one kg of matter can hold roughly 10^30 bits, and a human mind is in the vicinity of 10^15 bits or less. So at the molecular limits a kg of matter could hold around a quadrillion souls - an entire human galactic civilization. A skyscraper of such matter could give you 10^8 kg ... and so on. Long before reaching physical limits, posthumans would be able to simulate many billions of entire earth histories. At the physical molecular limits, they could turn each of the moon's roughly 10^22 kg into an entire human civilization, for a total of 10^37 minds.

The potential time-scale compression is nearly as vast - with estimated speed limits at around 10^15 ops/bit/sec in ordinary matter at ordinary temperatures, vs at most 10^4 ops/bit/sec in human brains (and only about six orders of magnitude above the 10^9 ops/bit/sec of today's circuits). The potential speedup of more than 10^10 over biological brains allows for about one hundred years per second of sidereal time.
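These figures multiply out consistently; here is a quick back-of-envelope check, using only the comment's own order-of-magnitude estimates (none of these constants are established numbers):

```python
# Back-of-envelope check of the storage and speed figures above.
# All constants are the comment's own estimates, not established facts.

bits_per_kg_limit = 1e30     # claimed molecular storage limit per kg
bits_per_mind = 1e15         # claimed information content of a human mind
minds_per_kg = bits_per_kg_limit / bits_per_mind
print(f"minds per kg: {minds_per_kg:.0e}")        # ~1e15, "a quadrillion souls"

moon_mass_kg = 1e22
print(f"minds in the moon's mass: {minds_per_kg * moon_mass_kg:.0e}")  # ~1e37

ops_limit = 1e15             # ops/bit/sec limit claimed for ordinary matter
ops_brain = 1e4              # ops/bit/sec in biological brains
speedup = ops_limit / ops_brain                   # ~1e11 potential speedup

seconds_per_year = 3.15e7
years_per_second = 1e10 / seconds_per_year        # using the quoted >1e10 figure
# ~317: the "hundred years per second" claim holds as an order of magnitude
print(f"subjective years per sidereal second: {years_per_second:.0f}")
```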

Comment author: datadataeverywhere 26 January 2011 02:51:29PM *  1 point [-]

I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.

I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.

Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.

Which seems pretty reasonable to me. Why should the value of simulating minds be linear rather than logarithmic in the number of minds?

A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.

Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.

Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped with a single large spike near the scale separation point.

I agree, though as a minor note if cost is the Y-axis the graph has to have a vertical asymptote, so it has to grow much faster than exponential at the end. Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

The jumping point is where the pseudo-mind becomes an actual conscious observer in its own right.

I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline with experiences that their conscious mind will integrate and patch up without noticing. I'm not sure this would work with many mind-types, but I think it would work with human minds, which have a strong bias toward maintaining coherence, even at the cost of ignoring reality. If I'm being simulated, I suspect that this is happening even to me on a regular basis, and possibly happening much more often the less I interact with someone.

Perhaps not your life in particular, but human life on earth today?

Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.

Updating on the condition that we closely match the ancestors of our simulators, I think it's pretty reasonable that we could be chosen to be simulated. This is really the only plausible reason I can think of to choose us in particular. I'm still dubious as to the value that doing so will have to our descendants.

I'm a little confused about the 10^18 number.

Actually, I made a mistake, so it's reasonable to be confused. 20 W seems to be a reasonable upper limit to the cost of simulating a human mind. I don't know how much lower the lower bound should be, but it might not be more than an order of magnitude less. This gives 10^11 W for six billion, (4x) 10^18 J for one year.

I don't think it's reasonable to expect all the matter in the domain of a future civilization to be used to its computational capacity. I think it's much more likely that the energy output of the Milky Way is a reasonably likely bound to how much computation will go on there. This certainly doesn't have to be the case, but I don't see superintelligences annihilating matter at a dramatically faster rate in order to provide massively more power to the remainder of the matter around. The universe is going to die soon enough as it is. (I could be very short sighted about this) Anyway, energy output of the Milky Way is around 5x10^36 W. I divided this by Joules instead of by Watts, so the second number I gave was 10^18, when it should have been (5x) 10^24.
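The corrected arithmetic can be re-run directly. The 20 W per simulated mind and the 5x10^36 W galactic output are the thread's own assumptions; note the corrected ratio lands at roughly 4x10^25, in the neighborhood of the (5x)10^24 figure above:

```python
# Re-running the corrected arithmetic above.

watts_per_mind = 20.0
minds = 6e9
power = watts_per_mind * minds              # 1.2e11 W for six billion minds

seconds_per_year = 3.15e7
energy_per_year = power * seconds_per_year  # ~3.8e18 J, i.e. (4x)10^18 J

milky_way_watts = 5e36
mistaken = milky_way_watts / energy_per_year  # ~1.3e18: the watts-by-joules slip
corrected = milky_way_watts / power           # ~4.2e25 mind-equivalents of power

print(f"power: {power:.1e} W, energy/yr: {energy_per_year:.1e} J")
print(f"corrected ratio: {corrected:.1e}")
```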

I maintain that energy, not quantum limits of computation in matter, will bound computational cost on the large scale. Throwing our moon into the Sun in order to get energy out of it is probably a better use of it as raw materials than turning it into circuitry. Likewise for time compression: convince me that power isn't a problem.

Comment author: jacob_cannell 26 January 2011 09:31:00PM *  1 point [-]

I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.

Simply because we are discussing simulating the historical period in which we currently exist.

Why should the value of simulating minds be linear rather than logarithmic in the number of minds?

The premise of the SA is that the posthuman 'gods' will be interested in simulating their history. That history is not dependent on a smattering of single humans isolated in boxes, but the history of the civilization as a whole system.

Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.

If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.

Imagine the flow of information in your brain. Imagine the flow of causality extending back in time - the flow of information weighted by its probabilistic utility in determining my current state.

The stuff in the immediate vicinity of me is important, and the importance generally falls off according to an inverse square law with distance from my brain. Moreover, even of the stuff near me at one time step, only a tiny portion is relevant. At this moment my brain is filtering out almost everything except the screen right in front of me, which can be causally determined by a program running on my computer, dependent on recent information in another computer in a server somewhere in the midwest a little bit ago, which was dependent on information flowing out from your brain previously ... and so on.

So simulating me would more or less require simulating you as well; it's very hard to isolate a mind. You might as well try to simulate just my left prefrontal cortex. The entire distinction of where one mind begins and ends is something of a spatial illusion that disappears when you map out the full causal web.

Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

If you want to simulate some program running on one computer on a new machine, there is an exact vertical inflection wall in the space of approximations where you get a perfect simulation which is just the same program running on the new machine. This simulated program is in fact indistinguishable from the original.

I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline

Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per mind basis. Really it's best to think of the entire earth as a mind for simulation purposes.

Could you turn off part of cortex and replace it with a rough simulation some of the time without compromising the whole system? Perhaps sometimes, but I doubt that this can give a massive gain.

I'm still dubious as to the value doing so will have to our descendants.

Why do we currently simulate (think about) our history? To better understand ourselves and our future.

I believe there are several converging reasons to suspect that vaguely human-like minds will turn out to be a persistent pattern for a long time - perhaps as persistent as eukaryotic cells. Adaptive radiation will create many specializations and variations, but the basic pattern of a roughly 10^15 bit mind and its general architecture may turn out to be a fecund replicator and building block for higher-level pattern entities.

It seems plausible that some of these posthumans will actually descend from biological humans alive today. They will be very interested in their ancestors, and especially the ancestors they knew in their former life who died without being uploaded or preserved.

Humans have been thinking about this for a while. If you could upload and enter virtual heaven, you could have just about anything that you want. However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on.

So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

20 W seems to be a reasonable upper limit to the cost of simulating a human mind. I don't know how much lower the lower bound should be, but it might not be more than an order of magnitude less.

You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.

We already are within six orders of magnitude of the speed limit of ordinary matter (10^9 bit ops/sec vs 10^15), and there is every reason to suspect we will get roughly as close to the density limit.

I maintain that energy, not quantum limits of computation in matter, will bound computational cost on the large scale.

There are several relevant measures - the number of bits storable per unit mass determines how many human souls you can store in memory per unit mass.

Energy relates to the bit operations per second and the speed of simulated time.

I was assuming computing at regular earth temperatures within the range of current brains and computers. At the limits of computation discussed earlier 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20W and can simulate roughly 10^15 virtual humans 10^10 faster than current human rate of thought. This works out to about one hundred years per second.

So at the limits of computation, 1 kg of ordinary matter at room temperature should give about 10^25 human lifetimes per joule. One square meter of high efficiency solar panel could power several hundred kilograms of computational substrate.

So at the limits of computation, future posthuman civilizations could simulate a truly astronomical number of human lifetimes in one second, using less power and mass than our current civilization.

No need to disassemble planets. Using the whole surface of a planet gives a multiplier of 10^14 over a single kilogram. Using the entire mass only gives a further 10^8 multiple over that or so, and is much, much more complex and costly to engineer. (When you start thinking of energy in terms of human souls, this becomes morally relevant.)

If this posthuman civilization simulates human history for a billion years instead of a second, this gives another 10^16 multiplier.

Using much more reasonable middle of the road estimates:

  • Say tech may bottom out at a limit within half (in exponential terms) of the maximum - say 10^13 human lifetimes per kg per joule vs 10^25.

  • The posthuman civ stabilizes at around 10^10 1kg computers (not much more than we have today).

  • The posthuman civ engages in historical simulation for just one year. (10^7 seconds).

That is still 10^30 simulated human lifetimes, vs roughly 10^11 lifetimes in our current observational history.

Those are still astronomical odds for observing that we currently live in a sim.
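The three bullet-point assumptions above multiply out as claimed; a quick check, where the ~1 W draw per 1 kg computer is an added assumption (so joules roughly equal seconds), not something stated in the comment:

```python
# Multiplying out the "middle of the road" bullet-point estimate above.

lifetimes_per_kg_per_joule = 1e13   # tech bottoms out well short of the 1e25 limit
computers = 1e10                    # number of 1 kg computers
seconds_of_simulation = 1e7         # one year of historical simulation
joules_per_computer = seconds_of_simulation  # at the assumed ~1 W per computer

simulated = lifetimes_per_kg_per_joule * computers * joules_per_computer
observed = 1e11                     # human lifetimes in our observational history

print(f"simulated lifetimes: {simulated:.0e}")                 # 1e30, as stated
print(f"odds ratio (sim : real): {simulated / observed:.0e}")  # ~1e19
```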

Comment author: datadataeverywhere 27 January 2011 12:29:20AM -1 points [-]

This is very upsetting, I don't have anything like the time I need to keep participating in this thread, but it remains interesting. I would like to respond completely, which means that I would like to set it aside, but I'm confident that if I do so I will never get back to it. Therefore, please forgive me for only responding to a fraction of what you're saying.

If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.

I thought context made it clear that I was only talking about the non-mind stuff being simulated as being an additional cost perhaps nearly linear in N. Very little of what we directly observe overlaps except our interaction with each other, and this was all I was talking about.

Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

Why can't a poor model (low fidelity) be conscious? We just don't know enough about consciousness to answer this question.

Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per mind basis. Really it's best to think of the entire earth as a mind for simulation purposes.

I really disagree, but I don't have time to exchange each other's posteriors, so assume this dropped.

However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on [...] So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.

I said it was a reasonable upper bound, not a reasonable lower bound. That seems trivial.

I was assuming computing at regular earth temperatures within the range of current brains and computers. At the limits of computation discussed earlier 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20W and can simulate roughly 10^15 virtual humans 10^10 faster than current human rate of thought. This works out to about one hundred years per second.

Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible. That leaves us to debate how much of it can, but personally I see no reason that the computational minimum cost will be closely approached (even in an exponential sense). I am interested in your reasoning about why this should be the case, though, so please give me what you can in the way of references that led you to this belief.

Lastly, but most importantly (to me), how strongly do you personally believe that a) you are a simulation and that b) all entities on Earth are full-featured simulations as well?

Conditioning on (b) being true, how long ago (in subjective time) do you think our simulation started, and how many times do you believe it has (or will be) replicated?

Comment author: jacob_cannell 27 January 2011 01:47:57AM 1 point [-]

Very little of what we directly observe overlaps except our interaction with each other, and this was all I was talking about.

If I were to quantify your 'very little', I'd guess you mean < 1% observational overlap.

Let's look at the rough storage cost first. Ignoring variable data priority through selective attention for the moment, the data resolution needed for a simulated earth can be related to photons incident on the retina, and it decreases with an inverse square law with distance from the observer.

We can make a 2D simplification and use google earth as an example. If there was just one 'real' observer, you'd need full data fidelity for the surface area that observer would experience up close during his/her lifetime, and this cost dominates. Let's say that's S, S ~ 100 km^2.

Simulating an entire planet, the data cost is roughly fixed or capped - at 5x10^8 km^2.

So in this model simulating an entire earth with 5 billion people will have a base cost of 5x10^8 km^2, and simulating 5 billion worlds separately will have a cost of 5x10^9 * S.

So unless S is pathetically small (actually less than human visual distance), this implies a large extra cost to the solipsist approach. From my rough estimate of S the solipsist approach is 1,000 times more expensive. This also assumes that humans are randomly distributed, which of course is unrealistic. In reality human populations are tightly clustered which further increases the relative gain of shared simulation.
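The shared-world vs solipsist comparison above, in numbers. S ~ 100 km^2 and the 5x10^8 km^2 surface figure are the comment's own rough estimates:

```python
# 2-D (surface-area) storage-cost comparison from the comment above.

S = 100.0                 # km^2 of full-fidelity terrain one observer sees up close
observers = 5e9
earth_surface = 5e8       # km^2: the shared-world cost is capped here

shared_cost = earth_surface       # one world containing every observer
solipsist_cost = observers * S    # a separate private world per observer

ratio = solipsist_cost / shared_cost
print(f"solipsist / shared cost: {ratio:.0f}x")   # 1000x more expensive
```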

However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on [...] So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

Evil?

Why?

Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible.

I'm not sure what you mean by this. Does all of the circuitry of the brain perform computation? Over time, yes. The most efficient brain simulations will of course be emulations - circuits that are very similar to the brain but built on much smaller scales on a new substrate.

That leaves us to debate about how much of it can, but personally I see no reason that the computational minimum cost will closely (even in an exponential sense) be approached

My main reference for the ultimate limits is Seth Lloyd's "Ultimate physical limits to computation". The Singularity is Near discusses much of this as well, of course (but he mainly uses the more misleading ops per second, which is much less well defined).

Biological circuits switch at 10^3 to 10^4 bit flips/second. Our computers went from around that speed in WWII to the current plateau of around 10^9 bit flips/second, reached early this century. The theoretical limit for regular molecular matter is around 10^15 bit flips/second. (A black hole could reach a much higher speed limit, as discussed in Lloyd's paper.) There are experimental circuits that currently approach 10^12 bit flips/second.

In terms of density, we went from about 1 bit / kg around WWII to roughly 10^13 bits / kg today. The brain is about 10^15 bits / kg, so we will soon surpass it in circuit density. The juncture we are approaching (brain density) is about half-way, in log terms, to the maximum of 10^30 bits/kg. This has been analyzed extensively in the hardware community, and it looks like we will approach these limits as well sometime this century. It is entirely practical to store 1 bit (or more) per molecule.

Lastly, but most importantly (to me), how strongly do you personally believe that a) you are a simulation and that b) all entities on Earth are full-featured simulations as well?

A and B are closely correlated. It's difficult to quantify my belief in A, but it's probably greater than 50%.

I've thought a little about your last question but I don't yet even see a route to estimating it. Such questions will probably require a more advanced understanding of simulation.

Comment author: datadataeverywhere 27 January 2011 07:04:52AM *  1 point [-]

If there was just one 'real' observer, you'd need full data fidelity for the surface area that observer would experience up close during his/her lifetime, and this cost dominates. Let's say that's S, S ~ 100 km^2.

I feel like this would make you a terrible video game designer :-P. Why should we bother simulating things in full fidelity, all the time, just because they will eventually be seen? The only full-fidelity simulation we should need is the stuff being directly examined. Much rougher algorithms should suffice for things not being directly observed.

Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible.

I'm not sure what you mean by this. Does all of the circuitry of the brain perform computation? Over time, yes. The most efficient brain simulations will of course be emulations - circuits that are very similar to the brain but built on much smaller scales on a new substrate.

Heh, my ability to argue is getting worse and worse. You sure you want to continue this thread? What I meant to say (and entirely failed) is that there is an infrastructure cost; we can't expect to compute with every particle, because we need lots of particles to make sure the others stay confined, get instructions, etc. Basically, not all matter can be a bit at the same time.

It is entirely practical to store 1 bit (or more) per molecule.

Again, infrastructure costs. Can you source this (also Lloyd?)?

For the rest, I'm aware of and don't dispute the speeds and densities you mention. What I'm skeptical of is that we have evidence that they are practicable; this was what I was looking for. I don't count the previous success of Moore's Law as strong evidence that we will continue getting better at computation until we hit physical limits. I'm particularly skeptical about how well we will ever do on power consumption (partially because it's such a hard problem for us now).

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

Evil? Why?

The idea that I did not have to live this life, that some entity or civilization has created the environment in which I've experienced so much misery, and that they will do it again and again, makes me shake with impotent rage. I cannot express how much I would rather have never existed. The fact that they would do this and so much worse (because my life is an astoundingly far cry from the worst that people deal with), again and again, to trillions upon trillions of living, feeling beings... I cannot express my sorrow. It literally brings me to tears.

This is not sadism; if it were, it would be far worse. It is rather a total neglect of care, a relegation of my values in place of historical interest. However, I still consider this evil in the highest degree.

I do not reject the existence of evil, and therefore this provides no evidence against the hypothesis that I am simulated. However, if I believe that I have a high chance of being simulated, I should do all that I can to prevent such an entity from ever coming to exist with such power, on the off chance that I am not simulated and am able to prevent such evil from unfolding.

Comment author: jacob_cannell 27 January 2011 08:15:02AM *  0 points [-]

Why should we bother simulating things in full fidelity, all the time, just because they will eventually be seen? The only full-fidelity simulation we should need is the stuff being directly examined. Much rougher algorithms should suffice for things not being directly observed.

Of course you're on the right track here - and I discussed spatially variant fidelity simulation earlier. The rough surface-area metric was a simplification of storage/data-generation costs, which is a separate issue from computational cost.

If you want the most bare-bones efficient simulation, I imagine a reverse hierarchical induction approach that generates the reality directly from the belief network of the simulated observer, a technique modeled directly on human dreaming.

However, this is only most useful if the goal is to just generate an interesting reality. If the goal is to regenerate an entire historical period accurately, you can't start with the simulated observers - they are greater unknowns than the environment itself.

The solipsist issue may not have discernible consequences, but overall the computational scaling is sublinear when emulating more humans in one world, and probably significantly so because of the large causal overlap of human minds via language.

It is entirely practical to store 1 bit (or more) per molecule.

Again, infrastructure costs. Can you source this (also Lloyd?)?

Physical Limits of Computation

What I'm skeptical of is that we have evidence that they are practicable; this was what I was looking for.

The intellectual work required to show an ultimate theoretical limit is tractable, but showing that achieving said limit is impossible in practice is very difficult.

I'm pretty sure we won't actually hit the physical limits exactly, it's just a question of how close. If you look at our historical progress in speed and density to date, it suggests that we will probably go most of the way.

Another simple assessment related to the doomsday argument: I don't know how long this Moore's Law progression will carry on, but it's lasted for 50 years now, so I give reasonable odds that it will last another 50. Simple, but surprisingly better than nothing.

A more powerful line of reasoning perhaps is this: as long as there is an economic incentive to continue Moore's Law and room to push against the physical limits, ceteris paribus, we will make some progress and push towards those limits. Thus, eventually we will reach them.

I'm particularly skeptical about how well we will ever do on power consumption (partially because it's such a hard problem for us now).

Power density depends on clock rate, which has plateaued. Power efficiency, in terms of ops/joule, increases directly with transistor density.

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

Evil? Why?

I cannot express how much I would rather have never existed.

This is somewhat concerning, and I believe, atypical. Not existing is perhaps the worst thing I can possibly imagine, other than infinite torture.

It is rather a total neglect of care, a relegation of my values in place of historical interest.

I'm not sure if 'historical interest' is quite the right word. Historical recreation or resurrection might be more accurate.

A paradise designed to maximally satisfy current human values and eliminate suffering is not a world which could possibly create or resurrect us.

You literally couldn't have grown up in that world; the entire idea is a non sequitur. Your mind's state is a causal chain rooted in the gritty reality of this world with all of its suffering.

Imagining that your creator could have assigned you to a different world is like imagining you could have grown up with different parents. You couldn't have. That would be somebody else completely.

Of course, if said creator exists, and if said creator values what you value in the way you value it (dubious), it could whisk you away to paradise tomorrow.

But I wouldn't count on that - perhaps said creator is still working on you, or doesn't think paradise is a useful place for you, or couldn't care less.

In the face of such uncertainty, we can only task ourselves with building paradise.

Comment author: datadataeverywhere 27 January 2011 02:46:25PM 1 point [-]

However, this is only most useful if the goal is to just generate an interesting reality. If the goal is to regenerate an entire historical period accurately, you cant start with the simulated observers - they are greater unknowns than the environment itself.

I believe we're arguing along two paths here, and it is getting muddled. Applying to both: I think one can maintain the world-per-person sim much more cheaply than you originally suggested, long before one hits the point where the sim is no longer accurate to the world except where it intersects with the observer's attention.

Second, from my perspective you're begging the question, since I was talking about a variety of reasons for simulation and arguing that simulating a single entity seems as reasonable as many---but you seem only to be concerned with historical recreation, in which case it seems obvious to me that a large group of minds is necessary. If we're only talking about that case, the arguments along this line about the per-mind cost just aren't very relevant.

I got a 404 on your link; I'll try later.

Another simple assessment related to the doomsday argument: I don't know how long this Moore's Law progression will carry on, but it's lasted for 50 years now, so I give reasonable odds that it will last another 50. Simple, but surprisingly better than nothing.

Interesting, I haven't heard that argument applied to Moore's Law. Question: you arrive at a train crossing (there are no other cars on the road), and just as you get there, a train begins to cross before you can. Something goes wrong, and the train stops, and backs up, and goes forward, and stops again, and keeps doing this. (This actually happened to me). 10 minutes later, should you expect that you have around 10 minutes left? After those are passed, should your new expectation be that you have around 20 minutes left?

The answer is possibly yes. I think better results would be obtained by using a Jeffreys Prior. However, I've talked to a few statisticians about this problem, and no one has given me a clear answer. I don't think they're used to working with so little data.
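The scale-invariant-prior version of this is easy to check with a quick Monte Carlo sketch (the setup and numbers here are my own illustration, not anything the statisticians supplied): draw total stoppage times from a log-uniform prior, observe each at a uniformly random moment, and condition on about 10 minutes having elapsed.

```python
import random

random.seed(0)

# Total stoppage times T drawn from a log-uniform (scale-invariant)
# prior; each is observed at a uniformly random moment within [0, T].
# Keep the cases where roughly 10 minutes have elapsed, and look at
# the distribution of the time still remaining.
remaining = []
for _ in range(1_000_000):
    T = 10 ** random.uniform(-2, 4)   # log-uniform over [0.01, 10000]
    elapsed = random.uniform(0, T)
    if 9.5 < elapsed < 10.5:
        remaining.append(T - elapsed)

remaining.sort()
median_remaining = remaining[len(remaining) // 2]
print(f"median remaining after ~10 min observed: {median_remaining:.1f}")
```

Under this prior the posterior median of the remaining time comes out roughly equal to the elapsed time, so after another 10 minutes pass the estimate does indeed move out to about 20 - matching the intuition in the question above.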

A more powerful line of reasoning perhaps is this: as long as there is an economic incentive to continue Moore's Law and room to push against the physical limits, ceteris paribus, we will make some progress and push towards those limits. Thus, eventually we will reach them.

Revise to say "and room to push against the practicable limits" and you will see where my argument lies despite my general agreement with this statement.

Power efficiency, in terms of ops/joule, increases directly with transistor density.

To my knowledge, this is incorrect. Increases in transistor density have dramatically increased circuit leakage (because of bumping into quantum tunneling), requiring more power per transistor in order to accurately distinguish one path from another. I saw a roundtable about proposed techniques for increasing processor efficiency. None of the attendees objected to the introduction, which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density, and would render all modern circuit designs inoperable if they were to be logically extended without addressing the problem of quantum leakage.

I cannot express how much I would rather have never existed.

This is somewhat concerning, and I believe, atypical. Not existing is perhaps the worst thing I can possibly imagine, other than infinite torture.

If you didn't exist in the first place, you wouldn't care. Do you think you've done so much good for the world that your absence could be "the worst thing you can possibly imagine, other than infinite torture"?

Regardless, I'm quite atypical in this regard, but not unique.

You literally couldn't have grown up in that world; the entire idea is a non sequitur. Your mind's state is a causal chain rooted in the gritty reality of this world with all of its suffering.

Imagining that your creator could have assigned you to a different world is like imagining you could have grown up with different parents. You couldn't have. That would be somebody else completely.

And wouldn't that be so much better.

You propose that not existing would be a terrible evil. But how much better, for all the trillions upon trillions you're proposing must suffer for the creator's whims, would it be to have that computational substrate used to host entities with amazingly positive, productive, maximally Fun lives? I know I couldn't have existed in a paradise, but if I'm a sim, there are cycles that could be used for paradise that have instead been given over to misery and strife.

Again, I think that this may be the world we really are in. I just can't call it a moral one.

Comment author: jacob_cannell 27 January 2011 06:15:59PM *  1 point [-]

I was talking about a variety of reasons for simulation and arguing that simulating a single entity seems as reasonable as many---but you seem only to be concerned with historical recreation.

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

Power efficiency, in terms of ops/joule, increases directly with transistor density.

To my knowledge, this is incorrect. Increases in transistor density have dramatically increased circuit leakage (because of bumping into quantum tunneling), requiring more power per transistor in order to accurately distinguish one path from another.

If that were actually the case, then there would be no point in moving to a new technology node!

Yes, leakage is a problem at the new tech nodes, but of course power per transistor cannot possibly be increasing. I think you mean power per surface area has increased.

Shrinking a circuit by half in each dimension makes the wires shorter and lowers their capacitance (and permits a lower supply voltage), decreasing power use per transistor just as you'd think. Leakage makes this decrease somewhat less than the shrinkage rate, but it doesn't reverse the entire trend.
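The idealized (Dennard) scaling arithmetic behind this can be sketched in a few lines. Note this is the textbook no-leakage model - which is exactly the simplification under dispute here - so treat it as illustrating the trend being described, not settling the disagreement:

```python
# Idealized Dennard scaling for one process node: linear dimensions
# shrink by S = 0.7, so capacitance C and supply voltage V scale with
# S while clock frequency f scales as 1/S.  Dynamic power per
# transistor is P = C * V^2 * f.  Leakage current is ignored.
S = 0.7

C = S                # capacitance per transistor (relative units)
V = S                # supply voltage
f = 1 / S            # clock frequency

power_per_transistor = C * V**2 * f       # = S^2, about half per node
density = 1 / S**2                        # transistors per unit area
power_per_area = power_per_transistor * density  # constant across nodes

print(f"power per transistor: {power_per_transistor:.2f}x")
print(f"power per unit area:  {power_per_area:.2f}x")
```

In this idealized model power per transistor roughly halves every node while power per unit area stays flat; leakage is what has broken the voltage-scaling leg of the argument in practice.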

There are also other design trends that can compensate and overpower this to an extent, which is why we have a plethora of power efficient circuits in the modern handheld market.

which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density

Do you remember when this was from or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed.

I recall seeing some slides from NVidia claiming their next GPU architecture will cut power use per transistor dramatically as well, at several times the rate of shrinkage.

You propose that not existing would be a terrible evil. But how much better, for all the trillions upon trillions you're proposing must suffer for the creator's whims, would it be to have that computational substrate be used to host entities that have amazingly positive, productive, maximally Fun lives?

Even if the goal is maximizing fun, creating some historical sims for the purpose of resurrecting the dead may serve that goal. But I really doubt that current-human-fun-maximization is an evolutionary stable goal system.

I imagine that future posthuman morality and goals will evolve into something quite different.

Knowledge is a universal feature of intelligence. Even the purely mathematical hypothetical superintelligence AIXI would end up creating tons of historical simulations - and that might be hopelessly brute force, but nonetheless superintelligences with a wide variety of goal systems would find utility in various types of simulation.

Comment author: Desrtopa 27 January 2011 07:21:47PM *  2 points [-]

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

Much of the information from the past is probably irretrievably lost to us. If the information input into the simulation were not precisely the same as the actual information from that point in history, the differences would quickly propagate so that the simulation would bear little resemblance to the history. Supposing the individuals in question did have access to all the information they'd need to simulate the past, they'd have no need for the simulation, because they'd already have complete informational access to the past. It suffers similar problems to your sandboxed anthropomorphic AI proposal; provided you have all the resources necessary to actually do it, it ceases to be a good idea.
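The "differences would quickly propagate" step is the standard sensitive-dependence argument, and a toy chaotic system illustrates it (the logistic map here is just a stand-in for any chaotic dynamics, not a model of physics):

```python
# Two runs of the chaotic logistic map x -> 4x(1-x), identical except
# for a 1e-12 difference in the initial condition - a stand-in for a
# historical simulation seeded with almost-but-not-quite the true state.
x, y = 0.3, 0.3 + 1e-12
max_gap = 0.0
for step in range(60):
    x = 4 * x * (1 - x)
    y = 4 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(f"largest divergence within 60 steps: {max_gap:.3f}")
```

The gap roughly doubles each iteration, so within a few dozen steps the two trajectories are effectively uncorrelated; an analogous blow-up is why a recreation seeded with imperfect historical data would drift away from the actual history.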

There are other possible motivations, but it's not clear that there are any others that are as good or better, so we have little reason to suppose it will ever happen.

Comment author: datadataeverywhere 27 January 2011 07:07:23PM *  1 point [-]

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

This seems to be overly restrictive, but I don't mind confining the discussion to this hypothesis.

I think you mean power per surface area has increased.

Yes, you are correct.

Do you remember when this was from or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed.

The roundtable was at SC'08, a while after speeds had stabilized, and since it is a supercomputing conference, the focus was on massively parallel systems. It was part of this.

I really doubt that current-human-fun-maximization is an evolutionary stable goal system. I imagine that future posthuman morality and goals will evolve into something quite different.

Without needing to dispute this, I can remain exceptionally upset that whatever their future morality is, it is blind to suffering and willing to create innumerable beings that will suffer in order to gain historical knowledge. Does this really not bother you in the slightest?

ETA: still 404