Nobel laureate Marie Curie died of aplastic anemia, the victim of radiation from the many fascinating glowing substances she had learned to isolate.

How could she have known?  And the answer, as far as I can tell, is that she couldn't.  The only way she could have avoided death was by being too scared of anything new to go near it.  Would banning physics experiments have saved Curie from herself?

But far more than one cancer patient has since been saved by radiation therapy.  And the real cost of banning physics is not just losing that one experiment - it's losing physics.  No more Industrial Revolution.

Some of us fall, and the human species carries on, and advances; our modern world is built on the backs, and sometimes the bodies, of people who took risks.  My father is fond of saying that if the automobile were invented nowadays, the saddle industry would arrange to have it outlawed.

But what if the laws of physics had been different from what they are?  What if Curie, by isolating and purifying the glowy stuff, had caused something akin to a fission chain reaction gone critical... which, the laws of physics being different, had ignited the atmosphere or produced a strangelet?

At the recent Global Catastrophic Risks conference, someone proposed a policy prescription which, I argued, amounted to a ban on all physics experiments involving the production of novel physical situations - as opposed to measuring existing phenomena.  You can weigh a rock, but you can't purify radium, and you can't even expose the rock to X-rays unless you can show that exactly similar X-rays hit rocks all the time.  So the Large Hadron Collider, which produces collisions as energetic as cosmic rays, but not exactly the same as cosmic rays, would be off the menu.

After all, whenever you do something new, even if you calculate that everything is safe, there is surely some probability of being mistaken in the calculation - right?

The one who proposed the policy disagreed that it cashed out to a blanket ban on physics experiments.  And since the discussion is still in progress, I won't talk further about their policy argument.

But if you consider the policy of "Ban Physics", and leave aside the total political infeasibility, I think the strongest way to frame the issue - from the pro-ban viewpoint - would be as follows:

Suppose that Tegmark's Level IV Multiverse is real - that all possible mathematical objects, including all possible physical universes with all possible laws of physics, exist.  (Perhaps anthropically weighted by their simplicity.)

Somewhere in Tegmark's Level IV Multiverse, then, there have undoubtedly been cases where intelligence arises in a universe with physics unlike this one - e.g., instead of a planet, life arises on a gigantic triangular plate hanging suspended in the void - and that intelligence accidentally destroys its world, perhaps its universe, in the course of a physics experiment.

Maybe they experiment with alchemy, bring together some combination of substances that were never brought together before, and catalyze a change in their atmosphere.  Or maybe they manage to break their triangular plate, whose pieces fall and break other triangular plates.

So, across the whole of the Tegmark Level IV multiverse - containing all possible physical universes with all laws of physics, weighted by the laws' simplicity:

What fraction of sentient species that try to follow the policy "Ban all physics experiments involving situations with a remote possibility of being novel, until you can augment your own intelligence enough to do error-free cognition";

And what fraction of sentient species that go ahead and do physics experiments;

Survive in the long term, on average?

In the case of the human species, trying to ban chemistry would hardly have been effective - but supposing that a species actually could make a collective decision like that, it's at least not clear-cut which fraction would be larger across the whole multiverse.  (We, in our universe, have already learned that you can't easily destroy the world with alchemy.)

Or an even tougher question:  On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?

22 comments

"So, across the whole of the Tegmark Level IV multiverse - containing all possible physical universes with all laws of physics, weighted by the laws' simplicity"

This is not evidence. There is no way you can extract an answer to the question that doesn't influence your decision-making. Weighting the universes is epiphenomenal. We can only look at the laws and content of our world in the search for answers.

In the last similar thread someone pointed out that we're just talking about increasing existential risk in the tiny zone where we observe (or reasonably extrapolate) each other existing, not the entire universe. It confuses the issue to talk about destruction of the universe.

Really, this all goes back to Joy's "grey goo" argument. I think what needs to be made explicit is weighing our existential risk if we do or don't engage in a particular activity. And since we're not constrained to binary choices, there's no reason for a binary framing to be the starting point, unless it's nontransparent propaganda meant to encourage selection of a particular unnuanced choice.

A ban on the production of all novel physics situations seems more extreme than necessary (although the best arguments for that should probably be heard and analyzed). But unregulated, unreviewed freedom to produce novel physics situations also seems like it would be a bit extreme. At the least, I'd like to see more analysis of the risks of not engaging in such experimentation. This stuff is probably very hard to get right, and at some point we'll probably get it fatally wrong in one way or another and all die. But let's play the long odds with all the strategy we can, because the alternative seems like a recursive end state (almost) no matter what we do.

Has anyone done an analysis to rule out existential risks from the possibility of time travel at the LHC? Also, what about universe creation (in a way not already occurring in nature), which raises its own ethical issues?

Both of these do seem very improbable; I would bet that they can be ruled out completely through some argument that has not yet been spelled out in a thorough way. They also seem like issues that nonphysicists are liable to spin a lot of nonsense around, but that's not an excuse.

One response to the tougher question is that the practicality of advising said species to hold on to its horse saddles a wee bit longer is inversely proportional to the time remaining (AGI_achieved_date less today's_date) in the interval between transistors and AGI. If it's five years beyond transistors and thousands of years yet to go to AGI, with a low likelihood of success ever, then maybe the reward attributable to breakthrough (and breakout of some theoretically insufferably stagnant present) outweighs the risk of complete annihilation (which also is maybe not all that bad - or at least fear- and pain-free - if it's instantaneous and locally universal). If, on the other hand, there is good reason to believe that said species is only 5, 50, or maybe even 500 years away from AGI that could prevent Global Inadvertent Total Annihilation (GITA? probably not a politically correct acronym; ITA is less redundant anyway), perhaps the calculus favors an annoying yet adaptive modicum of patience, which a temporary "limit up" in the most speculative physics futures market might represent.

Heh, somebody's been reading Unicorn Jelly ...

Silverton,

Biological intelligence enhancement and space travel (LHC on Mars?) do not appear to be thousands of years away.

Funny you should mention that thing about destroying a universe by breaking a triangular world plate... http://unicornjelly.com/uni296.html

Eliezer: "Or an even tougher question: On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?"

But--if you're right about the possibility of an intelligence explosion and the difficulty of the Friendliness problem, then building a novel AI is much, much more dangerous than creating novel physical conditions. Right?

I would advise them that if they were to do novel physics experiments, they should also take time to exercise, eat less, sleep well, and be good to others. And to have some fun. Then at least they would probably have improved their experience of life regardless of the outcome of their novel experiments. That advice might also lead to clearer insights for their novel experiments.

Off for my day hike into the wilderness :)

But what if we'll need the novel-physics-based superweapons to take out the first rogue AI? What a quandary! The only solution must be to close down all sciencey programmes immediately.

I am fond of this kind of multiverse reasoning. One place I look for inspiration is Wolfram's book A New Kind of Science. This book can be thought of as analogous to the early naturalists' systematic exploration of the biological world, with their careful diagrams and comparisons, and attempts to identify patterns, similarities and differences that would later be the foundation for the organization system we know today. Wolfram explores the multiverse by running a wide variety of computer simulations. He is often seen as just using CA models, but this is not true - he tries a number of computational models and finds the same basic properties in all of them.

Generally speaking, there are four kinds of universes: static, repeating, random, and complex. Complex universes combine stability with a degree of dynamism. It seems that only complex universes would be likely abodes of life.
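
To make the classification concrete, here is a minimal sketch of that kind of simulation, using elementary cellular automata as a stand-in for Wolfram's broader set of models; the rule numbers are standard illustrative choices, not claims about any actual universe:

    import random

    def step(cells, rule):
        # One step of an elementary (binary, nearest-neighbor) CA rule,
        # with wrap-around boundaries.
        n = len(cells)
        return [
            (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    def run(rule, width=64, steps=24):
        row = [random.randint(0, 1) for _ in range(width)]
        for _ in range(steps):
            print("".join("#" if c else "." for c in row))
            row = step(row, rule)

    # Rules commonly used to illustrate the four classes of behavior:
    # 250 settles to a uniform state, 4 freezes into simple persistent
    # structures, 30 looks random, 110 produces complex, interacting
    # localized structures.
    for rule in (250, 4, 30, 110):
        print("--- rule", rule, "---")
        run(rule)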

The question is whether there are likely to be universes which are basically stable, with predictable dynamics, except that when certain patterns and configurations are hit, there is a change of state, and the new pattern is the seed for an explosive transition to a whole new set of patterns. Further, this seed pattern must be quite rare and never be hit naturally; only intelligence, seeking to explore new regimes of physics, can induce such patterns to exist. And further, the intelligence must not anticipate the explosive development of the seed, because it doesn't know the physics well enough.

From the Wolfram perspective, it seems that few possible laws of physics would have these properties, at least if we weight the universes by simplicity. A universe should have the simplest possible laws of physics that allow life to form. For these laws to incidentally have the property that some particular arrangement of matter/energy would produce explosive changes, while other similar arrangements would do nothing, would seem to require that the special arrangement be pre-encoded into the laws. That would add complexity which another universe without the special arrangement encoding would not need, hence such universes would tend to be more complex than necessary.

It's worth noting that coming up with configurations that require intelligence (or at least life) to produce isn't that hard. The only really obvious one I know of in our universe is bulk refrigeration below 2.7 K (the temperature of the cosmic microwave background), but given the simplicity of that one I strongly suspect there are others.

On the likelihood of such a state inducing a large-scale phase change, I'm in agreement. It seems implausible unless the universe is precisely tuned to allow it.

Maybe this is naive of me, but why would you not just do the standard act-utilitarian thing? Having all of future scientific knowledge before intelligence augmentation is worth, let's say, a 10% chance of destroying the world right now; future physics knowledge is 10% of total future knowledge; knowledge from the LHC is 1% of future physics knowledge; so to justify running it, the probability of it destroying the world has to be less than 10^-4. The probability of an LHC black hole eating the world is the probability that the LHC will create micro black holes, times the probability that they won't Hawking-radiate away or decay in some other way, times the probability that if they survive they eat the Earth fast enough to be a serious problem - which product does indeed work out to much less than 10^-4 for reasonable probability assignments given available information, including the new Mangano/Giddings argument. Repeat the analysis for other failure scenarios and put some probability on unknown unknowns (easier said than done, but unavoidable). Feel free to argue about the numbers.
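
Spelled out, the same arithmetic looks like this (a minimal sketch; every number is just the stand-in assumption from the paragraph above, not an estimate of its own):

    # Stand-in assumptions from the comment above.
    value_of_future_knowledge = 0.10   # worth a 10% chance of destroying the world now
    physics_share             = 0.10   # physics as a fraction of all future knowledge
    lhc_share_of_physics      = 0.01   # the LHC's fraction of future physics knowledge

    # The experiment is justified only if its probability of destroying
    # the world is below the product of these factors.
    risk_threshold = value_of_future_knowledge * physics_share * lhc_share_of_physics
    print("%.0e" % risk_threshold)  # 1e-04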

steven: do you mean "(having all of future scientific knowledge [ever]) before intelligence augmentation" or "having [now] (all of future scientific knowledge before intelligence augmentation)"? Also, if (physics|LHC) knowledge is X% of (all|physics) knowledge by bit count, it doesn't follow that it has X% of the value of (all|physics) knowledge; in particular, I would guess that (at least in those worlds where the LHC isn't dangerous) high-energy physics knowledge is significantly less utility-dense than average (the utility of knowledge currently being dominated by its relevance to existential risk).

"Yes," if Schrodinger's Cat is found to be dead; "no" if it is found to be alive.

"a ban on all physics experiments involving the production of novel physical situations"

Each instant brings a novel physical situation. How do you intend to stop the flow of time?

Shall we attempt to annihilate speech, lest someone accidentally pronounce the dread and fearsome name of God? Someone might try to write it - better cut off all fingers, too.

I am deeply honored.

But I am deeply dishonored, having turned out to be the dead Schrodinger's Cat.

It's interesting to me that you refer to "AI" as a singular monolithic noun. Have you fleshed out your opinion of what AI is in a previous post?

An alternative view is that our intelligence is made of many component intelligences. For example, a professor of mine is working on machine vision. Many people would agree that a machine that can distinguish between the faces of the researchers that built it would be more "intelligent" than one that could not, but that ability itself does not define intelligence. We also have many other behaviors besides visual pattern recognition that are considered intelligent. What do you think?

"Or an even tougher question: On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?"

Yes, I would. What's tough about this? It's just a matter of whether novel physics experiments are more likely to create new existential risks, or to mitigate existing ones in a more substantial manner.

Actually, for the experiments to be justified, it would also be required that there be no alternative recipients for the funding and/or other resources who would be more likely to mitigate existential risks.

Let's not succumb to the technophile instinct of doing all science now because it's cool. Most science could well wait a couple thousand years, if there are more important things to do. We know of good, as-yet-undone ways to (1) mitigate currently existing existential risk and (2) increase our intelligence/rationality, so there is no need to go expensively poking around in the dark searching for unexpected benefits while we haven't yet reached for the low-hanging benefits we can already see. Let's not poke around in the dark before we have exhausted the relatively easy ways to increase the brightness of our light sources.


"life arises on a gigantic triangular plate hanging suspended in the void"

"manage to break their triangular plate, whose pieces fall and break other triangular plates."

Heh, Unicorn Jelly references.