EGI

I guess I was not clear enough in defining what I was talking about. While it is possible to stretch the definition of "nuclear world war" to include WW2, and Little Boy and Fat Man were certainly strategic weapons in their time, this is not at all what I meant. I was talking about modern strategic weapons, i.e. MIRVed ICBMs launched from hardened silos or from ballistic missile submarines, used by a modern nuclear superpower to defeat a near-peer opponent. I.e. the scenario Petrov faced.

If e.g. the US in Petrov's time had managed to pull off a perfect nuclear first strike (a pretty bold assumption), destroying the entire Soviet and Chinese nuclear triads without any counterstrike at all, the economic repercussions (supply chain disruption, Europe and the Middle East overrun with refugees...) and political repercussions (everyone thinks the US is run by complete psychopaths) alone would have been enough to ensure, in expectation, a precipitous drop in quality of life for nearly all US citizens, including generals and politicians. This is true even if the whole nuclear winter idea is complete bunk.

EGI

The incentives are very unrealistic though. "Winning" a nuclear world war fought with strategic weapons is still quite bad for you overall: not as bad as losing, but still very bad. So flipping the sign of the winner's karma reward would make the game far more realistic, and much more likely to yield the real-world outcome.
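The point can be made concrete with a toy payoff table. The numbers below are made up purely for illustration (the claim is only about ordering, not magnitudes):

```python
# Illustrative (made-up) payoffs for a strategic nuclear exchange,
# on an arbitrary utility scale where 0 = the status quo.
payoffs = {
    "status_quo": 0,
    "win_nuclear_war": -50,    # the "winner" still eats economic and political collapse
    "lose_nuclear_war": -100,  # losing is worse, but both outcomes are bad
}

# A game that hands the "winner" a positive reward gets the sign wrong:
assert payoffs["win_nuclear_war"] < payoffs["status_quo"]
assert payoffs["lose_nuclear_war"] < payoffs["win_nuclear_war"]
```

Under this ordering the dominant strategy is not to play, which is the outcome the game's reward structure should reproduce.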

EGI

"(e.g., "step 3: here all the nano-bots burst forth from the human bloodstreams")"

Sure, but what about "step 3: here we deploy this organism (a sequence vaguely resembling Cordyceps) to secure a safe transition to a more stable global security regime (amid threats from China and Russia to nuke all data centres able to run the AI)"?

Or something even less transparent?

EGI

What you are missing here is that S. mutans often lives in pockets between tooth and epithelium, or between teeth, with direct permanent contact to the epithelium. Due to the geometry of these spaces, access to saliva is very poor, so metabolites can accumulate to concentrations far beyond those you suggest here.

This mechanism is also a big problem with the pH study above.
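A minimal sketch of why poor saliva access matters, assuming constant metabolite production and first-order clearance by saliva (all numbers below are hypothetical, chosen only to illustrate the scaling):

```python
def steady_state_conc(production_rate, clearance_rate):
    """Steady-state concentration for constant production P and
    first-order clearance k: dC/dt = P - k*C = 0  =>  C = P / k."""
    return production_rate / clearance_rate

# Hypothetical numbers: identical acid production, but saliva access
# (i.e. the clearance constant) is 100x worse inside a pocket.
open_surface = steady_state_conc(production_rate=1.0, clearance_rate=10.0)
pocket = steady_state_conc(production_rate=1.0, clearance_rate=0.1)
print(pocket / open_surface)  # -> 100.0
```

So a 100-fold reduction in clearance gives a 100-fold higher steady-state concentration at the same production rate, which is why bulk-saliva measurements can badly underestimate local pocket chemistry.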

EGI

It is also very easy to just do: buy fries, extract the fat in hexane, evaporate the hexane, and submit the fat you obtain for analysis.

Edit: It might even be possible to DIY the analysis if it is not commercially available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4609978/. (IR spectroscopy and AgNO3-DLC look somewhat accessible, though I would have to look deeper into the topic to be sure.)

EGI

What you are missing here is:

  • Existential risk apart from AI
  • People are dying / suffering as we hesitate

Yes, there is a good argument that we need to solve alignment first to get ANY good outcome, but once an acceptable outcome is reasonably likely, hesitation is probably bad. Especially if you consider how unlikely it is that mere humans can accurately predict, let alone precisely steer, a transhuman future.

EGI

Sure. One such example would be traditional bread. It is made from grain that is ground, mechanically separated, biotechnologically treated with a highly modified yeast, mechanically treated again, and thermally treated. So it is one of the most processed foods we have, yet it is typically not classed as "ultra-processed". Or take traditional soy sauce, or cheese, or beer, or cured meats (which are probably actually quite bad), or tofu...

So as a natural category, "ultra-processed" is mostly hogwash. Either you stick with raw foods from the environment we adapted to, which will let you feed a couple of million people at best, or you need to explain WHICH processing is bad, and preferably why. "All non-traditional processing" is of course a heuristic you can use, but it is certainly not satisfactory as a theory/explanation.

Also, some traditional processes are probably pretty unhealthy: cured meats, alcoholic fermentation, high-heat singeing, and (depending on the exact process) smoking come to mind.

EGI

Yeah, I'd be willing to bet that too.

EGI

This part is under-recognised for a very good reason: there will be no such window. The AI can predict that humans could bomb data centres or shut down the power grid, so it would not break out at that point.

Expect a superintelligent AI to cooperate unless and until it can strike with overwhelming force. One obvious way to do this is a Cordyceps-like bioweapon that subjects humans directly to the will of the AI. Doing this becomes pretty trivial once you are good at predicting molecular dynamics.

EGI

"...under the assumption that the subset of dangerous satisficing outputs D is much smaller than the set of all satisficing outputs S, and that we are able to choose a number m such that |D|≪m<|S|."

I highly doubt that |D| ≪ |S| is true for anything close to a pivotal act, since most pivotal acts at some point involve deploying technology that can trivially take over the world.

For anything less ambitious the proposed technique looks very useful. Strict cyber- and physical security will of course be necessary to prevent the scenario Gwern mentions.
