I'm somewhat worried about this virus-immune bacterium outcompeting regular life because it can drop all of its anti-viral adaptations.
It's conceptually a simple find/replace on functionally identical codons, which should make the bacterium immune to all viruses barring something like 60,000 specific viral mutations happening at once.
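To make "find/replace on functionally identical codons" concrete, here is a toy sketch of my own (not the actual procedure from the paper); the specific swaps and the example sequence are illustrative assumptions, though they are of the sort used in recoded-genome work:

```python
# Toy illustration of synonymous-codon recoding: every occurrence of a
# "retired" codon is swapped for a different codon with the same meaning.
# The swaps below (two serine codons and the amber stop) are illustrative only.

SYNONYM = {
    "TCG": "AGC",  # serine -> serine
    "TCA": "AGT",  # serine -> serine
    "TAG": "TAA",  # amber stop -> ochre stop
}

def recode(coding_sequence: str) -> str:
    """Replace retired codons with synonymous ones, codon by codon."""
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence), 3)]
    return "".join(SYNONYM.get(c, c) for c in codons)

print(recode("ATGTCGAAATCATAG"))  # -> ATGAGCAAAAGTTAA (same protein, fewer codon types)
```

The point is that the protein products are unchanged, but once every retired codon is gone the cell can drop the matching tRNA, and an incoming viral genome that still uses those codons can no longer be translated.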
Viruses cause massive selection pressure:
"The rate of viral infection in the oceans stands at 1 × 10^23 infections per second, and these infections remove 20–40% of all bacterial cells each day." - https://www.nature.com/articles/nrmicro2644 (could not find good figures for land, plausibly they are a fair bit lower, but still likely high enough to be a huge deal)
Without that selection pressure, I expect evolution to come up with all sorts of ways to make this bacterium much better at all the other parts of life.
A big part of my model is that much of the reason we have species diversity is that the more successful a species is, the bigger a target it becomes for viral infection. Removing that feedback loop entirely, while at the same time giving one particular species a huge boost by letting it drop all sorts of systems developed to prevent viruses killing it, seems very risky.
This is fundamentally different from anything evolution has ever cooked up or could reasonably cook up, since it removes a set of the basic codons in a way which requires foresight: replacing each of the huge number of low-use codons has no value independently, and removing the ability to process those low-use codons (i.e. removing the relevant tRNA) is reliably fatal before all the instances are replaced.
To clarify: I don't think this is an x-risk, but it could wreck the biosphere in a way which would cause all sorts of problems.
They do claim to be trying to avoid it being able to survive in the wild:
For safety, they also changed genes to make the bacterium dependent on a synthetic amino acid supplied in its nutrient broth. That synthetic molecule does not exist in nature, so the bacterium would die if it ever escaped the lab.
Which is only mildly reassuring if this thing is going to be grown at scale, as the article suggests, since there is (I think?) potential for that modification to be reversed by mutation: given enough attempts, a modification that makes the cell die if it does not encounter a certain amino acid seems like it should be selected against whenever that amino acid is scarce.
Followed up on the containment procedure, and the tests seem inadequate to bet the biosphere on:
[...] several experiments involving 100 billion or more cells and lasting up to 20 days did not reveal a single microbe capable of surviving in the absence of the artificial supplement.
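To put a rough number on why that seems thin: zero escapers among ~10^11 cells only bounds the per-cell escape frequency at about 3 × 10^-11 (the usual rule-of-three 95% upper bound), while growing the strain at industrial scale would plausibly involve vastly more cell divisions over time. A quick sketch of that arithmetic, where the industrial-scale figure is purely an assumption for illustration:

```python
# Rule of three: observing zero events in n independent trials gives an
# approximate 95% upper bound on the per-trial probability of p < 3 / n.

cells_tested = 1e11         # "100 billion or more cells" from the quoted experiments
p_upper = 3 / cells_tested  # ~3e-11 per cell

# Assumed purely for illustration: cumulative cell divisions over years of
# industrial-scale production (this number is a guess, not from the article).
assumed_industrial_divisions = 1e20

print(f"95% upper bound on escape frequency: {p_upper:.0e}")
print(f"Escape events still consistent with the tests at that scale: "
      f"up to ~{p_upper * assumed_industrial_divisions:.0e}")
```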
Sorry, I'm not too familiar with the community, so not sure if this question is about AI alignment in particular or risks more broadly. Assuming the latter: I think the most overlooked problem is politics. I worry about rich and powerful sociopaths being able to do evil without consequences or even without being detected (except by the victims, of course). We probably can't do much about the existence of sociopaths themselves, but I think we can and should think about the best ways to increase transparency and reduce inequality. For what it's worth, I'm a negative utilitarian.
I worry about rich and powerful sociopaths being able to do evil without consequences or even without being detected (except by the victims, of course).
Many of the methods used to avoid detection by the general population also work on the victims, including:
I'm worried about persuasion tools and the deterioration of collective epistemology they would likely bring. (I guess it's the deterioration I'm worried about, and persuasion tools are one way it could happen.) I hope to write a post about this soon.
I'm worried that much or most of the risk we're facing over the next 100 or so years comes from technologies that are not even on our radar. We do not seem to have a great track record for predicting which advancements are coming, and we seem to be at least as bad at predicting how they will be used or which further advancements they will enable. It seems likely to me that AI will make discovery faster and possibly cheaper.
It makes sense to me that we focus on problems that are already on our radar (AI alignment, synthetic biology), and some of our efforts to mitigate those risks might transfer to whatever else we find ourselves up against. And some people do seem to be worried about risks from emerging tech in a very broad sense (Bostrom and others at FHI come to mind). But I'm not sure we're taking seriously enough the problem of dealing with entirely unforeseen technological risks.
On the other hand, to my knowledge, we haven't thought of any important new technological risks in the past few decades, which is evidence against many such risks existing.
That we run out of easily usable cheap energy sources and our civilization reverts, at best, to a kind of permanent static feudal / subsistence structure, if it doesn't collapse outright.
I think this is ignored for a number of reasons.
On the right, there is an assumption that fossil fuels are not going to run out, that fuel for nuclear reactors is basically infinite, etc. So no problem other than the "hoax" of global warming. I think this is wrong because a) GW is not a hoax, according to several months I spent looking into the issue, b) fossil fuels are going to run out fairly soon in terms of fuels that take less energy to extract than they produce, and c) nuclear fuels would run out about as fast as fossil fuels if used at scale. The only exception is that we could last a couple hundred years using breeder reactors, but who wants 5,000 makers of nuclear-bomb raw material spread around the planet? And the faith in technological progress is overdone; nuclear fusion still seems a long way away, and the state of the art in batteries has only been getting better at about 7% per year.
On the left there is the belief that renewables will solve the problem. I have spent a couple of months so far trying to put together a picture of what a renewable economy would look like, and it does not look like it adds up. The fundamental problems are threefold: 1) the low density of renewables makes the infrastructure extremely resource-, energy- and cost-intensive; 2) batteries are nowhere near good enough for many key requirements (air travel, shipping, heavy transport and seasonal energy storage); 3) we don't have a solution for other sectors (concrete and steel manufacture). There are a lot of unproven ideas, but if you try to put together a solution from proven technologies, it does not add up so far.
It is very difficult, perhaps impossible, even to match existing economic output with renewables; once you take into account locked-in population growth and assume the rest of the world catches up to first-world living standards, energy consumption becomes a huge multiple of current levels and the goal becomes absurdly out of reach.
Most proposed solutions have as a vital step "then a miracle occurs"*. Our whole civilization is based on cheap energy. Without that, all our cleverness is in vain.
*As one example I was recently informed that my analysis had not taken into account the ability to mine uranium from asteroids and was thus faulty.
c) nuclear fuels would run out about as fast as fossil fuels if used at scale.
A decade ago everyone talked about peak oil, and we are now at a moment where increased oil production has pushed oil prices down so far that facilities are getting shut down.
There are possibilities to extract uranium from seawater, and if there were higher demand for uranium, a lot of funding would go into making that process efficient.
I've run into people arguing this a few times, but so far no one has continued the conversation once I start pulling out papers with recent EROEI figures for solar and the like (e.g. https://sci-hub.do/10.1016/j.ecolecon.2020.106726, which is the most recent relevant paper I could find, and says "EROIs of wind and solar photovoltaics, which can provide the vast majority of electricity and indeed of all energy in the future, are generally high (≥ 10) and increasing.").
Perhaps you will break the streak!
I am curious about the details of your model and the sources you're dr...
What about thorium? A back-of-the-envelope calculation suggests thorium reactors could supply us with energy for 100-500 years. I got this from a few sources. First I used the figure of 170 GW-days produced per metric tonne of fuel (Fort St Vrain HTR) and the availability of fuel (500-2,500 ktonnes according to Wikipedia) to estimate 10-50 years out of thorium reactors if we keep using 15 TW of energy. And that's not even accounting for breeder reactors, which can produce their own fuel. So if we do go with the theoretical maximum, then we should multiply thi...
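Redoing that arithmetic quickly with the quoted figures (170 GW-days per tonne, 500-2,500 ktonnes of thorium, a constant 15 TW of demand) lands in the same ballpark; the exact numbers depend on unstated assumptions such as thermal-to-electric conversion efficiency, and the breeder multiplier is left out:

```python
# Back-of-the-envelope check using only the figures quoted in the comment above.

GWD_PER_TONNE = 170             # GW-days per metric tonne (Fort St Vrain HTR figure)
JOULES_PER_GWD = 1e9 * 86_400   # one gigawatt-day in joules
WORLD_POWER_W = 15e12           # 15 TW of sustained consumption
SECONDS_PER_YEAR = 3.156e7

energy_per_tonne = GWD_PER_TONNE * JOULES_PER_GWD   # ~1.5e16 J per tonne
annual_demand = WORLD_POWER_W * SECONDS_PER_YEAR    # ~4.7e20 J per year

for reserves_ktonnes in (500, 2_500):
    total_energy = reserves_ktonnes * 1_000 * energy_per_tonne
    print(f"{reserves_ktonnes} ktonnes -> ~{total_energy / annual_demand:.0f} years")
# Prints roughly 16 and 78 years, i.e. the same order of magnitude as the
# 10-50 year range quoted, before any multiplier for breeder reactors.
```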
Time travel. As I understand it, you don't need to hugely stretch general relativity for closed timelike curves to become possible. If adding a closed timelike curve to the universe adds an extra constraint on the initial conditions of the universe, and makes most possibilities inconsistent, does that morally amount to probably destroying the world? Does it create weird hyper-optimization toward consistency?
I'm pretty sure we can leave this problem to future beings with extremely advanced technology, and more likely than not there are physical reasons why it's not an issue, but I think about it from time to time.
Could you expand on "you don't need to hugely stretch general relativity for closed timelike curves to become possible"? After all, the only thing keeping everyone from flying off of the planet is a minus sign in different equations relating to gravity, but we don't worry about that as a risk. Are the changes to general relativity similar to that, or more like "relativity allows it but it has some huge barrier that we can't currently surpass"?
There are a few things that concern me. CRISPR proliferation along with gene-drive tech is a near and looming threat. It won't be long before large numbers of individuals are able to design and deploy genetically modified organisms with minimal regulation or oversight. The ways in which this can go wrong are limited only by your imagination.
In order of magnitude:
Psychological and political consequences of climate change leading to a significantly larger likelihood of botching AI and killing all present and future humans.
I can safely assert that if the expected consequences of climate change do show up, our society will become a lot dumber and a lot more likely to screw up on this.
Imagine mankind being as close to AGI as we will be in ten years, during a political situation similar to the Cold War. Things are likely to get at least that bad if most powers feel they have to contend with each other for resources, not to mention that we are likely to get a lot of nationalist governments once the global situation gets that chaotic and people start to panic. From there to "if we don't improve our AIs, the neighbouring hated enemy will, and they will crush us for sure" is a pretty brief step. Not to count "but we do have to fix this mess somehow, I'm sure you have already been careful enough and we are ready to do this", or "I'm sure we have been cautious enough, people are dying as we speak, we have to do this now".
Environmental and social consequences of climate change managing to collapse the existing civilisation by putting too much stress on the vulnerable systems needed to keep it alive (especially water and food), with that stress entering a positive feedback loop, resulting in permanent death for any present human who hasn't already been cryopreserved and hidden their body/brain somewhere safe.
From what I've seen, the models of climate change consequences are overly optimistic, simply because we have no real data on what actually happens when so many things change at once in the environment. These models describe consequences like one third of all animal and plant species going extinct, and one billion climate refugees. These are disasters an order of magnitude greater than anything we have directly observed so far, and it's a safe bet such big changes would produce a lot of other effects.
Industrial overproduction and overconsumption of resources.
To actually get a society that won't crash horribly from resource exhaustion before we get AGI right, with consequences similar to the above point, it's urgent that we dial back industrial production a lot.
We are overproducing so much and wasting so many resources and outputting so much unnecessary pollution that it's ridiculous.
If you actually read the IPCC's report, their suggested solutions are, very simplified: mostly don't waste, replace everything you can with cleaner energy sources, and invest heavily in trying to reabsorb excess greenhouse gasses. Most of the analyses I saw, instead, claimed the climate crisis was unsolvable because renewables couldn't possibly provide all the energy that we need RIGHT NOW, without really considering how much we are using compared to what we need.
All of this excess production and pollution isn't even doing anything positive; it's not improving our living standards in any way. It's just waste, and a way for a small percentage of very rich people/corporations to make an increased profit (which doesn't flow back in any way to the general populace, as wealth concentration and quality-of-life figures clearly show).
But if you mention that we need to use fewer resources and decrease industrial production, some people just seem to assume that you are a Luddite or that you want everyone to go back to the caves. I think there is something similar to cheering for the technology-plus-industry team here, and that's a pretty dangerous bias.
It seems like the nanotech we get soon isn't grey-goo-based but protein-folding-based. Risks from having solved protein folding and being able to custom-design proteins for specific ends seem not to be talked about.
I'm afraid that we're technologically developing too slowly and are going to lose the race to extraterrestrial civilizations that will either proactively destroy us on Earth or limit our expansion. One of the issues with this risk is that solving it runs directly counter to the typical solutions for AI-risk and other We're Developing Too Quickly For Our Own Good-style existential risks.
To prevent misaligned AI we should develop technology more slowly and steadily; MIRI would rather we develop AI 50 years from now than tomorrow to give them more time to solve the alignment problem. From my point of view that 50 years of slowed development may be what makes or breaks our potential future conflict with Aliens.
As technology level increases, smaller and smaller advantages in technological development time will necessarily lead to one-sided trounces. The military of the future will be able to absolutely crush the military from just a year earlier. So as time goes on it becomes more and more imperative that we increase our speed of technological development to make sure we never get trounced.
The risk from weaponizing space, with multiple countries having tungsten rods in orbit that are as destructive as nuclear weapons, seems underappreciated.
Instead of uploading humans to create a large mess of AIs, let's connect humans together as soon as it's safe to do so (maybe at first only the elderly and bedridden, eventually anyone who can wear a hat) then add machines and maybe even animals (sup elephants and dolphins) to create a single gigantic worldbrain. As computer simulations of brain tissue get better, the AI will go from being mostly human to mostly artificial. The death of a fully integrated human body wouldn't cause an interruption in that human's consciousness, because most of it would be distributed across the entire worldbrain.
I believe that extant technology could be used to do this, and I actually wrote up a technical proposal that I didn't disseminate (it wasn't great and I didn't see anyone being persuaded by it, so I trashed it). The technical risk is mostly in testing and in some assumptions about the way the brain works that I view as 'plausible' given the state of the art, but far from 'proven'.
There are a few things I'm worried about which I have not seen discussed much, and it makes me wonder what we're collectively missing.
This seems like a question which has likely been asked before, but my Google-fu did not find it.
You don't need to make a watertight case for something being important in order to share your concern; brief explanations are fine if the high bar of writing something detailed would put you off posting.