Views my own, not my employers.
The most likely scenario by far is that a mirrored bacterium would be outcompeted by other bacteria and killed by achiral defences due to [examples of ecological factors].
I think this is the crux of the different feelings around this paper. There are a lot of unknowns here. The paper does a good job of acknowledging this and (imo) it justifies a precautionary approach, but I think the breadth of uncertainty is difficult to communicate in e.g. policy briefs or newspaper articles.
It's a good connection to draw - I wonder if increased awareness of AI is sparking increased awareness of safety concepts in related fields. It's a particularly good sign for awareness of, and action on, the safety concepts that sit at the overlap between AI and biotechnology.
I think you're right that mirror life offers very little benefit relative to its risks, which is not how AI is perceived - on top of the general truth that biotech is harder to monetise.
Can you explain more about why you think [AGI requires] a shared feature of mammals and not, say, humans or other particular species?
It's very field-dependent. In ecology & evolution, advisor-student fit is very influential and most programmes are direct admit to a certain professor. The weighting seems different for CS programs, many of which make you choose an advisor after admission (my knowledge is weaker here).
In the UK it's more funding dependent - grant-funded PhDs are almost entirely dependent on the advisor's opinion, whereas DTPs/CDTs have different selection criteria and are (imo) more grades-focused.
From discussing AI politics with the general public [i.e. not experts], it seems that public perception of AI progress is splitting into two parallel camps:
A) Current AI progress is sudden and warrants a response (either acceleration or regulation)
B) Current AI progress is a flash in the pan or a nothingburger.
(This is independent from responding to hypothetical AI-in-concept.)
These perspectives are largely factual rather than ideological. In conversation, the active tension between these two incompatible perspectives is really obvious, which makes it hard to hold a meaningful conversation without coming across as overbearing or accusatory.
Where does this divide come from? Is it the image hangover from the public's interaction with the first ChatGPT? How can we bridge this when speaking to the average person?
It is worth noting that UKRI is in the process of changing its language to Doctoral Landscape Awards (replacing DTPs) and Doctoral Focal Awards (replacing CDTs). The announcements for BBSRC and NERC have already been made, but I can't find what EPSRC is doing.
I agree that evolutionary arguments are frequently confused and oversimplified, but your argument is proving too much.
[the difference between] AI and genetic code is that genetic code has way less ability to error-correct than basically all AI code, and it's in a weird spot of reliability where random mutations are frequent enough to drive evolution, but not so frequent as to cause organisms to outright collapse within seconds or minutes.
This "weird spot of reliability" is itself an evolved trait, and even with the effects of mutation rate variation between species, the variation within populations is heavily constrained (see Lewontin's paradox of diversity). Even discounting purely genetic, code-based factors, the amount of plasticity in behaviour (perhaps analogous to search, in AI terms) is also an evolvable trait (see canalisation) - I suspect the AI field already has terms for this, but it's not obvious to me how best to link the two ideas together. I'm more curious about the evolutionary arguments around value drift, and I don't see an a priori reason that these ideas don't apply there.
It would be good if we could understand the conditions under which greater plasticity/evolvability is selected for, and whether we expect its effects to occur in a timeframe relevant to near-term alignment/safety.
Another reason is that effective AI architectures can't go through simulated evolution, since that would use up too much compute for training to work (we forget that evolution had, as a lower bound, somewhere between 10^46 and 10^48 FLOP to get to humans).
It's not obvious to me that this is a sharp lower bound, particularly when AIs are already receiving the benefits of prior human computation in the form of culture. Human evolution had to achieve the hard part of reifying the world into semantic objects, whereas AI has a major head start. If language is the key idea (as some have argued), then I think there's a decent chance the true lower bound is smaller than this.
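To make the scale of that claimed lower bound concrete, here is a rough back-of-the-envelope comparison; the frontier-training-run figure (~10^25 FLOP) is my own illustrative assumption, not a number from the thread:

```python
import math

# All figures are order-of-magnitude assumptions for illustration only.
evolution_flop_low = 1e46    # claimed lower bound on evolution's "compute"
evolution_flop_high = 1e48   # claimed upper end of that range
frontier_run_flop = 1e25     # rough assumed scale of a large modern training run

# How many orders of magnitude separate evolution from a single training run?
gap_low = math.log10(evolution_flop_low / frontier_run_flop)
gap_high = math.log10(evolution_flop_high / frontier_run_flop)

print(f"Evolution exceeds the assumed training run by "
      f"{gap_low:.0f} to {gap_high:.0f} orders of magnitude")
```

Even if the true lower bound were many orders of magnitude smaller than 10^46 FLOP (because culture and language let AI skip the hard reification step), the remaining gap to current training runs would still be large - which is why where exactly the bound sits matters for the argument.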
There's a connection to the idea of irony poisoning here, and I do not think it is good for the person in question to pretend to hold extremist views. Separately, it's terrible optics, which creates a difficult tension with this website's newfound interest in doing communications/policy/outreach work.
I think PauseAI would be more effective if it could mobilise people who aren't currently associated with AI safety, but from what I can see it largely draws from the same base as EA. It is important to involve as wide a section of society as possible in the x-risk conversation, and activism could help achieve this.