Context for posting link:
Sixteen months ago, I read a draft by a researcher whom few in AI Safety know about, Forrest Landry.
Forrest claimed something counter-intuitive and scary about AGI safety. He argued toward a stark conclusion, claiming he had nailed the coffin shut. I felt averse to the ambiguity of the prose and the (self-confirming?) confidence of the author.
There was no call to action – if the conclusion was right, were we not helpless to act?
Yet, profound points were made and stuck. I could not dismiss it.
But busy as I was, running research programs and all that, the matter kept falling by the wayside. It took a mutual contact – who had passed on the draft, and had their own doubts – to encourage me to start summarising the arguments for LessWrong.
Just before, I had tried to list where our like-minded community fails to "map the territory". I found at least six blindspots in which we tend to overlook aspects relevant to whether the work we scale up, including in AI safety, ends up having a massive negative impact. Yet if we could bridge the epistemic gap to different-minded outsiders, they could point out those aspects.
Forrest’s writings had a hippie holistic vibe that definitely marked him as a different-minded outsider. Drafting my first summary, I realised the arguments fell under all six blindspots.
Forrest wrote back feedback, which raised new questions for me. We set up a call.
Eleven months ago, Forrest called. It was late evening. I said I wanted to probe the arguments. Forrest said this would help me deal with common counter-arguments, so I would know how to convince others in the AI Safety community. I countered that my role was to find out whether his arguments made sense in the first place. We agreed that in practice, we were aligned.
Over three hours, Forrest answered my questions. Some answers made clear sense. Others slid past like a word salad of terms I could not grok (terms seemed to be defined with respect to each other). This raised new questions, many of which Forrest dismissed as side-tangents. It felt like being forced blindly down a narrow valley of argumentation – by some unknown outsider.
That was my perspective as the listener. If you click the link, you will find Forrest’s perspective as the explainer. Text is laid out in his precise research note-taking format.
I have probed at, nuanced, and cross-checked the arguments to understand them deeply. Forrest’s methods of defining concepts and their argumentative relations turned out sensible – they felt weird at first because of my unfamiliarity with them.
Now I can relate from the side of the explainer. I call with technical researchers who are busy, impatient, disoriented, counter-argumentative, and straight-up averse to getting into this shit – just like I was!
The situation would be amusing, if it were not so grave.
If you want to probe at the arguments yourself, please be patient – perhaps start here.
If you want to cut to the chase instead – say, obtain a short, precisely formalised, and intuitively followable summary of the arguments – this is not going to work.
Trust me, I tried to write seven summaries.
Each needed a lot of one-on-one clarification of the premises, term definitions, and reasoning steps before it became comprehensible even to the few people patient enough to ask clarifying questions, paraphrase the arguments back, and listen curiously.
Better to take months to dig further, whenever you have the time, like I did.
If you want to inquire further, there will be a project just for that at AI Safety Camp.
I get that this comes across as a strong claim, because it is.
So I do not expect you to buy that claim in one go (it took me months of probing the premises and the logic of Forrest’s arguments). It’s reasonable and epistemically healthy to be curiously skeptical at the onset, and try to both gain new insights from the writing and probe for inconsistencies.
Though I must say I’m disappointed that, based on your light reading, you dismiss Forrest’s writings (specifically, the few pages you read) as crankery. Let me come back to that point.
Excerpting from Forrest's general response:
For one thing, it is not just the second law of thermodynamics that "prohibits" (ie, 'makes impossible') perpetual motion machines – it is actually the notion of "conservation law" – ie, that there is a conservation of matter and energy, and that the sum total of both together, in any closed/contained system, can neither be created nor destroyed. This is actually a much stronger basis on which to argue, insofar as it is directly an instance of an even more general class of concept, ie, one of symmetry.
All of physics – even the notion of lawfulness itself – is described in terms of symmetry concepts. This is not news, it is already known to most of the most advanced theoretical working physicists.
Basically, what [Paul] suggests is that anything that asserts or accepts the law of the conservation of matter and energy, and/or makes any assertion based strictly on only and exactly such conservation law, would be a categorical example of "an exaggerated claim", and that therefore he is suggesting that we, following his advice, should regard conservation law – and thus actually the notion of symmetry, and therefore also the notion of 'consistent truth' (ie, logic, etc) as an "insufficient basis" of proof and/or knowing.
This is, of course, too high a standard, insofar as, once one is rejecting of symmetry, there is no actual basis of knowing at all, of any kind at all, beyond such a rejection – there is simply no deeper basis for the concept of truth that is not actually about truth. That leaves everyone reading his post implicitly with him being the 'arbiter' of what counts as "proof". Ie, he has explicitly declared that he rejects the truth of the statement that it is "100% possible to know...", (via the laws of conservation of matter and energy, as itself based on only the logic of symmetry, which is also the basis of any notion of 'knowing'), "...that real perpetual motion machines are 100% impossible" to build, via any engineering technique at all, in the actual physical universe.
The reason that this is important is that the same notion – symmetry – is also the very most essential essence of what it means to have any consistent idea of logical truth. Ie, every transition in every math proof is a statement in the form "if X is true, then by known method Y, we can also know that Z is true". Ie, every allowed derivation method (ie, the entire class (set 'S') of accepted/agreed Y methods allowable for proof) is effectively a kind of symmetry – it is a 'truth preserving transformation', just like a mirror or reflection is a 'shape preserving transformation'. Ie, for every allowable transformation, there is also an allowed inverse transformation, so that "If Z is true, then via method inverse Y, we can also know that X is true". This sort of symmetry is the essence of what is meant by 'consistent' mathematical system.
It is largely because of this common concept – symmetry – that both math and physics work so well together.
Yet we can easily notice that anything that is a potential outcome of “perpetual general benefit machines” (ie. AGI) results in all manner of exaggerated claims.
Turning to my response:
Perhaps, by your way of defining it, the statement “100% possible to know” does not only mean that a boolean truth is consistently knowable within a model premised on repeatedly empirically verified (ie. never once known to be falsified by observation) physical or computational theory?
Rather, perhaps the claim “100% possible to know” would, in your view, additionally require the unattainable completeness of past and future observation-based falsification of hypotheses (Solomonoff induction in a time machine)? Of course, we can theorise about how you model this.
I would ask: how then, given that we do not and cannot have "Solomonoff induction in a time machine", can we soundly establish any degree of probability of knowing? To me, this seems like theorising about the extent to which idealised Bayesian updating would change our minds, without our minds having access to the idealised Bayesian updating mechanism.
So to go back to your analogy, how would we soundly prove, by contradiction, that a perpetual motion machine is impossible?
My understanding is that you need more than consistent logic to model that. The formal model needs to be grounded in empirically sound premises about how the physical world works – in this case, the second law of thermodynamics based on the even more fundamental law of conservation of matter and energy.
You can question the axioms of the model – maybe if we collected more observations, the second law of thermodynamics would turn out not to hold in some cases? Practically, that’s not a relevant question, because all we have to go on are the observations we have so far. In theory, the question of receiving more observations is not relevant to whether you can prove (100% soundly know) within the model that it is 100% impossible for the machine to work into perpetuity – yes, you can.
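To make the shape of that proof-by-contradiction concrete, here is a minimal sketch of the standard textbook argument (my own illustration, not Forrest’s formulation), for a machine that is supposed to output net work $W > 0$ over a closed cycle while taking in no energy ($Q = 0$):

$$\Delta U_{\text{cycle}} = Q - W = 0 \;\Rightarrow\; W = Q = 0,$$

which contradicts $W > 0$. Within any model that takes conservation of energy as an axiom, such a machine is therefore impossible; collecting further observations could only challenge the axiom, never the derivation.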
Similarly, take the proposition of an artificial generally-capable machine (“AGI”) working in “alignment with” continued human existence into perpetuity. How would you prove that proposition to be impossible, by contradiction?
To prove based on sound axioms that the probability of AGI causing outcomes out of line with a/any condition needed for the continued existence of organic life converges on 100% (in theory over infinity time; in practice actually over decades or centuries), you would need to ground the theorem in how the physical world works.
I imagine you reacting skeptically here, perhaps writing back that there might be future observations that contradict the conclusions (like everyone not dying) or updates to model premises (like falsification of information signalling underlying physics theory) with which we would end up falsifying the axioms of this model.
By this use of the term “100% possible to know”, though, I guess it is then also not 100% possible to know that 2 + 2 = 5 is 100% impossible?
Maybe we’re wrong about the axioms of mathematics? Maybe at some point mathematicians falsify one of the axioms as not soundly describing how truth content is preserved through transformations? Maybe you actually have not seen anyone yet write out the formal reasoning steps (ie. you cannot tell yet if the reasoning is consistent) for deriving 2 + 2 = 4? Maybe you misremember the precise computational operations you or other mathematicians performed before and/or the result derived, leading you to incorrectly conclude that 2 + 2 = 4?
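For contrast, here is what those formal reasoning steps look like when written out from the standard Peano-style definitions (added purely as illustration, with $2 := S(S(0))$, $4 := S(S(S(S(0))))$, and addition defined by $a + 0 = a$ and $a + S(b) = S(a + b)$):

$$2 + 2 = 2 + S(S(0)) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(S(S(S(0)))) = 4.$$

Each step applies one definition; within that model, the truth of $2 + 2 = 4$ (and the impossibility of $2 + 2 = 5$) is not a matter of degree.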
I’m okay with this interpretation or defined use of the statement “100% possible to know”. But I don’t think we can do much regarding knowing the logical truth values of hypothetical outside-of-any-consistent-model possibilities, except discuss them philosophically.
That interpretation cuts both ways, btw. Clearly then, it is far from 100% possible to know whether any specific method(s) would maintain the alignment of generally-capable self-learning/modifying machinery existing and operating over the long term (millennia+) such that it does not cause the total extinction of humans.
To be willing to build that machinery, or in any way lend public credibility or resources to research groups building that machinery, you’d have to be pretty close to validly and soundly knowing that it is 100% possible that the machinery will stay existentially safe to humans.
Basically, for all causal interactions the changing machinery has with the changing world over time, you would need to prove (or guarantee above some statistical threshold) that the consequent (final state of the world) “humans continue to exist” can be derived as a near-certain possibility from the antecedent (initial state of the world).
Or inversely, you can do what is information-theoretically actually much easier: prove that, while many different possible final states of the world could result from the initial state of the world, the one state excluded from all those possibilities is “humans continue to exist.”
Morally, we need to apply the precautionary principle here – it is much easier for new large-scale technology to destroy the physical complexity needed for humans to live purposeful and valued lives than to support a meaningful increase in that complexity.
By that principle, the burden of proof – that the methods you publicly communicate could or would actually maintain alignment of the generally-capable machinery – is on you.
You wrote the following before in explaining your research methodology:
“But it feels to me like it should be possible to avoid egregious misalignment regardless of how the empirical facts shake out — it should be possible to get a model we build to do at least roughly what we want.”
To put it frankly: does the fact that you write “it feels like” let you off the hook here?
Ie. since you were epistemically humble enough to not write that you had any basis to make that claim (you just expressed that it felt like this strong claim was true), you have a social license to keep developing AGI safety methods in line with that claim?
Does the fact that Forrest does write that he has a basis for making the claim – after 15 years of research and hundreds of dense explanatory pages (to try to bridge the inferential gap to people like you) – that long-term safe AGI is 100% impossible, mean he is not epistemically humble enough to be taken seriously?
Perhaps Forrest could instead write “it feels like we cannot build an AGI to do and keep doing roughly what we want over the long term”. Perhaps then AI Safety researchers would resonate with his claim and take it as true at face value? Perhaps they’d be motivated to read his other writings?
No, the social reality is that you can claim “it feels that making the model/AGI work roughly like we want is possible” in this community, and readers will take it as prima facie true.
Forrest and I have claimed – trying out various pedagogical angles and ways of wording – that “it is impossible to have AGI work roughly as we want over the long term” (not causing the death of all humans for starters). So far, of the dozens of AI safety people who had one-on-one exchanges with us, most of our interlocutors reacted skeptically immediately and then came up with all sorts of reasons not to continue reading/considering Forrest's arguments. Which is exactly why I put up this post about "presumptive listening" to begin with.
You have all of the community’s motivated reasoning behind you, which puts you in the socially safe position of not being pressed any time soon by more than a few others in the community to provide a rigorous basis for your “possibility” claim.
Slider's remark that your commentary seems to involve an isolated demand for rigour resonated with me. The phrase in my mind was "double standards". I'm glad someone else was willing to bring this point up to a well-regarded researcher, before I had to.
I will clarify a key distinction between building an AGI (ie. not just any AI) and having a kid:
One thing about how the physical world works is that in order for code to be computed, the computation needs to take place through a physical substrate. This is a necessary condition – inputs do not get processed into outputs through a platonic realm.
Substrate configurations in this case are, by definition, artificial – as in artificial general intelligence. This is distinct from the organic substrate configurations of humans (including human kids).
Further, the ranges of conditions needed for the artificial substrate configurations to continue to exist, function and scale up over time – such as extreme temperatures, low oxygen and water, and toxic chemicals – fall outside the ranges of conditions that humans and other current organic lifeforms need to survive.
Hope that clarifies a long-term-human-safety-relevant distinction between building AGI (that continues to scale) and having a kid (who grows up to adult size).
Paul, you read one overview essay where Forrest briefly outlined how his proof method works, in an analogy to a theory whose machinery a mathematician like you already knows about and understands (Galois theory). Then, as far as I can tell, you concluded that since Forrest did not provide the explicit proof (that you expected to find in that essay) and since the conclusion (as you interpret it) seemed unbelievable, the “entire” scientific community would (according to you) probably consider his writing crankery.
By that way of "discerning" new work: if Kurt Gödel had written an outline for researchers in the field to understand his unusual methodology, with the concise conclusion “it is 100% knowable that it is 100% impossible for a (sufficiently expressive) formal axiomatic system to be both consistent and complete”, then a well-known researcher in that (Hilbert’s) field would have read it, concluded that it had not immediately given them a proof and that the conclusion was unbelievable (such a strong statement!), and therefore that Gödel was probably a crank who should be denounced publicly in the forum as such.
Your judgement seems based on first impressions and social heuristics. On one hand you admit this, and on the other hand you seem to have no qualms with dismissing Forrest’s reasoning a priori.
In effect, you are acting as a gatekeeper – "protecting" others in the community from having to be exposed and meaningfully engage with new ideas. This is detrimental to research on the frontiers that falls outside of already commonly-accepted paradigms (particularly paradigms of this community).
The red flag for us was when you treated 'proof' as a probable opinion based on your personal speculative observation (as a proxy), rather than as a finite boolean notion of truth based on valid and sound modeling of verified known world states.
Note by Forrest on this:
I notice also that Gödel’s work, if presented for the first time today, would not be counted by him [Paul] as "a very clear argument". The Gödel proof, as given then, was actually rather difficult and not at all obvious. Gödel had to construct an entire new language and self-reference methodology for the proof to even work. The inferential distance for Gödel was actually rather large, and the patience needed to understand his methods, which were not at all common at the time, would not have passed the "sniff test" being applied by this person here, in the modern era, where the expectation is that everything can be understood on a single pass reading one post on some forum somewhere while on the way to some other meeting. Modern social media simply does not work well for works of these types. So the Gödel work, and the Bell Theorem, and/or anything else similarly both difficult and important, simply would not get reviewed by most people in today's world.
Noting that your written response also acts as the usual filter for us. It does not yet show willingness to check the actual form or substance of the arguments. This marks someone who is not available to reason with (they probably have no time, patience, or maybe no actual interest) but who nonetheless seems motivated to signal to their ingroup that they hold an opinion about the outgroup.
The claim that 'nothing is knowable for sure' and that we should 'believe in the possibilities' (all of it maybe good for humanity) is part of the hype cycle. It ends up being marketing. So the crank accusation ends up being the filter of who believes the marketing and who does not – who is in the in-crowd and who are 'the outsiders'.
Basically, he accepts that a violation of symmetry would be (should be) permissible – hence allowing maybe at least some slight possibility that some especially creative genius type engineering type person might someday eventually actually make a working perpetual motion machine, in the real universe. Of course, every crank wants to have such a hope and a dream – the hype factor is enormous – "free energy!" and "unlimited power!!" and "no environmental repercussions" – utopia can be ours!!!
You only need to believe in the possibility, and reject the notion of 100% certainty. Such a small cost to pay. Surely we can all admit that sometimes logic people are occasionally wrong?
The irony of all of this is that the very notion of "crank" is someone who wants dignity and belonging so badly that they will easily and obviously reject logic (ie, symmetry), such that their 'topic arguments' have no actual merit. Moreover, given that a 'proof' is something that depends on every single transformation statement actually being correct, even a single clear rejection of their willingness to adhere to sensible logic is effectively a clear signal that all other arguments (and communications) by that person – now correctly identified as the crank – are to be rejected, as their communications are actually about social signaling (a kind of narcissism or feeling of rejection – the very essence of being a crank) rather than about truth. Hence, once someone has made even one single statement which is obviously a rejection of a known truth, ie, that they do not actually care about the truth of their arguments, then everything they say is to be ignored by everyone else thereafter.
And yet the person making the claim that my work is (probably) crankery has actually done exactly that: engaged in crankery, by their own process. He has declared that he rejects the truth of the statement (and moreover has very strongly suggested that everyone else should also reject the idea) that it is 100% possible to know, via the laws of conservation of matter and energy, (as itself based on only the logic of symmetry, which is also the basis of the notion of 'knowing'), that real perpetual motion machines are 100% impossible to build, via any engineering technique at all, in the actual physical universe.
In ancient times, a big part of the reason for spicy foods was to reject parasites in the digestive system. In places where sanitary conditions are difficult (warmer climates encourage food spoilage), spicy foods tend to be more culturally common. Similar phenomena can occur in communication – 'reading' and 'understanding' as a kind of mental digestive process – via the use of 'spicy language'. The spice I used was the phrase "It is 100% possible to know that X is 100% impossible". It was put there by design – I knew and expected it would very likely trigger some types of people, and thus help me to identify at least a few of the people who engage in social signaling over rigorous reasoning – even if they are also the ones making the same accusation of others. The filter goes both ways.
So that leaves your last point, about self-awareness:
Forrest is not an identifiable, actively contributing member of “AI safety” (and therefore also not part of our ingroup).
Thus, Forrest pointing out that there are various historical cases where young men kept trying to solve impossible problems — for decades, if not millennia — all the while claiming those problems must be possible to solve after all through some method, apparently says something about Forrest and nothing at all about there being a plausible analogy with AGI Safety research…?