Previously: Based Beff Jezos and the Accelerationists

Based Beff Jezos, the founder of effective accelerationism, delivered on his previous pledge, and did indeed debate what is to be done to navigate into the future with a highly Worthy Opponent in Connor Leahy.

The moderator almost entirely stayed out of it, and intervened well when he did, so this was a highly fair arena. It’s Jezos versus Leahy. Let’s get ready to rumble!

I wanted to be sure I got the arguments right and fully stated my responses and refutations, so I took extensive notes including timestamps. On theme for this debate, this is a situation where you either do that while listening, or once you have already listened you are in practice never going to go back.

That does not mean you have to read all those notes and arguments. It is certainly an option, I found it interesting and worthwhile to study everything, if only sometimes on the level of an anthropologist, and to be sure I had gone the extra mile and had not missed anything.

There is however another option. Before I give my detailed notes, I will attempt a summary of the important takeaways from the debate, and attempt to build a model of what Jezos and Leahy were claiming and advocating. You can then check the transcript and notes for more details as desired.

Or you can Read the Whole Thing. If you do, I recommend skipping over the summary until after you have read the details, to see if your overall impressions match my own.

Actually Based Beff Jezos (ABBJ)

We were introduced in this debate to a character one could call Actually Based Beff Jezos (Analytical? Academic? Antihero? Apprehensive? Aligned?), or the Good Jezos, or the Motte Jezos, or the Reasonable Jezos.

Sometimes in this debate he is the one talking. Sometimes he is not.

I like Actually Based Beff Jezos. We still very much have some issues, I think he is still importantly wrong about some very central things, but this would be someone I could be happy to work with or seek truth alongside in various ways. Actually Based Beff Jezos talks price.

Actually Based Beff Jezos starts from a bunch of positions that he takes farther than I would, but where I am much closer to his position than I am to the mainstream position on these questions:

  1. He is a soft-spoken physicist and libertarian.
  2. He has a deeply justified skepticism of government and institutions.
  3. He is correctly wary of regulations that in practice will never get reexamined.
  4. He is correctly wary of regulatory capture and its inevitability over time.
  5. He understands the long term benefits of sustained economic growth.
  6. He understands the long term benefits of free markets.
  7. He understands the dangers highlighted by the Socialist Calculation Debate.
  8. He sees the real almost limitless upside potential for development of AI.
  9. He sees the real catastrophic downside if AI was used to enable tyranny.
  10. He calls upon us to fix our institutions so we have a functioning civilization. He calls for more competition between and renewal of institutions.
  11. He by default favors actual real competition over appearing to be nice.

He also acknowledges these (and other) things that I believe to be true:

  1. We should choose actions based on what method produces the best outcomes.1
  2. Humans and civilization are good and we should care about them.
  3. People are allowed to have different terminal values.
  4. Some technologies are net negative in their consequences to the point where it is good to restrict or ban them.
  5. The optimal amount of regulation and hierarchical organization and power in general is importantly not zero.
  6. He favors anti-monopoly legislation, and other regulations as well if they favor growth.
  7. Decentralized systems are often impossible to steer and out of our control.
  8. A fully decentralized system is not going to happen.
  9. AI is moving super fast and we do not know where it is headed.
  10. That those doing maximum growth at any cost will wipe out those not doing it.
  11. YIMBY. You love to see it.
  12. Covid-19 was probably a lab leak. What conclusions one draws can vary.
  13. The odds are against us and the situation is grim.2
  14. We need a balance between order and disorder, not max entropy.
  15. e/acc is effectively treating Physics as their (primary or only) God.3

This is not a complete list. But it is clear that in terms of our models of how the world works, we are actually Not So Different, and Connor observes this as well. All three of our models are remarkably similar, although with very impactful and important differences that change key decisions.

Actually Based Beff Jezos (ABBJ) also poses some excellent questions to Connor, many of which lack good answers.

Bold Based Beff Jezos (BBBJ)

We were also introduced in this debate to a character one could call Bold (Bailey? Borderline? Balanced? Biased? Blunt? Brash? Baiting? Bro?4) Based Beff Jezos, or the Neutral Jezos, or the Civil Jezos.

I like this person a lot less than Actually Based Beff Jezos, especially given his tendency to conflate himself with ABBJ. I wish he would pick a side and fully own or disown each of his positions, would more often talk price, not get too carried away with the physics metaphors and so on.

He is still, however, someone who I would welcome into a civil discussion. It would be great if this was the Based Beff Jezos that was driving the e/acc movement, and more people in such circles acted like this. I could work with that. Alas.

A term used often in the discussion by both participants was ‘bait and switch,’ as opposed to the old motte and bailey.

This was happening a lot. Jezos would claim something very reasonable one minute, then switch to making a far stronger unreasonable (or at least, I believe, false) claim the next, and often switch back and forth several times.

In several cases he says close parallels of ‘all we are saying is’ or ‘all we are asking for is’ when this is very clearly not the case if you expand your search by even a few minutes.

Then there is a combination of assertions from BBBJ, and also some assertions that still might be thought to come from ABBJ, where the whole operation goes off the rails.

This list is highly incomplete, but here are the key places where I feel BBJ went off the rails (or at least was saying that which is not) in this debate with his assertions, and which version of him was saying what at the time. Apologies for the overlap, but I want to be sure I hammer these home properly:

  1. Both BBJs fundamentally seem to be failing to make the is/ought distinction. This is the naturalistic fallacy. In this case, the nature in question is physics itself.
  2. Thus, many times, he conflates ‘the laws of physics’ or ‘what physics wants or rewards’ or similar concepts to what we should value and work towards.
  3. Translating into my terminology because I think it is better here: BBBJ does not believe there is ever any way to fight against Moloch, to cooperate in game theory problems and find a superior equilibrium. If a strategy is available that, if allowed to be used, would triumph over time, there is no fighting it without losing. Humans and their organizations will always be maximally greedy.
  4. BBBJ asserts that you must only want what ‘physics wants’ in this sense, similarly to how a Christian thinks you must let Jesus into your heart, whereas ABBJ thinks you get to choose your terminal values.
  5. BBBJ actually seems to be completely fine with scenarios that wipe out humanity. ABBJ is not fine with that, although he is remarkably willing to take that risk, and is totally fine with human loss of control over the future. Seeks it, even.
  6. ABBJ says we should look to maximize the growth of free energy over time. BBBJ says to do this absolutely, with no other considerations, no matter the consequences. ABBJ will admit that there are exceptions in theory but will deny that one can exist in practice, and will find ways to say that anything he supports is also pro-growth. I believe this is based on two mistakes. One, Jezos is confusing is/ought, and two, he is confusing the metric with the thing he wants to measure.
  7. When confronted with examples where the tails come apart and maximizing free energy would very clearly not maximize the things Jezos actually cares about, Jezos says such scenarios are unrealistic and would not come about, while maintaining ambiguity over whether he is fully confusing is/ought and actually thinks the free energy is a terminal value or not.5
  8. Both ABBJ and BBBJ often take physics concepts and laws, and try to apply them directly to social dynamics and other decisions as if they were still working as strict laws, where instead I would say they can only offer intuition pumps and some evidence.
  9. ABBJ wants to reform our institutions, BBBJ says this is hopeless.
  10. BBBJ asserts that any institution that humans control, or any control mechanism at all, will over time fall to malintent, and therefore humans cannot be allowed any control over our future, only full decentralization and uncontrolled competition is possible. ABBJ says we need hierarchical control systems that balance order and disorder.
  11. BBBJ’s fear of regulatory capture and regulatory ramp-up and inertia is absolute and you never do it for any reason, whereas ABBJ says ‘all we want is to take careful consideration before acting’ and also to ‘wait for a stable situation’ that he does not believe will ever arise.
  12. BBBJ believes that regulations and grants of authority are fully one way doors and the primary way that humans lose control over the future. Both BBJs treat those in power as always becoming over time alien beings that do not value what we value even more than the way I think of future AIs as not valuing what we value, and just as capable of asserting permanent control if AI is involved.
  13. ABBJ believes that it is possible for humans to augment our intelligence sufficiently to ‘make us more of a player at the big boy table’ after ASI is on the scene, that we would still meaningfully intellectually contribute and matter. He believes that human economic activity will shrink as a portion of overall activity once ASI is available to all and roaming free to compete, but that our activity in absolute terms will not shrink, and that balance of power and ‘AI mercenaries’ and other adversarial dynamics will keep us safe. BBBJ is asserting6 that these scenarios will work out and be great, whereas ABBJ is asserting that the alternatives are even worse and that the upside is so high we must roll the dice.
  14. Jezos asserts that the push for regulations and safety are driven by the interests of a few Big Tech companies (presumably OpenAI/Anthropic/Google) looking for regulatory capture as their primary or sole motivation, that these are the people writing the laws. The safety movement in general, and EA, are in cahoots, not genuine, acting in bad faith. Connor did not challenge this. I believe this to be mostly false.
  15. Jezos also said the recent incident at OpenAI was a ‘decapitation attempt’ by safety advocates, which he might or might not believe himself, but which we know was simply not the case, as I have written. Again Connor did not challenge.
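Point 3 above is worth pausing on. Whether repeated interaction can sustain cooperation against maximal greed is a standard question in game theory, and a toy iterated prisoner’s dilemma (all payoffs and strategies here are illustrative stand-ins, not a model of anything Jezos said) shows that a superior cooperative equilibrium can at least sometimes hold:

```python
# Toy iterated prisoner's dilemma: a sketch of the claim that repeated
# interaction can sustain a cooperative equilibrium that one-shot
# "maximal greed" cannot. Payoffs and strategies are illustrative.

# Payoff matrix: (my payoff, their payoff) for (my move, their move).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection, the "Moloch" outcome
}

def play(strat_a, strat_b, rounds=100):
    """Run an iterated game, returning total payoffs for each side."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees the opponent's history
        b = strat_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not opp_history else opp_history[-1]

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))      # cooperation sustained
    print(play(always_defect, always_defect))  # the defection trap
    print(play(tit_for_tat, always_defect))    # defector gains little
```

Over 100 rounds, mutual tit-for-tat earns 300 points each, mutual defection earns 100 each, and a defector against tit-for-tat gains only a small edge (104 versus 99). The point is narrow: superior equilibria can exist and be stable under repetition, which is exactly what BBBJ denies. Whether AI competition actually resembles this game is the real dispute.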

A good summary of key points that I felt were being claimed might be:

  1. No is/ought distinction, or at most a highly confused one.
  2. Competition and maximum growth are inevitable, you can’t fight it.
  3. Maximizing growth and free energy is The Way to maximize utility.
  4. Regulations are essentially never taken back, always captured.
  5. Humans having control over the future at all inevitably means tyranny.
  6. AIs having control over the future, or humans losing control over it, on the other hand, could work out fine.
  7. AIs will never render humans economically uncompetitive and incapable of survival, even under this intense competition.
  8. Many (most? all?) important opponents of this agenda are in bad faith.

Again, none of these lists are complete even based only on the debate. While there were a lot of things that were said several times, there really is a lot going on here.

Caustic Based Beff Jezos (CBBJ)

We must also take note of the third face, the one we see on Twitter, the Caustic (Combative? Condescending? Careless? Cruel? Core? Copious? Callous? Crazy? Certifiable?), or the Evil Jezos, the Warring Jezos, the Alter Ego.

That guy is, to put it exceedingly generously, a lying, trolling, raging a*******.

That person did not show up to the debate. He does, however, continuously show up on Twitter.

His thesis is something like:

  1. All hail the thermodynamic God, growth, free energy. The humans likely die.
  2. Accelerate. No, more than that. No, more than that.
  3. Any restriction of any kind, anything holding back technology of any kind, evil.
  4. In particular, the government should never lift a finger to interfere in any way.
  5. Our vibe is the superior vibe. We are therefore good, if you oppose us you are evil.
  6. Technology cannot be stopped and also you evil bastards might stop it.
  7. Accuse those opposed to you of anything and everything including corruption.
  8. Be as rude and condescending and vile as possible. Meme hard. It helps.
  9. Claim credit for everything and always say that you are winning. We always win.
  10. If you agree with all this directionally, put e/acc in your bio. Do as I do.

Using this strategy, he has, as I noted previously, assembled a motley crew of malcontents willing to indeed put the label in their bio and lend their support, with a broad coalition of reasons for doing so. From my previous post:

E/acc has successfully raised its voice to such high decibel levels by combining several mutually exclusive positions into one label in the name of hating on the supposed other side:

  1. Those like Beff Jezos, who think human extinction is an acceptable outcome.
  2. Those who think that technology always works out for the best, that superintelligence will therefore be good for humans.
  3. Those who do not believe actually in the reality of a future AGI or ASI, so all we are doing is building cool tools that provide mundane utility, let’s do that.
  4. Related to previous: Those who think that the wrong human having power over other humans is the thing we need to worry about.
    1. More specifically: Those who think that any alternative to ultimately building AGI/ASI means a tyranny or dystopia, or is impossible, so they’d rather build as fast as possible and hope for the best.
    2. Or: Those who think that even any attempt to steer or slow such building, or sometimes even any regulatory restrictions on building AI at all, would constitute a tyranny or dystopia so bad that any alternative path is better.
    3. Or: They simply don’t think smarter-than-human, more-capable-than-human intelligences would be the ones holding the power; the humans would stay in control, so what matters is which humans those are.
  5. Those who think that the alternative is stagnation and decline, so even some chance of success justifies going fast.
  6. Those who think AGI or ASI is not close, so let’s worry about that later.
  7. Those who want to, within their cultural context, side with power.
  8. Those who like being an edge lord on Twitter.
  9. Those who personally want to live forever, and see this as their shot.
  10. Those deciding based on vibes and priors, that tech is good, regulation bad.

The degree of reasonableness varies greatly between these positions.

I believe that the majority of those adopting the e/acc label are taking one of the more reasonable positions. If this is you, I would urge you, rather than embracing the e/acc label, to instead, if desired, state your particular reasonable position and your reasons for it, without conflating it with other contradictory and less reasonable positions.

Or, if you actually do believe, like Beff Jezos, in a completely different set of values from mine? Then Please Speak Directly Into This Microphone.

So which Based Beff Jezos is real in which senses? We cannot know that, nor can we know that about his followers. We do know how they behave in public.

The good news, again, is that this character did not show up for the debate. Which resulted in a less eventful and dramatic discussion, but a more fruitful one.

What about Connor Leahy?

Connor Leahy takes a consistent, straightforward (and, in relative terms, extreme) position on the issues of technological development and AGI.

  1. He believes that if we continue business as usual, with no regulation, slowdown or drastic improvement in our safety efforts, we are highly doomed. In his model it would take a century of ASI safety research to feel highly confident in safety.
  2. He believes this is bad, yo, so we should not do this.
  3. He is willing to pay a large price in mundane utility not created, and in top down control if necessary, under these circumstances.
  4. He also believes that AGI and ASI will become possible very soon, his timelines are very short, his thresholds for when systems could be existentially dangerous are very low.
  5. He believes that we should work to improve our institutions and civilization. He agrees with broadly libertarian instincts in general but wants to proceed in spite of that, because we have no alternatives.
  6. His primary ask is liability for AI developers and deployers, but he would consider a hard compute limit for frontier models as good or better. If able to choose, he wants to set it lower than I would set it, at between 10^23 and 10^25 flops, whereas I am glad they set the reporting threshold (and it is only a reporting threshold) at 10^26.
  7. He often emphasizes that we should not have a system where whether AGI goes well or not depends on whether the CEO of a tech company is nice or not, and that currently we have exactly such a system, and that most of the actors within the current system are apathetic and compliant where they need to step up if we are to do what is needed.
  8. He does not think AGI is the only technology for which much of this applies.
  9. He does not believe in mincing words or holding back. At all.7

That should set the stage. The debate is focused around what Jezos thinks rather than what Leahy thinks, which I found to be the more useful approach.
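For a rough sense of scale on the compute numbers that come up throughout (10^23 to 10^26 flops), here is a back-of-envelope sketch. The per-accelerator throughput is an assumed round number of 10^15 FLOP/s, an order-of-magnitude stand-in for a modern datacenter GPU, not a claim about any specific chip:

```python
# Back-of-envelope: how many GPU-years of training compute each
# proposed threshold represents. FLOPS_PER_GPU is an assumption (a
# round order-of-magnitude figure) chosen only to make scale legible.

FLOPS_PER_GPU = 1e15       # assumed sustained FLOP/s per accelerator
SECONDS_PER_YEAR = 3.15e7  # roughly 365 days

def gpu_years(total_flop: float) -> float:
    """GPU-years needed to accumulate total_flop at the assumed rate."""
    return total_flop / (FLOPS_PER_GPU * SECONDS_PER_YEAR)

for exponent in (23, 25, 26):
    total = 10.0 ** exponent
    print(f"10^{exponent} FLOP is about {gpu_years(total):,.0f} GPU-years")
```

Under these assumptions, the low end of Connor’s preferred range is on the order of a few GPU-years, within reach of small labs, while the Executive Order’s 10^26 reporting threshold is on the order of thousands of GPU-years, which only large frontier training runs approach. That factor-of-a-thousand gap between the proposed thresholds is much of the practical disagreement.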

Around the Debate in 80 Notes

They do extensive highlights before starting, so skip to about 8:30 to start.

  1. Connor opens positively, then cuts right to the issue of to what extent e/acc people mean what they say. What is a metaphor or vibe? What is meant literally? Is there an intentional conflation of the two? My take is that they usually do not make a distinction here, and do not typically know themselves how seriously or literally they mean any given statement, it indeed flips as is useful.
  2. (10:00) Connor’s first question: Is there any technology at all we should ban entirely? Jezos asks whether it has ever happened, claiming it isn’t enforceable. My example of it happening would be genetic engineering and cloning via the Asilomar Conference. Connor says, okay, suppose it could be enforced. Jezos then responds that even when a technology is itself bad like nuclear fission, it can lead to positive things like nuclear fusion, and we have already closed off so many paths (so banning technology is both impossible and already happening?). He says we would have benefited from far less regulation towards nuclear fission.
    1. I agree we should be very pro nuclear power, but this seems confused to me. The reason we should have less regulation of nuclear fission is that it has a very important, very positive use case, which is nuclear (fission) power. We need more of that. Whereas I do not think that allowing open access to uranium or letting any person or nation who wants to build a nuke would be a wise policy, and if anything we underinvested in preventing nuclear proliferation.
    2. I do think the point about fission leading to fusion, and not wanting to cut off the tech tree, is a good one. Cutting off tech means likely delaying or potentially preventing future tech. However we do not want to assume the conclusion that the expected future technologies that result will be good. Fusion is good if it is used in power plants, bad if it is used in bombs, it is a physics question whether we can get one without the other, for which I am optimistic. In most cases I do think this is a strong anti-restriction argument, but that is because in most cases advancing technology is in expectation good.
    3. Jezos asks, what is a ban, who enforces it, where, which countries? A fine question. Connor puts a pin in the enforcement question.
  3. (13:00) Jezos says that yes, some technologies can have negative impact. I would not normally note this, but given who said it, I am doing so. He says his thesis is that technology begets technology, and technological advance is generally good, so we should encourage technologies even when they themselves have short term negative impacts, aiming for growth and trusting the market will work out. He calls this more nuanced than trying to deploy nuance via legislation.
    1. Later at (28:30) he will confirm that he thinks that we should enact some regulations on technology some of the time.
    2. He confirms at (30:00) it was good to ban leaded gasoline, and also notes that this ban imposed positive selective pressure on the space of technology development, that sometimes banning a tech can (I would say predictably in at least this exact case) encourage good tech while discouraging bad tech, and says it is good to first gather the evidence of harm via lawsuits and then to crystalize this into legislation. My understanding is that in that case there were lawsuits from workers but not lawsuits about the much greater harm to the public.
    3. At (31:50) he seems happy with the process that led to the ban on leaded gasoline. Whereas this seems like a case where we clearly moved too slowly, and the thing to do afterwards is ensure we do not make that mistake again. This is no small thing. We are talking about impacts like half of America losing 5 IQ points, a large rise in crime rates and so on. A process that requires decades of such damage before it is ready to find and fix the problem is not going to be adequate for responding to the dangers of AI even in relatively friendly futures.
    4. It is still true that we often impose bans and restrictions for safety reasons that do not make sense and backfire, and that waiting too long to ban something is the exception rather than the rule. Connor says leaded gasoline, and Jezos could say the FDA and how we changed its policies in response to Thalidomide. The right question is: how do we improve our accuracy and decision making, how do we evaluate risk versus reward in different situations, and what would be the right decision in a particular case?
  4. (19:00) Connor reiterates the original question, can you imagine a technology that should be banned, or is this impossible? Jezos says yes, there are pure negative technologies that we would ideally want to ban, but again how do we enforce that? How do we avoid today’s system and its tendency to add but not remove rules and regulations over time? He wants to rely on the market and on legal warfare via lawsuits and liability. Again, I think this is right in general, but it relies on the externalities being properly captured under the law, the law being a realistic means of such enforcement, and the plausible damages being bounded enough that one can be held liable and actually pay out. It does suggest a way forward in the form of combining stricter liability and a Hanson-style policy of mandatory insurance.
  5. (22:00) Connor notes that he sees lawsuits as regulation, whereas Jezos is framing them as within the market. Connor notes that regulations set the terms for when one can win a lawsuit. Jezos talks about ‘peer to peer enforcement’ and Connor rightfully asks him what he is even talking about. Jezos seems to then say he is not a fan of the state monopoly on violence, and he is wary of top-down power asymmetry intellectually via AI.
  6. (23:30) He says we are ‘in a weird period now where there is the window of opportunity for there to be sort of AI assisted tyranny to be installed. And to me that’s one of the core existential risks to progress.’ He worries that such control would break democracy and lead to manufactured consent (while, I would note, advocating for the most unpopular agenda we know of, it polls at a margin of -51).
  7. (26:00) He presents e/acc as a counter force to attempts at centralization and at imposing top down control. You, he says, want to maximize safety, he wants to maximize freedom, reality will fall in the middle. Standard libertarian position. I really, really wish we saw such folks making a broader push for this in all the places where I believe that approach is correct. Then he says ‘we have a better data driven prior of this happening’ than some sort of AI takeover. He fears a ‘dark age’ without ‘freedom of compute,’ freedom of access to AI or freedom of information.
  8. (30:55) Jezos says that in AI things are moving super fast and we don’t know where things are going, Connor nods. Jezos says it is too early to set things in stone, and notes again correctly that we don’t walk such rules back.
    1. That seems like a lot of the question. What actions set something in stone and which ones offer flexibility? Are there moves that make it impossible to then move to regulate later, because the cat is already functionally out of the bag or you cannot in time lay the groundwork?
    2. I would argue that with compute monitoring, with chips, with open model weights for larger models, and with developing safety protocols, to varying degrees, you cannot while on this superfast exponential go from zero to sixty in three point five. The fight is not over, as I see it, whether to impose meaningful permanent rules now. It is over whether we should get into position where we could impose regulatory rules in the future. The argument for no, which is a real argument, is that if we have that capability then we will use it in ways we shouldn’t, an interesting mirror of the worry that if AI capabilities are created then they too will operate or be used in ways they shouldn’t be (or, to be exact, in ways we’d prefer they not be used).
    3. In particular Jezos mentions a compute cap. I agree that we should be extremely wary of putting in a hard compute cap, and indeed I disagree with Connor’s minority view that we should impose not only a hard cap but a cap below the size of GPT-4. Whereas I think the Executive Order is wise, imposing a much higher reporting threshold of 10^26, and having it be a threshold rather than a cap. Again, to me it is in most discussions a question of whether the slope of regulation is so slippery that we cannot go anywhere near it, while noting there are a few including Connor who would indeed go much farther here.
  9. (32:30) Consensus on the need for sunsetting laws, the idea that every law and regulation is rechecked every 10 or 20 years. If you cannot affirm that the law or regulation is good, it goes away. I too strongly agree this would be great, if we can find a way that it does not turn into either ‘the house shall now vote on renewing all existing laws’ or ‘if we can’t reach consensus we are going to legalize murder at midnight.’ I do think it can be done, where there is a class of permanent laws, and then others are forced to have sunset clauses in a way that defends against auto-renewal. In general all three of us broadly agree that we should be widely skeptical of passing new laws given the governments and systems that we have.
  10. (34:03) “I so I don’t think we disagree. I think we’re on the same page.” “We just have different models of the world.” “Exactly.” Love it.
  11. (34:30) Connor lays out the idea that the world used to be ergodic, where if you made even very large mistakes you could recover from them over time and be fine, and that at some point we exit that, and a large enough mistake would be the end. Instead of having to learn how to handle nuclear material without dying so you can develop nuclear technology, there will be techs where it is everyone who dies if you screw up in a similar fashion. Jezos responds this is already possible with nukes, and affirms at (36:45) that there are paths with payout negative infinity (or what I would describe as permanent universe payout zero).
    1. Jezos says we do not build the technology of the world-shattering nuke because it does not have utility, so instead we only build smaller nukes, and that 10%-20% of the population would survive so (as is sometimes pointed out) this is not strictly an existential risk, but a sufficiently large nuke with a big red button would be bad.
    2. Except, what would happen if the planet buster or other potential technology that posed an existential risk did have utility, or we thought it did? The AI that poses an existential risk is also going to look like it could offer very large positive utility if things go well, and thus people are going to try and build it.
      1. Also, one can nitpick that while no one built the single planet-buster nuke, there is definitely utility in having that as a threat, and some people and nations would use it to hold the world hostage or stave off action, and some people just want to see the world burn, so there are plenty of people who would build it if they could.
      2. And one can also nitpick that the nuclear arsenals of the USA and USSR during the Cold War were rather darn similar to this, sufficient as Churchill said ‘to make the rubble bounce,’ and with war plans that were widely expected to result in the end of the world, and the use of them was rather on hair triggers. See the book The Doomsday Machine. I mean, yes, 10%-20% survival was projected overall, but I do not think this gave many of those involved much comfort. I mean, I’m going to go ahead and say that the big red buttons we did have were not great.
  12. (38:45) Connor poses the hypothetical of doing physics experiments and discovering you are in a false vacuum where a small trigger could potentially destroy chemistry, radiate outward and effectively destroy the universe. Jezos (after noting for clarity that he does not believe this is a real thing) responds that if this turned out to be the case, that our world was this fragile, that we could inform authorities and form a world government or what not but even if we did that, we would already be dead on some time horizon.
    1. I think this is actually a great answer. If the world and the way its physics work is sufficiently unfortunate, then we are doomed no matter what we do. So when considering what to do, we assume we are not in those worlds.
    2. As Eliezer points out, if you make too many impactful ‘hopeful assumptions’ using this kind of reasoning, then you stop trying to solve the actual problem, and your work is useless. So you need to be careful.
    3. I do think in practice we need to say that if the situation turns out to be sufficiently unfortunate in its physics and particulars, then what we do almost never saves those worlds, and we should be ‘willing to die’ in those scenarios to give ourselves a fighting chance in others.
    4. Thus, for example, I think that we should set the compute threshold higher than Connor wants, even for reporting, and if 10^25 flops is enough to create a model that can kill us and it kills us, then it kills us; we had no practical path to avoiding that risk without making other scenarios too much worse, such that it wasn’t worth it. Of course, if in the future we then learn that we are indeed in that world, we should adjust and try anyway.
  13. (42:00) Jezos points out that if there were dangerous physics that we do not understand, we would need to study it in order to understand and control it, drawing the parallel back to intelligence. Ignoring it is not The Way. I agree it is not The Way, you want to work towards understanding, but certainly there are experiments you might want to avoid, and forestall others from doing, if the risk was too high, until such time as you had more information.
  14. (44:00) Jezos says that there’s a lot of upside to AI, and it matters. Yes. He says that we shouldn’t let tail risks stop us. Connor says yes, if it was only tail risks he would agree, he doesn’t think it is a tail risk. Indeed, it is a question of price.
  15. (45:00) Connor asks why the AI that wants to help the humans would win in a fight against a hostile AI. Jezos responds why doesn’t this happen with people and countries, Connor says great question. Jezos says because there are benefits to cooperation. I would agree, but the point is that this is a particular fact about the situation, not a law of nature.
  16. (46:00) Whereas Jezos says e/acc is the theory that things will adapt in the way that is best for growth (and has elsewhere said that we should assume this will go well, the ultimate form of Whig History perhaps). That sounds to me neither true nor comforting were it to be true. But Jezos then says that the future will select the entities that are most inclined towards growth, in that sense this seems reasonable, that which grows will grow.
    1. At (46:45) Jezos pulls out the ‘corporations are superintelligences’ statement, sigh, Connor shows remarkable if incomplete restraint on his face.
  17. (47:00) The Jezos vision of the future is that there are some AIs that are aligned with humans, some that are partly or not at all aligned, they engage in trade, and this keeps us relatively aligned.
    1. No. That is not how any of this works, by his own argument, even if we assume that we successfully align some AIs before we lose control over the future, which is very much not a given. If you posit that there are a wide variety of AIs in such scenarios, then being aligned to either a particular human or to humans in general is an uncompetitive burden, and those AIs lose out over time under free competition.
    2. The thing we want, and the thing e/acc or this kind of competition for maximal efficiency represents at the limit with ASIs involved, are not compatible at baseline, unless we decide we happen to find value in whatever thing wins that competition, as Robin Hanson would argue that we should, largely to make a virtue of necessity.
  18. (47:25) Jezos says there will be ways to augment human intelligence to ‘make us more of a player at the big boys table.’
    1. Again, no. Sorry. This is one of those claims that simply does not work, the hope of the hybrid AI-human chess team. There is no reason to think that a human will, after a while, be capable of meaningfully contributing anything, that we would be able to earn a spot at that big boy table. We are, in this metaphor, neither big nor a real boy. Anything we can do, AI can do better, for sufficiently advanced AI. This is people writing science fiction.
  19. (48:00) Connor points out the central contradiction, that human happiness or preferences are one goal, maximal growth is another very different goal, and the systems described are maximizing under this model for the second one. Jezos pulls out the Europe vs. America comparison, I wish America was far closer to ‘all-in on growth’ the way he describes us. He notes that we have localities and can test things locally.
    1. We would both argue that America’s growth model has reached the point where, even with Europe’s focus on short term happiness, one could for pure happiness purposes reasonably prefer America and the profits from growth instead, and that we should expect this to amplify over time. In the AI-Fizzle scenario, I would expect America to become a relatively better place to live versus Europe as the years go by.
  20. (48:30) Connor responds with the argument that over time in this scenario, Americans are less happy but they eat Europe, and we lose our A/B testing ability.
  21. (48:45) Both affirm, as do I, that we are not hedonistic utilitarians. Jezos says that e/acc is not this, but EA is this. I do think this is one of the best critiques of EA. Jezos instead suggests the utility function of maximizing growth and civilization and the beauty of intelligence. Connor points out that the larger list is very different from a pure focus on growth.
    1. I would add it is even more distinct from a focus on short term growth. There is a very important clear assumption in the Jezos or e/acc position here, which is that maximizing growth will maximize civilization and the beauty of intelligence, that we have a duty to the universe on this. That even if the future is not human, that it will maximize these other things we should find most valuable.
    2. I do not believe this, on multiple counts.
      1. I do not believe that the entities that result from such a process, especially assuming they are AIs we did not choose carefully and wisely for this role, are likely to be things that reflect the beauty of intelligence and civilization in ways that I would consider valuable. Indeed, I expect them by default to have value of essentially zero to me, although I agree this might not be true.
      2. If they do have some such value, I expect much smaller than the potential maximum value of taking a different wiser approach.
      3. I think there is a large risk that maximizing growth in the short term ends up not only being an existential risk to humans, but also to growth, and that instead of the AIs taking over for us, nothing is left behind at all, or nothing at all meaningfully complex within this context.
      4. I think that I have every right to say that I do not want to hand the universe over to these potential future AIs, that I have my own particular preferences and I am allowed to fight for them.
      5. Growth is not an end in itself, nor does it automatically produce good things. Creating AIs in the name of growth is like trying to increase measured NGDP without asking whether you are producing more useful things or otherwise doing anything actually useful.
    3. There is some sort of weird conflation going on here on the word ‘growth.’ Clearly Jezos is using it sometimes to mean self-replication and reproductive fitness. Other times it seems to want to stand in for something far more similar to economic growth.
  22. (51:00) Jezos says that if something is non-optimal at growth (in the reproductive fitness sense, which now seems like it is the primary meaning of this in the e/acc model, which totally wasn’t clear before now at least to me?) that something more optimal will replace it. Connor says the optimal thing is cancer, no art, no beauty, no happiness or emotions, just growth. Jezos says that emotions have utility. Which currently they do, but the issue is there is no reason to expect they will be useful to an AI or at the limit. Connor says he expects there is a local minimum, that human emotions are not a global maximum.
  23. (51:45) Jezos says if human emotions are not a global maximum, why not explore new ways? Connor pounces, ‘aha!’ and says naturalistic fallacy, is is not ought, emotions are not a global maximum and who the f*** cares, they’re mine, I like them.
    1. Are we allowed to have preferences that are not maximum growth? Why do we value maximum growth if it will not satisfy our preferences? Is this simply a failure mode, where we notice a proxy measure G that in-distribution improves our values V, and Jezos is saying therefore G=V=U and we should maximize G for its own sake?
    2. Which is a move humans often do in various forms, including via having emotions, because we have limited compute. We optimize via a host of proxy measures and heuristics, because at our current capability levels and in typical scenarios this has proven more efficient in most situations at finding good next actions.
    3. Also humans do it in other ways, such as the hedonic utilitarians Jezos contrasts himself with, who have happiness H and suffering S and say V=H-S, or something else in that vein. Whereas I mostly buy what Jezos says at (52:40) that happiness evolved because it is useful and we should not mistake the metric for what it aims to measure.
    4. I do not think there are easy answers here. I cannot (or at least, I do not know how to) compactly well-describe that which I actually care about. Later at (58:45) Connor says similarly that he has not stated his utility function, that there is some such function but it is not so simple to spell out, and that he does not even fully know what he truly values. And Connor then says that he thinks Jezos does not purely want growth either like he claims to.
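The proxy-versus-value failure mode in item 23 can be sketched numerically. This is a toy illustration with entirely made-up functions, not a claim about any real measure: a proxy G that tracks true value V in-distribution stops tracking it once you optimize the proxy hard enough.

```python
# Toy Goodhart sketch (hypothetical functions): the proxy only sees one
# trait, which correlates with true value in the normal range.

def true_value(x, y):
    # Value depends on both traits; pushing x too hard eventually hurts.
    return x + y - 0.1 * x * x

def proxy(x, y):
    # In-distribution, more x means more value, so x looks like value.
    return x

# In the normal range, improving the proxy improves true value...
assert true_value(2, 1) > true_value(1, 1)
# ...but maximizing the proxy alone drives true value negative.
assert proxy(20, 1) > proxy(2, 1)
assert true_value(20, 1) < 0
```

The point of the sketch is only that “G improved V so far” is not evidence that G = V = U, which is the move being questioned here.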
  24. (53:30) Jezos goes YIMBY, you love to hear it. More of this, please. He keeps coming back to the broader point that advancing technology has so far been good for humans and more would have been good on the natural margin. The concerns are that this might not have been true even in the past if one had taken that attitude to the extremes, and also that past performance is no guarantee of future success and we have reasons to believe the underlying mechanisms for that might not hold, or will take effort to make them hold.
  25. (54:15) Connor responds that so much of e/acc talk seems to only be about America five years into the future. That they do not actually extrapolate their own beliefs, only Nick Land does that, taking techno-capitalism to its logical conclusion that there will be, and should be, only capital and competition, no labor (or humans). Connor notes: “If you optimize for something, you lose everything you are not optimizing for.” Quite so at the extremes, and sufficient intelligence and capability means the extremes will hold.
    1. As Connor says, we got lucky with the tech tree and what is optimum for growth and production. He gives the example that constantly torturing people does not work. That people can only produce for extended periods if you treat them well.
    2. More generally, I would suggest, democratic capitalist institutions that care a lot about the freedom and happiness of those inside them proved to be able to outcompete autocratic, fascist and communist regimes. This was not a given, it did not have to be so. People in the 20th century mostly did not believe this, and many expected the future to therefore be quite bleak. If it had proven false, our world would look very different today, and it almost did look very different at various points, and not for the better. Nor would I want to switch simply because of the gains to growth.
    3. More specifically, there have been large rewards for various human activities like play and learning and relaxation and exploration, and productivity rises when people are happy and have other things at stake, and intrinsic motivation outperforms other motivation, and other neat stuff like that, and again none of that needed to be true, and we have reason to think that a lot of it will break down in advanced AI scenarios once we understand the mechanisms involved. Key drivers are our limited compute and access to data, and that due to how we physically exist we mostly have highly limited upside in terms of reproduction, versus constant tail risk of death, exile, injury or ruin and such, and that periodically situations radically changed and we did not have that many cycles between such events, and so on.
  26. (56:00) Jezos says there are not a finite number of jobs in the future, then says there are not a finite number of atoms either, there are plenty of atoms in outer space. That even if most things end up done by machines, the human portion would not shrink, only get diluted.
    1. This seems like a clear failure to extrapolate. If you maximize growth for real you hit a limited number of atoms within the lightcone fairly quickly.
    2. Yes, the number and nature of jobs (in the broad sense, including those taken by AIs) would expand greatly. But what would humans hold on to if they have no advantage versus the resources a human must consume? Why should the human realm be protected from intrusion from the AIs in these spots?
    3. We heard a version of this for example from Holz as well in his debate. The idea that the AIs would leave us and the resources and atoms we need alone, because there are plenty of other resources to grab. That simply is not how this works, especially if we posit as Jezos does that there are many AIs engaged in trade and in competition to maximize growth. Taking the atoms of the humans maximizes growth, there is no reason to think this will not happen simply because there are ‘enough’ atoms elsewhere. That only happens if there is something in particular preventing this action.
  27. (57:00) Jezos compares it to taking venture capital, you are diluting to gain more capital and leverage. That our component can still grow.
    1. Well, that’s an interesting metaphor isn’t it? Are we going to rely on our founders shares to continue to control the company, or will we quickly be out on the street? Will having initial capital somehow protect us?
  28. A quote from Jezos: “You’re advocating for the interests of humans, and I’m hearing you out.” Well then, Jezos, what exactly are you advocating for? He says that people and corporations will always be greedy, Connor again responds that will is not ought. There is what exists, and there is what we want.
    1. I would add that we absolutely have found ways to contain greed, and that if we had not done that we would not have come this far.
    2. If the Jezos position is that we should each of us embrace maximum individual greed, well then.
  29. (59:45) Jezos puts on the Socratic hat, which is only fair. He asks: “Why do you like having relationships? Why do you like happiness? Why do you like being part of a group? Because evolution kind of hardcoded you to crave these things.” Connor says this is confusing is and ought again.
    1. I do not think it is that simple. This is a real and important challenge.
    2. If we only like the things we like because it was useful in the ancestral environment to like those things, if they are only the expression of a different version of maximizing growth, why do they represent ‘ought’ rather than ‘is’?
    3. Again, it’s a big problem.
    4. I do not, however, think that the Jezos response of demanding an objective loss function of free energy is the right response. Nor is ‘that is not anthropocentric enough’ describing the bulk of my objection to that. This is the standard rationalist (in the classic sense) mistake, to say that what we do must be legible and objective and formally justified, and thus we must disregard anything else. That, if you take it too seriously, reliably leads to disaster.
  30. (1:00:30) Connor responds wisely: “This is just cope. What you’re describing is, is that reality is hard. Yes. If the thing we want is complicated and hard to get, the answer is not to pick something simple and easy and give ourselves a participation award. The answer is, well, we have to get stronger. We have to get better.”
  31. (1:01:10) Jezos responds that this is where physics comes in, the free energy objective is not random, the universe selects for growth. You don’t have the option to disobey gravity or the laws of thermodynamics.
    1. Connor keeps saying that is is not ought, because Jezos keeps conflating the two as his core argument, if anything doubling down. That we should value what the universe rewards. If we know what’s good for us, he might have added? Except if it won’t be good for us, then that’s a problem.
  32. (1:01:40) Jezos then says, if Earth goes half-accelerationist, half-Europe, and we play the movie out, the accelerationist half will outgrow, at which point Connor interrupts to say ‘and we will both be dead’ and Jezos says he’ll need to see more evidence of that.
    1. Jezos is explaining a key point. If part of the world is allowed to proceed and accelerate, then that means we get the consequences of that acceleration everywhere. So you will need an international cooperation, voluntarily or otherwise, to ensure that this does not happen, or you can accept the consequences of such acceleration.
    2. The continued suggestion that the only choices are full e/acc style acceleration on one hand, and Europe on the other, is a lot better than saying totalitarian panopticon dystopia as a straw man, but still assumes that you cannot limit existential risk from AI without going Full Europe.
    3. I do not believe that is the case, unless the future is such that AI is the only technology or industry that much matters even before we get such existential risk. In which case, I know what I expect to happen if you choose acceleration.
    4. So again, is is not ought, and you have to ask, do you want the accelerationist world and its consequences, or not?
    5. Jezos says once again we won’t ‘seek out’ the destructive technologies that would bring existential risk, but this is a clear contradiction of his idea that anything pro-growth on an individual or group level will get sought out, and of course people can be wrong about the consequences of what they are building, and make mistakes. His arguments here that disaster will be averted seem extremely poor.
  33. (1:02:45) Jezos then falls back on the better argument that the upside we would need to forfeit to avoid such risks is so great that it is worth the risk to go after it, which is definitely valid as Connor acknowledges (although again, the framing here is implying a binary of either we do almost nothing to get in the way, or we don’t get to proceed at all, instead of thinking on the margin).
    1. Connor proposes to talk price. Exactly. How often will the accelerationist approach survive? Connor argues the chance is epsilon, that sooner or later you will fail a saving throw.
  34. (1:04:45) Jezos wonders if there are terminal states at all.
    1. I think this is an excellent point. We need to, at some point, solve for the equilibrium that we want, then solve for how to get to that equilibrium. But what if there exists no such equilibrium at all? What if the only stable states are dystopias? Or what if what is valuable requires quests with real stakes and the opportunity to progress, so the very fact that the world is in equilibrium means it cannot be valuable, even if the journey to get there was valuable?
    2. These questions are some of the things that would keep me up at night if I let existential dread keep me up at night, but we all have to sleep some time, so I’ve figured out how to sleep regardless.
  35. (1:05:30) On the nuclear weapon that was accidentally dropped on South Carolina but happened not to detonate (no, seriously), Jezos says well it would only have blown up a city, we would have survived.
    1. It is a side note, but I do not think it is that simple. At a minimum, the world feels like it will now be very different in many ways, if we accidentally nuke one of our cities. The cold war does not play out the same way.
    2. Most importantly I think there would have been quite a large risk that either the USA or USSR mistakes that nuke for something else, someone does something by mistake, and there is escalation to full war. I would not dismiss such an incident so easily.
    3. There is also a huge risk that various players in the USA decide to put the blame for the incident elsewhere intentionally, again with severe escalation risks.
  36. (1:06:30) Agreement that Covid-19 was a lab leak.
  37. (1:06:50) Jezos argues that the world not having ended is evidence it won’t end. Connor emphasizes that we keep having major accidents and close calls.
    1. One can also mention anthropic considerations.
    2. In general I do not find the past track record hopeful, aside from various failed predictions.
  38. (1:07:20) Jezos says let’s talk about AI already. Jezos says he thinks Connor’s model is that if you build AI then you cannot undo that, and then it inevitably kills you. Connor says no, he thinks building it safely is possible, Jezos says great let’s do that, Connor says it will take 100 years to do it safely, Jezos says we won’t let you have 100 years to do that without a monopoly on power, and he thinks that is a bad trade.
    1. That is not an argument for why Connor is wrong about what it would take to build AI safely.
    2. That does however imply that Connor is wrong, because it would be a very good trade to give some entity a monopoly on power if the alternative was for everyone to die. Or at least, I am going to make that bold claim.
    3. I do not think these are our only choices, that we can be confident this will take 100 years to pull off or anything like that.
  39. (1:09:30) Connor says congratulations, you are a doomer. Jezos says no, there is a third path: A decentralized control of AI causing an adversarial equilibrium, which makes smaller entities sufficiently capable they are not to be f***ed with. Connor says “so AI mercenaries,” Jezos says “sure” and Connor laughs.
    1. This does not make any sense to me, for reasons I discussed above. No go. If you go down this path, you lose. Good day, sir.
    2. This is a crux. If you can convince me that such worlds can and are likely to turn out well provided we get personal alignment to work, that we can solve for the equilibrium and like it, then tons of new paths open up, and the correct strategies change.
  40. (1:12:00) Jezos predicts that without regulatory capture upstarts will catch up to the leading labs.
    1. I think this is simply wrong, if everyone involved was equally responsible in their actions. But I do think the threat of this happening likely prevents the leading labs from acting in ways that would otherwise be considered responsible.
    2. And in this fully unregulated world in which everyone acts locally greedily, I would expect us to die, but if instead alignment is solved in time anyway, I would then expect those companies to use their AIs to prevent everyone else from catching up, and I would expect them to succeed.
  41. (1:12:40) Jezos says that lots of companies are using Mistral to minimize platform risk and reduce costs, to which Connor replies the world is not a B2B SaaS app.
    1. My model is that Mistral is essentially a distillation of the work of GPT-4, a model that is two years old and which it is still behind, and that this does not represent Mistral being anywhere close to catching up otherwise, but they (and open source generally) do seem good at distillation.
    2. The use of Mistral represents Mistral giving away its product (and being the best of those giving theirs away), and that many want to do things that violate the terms of service of an OpenAI or Anthropic or Google, and many current uses not caring that much about being at the frontier of capabilities, versus price sensitivity. I also think platform risk is greatly overestimated, because you can sub in the other AI in a pinch and it’s not so bad.
    3. I do not think this has much to do with the question of how an endgame will play out, scenarios where getting maximum intelligence is far more valuable. I also do not expect distillation to work as well at that point, although that’s more of an instinct and I could be wrong.
  42. (1:13:10) War! Jezos claims we are already at war, and we have a duty to accelerate to outcompete and survive and that’s how it will always be.
    1. I notice that this does not answer Connor’s challenge that real war will be nothing like B2B SaaS, that the previous point was nonsense.
    2. Sounds like we’re super dead, then, no? Nothing we can do?
    3. Jezos doubles down on the idea that there is nothing we can do about all this conflict, that’s how we got here, that’s how it will always be. If that was true the way he is saying we would already be dead.
    4. He says we do not want a world government to dominate everyone, but he says we have a duty to win this war, doing so would kind of lead to a world government of sorts.
    5. Similarly as I have said before, and Connor does point this out, if it is our duty to accelerate in order to win our war against China, then it is also our duty to decelerate or at least not accelerate China, which open model weights do?
    6. So Jezos seems to attempt to square this by saying we need to maintain only a small delta between players, a balance of power. We need to accelerate to stay in the game, but we wouldn’t want to actually win, that would be bad?
    7. Jezos says that our strength is competition, so we should use open source to allow more competition and accelerate AI development, and yeah our adversaries get it too but he doesn’t seem to care about this? He doesn’t say why this is fine, or why we should let them into the competition game this way. None of this actually makes any sense to me.
  43. (1:16:00) Connor asks if we should open source the F16. Jezos says there’s a bait and switch, that when the safety argument fails on open model weights people pivot to not wanting our enemies to have it, or as Connor says crazy whackos to have it.
    1. The core safety argument is that this is proliferation, creates a multi-polar race, makes it impossible to control what people do with it or how it develops, and so on. As I put it, Open Model Weights Are Unsafe And Nothing Can Fix This.
    2. Our enemies getting it and having the ability to do with it whatever they want, or use it to bootstrap their capabilities, is a direct extension of this issue, if we indeed believe that we should care about defeating our enemies.
    3. But yes, sometimes we pivot to this argument because we are talking with people who deny that safety is a thing, or do not care about it, or can only think in terms of foreign adversaries and non-state actors (e.g. many people in national security and government).
    4. That’s also because both arguments are true and important. One does not invalidate the other. This is not bait and switch, it is yes and, and switching emphasis. There is no contradiction, they both follow from the same model. The right answer can be overdetermined.
    5. The argument regarding adversaries is also being used as an argument to point out a contradiction in the logic of the other side. If you are saying we must accelerate to defeat our enemies therefore we must open source, then it seems highly on point to say that open source interferes with our ability to defeat our enemies, therefore your argument is invalid and actually goes in the other direction. As I believe it does.
    6. You are… allowed to make two distinct arguments? They are additive?
  44. (1:17:45) From Jezos: “I think so many organizations are compromised. If you’re not going to have actual secrets the only mode is speed. If you want speed you want variance. If you want variance open source is the way.” He is torn on whether we should open source the F-16.
    1. I do think we should be putting more effort into this, across the board.
    2. Sounds like we need to mandate much better investments in cybersecurity, and other secret keeping, in this area, if orgs refuse to do it themselves.
    3. Increasing variance in a situation with big tail risks makes sense when the default or likely outcome is quite bad, and does not make sense when the default or likely outcome is good.
    4. Is this another case where Jezos is saying solving problems is impossible?
    5. I am not typically in favor of focus on the question ‘is this good or bad?’
  45. (1:20:00) Connor asks, if an AGI was smart enough to design an F-16 fighter plane, should it be open source? Jezos visibly, clearly stops to actually think about the answer. And he asks good questions about what this means for its capabilities. Jezos then brings up the question of whether the authorities with a monopoly on violence will have access to a better AI designing better planes.
    1. The pause for thinking, to me, is the important point here. By pausing to consider the details of what this ability implies about the AI, Jezos is making it clear that he believes that we should open source AI models right now, but that a sufficiently capable AI should not have open model weights. And that ability to design an F-16 might or might not indicate having sufficient capabilities.
    2. At this point, we are talking price. We might not even be far apart on price.
    3. The point about relative capability also seems good. An AI being behind the state of the art, and others having a superior AI as a potential defense and a way to learn the full capabilities and dangers of an older AI, seem like important factors to consider in setting a threshold.
    4. Jezos is concerned about such a capability gap being too big, allowing the formation of a monopoly or cartel, whereas he expects some capability gap and also seems to think that zero capability gap would indeed be bad.
    5. Again, talking price, weighing different questions, good. Connor thanks Jezos for explaining this and did not expect this position, that we do want to maintain a non-zero capability gap for the authorities.
    6. The focus on the exact design of an actual F-16 seems flawed, what matters are (as Jezos initially realized instinctively, I think) what else this AI could do.
  46. (1:23:45) Jezos notes that decentralized systems are hard to steer, says this gives them fault tolerance, that if you have a system that can be steered then power seeking humans will work to steer it. Any centralized control can and will be compromised, and then form a tyranny.
    1. This is indeed a problem. Either humans control the future or we don’t.
    2. If you think the risks of humans being able to steer the future are worse than those of humans being unable to steer the future, that is a take.
    3. If we intentionally choose to lose control over the future, we lose control over the future. A system that we by design cannot steer will cause the world to be taken over by AIs.
    4. My expectation is that worlds where we lose control over the future mostly have zero or almost zero value. Whereas, while I expect even actually tyrannical-by-humans future worlds to be non-ideal, and I would very much like to do what we can to avoid this result, such worlds are still likely to be of solidly positive value for everyone else as well, because that will probably be the preference of those who attain power.
    5. I too would like charter cities and city states and so on.
  47. (1:25:45) Connor brings up offense-defense balance, expects offense to typically be far easier, that being ahead does not sufficiently protect you. Jezos affirms that this is a problem, you need a balance between order and disorder, you do not want max entropy or temperature, you want to maximize our ability to seek out free energy, to grow, to consume more free energy. And there is danger now that we will go too far towards order, whereas we need a balance.
    1. The idea of balancing order and disorder is not only highly reasonable, it is deep wisdom. Surely we have all read the Tao Te Ching and Aristotle and the Principia Discordia, played Shin Megami Tensei, tried to engineer a peaceful free society and so on.
    2. This is indeed exactly the position of myself or Eliezer Yudkowsky, although with highly important disagreements over price, and of the price of missing high versus missing low in various ways, and how to hit the target.
    3. This is very much not the standard way of presenting the situation from Jezos in particular, or e/acc folks in general, or others who oppose any attempt whatsoever to alter the path of the development of AI other than pushing it forward as fast as possible.
    4. Their standard rhetoric does not sound like a balance at all. Rather, it is advocating the absolute supremacy of one concern over the other. And using vibes and memes and principles and such to advocate for that absolute supremacy, and to treat anyone opposed to this explicitly as enemies.
    5. One can say that this is due to where we are on the margin, that we all agree that ideally we would talk price and reach a compromise. I do hope that is the case.
    6. But that is completely impossible if one ‘side’ is going to use the Beff Jezos or general e/acc approach to discourse, rhetoric and advocacy.
  48. (1:29:00) Connor returns to the question of the false vacuum.
    1. Jezos denies the technical aspects of the premise, but this misses the point Connor is attempting to make.
    2. Connor’s point is that the desire to maximize release of free energy over time is a proxy function, not what Jezos actually cares about, and that this proxy function will cease to function well as capabilities advance. If you released very large amounts of free energy via triggering a false vacuum (if this were a physically possible thing) you would be maximizing free energy, but obviously that would be a highly stupid thing to do, no one would want that. This illustrates that at the limit there are ways to maximize free energy that do not actually hold value.
    3. This is actually exactly the worry of Yudkowsky, that the AI ends up tiling the universe with something relatively simple and valueless, because it is maximizing some proxy function, the most famous example of which is the paperclip. If the AI tiles the universe with that which most efficiently releases free energy over time, that is not at the limit going to actually maximize anything Jezos or most others care about.
    4. You need a better goal. The point of free energy is to use it for the things that you actually care about. Which can justify some amount of earning compound interest, but the energy needs an end other than itself.
  49. (1:30:15) Another few rounds of is-ought.
    1. The example here of gravity seems excellent. Gravity exists. You will obey the law of gravity. It will take a lot of energy to counteract it in individual cases, but also we can and should build airplanes and rocket ships. We shouldn’t instead work to create a black hole.
  50. (1:32:15) Jezos bites the bullet. “I don’t know how to define good otherwise,” in response to being asked whether his ideology outcompeting others makes it good.
    1. Connor calls this ‘might makes right.’
    2. Jezos disagrees and we are back to the conflation. Is growth for its own sake, ‘what physics wants you to do’ or e/acc good because it is actually good and we value it? Is it good (and ‘not weird’) because it ‘comes from physics’? Or is it good only in the sense and to the extent that it will win?
    3. My response is it does not seem good because of either?
  51. (1:34:00) Connor says when people talk morality they conflate three things: (1) That something is true or accurate, (2) decision theoretic goodness that this will cause you to win and (3) aesthetics and values, that this is good because I like it.
    1. Jezos asks why we like that thing, which Connor says is an epistemological question (and implicitly, that it is one in which almost everyone lacks good answers) or a request for a causal history.
  52. (1:35:40) Jezos says e/acc is not prescriptive on what you should like. They are only warning that the pro-growth subcultures will be selected for. I mean, you’ll become stagnant or die but that’s on you.
    1. Bullshit? They’re totally and constantly telling you what is good and bad, and what you should like and dislike. This feels like gaslighting.
    2. Are the Amish effective accelerationists? Why or why not?
    3. If it turned out that accelerating would cause you to likely die, then what?
  53. (1:37:03) The fun ‘libertarians are like housecats’ quote.
  54. (1:37:30) Connor: “If you follow in the will of God, he shall reward the faithful. Is that your ideology?” Jezos: “Yeah. I mean physics is my God to some extent. You can have your own other additional gods.”
    1. He then admits he also follows some sort of God of Civilization as well, which Connor points out is very different and also that God he is down with.
  55. (1:40:30) The moderator asks, what is your value system? Jezos explicitly bites the bullet, disagreeing with Hume that there is no objective morality. He says that the objective morality is the one that will tend to outcompete others via growth, and we should embrace that.
    1. Yes we’ve been over this several times but it is good to finally be clear on this. Especially since this is five minutes after Jezos saying they are not telling you what you should like. Indeed he tries to reverse this again at (1:41:55) saying he is not telling you how to live your life other than to say you have a choice. This sounds remarkably like a Christian saying that you can choose not to accept Jesus, they are not telling you what to do, all it means is you will suffer in Hell for all eternity.
    2. Jezos also says that trying to set the hyperparameters of your civilization top down will cause you to be outcompeted. Certainly doing too much of this has historically gone badly, but so has doing too little. Fully anarchic civilizations did not win out. Centralization and the rise of the modern state very clearly helped allow Europe and its nations to project power better and outcompete the rest of the world in conflicts. The argument here will be true on some margins but clearly attempts to prove far too much. Without rule of law, we would not have advanced chip factories.
    3. The whole e/acc argument, as Jezos confirms again at (1:42:45), is that on some time horizon, whatever strategy confers an advantage will be adopted by some subculture or faction, and this is unavoidable. But this is actually straight up Hobbes, no? That the result of not asserting some form of top down control will be whatever happens to be most competitive, that we otherwise have no say in what ultimately results. So to me this is an argument that, unless we happen to want the most competitive configurations of atoms to be the only such configurations, we have no choice but to pursue such top down control.
  56. (1:43:15) Asked again what his values are, Jezos says he is trying to scale civilization. He does not want to replace humans, he is working on various things like fusion power, including physics-based AI, which would be an extension of our intelligence, whereas the companies talking about safety are the ones creating the danger. Connor says, yes, handshake meme.
    1. Count me in as well.
    2. Jezos seems to be working on technologies that, were they to work, would be net very good, and a form of AI that would be more likely than LLMs to turn out well for humans. I would love for those techs to exist. Whereas LLMs seem like a place where things by default are likely to go badly for humans.
  57. (1:45:30) Jezos warns that if we cap compute we will miss out on good things like drug discovery, we should be careful with our regulations and what they impact.
    1. Yes, we should proceed carefully and choose wisely, and talk price, there are tradeoffs, all the reasonable people on all sides get this, although there are some unreasonable people everywhere.
    2. That is entirely compatible with the only survivable paths forward involving making some very large sacrifices. It is a fact question: Can we do better?
  58. (1:47:00) Connor asks, if the growth maximizing thing was to build AI that would wipe us out, would you do it? Jezos says no, he has self-interest too, so he would not do it, but does not seem to object so strongly if someone else were to do it?
    1. Once again, Connor is trying to say that the growth maximization is not a terminal value and Jezos values other things, his value of growth is contingent on providing those other terminal values, and that Connor (and I) would claim that this relationship will not hold in the cases under discussion.
    2. Jezos’s defense is that he is building, he is trying to create new tech. And I do believe that, and that he has chosen good techs to build, even, if they are possible for him to build. But this is miles different from the full claims made.
    3. Jezos keeps trying to dodge various different forms of this question. Connor keeps pressing. It is a key question. If Jezos bites the bullet fully, and says yes we should all die if that maximizes growth, growth is a terminal value, then that is ‘please speak directly into the microphone,’ we can agree to disagree about terminal values. If Jezos bites the other bullet, and says no we should not all die, and if I thought that growth would kill us all I would stop supporting growth, then great, we can have a different debate over what would actually happen, which would then fully be a crux for all involved.
  59. (1:48:40) Moderator pivots to, what would Connor do (WWCD)? Connor responds he cares about stewardship of dangerous technology, not only AGI. He presents a model of the world as having various distributed super-entities that both are and aren’t agents, that connect and form more of a unified force than they used to but nothing like a real unified force.
  60. (1:53:00) Jezos likes this model, draws parallels to error correction and cybernetic control systems, you want a moderate amount of hierarchy to maintain proper cybernetic control without a single point of failure. What keeps a single top (say world government) node in check? How do we mitigate this risk? Haven’t the centralized ‘biosafety’ labs caused a lot of damage, far more than other threats?
    1. Connor thinks this reply makes a lot of good points and I agree. I wish the overall discussion were more like this (and not only around AI or even tech!), focused on diagnosing the problem and seeking to explore the space of solutions, to try and simultaneously solve very hard and conflicting problems.
  61. (1:57:00) Connor does not trust our civilization and institutions and distributed systems to handle powerful technology at this time, we need to work to get there. That even if only Connor or only the government had access to AGI this would be extremely dangerous. Both agree that current institutions are not competent to guide the world.
    1. The question is, if our institutions are terrible, and this includes more than only the governments, what do you do about it here?
    2. One option is to let nature take its course and hope that course is good.
    3. Another is to use tools you do have, and choose them knowing the problems. Ideally you try and also work towards better institutions and a more competent civilization generally.
    4. Instead, it feels like democracy is breaking (both agree) in the modern tech age, destroying what competition was facing government, and things are getting worse not better on such fronts.
  62. (1:59:30) Here we go again, Jezos says centralized governments can’t be trusted with this power, and also we can’t put the genie back in the bottle, the upside is too high, this is going to happen, every agent will want it. Given it exists it should be in the hands of everyone, not only a small group.
    1. Sure sounds once again like a strong argument for not building it! Jezos is saying we have no path to not building it. Assumes facts not in evidence.
    2. If we must build it, and we have no trustworthy institutions, and no trustworthy other systems, then we have two bad choices. The question is which is worse.
    3. I’ve discussed this a bunch already, I won’t repeat.
  63. (2:00:45) Connor proposes that e/acc is kind of a trauma response to decaying institutions, to not deal with them at all, but that you don’t actually get to do that. That solving such problems is super hard, but we have no alternative or simple solution, you can’t purely rely on the market. Jezos says there should be a market for and competition between institutions. And he says (2:02:45) “that’s all they’re arguing for.”
    1. I have said that e/acc can be thought of as the Waluigi to EA. That could also be thought of as the final example of something triggering this trauma response.
    2. Competition between institutions is… not all they are arguing for. This is very much a motte, and yeah, I am down with this motte. The bailey is close to a call for a total lack of institutional interference in technology and trade, and the ability to do things like say ‘I declare network state’ the way we invoke the Defense Production Act.
    3. I too am a big fan of forms of market competition between institutions, the same way we want competition in other realms, subject to the usual caveats about market failures, which include externalities such as existential risks.
    4. It is also true that while we want more competition between institutions than we have, if we allowed full unfettered market competition between institutions, then there are various races to the bottom and other issues. The ideal amount to which the world’s governments and institutions are cooperating rather than competing is very much not zero.
  64. (2:04:20) Explicit claim that the AI safety movement is being leveraged to attempt regulatory capture. Connor agrees with this.
    1. I don’t.
    2. I do think that there will inevitably be some amount of this effect over time.
    3. I do not think that this is a major driver of any current AI regulatory efforts.
    4. I think that a lot of these claims are pure bad faith attacks.
    5. I want to reiterate and confirm and amplify Connor at (2:04:50). I would say that the vast majority of safety and regulation advocates believe the things they are saying and primarily have the motivations they claim to have. Obviously there are some people who are talking their book or otherwise cynically motivated, but anyone claiming typical bad faith is either assuming this on principle or saying it in bad faith themselves. The idea that most people talking about existential risk are doing it to prevent start-ups from forming is absurd.
    6. Similarly, jumping ahead a bit, the claim by Jezos at (2:11:40) that the current oligopoly players ‘are the ones writing the laws’ is not actually true, although they are attempting to weaken some of the proposed safety laws.
    7. Most claims that a regulatory proposal would benefit major players and allow regulatory capture are, as far as I can tell, based on a combination of:
      1. Claim that major players always benefit from any regulation.
      2. Claim that Big Tech is bad and all-conspiring and that anything anyone ever tries to do must inevitably benefit Big Tech no matter what.
      3. Claim that, because major players sometimes favor it they must benefit. Note that often they do not in fact favor it, and instead are engaging in ‘regulatory capture’ via sabotaging the parts that would apply to them.
      4. Claim that small players could not afford to abide by the rules, they would be too expensive. Which to me mostly translates to ‘we would like to be irresponsible, that is how we intended to compete with relatively responsible big players, and if we are held to the same standards then we cannot compete.’
      5. Claim that small players could not afford to abide by the rules, they would find this impossible, the rules will outlaw what they want to do. Which to me again translates to the same thing. You wanted to not be responsible for the safety or consequences of your actions, you cannot abide an ordinary set of regulatory rules with your business model and technical plans, and you now are coming crying that you want it to be one way.
      6. Claim that EA is known to be bad, so anything they advocate must be bad, therefore it must be regulatory capture.
      7. The unfortunate fact that some of those in EA decided that it would be a good idea to buy some influence at OpenAI and to fund Anthropic, creating the appearance of a conflict of interest, and potentially a real one.
    8. I do want to reiterate that (1) is a real thing, that the existence of regulation at all will tend to trend towards capture over time, it is a part of how our system works and should be taken into account. That is one reason to keep the rules as simple as possible, for example. But the actual claims, I believe, are highly overblown, and in many cases in highly bad faith.
    9. As Connor notes (2:05:15) it is very much not a coincidence that e/acc opinions are overwhelmingly concentrated at corporations for whom the e/acc position constitutes talking their book or a way to seek deal flow, and a method of self-justification. Of course there is also selection here, where those who believe in boldly going ahead and building in various senses will both try to build and embrace e/acc.
    10. But it is important to notice that there is this particular area, where some very loud (and obnoxious) people are highly concentrated and perhaps even a majority, versus everywhere else, where they are almost non-existent and theirs is the most unpopular cause anyone polls, with a net favorability of -51. Do not confuse Twitter for reality.
    11. It is also important to note that of the two firms advocating somewhat for action on safety, OpenAI was founded explicitly because of concerns about AI existential risk, and Anthropic was founded by employees who thought OpenAI was itself unsafe and left because of this to do something pro-safety. So it is not in any way suspicious that they might be concerned now about existential risk.
  65. (2:05:40) Jezos brings up the battle over OpenAI, calling it a ‘decapitation attempt’ and that the company almost imploded.
    1. I once again reiterate that the incident was not about safety, Altman both started it and chose to risk the company imploding to fight back. See OpenAI: The Battle of the Board, and if you need more, OpenAI: Leaks Confirm the Story. This simply was not what e/acc people want it to be.
    2. I am very tired of the continuous attempts to write a false narrative here.
    3. There is also odd thinking regarding Sam Altman, who is claimed both to be conspiring to use safety as a fig leaf for regulatory capture by advocating too strongly for safety without making much in the way of specific proposals, and also to have supposedly been removed because of a fight over safety, so that his return became this huge claimed victory and he became a hero to e/acc folks.
    4. Connor asks about whether it would have been better, if OpenAI had the flaw it was not aligned to shareholders and growth, for it to be an inefficient institution that died. Jezos says no.
  66. (02:07:00) Love for real competition, not playing artificially nice. But (2:08:30) Connor points out you can’t do this naively, you have to have regulations and tools that account for market failures. Jezos responds the issues usually come from regulatory capture and preventing disruption.
    1. Again regulatory capture and prevention of disruption are historically huge issues, and currently huge issues in most of the economy. But that is not why you have the central market failure problems.
    2. It is more that, if you don’t use regulation beyond the free market at all you get various market failures (and at the extremes, you get the corporations starting to use force and acting like governments and the worst of both worlds), and the cost of that gets unbounded.
    3. So you need some rules to deal with that, in the modern world there are not simple ways to implement that in many cases, so you have no choice but to have some amount of regulatory capture, you try to minimize it, do cost/benefit and talk price.
    4. This in turn causes many of our current problems right now, and could easily pose other massive problems in the future in AI and elsewhere, no doubt.
  67. (02:09:15) Connor brings up monopoly as one of the cases of market failure, such as AT&T, where government intervention makes sense, that pure free markets do not in practice maximize effective competition. Jezos mentions TSMC and Nvidia today, and says that when incumbents distort the legal system to deepen their moat ‘that is when the problems arise and is what we are trying to avoid.’
    1. Again, some of the problems are that. Others, are other things.
    2. Jezos bites the bullet and endorses anti-monopoly legislation. He says it wasn’t time to break up OpenAI because 90% of the ecosystem was depending on them. Which is an odd time to not be concerned about a monopoly (not that I want to break up OpenAI, I don’t).
  68. (2:11:15) Connor expands this to ask if regulations expanding competition would be e/acc compatible. Jezos says yes. So it is indeed about the impact, regulations that improve acceleration are welcome.
  69. (2:13:45) Connor asks, if a regulation results in less competition but better products, is that good or bad? Jezos says he is very skeptical that is possible, that it could ever outcompete the alternative, but says he would do what was positive for growth.
    1. Connor says he’s not claiming this is true in any case, but I would note we do have historical precedents for this, do we not? Purely as stated, many classic regulatory regimes qualify. Some of them ultimately went too far, some even to the point of being net negative over time. But yes, sometimes you get meat inspectors, and this ‘decreases competition’ but you can imagine being very happy you did that.
    2. That doesn’t mean you can get a good outcome in a given case, but it is not an outlandish thing to suggest might occur.
  70. (02:15:00) Jezos says we need freedom of information, and we need AI for everyone so it can prevent us from being cognitively hacked.
    1. It always seems like those with such warnings are imagining a very strange failure mode, where people do not have the ability to use an AI that is not being weaponized against them, or effectively do not have an AI at all. Or they fully make the leap to a dystopian tyranny with a monopoly on information that is continuously hacking and oppressing everyone forever.
    2. There seems to be lots of room for a compromise that prevents this?
    3. Even in the fully free case, the vast majority of citizens are going to get their AIs by buying them from a large corporation or government (or one that offers them for free) and will have no ability or desire to fine-tune or otherwise do the things that people are claimed to need to be able to do, the same as most other past sources of information.
    4. Even in the case where frontier AI is limited to a handful of major players who have various restrictions they impose on use, most of the protections involved would still be available to most consumers, in exactly the form they would have otherwise had them, and if anything they would have fewer threat models to worry about, if only via misuse vectors.
    5. Yes, as Jezos notes (2:15:45) the few players could bias the information in some cases, again this is a very well-known issue, and in the extreme it could get bad, but I definitely feel like this falls under the ‘you have much bigger problems’ umbrella unless you are dealing with a true singleton tyranny.
  71. (2:19:00) Jezos says there won’t be a fully decentralized system, that we are pushing towards that direction, but the optimum is a hierarchical cybernetic control system. That e/acc is directionally correct on the current margin given the current situation and current pressures, and that is what matters. Each man is his own island wouldn’t work.
    1. Again, I do think directionality is too simplistic, but this is a vast improvement, where are these reasonable dudes online?
  72. (2:20:45) Jezos asks Connor again, you say we can do better, but how? What would be this better way? Connor says we need to do better and Jezos and I agree with that, but only concrete proposals can be implemented.
    1. Connor says correctly that we have so much better knowledge of many things that the Enlightenment philosophers lacked access to and we can therefore design better systems, but you still have to actually do that.
    2. And then you have to get people to implement the new system, not easy.
    3. Right now, there are some very clear concrete proposals on the table on AI. For example, here are Jaan Tallinn’s priorities, which I consider highly reasonable as goals, and we can talk price around things like what the compute limit should be over time and how hard that limit should be, and what it should take to be able to show you can safely exceed it at least somewhat. And these are not the only proposals. The key is that at some point you have to deal with the concrete.
    4. This was not true a year ago. Back then I felt there were not good concrete proposals let alone a consensus on those proposals among advocates of safety.
    5. Connor’s ideal interventions would be more draconian and costly than mine.
    6. But as Connor says later (2:22:55) no one has a Utopian perfect system or all the answers.
    7. Ideally we would of course remake our institutions far beyond the question of AI. Here there is less consensus, the problem of implementation seems vastly harder, but we also do have lots of very good ideas for improvements.
    8. If Jezos, Connor and I were sitting down to improve things in other areas (or also in AI I strongly suspect), there are tons of things we could easily agree upon, but of course good luck somehow passing them.
  73. (2:22:00) Jezos frames the market as having an uninformative open prior versus the option of having an informed prior, and that startups and others struggle with what mix of these to choose. Connor says of course we should use an informed prior, we would be stupid not to here, and when we say uninformed prior we never actually mean uninformed.
  74. (2:23:20) Jezos likens the informed prior to being able to beat the stock market.
    1. I strongly agree with Connor that this is a very different type of complex system, that it is highly plausible that we could find improvements to current institutions without being able to beat the market or violate the EMH.
    2. I do happen to think the EMH is false and you can absolutely beat the market, so there is that too. How many of you are highly overweight Nvidia stock? I am going to claim that in general those advocating for AI safety (and also those advocating directly against it most vocally) have been beating the market for a while now as a group. Short term S&P 500 prices are usually mostly efficient as Connor claims, but of course there are obvious exceptions.
    3. They continue to discuss this question in technical terms, Connor (2:26:30) points out that Jezos’s arguments prove too much.
  75. (2:27:00) It once again comes down to Jezos ‘not wanting to hand the keys to the future to today’s institutions.’
    1. Obviously we would all prefer to have or build better institutions instead.
    2. Failing that, well, is the alternative to crypto-style burn the keys?
  76. (2:28:30) Similar to the not-real-but-too-good quote where Gandhi, asked what he thought of Western Civilization, said it would be a good idea, Connor reiterates that good institutional design has not been tried, that the amount of effort put towards this has been minuscule. Jezos replies that he tried joining Google and found that good institutions are downstream of good culture, and so he is doing cultural engineering via e/acc.
    1. Like Connor I have mad respect that he is at least attempting something to fix the problems he sees, even though like Connor I think he is wrong and is making everything massively worse.
    2. And Connor points out that the market for such things is so inefficient that some dude in Quebec (Jezos) could post a set of random stuff as an alt on Twitter and find tons of alpha from their own perspective. Why didn’t anyone else do it?
    3. I am constantly wondering why I am the only person doing various things. So many of the things I write or suggest or do seem like highly natural first things someone would attempt, that any sane civilization would task many people with, and instead it is clear that if I didn’t do it, it wouldn’t be done.
    4. Whether I do a good job with those tasks, of course, is a distinct question. So is the extent to which the world where I did not do those tasks ends up looking counterfactually different, or in which direction.
    5. So I think Connor is very right at (2:31:00) that the fact that Jezos happened to be himself and actually try to make this happen made a huge counterfactual difference, that these things don’t happen the same way without him.
    6. And I also strongly endorse that the world has a huge agency deficit, those with actual agency are extremely difficult to find and have huge oversize impact. You, reader, could become one of those people if you aren’t one yet. There are no adults in the room. We all agree here.
  77. (2:33:00) Discussion about how much uncertainty and confidence one should have in one’s models, and Jezos says you should demand high certainty before doing anything on a regulatory level given they tend to be one way decisions. As he says, ‘it’s all risk reward.’
    1. Definitely seems like things are going around in circles at this point.
  78. (2:38:00) Connor presents once again the thesis that if your plan is to keep rolling the dice on new techs and on letting nature take its course, eventually you lose, existential failure. And he asks, why is now not the time to act? When will be the time to act? Do you have a plan or core model how to do that? Jezos says he plans to play it by ear and things are moving too fast, but as Connor notes things will only move faster in the future. Jezos proposes to act when there is ‘stability in a current trend.’ But the actual development of AI capabilities explicitly does not count as such a trend.
    1. Consider the parallel to the St. Petersburg problem.
    2. Stability is not coming on its own, quite the opposite.
    3. This seems over and over to come down to: Jezos thinks of regulations or giving power to institutions as the way we lose control over the future, rather than the risk of losing control either to future AIs or to the dynamics of competition and selection (and his term, ‘growth’) given the existence of those AIs, or a combination thereof.
  79. (2:45:00) Jezos points out that OpenAI’s plan is to claim that the constant rollout of cutting edge technology is the most ethical and safest way forward, iterated deployment. Which is indeed their claim. Connor says they are wrong, and they’re going to kill us, because lmao this is obvious bullshit from OpenAI. Jezos finds that claim interesting, it’s odd he didn’t know that was Connor’s position.
    1. I’d add that if you cite OpenAI’s plan in saying what you think will keep us safe it seems odd to also accuse them of engaging in regulatory capture when they talk about safety. You can reconcile it but it’s weird and suspicious.
    2. Jezos says I’m launching a rocket, I’m not going to chart out all the optimal values at this stage, I’m going to adjust according to sensors. But that’s a hell of a metaphor. Rocket launches are meticulously planned, there is huge attention to safety and robust safety protocols, they often blow up, they have rather exact flight paths with only minimal ability to do correction in-flight. Jezos says you have ‘a rough idea’ where the rocket is going to go, and… well, wow, I am very surprised no one has clipped this yet.
    3. Jezos says then that with a rocket we have a very strong prior from Newtonian mechanics on where the rocket will go. And yes, we do, but you know what we would not do if we did not have that prior? Launch the rocket.
  80. (2:48:00) After Connor tries the ‘do you think these statements would be comforting to Neanderthals?’ line, they conclude with Jezos taking a bold pro-death stance, saying it is part of the cyclical adaptation process, that constant fading out of the old is important. Connor summarizes the position as ‘letting Jesus take the wheel.’
    1. They have a strong positive note saying they should talk more, and that they understand each other better.
    2. Connor warns Jezos that he thinks Jezos’s followers do not believe what Jezos thinks they believe. And if Jezos thinks they believe what Jezos is advocating for in this talk, then I am confident Connor is right.
    3. Connor’s ‘punchline’ proposal is to create a fixed area of pure raw competition, but impose strict rules on what people can and cannot do. And yes, like all other rules, they need to be ultimately enforced by violence, and someone needs to be watching the watchers. And yes, this will involve taking some hits to our optionality, such as not having the ability to go kill someone who annoys you.
    4. Jezos responds our institutions are so slow that anything we do will be net negative, so until we fix the institutions we should do nothing.
    5. They close with thanks for finding common ground.

Afterwards

I thought this was a good discussion and debate. The participants said so as well.

There are a few key cruxes or questions here that seem fruitful to explore.

  1. What is good in life? What do we value? Big questions!
  2. Can we design better institutions that we might be able to implement? Is there a path to improving the ones we have?
  3. How big is the risk and cost of tyranny? What would cause this risk and cost to increase or decrease by how much?
  4. How do we design rules and regulations that head off the things we want to prevent, while mitigating the risks both of tyranny and of stalling things we want?
  5. What is the right way to bound and shape competition that works for us?
  6. What is the thing we are worried about locking in? Should we worry about locking in regulatory rules, or a ruling regime of some sort, among humans? Or should we worry more about humans losing control over the future entirely, either to AIs or otherwise? And how can we trade off these risks?
  7. What would happen if ASIs were unleashed to compete with each other and with humans, with those most ‘aligned with growth’ being fruitful and multiplying, and those that are not perishing? Would we be fruitful or perish, and how quickly? Are there ways to head off this outcome at various points? Where is the point of no return?
  8. If it is too early to act now, and things are moving too fast now, when will things later be ready, or move slower? What would be the plan? Is the second best time to act right now, and if not, why not? How can we expect to respond to an exponential neither too early nor too late?
  9. How can we deal with the regulatory and other burdens placed upon builders throughout our society, in technology and otherwise, in the many places we all agree that they are doing far more harm on the margin than good?
  10. How can we take this cooperative discourse and make it the norm and rule, rather than the exception?

These and more are excellent questions that came up in various forms, many asked by Jezos to Leahy. You could write many books, have endless dialogues.

Alas, the follow-up seems to have been the re-emergence of Caustic Beff Jezos.

First, he describes Connor Leahy as grandstanding. I would say that there was far more dissection in the first two hours of Jezos’s positions than Leahy’s, but that seemed appropriate to me, as we are mostly clear where Leahy sits, and he did lay out his positions later. One could say that Leahy was asking somewhat gotcha-like questions in places, but I think that there was a clear purpose behind them.

Beff Jezos: I came in for a good faith discussion and was met with a non-debate free-form attempt at grandstanding.

Past the first two hours we see a bit more eye to eye and it becomes more of a discussion. In any case, enjoy, folks!

It’s what the people wanted.

Next up, Yud?

Connor Leahy: I came in for a grilling and dissection of my opponent’s ideas and my own, unfortunately my opponent seems to think otherwise and would have preferred things being nicer.

Oh well. Let people watch and judge for themselves!

gg, thanks for playing!

Which is all fine and good, and the invitation to Eliezer Yudkowsky is welcome, whether or not that actually happens.

Then he went back to his old endless stream of memes and vibing. It was actively painful to click through to verify the situation, which, I mean, is exactly what I was expecting, and no one is forced to follow him. Then… well…

Connor Leahy: Unfortunately, I feel obligated to take back what I said about @BasedBeffJezos, he is much crazier than I thought he was. Guillaume, I hope you get help, because come on man, this ain’t healthy.

What brought that on? He quotes AI Safety Memes having a number of claims that I am not going to check or mention further, but one is both highly on point and very easy to fact check, there is a transcript.

Which is the claim that Connor was calling for violence, and continuing to label his opponents as future terrorists, no matter what everyone constantly says and does?

Beff Jezos: The Doomers are trying to rile up the crazies to do something to me. Notice how he casually mentioned travelling to SF to k*ll me during the debate. Trying to hyperstition violence from his likely deranged following. Despicable.

Beff Jezos: The Doomer cult will eventually resort to violence and it won’t be pretty.

Beff Jezos: Step 1) Call your opponent evil to rile up the crazies. Step 2) Casually allude that *you* specifically can’t come to SF to k*ll me. Step 3) Pretend you didn’t do that. Step 4) Gaslight the other party and deny you said such an irresponsible thing. From the transcript:

All right, sure. Look at the context, ideally watch the clip, judge for yourself. Here is the context, with Leahy talking about society needing to have bounds on competition:

Leahy: But if you if you expand this [competition] to encompass literally everything, you predictably end in disaster. This is what I call civilization. Civilization is not about being nice.

It is about we have some rules, you know, no killing the other guy, you know, no poisoning. Do we respect those rules?

Jezos: How are they enforced?

Leahy: They need to be enforced by violence.

Jezos: Yeah, but then who? Who keeps those people in check, right? It’s always.

Leahy: This is a good question. You know this, but this is a this is a design question. And I’m not saying this is easy, but we’ve done a hell of a lot better than random.

Our current civilization living in the United States or over here in London is a hell of a lot better than living in Somalia or whatever. You know, there is plenty of more restrictions that we have here.

So for me personally, coordination is about taking a hit. It’s about saying I will willingly surrender some of the things that you do. For example, I can’t go over to San Francisco and murder this guy because he annoys me, because I wouldn’t because that’s bad. I surrender this power.

This is the fundamental idea of the social contract, and the idea of state monopoly on violence, and there being bounds on competition and our actions. And it is Leahy not pretending the world works other than in the way that it works, that we sleep soundly in our beds thanks to men with guns tasked with ensuring it is so. We agree to never use violence on or kill other people. Civilization and law and peace, putting bounds on competition, are how we are able to have these discussions and use words rather than bullets.

Was this a tactical error on Leahy’s part, opening up the opportunity for Jezos to make this interpretation? Yes. On reflection there was no need for it, it risks more heat than it brings light, it was a mistake, and Jezos at the time reacted exactly correctly, noting that it was a mistake without raising any actual substantive concern.

Does it reflect any kind of actual threat of violence? No.

Does it represent a call to others to go commit violence? No.

That is obvious, highly overdetermined nonsense. Either it is deliberate nonsense, or a sign of extreme and unhealthy paranoia and disconnection from reality, that was not on display in the debate, but is compatible with other statements attributed to Jezos.

So unfortunately that is where we leave it. We had an appearance by a relatively reasonable person, who made actual arguments and claims that could be explored in more detail, and promised to do exactly that.

Then in public we continued to get something entirely different.

I am happy to continue discussing the questions in the afterwards, or other good questions. And I am glad I did this once. But I see no need to ever do it again.

1

They don’t discuss decision theory, and seem to both implicitly be likely using more of a utilitarian (although explicitly not hedonic utilitarian) and causal decision theory framework than I would use or think is wise, but I will treat this question as well beyond scope here given it did not come up, and handwave with a version of ‘without loss of generality and in a way that allows for virtue ethics or deontology and FDT and so on.’

2

Sounds like fun.

3

He is unclear on whether he also worships the God of Civilization, this is one of the things that kept switching.

4

Anyone want to guess which of these are LLM-generated and which ones I came up with myself, or the same with the C labels? There are at least two of each in both cases.

5

In at least one case ABBJ does briefly clearly admit that free energy and growth are at least not his sole terminal values.

6

I am less confident in the distinction here but I think this is how it goes.

7

Well, actually, he holds back quite a bit throughout, while in another sense never holding back. This is one of those things no wise man actually fully means.

7 comments:

Noting out loud that I'm starting to feel a bit worried about the culture-war-like tribal conflict dynamic between AIS/LW/EA and e/acc circles that I feel is slowly beginning to set in on our end as well, centered on Twitter but also present to an extent on other sites and in real life. The potential sanity damage to our own community and possibly future AI policy from this should it intensify is what concerns me most here.

People have tried to suck the rationalist diaspora into culture-war-like debates before, and I think the diaspora has done a reasonable enough job of surviving intact by not taking the bait much. But on this topic, many of us actually really care about both the content of the debate itself and what people outside the community think of it, and I fear it is making us more vulnerable to the algorithms' attempts to infect us than we have been in the past.

I think us going out of our way to keep standards high in memetic public spaces might possibly help some in keeping our own sanity from deteriorating. If we engage on Twitter, maybe we don't just refrain from lowering the level of debate and using arguments as soldiers but try to have a policy of actively commenting to correct the record when people of any affiliation make locally-invalid arguments against our opposition if we would counterfactually also correct the record were such a locally-invalid argument directed against us or our in-group. I think high status and high Twitter/Youtube-visible community members' behavior might end up having a particularly high impact on the eventual outcome here.

First, I suggest that people pay heed to what happened in the movie "Don't Look Up." I don't remember the character names, but the punk female scientist, when confronted during an interview by unserious journalists, went absolutely bonkers on television and contributed significantly to doom. The lesson I got from this is if you do not present serious existential threats in a cogent, sober manner, the public will polarize based on vibes and priors and then lock in. Only the best, most unflappable, polished spokespeople should be put forward, and even then, it might be no use.

Second, you cannot have a meaningful exchange on Twitter. Twitter encourages the generation of poorly reasoned emotional responses that are then used to undermine better reasoned future arguments. I would recommend people just avoid that platform entirely because the temptation to respond to raconteurs like Jezos is too high.

niplav:

In the spirit of pointing out local invalidities, the first paragraph (while perhaps true) is a display of learning from fictional evidence, which is usually regarded as fallacious.

Therefore I have weak-downvoted your comment. It would be much improved by trying to learn from real-world cases (which might as well support the position).

trevor:

I'm not really fond of Connor's current culture war-esque public persona. Some relatively minor issues with Yud's 2000s personality alone (it's probably a neurotype thing, not unusual in rare extraordinary people as they often failed to conform with other children, and also something he did a great job working on over the years, including the routine strategic jettison of the fedora) resulted in like a dozen people who are way too fond of spending way too much of their time hating on him. The internet doesn't particularly dunk on Bostrom.

If everything goes well, the culture war types will probably look back at Connor's persona and think he was very based. But that requires everything to go well, and I'm doubtful that Connor's current persona will be net positive towards making things go well. Not a good look for AI safety; the Openphil and FHI people aren't consistently friendly and thoughtful because they're EA, it's because it's instrumentally convergent to work on your personality if you're serious about saving the world in a social primate species.

"entities that are most inclined towards growth"

As always, the word for this entity is "cancer" and it has a well-known tendency to kill its host and evolve around interventions to restrain it.

Thamaldahide

Thalidomide, I expect.