Link:
Personal note: I'm somewhere in between safetyism and e/acc in terms of their general ideologies/philosophies. I don't really consider myself a part of either group. My view on AI x-risk is that AI could potentially be an existential threat, but we're nowhere near that point right now, so safety research is valuable, but not urgent. For this reason, in practical terms, I'm somewhat closer to e/acc, because I think there's a lot of value to be found in technological progress, so we should keep developing useful AI.
I'm hoping this debate will contain solid arguments as to why we shouldn't keep developing AI at full speed, ideally ones that I haven't heard before. I will write this post as a series of notes throughout the video.
One hour in
This is insufferable. Connor starts with fairly direct questions; Beff bounces around them for no good reason but eventually reaches a simple answer - yes, it's possible that some technologies should be banned. So far, this seems to be the only concrete thing that's been said?
At some point they start building their respective cases - what if you had a false vacuum device? Would we be fucked? Should we hide it? What should we do? And on Beff's side - what if there are dangerous aliens?
For the love of god, please talk about the actual topic.
About 50 minutes in, Connor goes on the offensive with what, to me, is extremely blatant slippery-slope reasoning. The main point is that if you care about growth, you cannot care about anything else, because of course everyone's views are the extremist parodies of themselves. Embarrassing, tbh. Conveniently, Connor avoids making any concrete statements about his own values, because any such statements could be treated the same way. "You like puppies and friendship? Well, I guess nobody will grow food anymore because they'll be busy cuddling puppies."
He also points out, many many times, that "is" != "ought", which felt like virtue signalling? Throwing around shibboleths? Not quite sure. But not once was it a good argument as far as I can tell. Example exchange (my interpretation; the conversation was chaotic, so hopefully I'm not misunderstanding it):
B: Your values are not growth? How so?
C: Because I like puppies and happiness and friendship [...]
B: Why do you like friendship? Because evolution hard-coded this in humans
C: You're mixing "is" and "ought"
He was not, in fact, mixing "is" and "ought". But stating that he did was a simple way to discredit anything he said using fancy rationalist words.
So far, the discussion is entirely in the abstract, and essentially just covers the personal philosophical views and risk aversion of each participant. Hopefully it gets to the point.
Two hours in
Beff brings up geopolitics. Who cares? But Connor hasn't even coherently expressed his own view on AI risk yet, so I can't blame him.
"Should the blueprints for F16 be open-sourced? Answer the question. Answer the question! Oh I was just trying to probe your intuition, I wasn't making a point"
Immediately followed by "If an AI could design an F16, should it be open-sourced?"
Exchange at about 1:33
C: You heard it, e/acc isn't about maximizing entropy [no shit?!]
B: No, it's about maximizing the free energy
C: So e/acc should want to collapse the false vacuum?
Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.
It's in line with what seems like Connor's debate strategy - make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.
B: <long-ish monologue about building stuff or becoming a luddite living in the woods and you should have the freedom of choice>
C: Libertarians are like house cats, fully dependent on a system they neither fully understand nor appreciate.
Thanks for that virtue signal, very valuable to the conversation.
The end
After about 2 hours and 40 minutes of the "debate", it seems we finally got to the point! Connor formulates his argument for why we should be worried about AI safety. Of course, he doesn't do it directly, but it's close enough.
"I'm not claiming I know on this date, with this thing, this thing will go wrong [...] which will lead to an unrecoverable state. I'm saying, if you keep just randomly rolling the dice, over and over again, with no plan to ever stop rolling or removing the bad faces of the die, somehow, then eventually you roll death. Eventually you roll x-risk."
FINALLY! This is, so far, the only direct argument regarding AI x-risk. Unfortunately, it mostly relies on a strawman - the assumption that the only alternative to doomerism (and Beff's stance) is eternally pushing technology forward, never ever stopping or slowing down, no matter the situation at hand.
That's obviously absurd.
If I were to respond to this myself, I'd say - at some point, depending on how technology progresses, we might very well need to pause, slow down, or stop entirely. As we move into the future, we will constantly reevaluate the situation and act accordingly. If, for example, next year an AI trained and instructed to collect diamonds in Minecraft instead hacks the computer it's running on using some weird bit manipulation or cosmic rays, then yes, we'd probably need to slow down and figure that out. But that's not the reality we live in right now.
This sentiment seems to be shared by Beff.
C: If you don't do policy, if you don't improve institutions [...] [we'll be doomed, presumably]
B: No, we should do all that, I just think right now it's far too early [...]
To which Connor has another one of the worst debate arguments ever:
"So when is the right time? When do we know?"
Beff only really said "I don't think it's right now", which is pretty much the same thing I'd say. I don't know when the right time to stop AI development is. I don't know when the right time to stop overpopulation on Mars is, or when to build shelters against microscopic black holes bombarding Earth from orbit. If any of these problems - overpopulation on Mars, microscopic black hole bombardment, or dangerously powerful AI - becomes foreseeable, in the short or long term, I will fully support using the understanding of the problem we have at that time to tackle it.
In response, Connor resorts to yelling that "You don't have a plan!"
This is the point where we should have moved on to narrowing down why we need a plan for overpopulation on Mars right now. Perhaps we do. But instead, the discussion moved on to rocket flight paths, Neanderthals, and more platitudes.
A whole 5-10 minutes of actual on-topic discussion, which then devolved into pointless yelling. Meh.
Final thoughts
This was largely a display of tribal posturing by two people talking past each other. We need debates about this, but this wasn't it. I suspect that Beff wanted to approach this in good faith but didn't have a plan for the debate, so he was just struggling to navigate the discussion. Connor just wanted an easy win and a character assassination of Beff - calling him evil, showing that he's a hypocrite. All the fun stuff that wins debates but doesn't get anyone closer to the truth.
Poor performance from both of them, but Connor's behavior in particular is seriously embarrassing to the AI safety movement.
Personal takeaway
I don't think this moved my opinion on AI safety and x-risk either way. It would be a bit silly, since the discussion mostly did not concern AI safety. But it certainly made me more skeptical of people who consider Connor to be some sort of authority on the topic.
For what it's worth, I think you're approaching this in good faith, which I appreciate. But I also think you're approaching the whole thing from a very, uh, lesswrong.com-y perspective, quietly making assumptions and using concepts that are common here, but not anywhere else.
I won't reply to every individual point, because there's lots of them, so I'm choosing the (subjectively) most important ones.
No it's not, and obviously so. The actual topic is AI safety. It's not false vacuum, it's not a black marble, or a marble of any color for that matter.
Connor wasn't talking about the topic, he was building up to the topic using an analogy, a more abstract model of the situation. Which might be fair enough, except you can't just assert this model. I'm sure saying that AI is a black marble will be accepted as true around here, but it would obviously get pushback in that debate, so you shouldn't sneak it past quietly.
As I'm pretty sure I said in the post, you can apply this reasoning to pretty much any expression of values or goals. Let's say your goal is stopping AI progress. If you're consistent, that means you'd want humanity to go extinct, because then AI would stop. This is the exact argument that Connor was using, it's so transparent and I'm disappointed that you don't see it.
Great! So state and defend and argue for this position, in this specific case of an unaligned superintelligence! Because the way he did it in the debate was just by extrapolating whatever views Beff expressed, without care for what they actually are, and showing that when you push them to the extreme, they fall apart. Because obviously they do, because of Goodhart's Law. But you can't dismiss a specific philosophy via a rhetorical device that can dismiss any philosophy.
Again, I extremely strongly disagree, but I suspect that's a mannerism common in rationalist circles, using additional layers of abstraction and pretending they don't exist. Black marble isn't the point of the debate. AI safety is. You could put forward the claim that "AI = black marble". I would lean towards disagreeing, I suspect Beff would strongly disagree, and then there could be a debate about this proposition.
Instead, Connor implicitly assumed the conclusion, and then proceeded to argue the obvious next point that "if we assume that the AI/black marble will kill us all, then we should not build it". Duh. The point of contention isn't whether we should destroy the world. The point of contention is whether AI will destroy the world.
He's not making a point. He's again assuming the conclusion. You happen to agree with the conclusion, so you don't have a problem with it.
The conclusion he's assuming is: "Due to the nature of AI, it will progress so quickly going forward that already at this point we need to slow down or stop, because we won't have time to do that later."
My contention with this would be "No, I think AI capabilities will keep growing gradually, and we'll have plenty of time to stop when that becomes necessary."
This is the part that would have to be discussed. Not assumed.
Believe it or not, I actually agree. Sort of. I think it's not good as an argument, because (for me) it's not meant to be an argument. It's meant to be an analogy. I think we shouldn't worry about overpopulation on Mars because the world we live in will be so vastly different when that becomes an immediate concern. Similarly, I think we shouldn't (overly) worry about superintelligent AGI killing us, because the state of AI technology will be so vastly different when that becomes an immediate concern.
And of course, whether or not the two situations are comparable would be up for debate. I just used this to state my own position, without going the full length to justify it.
I kinda agree here? But the problem is on both sides. Beff was awfully resistant to even innocuous rhetorical devices, which I'd understand if that started late in the debate, but... it took him like idk 10 minutes to even respond to the initial technology ban question.
At the same time Connor was awfully bad at leading the conversation in that direction. Let's just say he took the scenic route with a debate partner who made it even more scenic.
Great question. Ideally, the debate would go something like this.
B: So my view is that we should accelerate blahblah free energy blah AI blah [note: I'm not actually that familiar with the philosophical context, thermodynamic gods and whatever else; it's probably mostly bullshit and imo irrelevant]
C: Yea, so my position is if we build AI without blah and before blah, then we will all die.
B: But the risk of dying is low because of X and Y reasons.
C: It's actually high because of Z, I don't think X is valid because W.
And keep trying to understand at what point exactly they disagree. Clearly they both want humanity/life/something to proliferate in some capacity, so even establishing that common ground in the beginning would be valuable. They did sorta reach it towards the end, but at that point the whole debate was played out.
Overall, I'm highly disappointed that people seem to agree with you. My problem isn't even whether Connor is right, it's how he argued for his positions. Obviously people around here will mostly agree with him. This doesn't mean that his atrocious performance in the debate will convince anyone else that AI safety is important. It's just preaching to the choir.