Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, joined me on Doom Debates to debate Bayesian vs. Popperian epistemology.
I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.
We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes. The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.
Timestamps
00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What’s Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper’s Definition of “Content”
01:31:22 Heliocentric Theory Example
01:31:34 “Hard to Vary” Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
02:45:54 Closing Thoughts
AI-Generated Transcript
Liron Shapira: Welcome to Doom Debates. Today I'm speaking with Vaden Masrani and Ben Chugg. They're the hosts of their own podcast called The Increments Podcast, which has a lot of overlap in terms of talking about epistemology and occasionally AI topics. In this debate, over the next couple hours, you're going to hear Vaden and Ben challenging me about my Bayesian epistemology, and I'm going to challenge them about David Deutsch's AI claims and, uh, Karl Popper's epistemology. And we're going to talk about the risk of extinction from superintelligent AI. So, guys, welcome, and please introduce yourselves one at a time and tell us a little bit about your background.
Ben Chugg: Awesome. Yeah. Thanks for having us, excited to be here. Uh, I'm not sure exactly how much you want me to say, but yeah, my background is mostly academic, uh, studying math and computer science. Currently I'm doing a PhD in statistics and machine learning. I had a brief stint at a law school pretending I knew something about law, but yeah, mostly my background's in math.
Vaden Masrani: Yeah, um, stoked to be here. Um, yeah, so my PhD was in, uh, machine learning. I was working in a Bayesian, um, machine learning lab that, uh, on the website is all about building superintelligence. So, um, I kind of, in that, uh, space, started reading a lot about Popper and Deutsch, and, um, have a lot of positive things to say about Bayesianism with regards to statistics and engineering.
I think it's amazing. And a lot of negative things to say about Bayesianism with regards to epistemology and beliefs. Um, and so I kind of like to walk that difficult tightrope and defend it to some people and then attack it to other people. Um, and then on the podcast, Ben and I have been doing that for about four years. We're old buddies who grew up in Calgary and started the podcast as COVID began, just as a means of continuing to argue and learn and talk to one another.
And, um, we explore a multitude of different topics. Uh, yeah, Popper and Deutsch come up a lot, but also things like recycling and things like, um, uh, the patriarchy and things like AGI superintelligence and everything in between. So we try not to limit ourselves to just a few topics. But, um, because Ben was coming from an EA background and I was coming more from a Popper background, that tends to be, um, kind of the locus of stuff that we talk about, but the future is open and we have no idea what the podcast is going to be in a couple of years.
Liron Shapira: Everyone check out the Increments podcast. It's a ton of interesting content. I'm enjoying it. So to set the stage, we're going to start by talking about epistemology. And as viewers probably know, my own background is I'm a software engineer. I'm a small tech startup founder. I'm a lifelong student of computer science, theory of computation, that kind of thing. Uh, and I'm an AI doomer since reading Eliezer Yudkowsky in 2007. Pretty much got convinced back then, and can't say I've changed my mind much, seeing how things are evolving. So that's my background, and we're gonna kick off, uh, talking about epistemology, and just to get the lay of the land of you guys' position, I guess I would summarize it as kind of what you said, where you're not really fans of Bayesian epistemology, you don't think it's very useful. Your epistemology is more like Karl Popper style, more or less. And you just think the AI doom argument is like super weak and you're not that worried. Is that a good summary?
Vaden Masrani: With the exception of, um... there's a lot of things about AI to be worried about. Autonomous weapons, uh, face recognition technology, um, that kind of, uh, stuff I am worried about. And I think it's a huge problem. Um, and like other forms of technology, uh, it absolutely needs to be worked on.
And if we don't talk about it, it's going to become very problematic. So I'm not naive that there are certain, um, huge difficulties that we have to overcome. The stuff that I'm not worried about is superintelligence, paper clips, Bostrom, simulation, Roko's Basilisk, all that stuff. That, to me, is all just, um, science fiction nonsense, basically.
However, the caveat is, um, I haven't read all of Yudkowsky, and at some point in the conversation, I'd love for you to just take me through the argument as if I hadn't heard it, because it could be that we're operating with asymmetric information here, and so I'm completely open to having my mind changed, and, uh, we don't have to do it now, but at some point I'd love to hear just, like, from step one, step two, step three, what the full argument is, because I could have just missed some stuff that Yudkowsky has written that would change my mind.
So that's the caveat.
Liron Shapira: Okay.
this is a question I like to ask every guest.
Here it comes!
Robot Singers: P(Doom), P(Doom), what's your P(Doom)? What's your P(Doom)? What's your P(Doom)?
Liron Shapira: Ben, what is your P(Doom)?
Ben Chugg: I'm almost, I'd almost be unwilling to even give you a number because whatever number I gave you would just vary so wildly from day to day and would just be based on some total gut hunch that, um, I'd be unwilling to defend or bet on. And so I think it's much more fruitful in these conversations to basically just talk about the object level disagreements.
and not try and pretend knowledge that we have about the future and come up with random guesses and put numbers on those guesses and then do calculations with those numbers as if those numbers had some sort of actual epistemological relevance. So, I'm sorry to break the game here, but, uh, yeah, it would be silly of me to even say,
I think.
Liron Shapira: Vaden, wanna add to that?
Vaden Masrani: Um, I completely agree with everything Ben said. Yeah, I have a deontological principle to not put numbers on my beliefs. However, if by P(Doom) you simply mean just like, what do I believe? Um, then I would categorize it in the same place as, um, the Rapture or the Mayan Apocalypse or Roko's Basilisk.
That's my conclusion.
Liron Shapira: What if we zoom out and we're not just talking about AI, right? So like nuclear war, pandemics, even asteroid impacts, although those seem unlikely in a hundred years. But just, yeah, looking at everything together, just the probability that humanity goes extinct or gets reduced to cavemen in the next hundred years, any ballpark estimate for that?
Vaden Masrani: Meaningless question. Um, I won't give you a number. I don't think that we can know the probability. Uh, if you want to know my beliefs about stuff, that's a different question. So I can tell you how much I believe, but I won't give you a number. No.
Liron Shapira: Would you tell me if you think that it's more than one in a million chance?
Vaden Masrani: Numbers are meaningless.
Ben Chugg: Also, I mean,
Vaden Masrani: I, I can, I can, yeah, I can compare it to other stuff though. So that's, I, I will give you a comparative thing. So the reason why people ask for numbers is because you want to compare. Um, so I can give you things to compare it against. And one thing I would compare it against is Roko's Basilisk.
Solid.
Liron Shapira: obscure topic, right? So it's, and the question is pretty straightforward of humanity's going extinct, so maybe we can compare it to, like, an asteroid impact, right? So, compare the chance of humans going extinct to the chance of a large asteroid the size of the dinosaur one coming in the next century.
Ben Chugg: So, yeah, I think, I think it's much better to just take these one topic at a time, right? So when we talk about asteroid impacts, this is a very different class of event than something like AI or nuclear war. In particular, we have models of asteroids, right? We have both counts and we have physical explanations of how often, uh, asteroids enter our orbit, and, you know, uh, we have a sense of our deflective capabilities with respect to asteroids.
So there's lots of, like, there's lots of knowledge we actually have about the trajectories of asteroids. And, uh, and then we can use some statistics to start putting numbers on those risks. That's completely unlike the situation of geopolitics, for instance. We have no good statistical models to model the future of
Liron Shapira: yeah, no, I hear ya. Well, I'll just explain where I'm coming from with this kind of question. So, as I walk through life, I feel like I'm walking on a bridge, and the bridge is rickety. Like, it's very easy for me to imagine that in the next hundred years, like, the show is gonna end, right? It's gonna be game over for humanity.
And to me, that feels like a very salient possibility. Let's call it beyond 5 percent probability. That's how I would normally talk about that. And then, so the reason I'm asking you guys is, you know, we don't even have to get that deep into the epistemology. I'm really just asking you, like, hey, In your mind is the bridge like really really solid or is it rickety?
Ben Chugg: So, uh, I, uh, yeah, I would argue, um, Vaden and I might disagree about certain object level things, right? There's very, there's geopolitical risks that I'm certainly worried about and, you know, I think nuclear war, uh, is a possibility in the next hundred years and I'm worried about nuclear deterrence and I'm worried about, uh, the U.
S. getting involved in certain geopolitical conflicts that increase the likelihood of nuclear war. So all of that we can talk about. When you say the words, put, you know, what's the probability of this? You're already bundling in a lot of assumptions that we're operating in some well defined probability space here.
Probability is this technical tool that we use, that, you know, mathematicians use sometimes to solve certain problems. It has a domain of applicability. Uh, machine learning is one of those domains, right? We use statistics a lot to reason about algorithmic performance and reason about how to design algorithms to accomplish certain goals.
Uh, when you start talking about the probability of nuclear war, we're totally outside the realm of probability as a useful tool here. And this is You know, now we're sort of getting to the heart of the matter about the critique of Bayesian epistemology. It, it views, you know, it has this lens on the world where everything can be boiled down to a number, and those numbers can be compared, uh, with one another in coherent ways.
And those are premises that I and Vaden reject.
Vaden Masrani: Wholeheartedly
Liron Shapira: guys you guys are being pretty quick to throw my question back at me But
I feel like I'm asking about something that you can probably interpret meaningfully for
instance just to help you perhaps answer more easily I mean Vaden did answer saying that he feels like the next hundred years are solid in terms of Probability of human extinction or in terms of
fear.
Let's say subjective fear of human extinction,
right? It's
Vaden Masrani: say probability,
but yeah.
Liron Shapira: Okay, so it's solid in some sense that maybe you could describe as your subjective sense, right? When you say solid, it's the sense that you
Vaden Masrani: But
to be clear, subjective sense is different than probability,
Liron Shapira: Yeah. Okay. Fair. So, I can make the question, um, perhaps even, uh, more meaningful by saying like, hey, imagine this is the peak of the Cold War crisis, peak of the Cuban Missile Crisis. And people are saying like, man, this blockade, the U. S. is going to do the blockade around Cuba. The Soviets have threatened to respond. They might even use their missiles before they lose them. So imagine it's like that evening, right, where a lot of people are like, I sure hope they don't launch those missiles. During that evening, if I ask you, Hey, does the next century of humanity's future seem to you solid or rickety, would you still give that same answer, solid?
Vaden Masrani: Not in that circumstance, no.
Liron Shapira: Okay, and so from our vantage point today, could you imagine that sometime in the next few decades, like what's going, happening right now with Ukraine and Russia, or Israel and Iran, could you perceive yourself entering another type of evening like that, when you're like, oh, maybe I don't feel solid anymore.
Vaden Masrani: I can imagine all sorts of things, for sure.
Liron Shapira: So generally when we're imagining the future and we're thinking about the past and we're just, we're not sure what's going to happen, that's generally a time when a lot of people would be like, well, there seems to be some significant probability that things are going to go really bad.
Vaden Masrani: A lot of people would, but we don't. Totally.
That's what we're
Liron Shapira: you would rather dismiss those people and just be like, nope, my answer is solid.
Vaden Masrani: No, you're misunderstanding the claims that we're making. Um, I don't dismiss those people. I ask for their reasons for the number, because the number itself is next to meaningless. It's not entirely meaningless. But it's close to it. Um, if you want to know my subjective belief about something, I will absolutely tell you.
If you want to know how strongly I believe that, I'll tell you that too. What I won't do is put numbers on it, because putting numbers on it allows for fallacious comparisons between things that should not be compared. talk about subjective
Liron Shapira: now I'm just
Vaden Masrani: and your answer. yeah, yeah, but I won't put numbers on it.
If you want
Liron Shapira: when you.
Vaden Masrani: if you want me to put numbers on it, then we're going to stalemate here. But if you want to have something else, then we can go. Yeah.
Liron Shapira: Right now I'm just pushing on your answer when you said solid. Do you want to perhaps revise solid, or do you still want to go with
Vaden Masrani: No, uh, no, I'm an optimist. Um, yeah, I'm an optimist about the future. I think that there's definitely things to be worried about, but there's also many things to be excited about. Um, technology is awesome. Um, we can talk about Ukraine and Israel and Iran, and those are things to be worried about. We can also talk about, um, the mitigation of poverty.
We can talk about, uh, getting to Mars. We can talk about the amazing things that, um, diffusion models are making and how that is going
Liron Shapira: yeah.
but none of those things are directly irrelevant to my question of the risk of something like the Cuban Missile Crisis really coming
to a
Vaden Masrani: that wasn't your question. That wasn't the question, if I recall it. The question was about, do we think we're standing on a solid bridge or a rickety bridge? And then you use the Cuban Missile Crisis as an example, right?
Liron Shapira: So, if a lot of good things are happening on the bridge, right, like there's a candyland happening on the bridge, but the bridge still might collapse, that's really more of what I'm asking about, is the risk of collapse.
Vaden Masrani: Yeah, I don't think we're going to collapse. No.
Liron Shapira: Okay. All right. Um, so yeah, you guys mentioned a lot of different topics that I definitely want to drill down into. Um, I think, yeah, a good starting point is like to zoom all the way out and like, let's talk about epistemology, right? Epistemology is like the study of how people are allowed to know things.
Is that basically right?
Ben Chugg: Sure, yeah. The study of, you know, how we know what we know, I think, is usually how it's phrased.
Vaden Masrani: Yeah, yeah. Knowledge about knowledge. Knowledge about knowledge.
Liron Shapira: Why is epistemology important? And what are the stakes for getting epistemology right? Ben.
Ben Chugg: So, I mean, epistemology is at the center, perhaps, of this mystery of how humans have come to do so much in so little time, right? So for most of even human history, let alone world history or let alone universal history, not much was going on and humans weren't making tons of progress. And then all of a sudden, a couple hundred years ago, we started making huge leaps and bounds of progress.
Um, and that is a question of epistemology. Right. So now we're asking questions, how are we making so much progress? Why do we know that we're making progress? Can we actually say that we're making progress? We seem to understand the world around us much, much better. We're coming up with theories about how the world works.
Everything from, you know, cellular biology to astronomy. Um, and how is this mystery unfolding? And epistemology is a key question at the center of that, right? To be able to say, um, how and why we're making progress. And also to start analyzing, uh, the differences between how people go about making progress and how that differs maybe across cultures.
Are there better and worse ways to generate progress? Are there ideas that stultify making progress? Uh, you know, these are all important questions when it comes to future progress and, you know, just human, human welfare in general.
Vaden Masrani: Yeah, one thing I would maybe add to that is, I like to sign off on everything Ben said. I'd also say epistemology is like the grand unifier. So if you like science, and you like, um, literature, and you like journalism, and you like art, and you like thinking about the future, epistemology is the thing that underlies all of that, which is why our podcast just keeps branching out into new subjects every other episode, because epistemology is the center of the Venn diagram.
So for that reason, and for Ben's reason, yeah, I like it. Mm
Liron Shapira: a major breakthrough in popular epistemology, right? This idea of like, hey, if you want to know what's true, instead of just like arguing about it and getting stoned and deferring to whoever is higher status, why don't we go outside and conduct an experiment and let reality tell us what's right, right?
Vaden Masrani: Exactly.
Liron Shapira: Yeah, so epistemology is powerful in that sense. And then also, as we stand today, I think we argue over epistemology as it relates to how we're going to predict the future, right? I mean, you saw it a few minutes ago, and I'm like, hey, is the next century, are we all going to die? And it sounds like we're kind of on the same page that we can't even agree on whether or not we're all likely to die because of a conflict that's going to trace to our epistemologies.
Right?
Vaden Masrani: Mm hmm. 100%. Exactly. Yeah.
Liron Shapira: okay, great. So I just wanted to set that up because I think a lot of viewers of the show aren't epistemology nerds like we are, but now we've raised the stakes, right? So the rest of the conversation is going to be more interesting. Okay, so my first question about your epistemology is, would you describe yourself as Popperians, right, in the style of Karl Popper?
Vaden Masrani: Um, I only say reluctantly because I don't like labels and I don't like a lot of obnoxious frickin Popperians on Twitter who also identify as Popperians, and every time you label yourself, now you are associating. Now other people with that label, their bad behavior or their annoying tendencies map onto you.
So that's why I don't like the label, but I have to just say, yes, I'm a Popperian through and through. He's the one who's influenced me the most. Um, and every other utterance of mine either has his name cited directly or is just plagiarizing from him. So yeah, I'm a Popperian, definitely.
Liron Shapira: think you said on your podcast you spent like hundreds of hours studying all of Popper. Is that your background?
Vaden Masrani: Yeah. Um, that was what I was doing while I was in a Bayesian machine learning, uh, research group. Yeah. Um, so it was, uh, Bayesian in the day and Popperian at night. And uh, that was, uh, exactly. Yeah. Yeah.
Liron Shapira: Okay. Ben, how about you?
Ben Chugg: Um, probably more reluctantly than Vaden, if only because I don't know Popper's stuff as well. So my knowledge of Popper, you know, I've read some of Popper's works in, in great detail, uh, and argued with Vaden almost endlessly about much, much of Popper's views. So, you know, it'd be cheap to say that I don't understand Popper. Uh, but you know, I haven't read all of his work, and I've become extremely allergic to labeling myself with any particular view, but yeah, if you press me at the end of the day, I would say that I think Popper and his critical rationalism makes the most sense of any sort of epistemology that I've come across previously.
So I'd have to adopt that label.
Liron Shapira: Okay.
Vaden Masrani: And, and you came from an EA background. I think that's important for the listeners to, to know.
It's not as if you were totally neutral. And they should listen, yeah, they should listen to our first 10 episodes because that's where the battle began. And so you were familiar with the EA stuff. And it was a long, slow battle, which this two, three hour conversation is not going to resolve at all.
Um, but hopefully the conversation will spark some sort of interest in the viewers, and those who want to explore this more can listen to our 70-plus episodes where we gradually explore all of this stuff. So no minds are going to be changed in this particular debate, which is why I don't like debates too, too much, but if it kindles some sort of interest and people actually do want to explore this slowly, then there's a lot of stuff to discover.
Um, so
Liron Shapira: Great. Okay. And, uh, as people may know, I'm coming at this from the Bayesian side, uh, people who read Less Wrong and Eliezer Yudkowsky. That whole framework of rationality and AI Doom argument, it does tend to come at it from Bayesian epistemology, and it explains why Bayesian epistemology is so useful from our perspective.
And in this conversation, I'll put forth some arguments why it's useful, and you guys will disagree, right? So that's kind of where we're going with this, is kind of a Popper versus Bayes epistemology debate. Is that fair, Vaden?
Vaden Masrani: let's do it.
Liron Shapira: And then before we jump in further, when I think about Popper today, I feel like David Deutsch has really popularized it in the discourse.
So I feel like most people, myself included, haven't read almost any Popper directly, but we have read or seen indirectly a good amount of David Deutsch. And when David Deutsch was on your podcast, he was a great speaker. I think he said he's not an official spokesman, right? He's not a Popper surrogate.
He's just somebody who's read a lot of Popper and been highly influenced by Popper, but he doesn't claim to speak exactly like Popper would speak. But from your perspective, isn't David Deutsch very closely aligned with Popper?
Vaden Masrani: Uh, yes, if you don't know Popper's work very well, if you do know Popper's work very well, then you start to see major distinctions and differences between the two. Um, so, from an outsider perspective, I think understanding Deutsch's work is a great entry point. It's more approachable than Popper, for sure.
But, um, but there's no substitute. Reading Deutsch is not like... So actually, let me take one step back. Um, for about five years, I read The Beginning of Infinity and The Fabric of Reality, and I just thought to myself, ah, you know what? I basically get Conjectures and Refutations, I get the point. Wrong. You do not get the point.
You have to read Conjectures and Refutations. There is so much more in that book than, um, you have learned in The Beginning of Infinity, and it is not like a surrogate at all. You have to read, uh, Conjectures and Refutations at least, um,
to start to have the picture, uh, filled in. Well,
Liron Shapira: about Bayes that maybe there's some deep stuff that you guys don't get yet, right? So maybe we'll bring out some of the deep stuff in this
conversation.
Vaden Masrani: so, just to add, so, um, in Logic of Scientific Discovery and Realism and the Aim of Science, about three quarters of both those books is discussing probability and Bayes. So, it's math and it's equations and, um, everything that I know about Bayes comes from Popper, and that's not in the book. So if you want to really understand Bayes and probability, then you have to read Popper.
Um, it's not enough to read Yudkowsky because Yudkowsky is coming from the Jaynes line. Um, so E. T. Jaynes is the famous Bayesian, and so Jaynes is, uh, Yudkowsky's Popper. But, um, Jaynes just gives one glimpse into how probability works. Um, and so if you actually want to understand it at the root, you can't just read, um, Yudkowsky or, uh, Jaynes.
You have to go down to Popper and then branch out from there. Um, so just add that.
Liron Shapira: And just to tie David Deutsch into this argument a little more directly, I heard when he was on your podcast, you were talking about how you're not a fan of Bayes these days, and you spend a lot of the time on your podcast telling people why Bayes is wrong, or the arguments are weaker than they look, and David Deutsch was really nodding along.
I think he gave you like an attaboy. So he basically supports your mission of being kind of anti Bayes, right?
Vaden Masrani: Our mission was because of like one page in The Beginning of Infinity, and that got my little cogs turning, and then being in a Bayesian machine learning reading group, or research lab, coupled with reading Popper, is what made the whole argument start to become very interesting to me.
But
Liron Shapira: Okay. So our debate today, it's a little bit of a proxy two on two, where you've got this team of Karl Popper and now David Deutsch, who's actually still alive and well. And then on my side, we've got, uh, you know, the Reverend Thomas Bayes, or, you know, the group who actually invented, uh, Bayesian reasoning. Um, and, and Eliezer Yudkowsky, right, who's been highly influential to a lot of people like me, teaching us most of what we know about Bayes. So yeah, so Eliezer, uh, as a successor to Bayes, versus David Deutsch as a successor to Popper, all battled through us random podcasters. Sound good?
Ben Chugg: With the caveat, yeah, there's always a bit of trepidation, I think, at least on my part, and I'm sure on Vaden's part as well, to speak for anyone in particular. I mean, David Deutsch has his own lines of thought and, you know, I, I, I would be very hesitant to label myself, uh, a Deutsch or Popper expert.
And so, you know, I always prefer it if we just keep the debates at the object level. Um, of course, in the background, there's going to be these Bayesian versus Deutschian, Popperian dynamics. And, you know, that's inevitable given how we've all been influenced, but just to put it out there, I wouldn't be comfortable saying that my views, uh, comport precisely to someone else's views.
Vaden Masrani: Yeah, just to, uh, clarify for the, uh, listeners, um, the Reverend Thomas Bayes is not equivalent to Bayesianism, and the guy, Thomas Bayes, is legit and he's fine and that's just like where Bayes' theorem came from or whatever, but, uh, Bayesians I think of as E. T. Jaynes and I. J. Good and Eliezer Yudkowsky.
And so these are the people who, um, I would put on the opposite side of the ledger.
Liron Shapira: Great, and also, the other correction I would make is that, uh, I think Pierre-Simon Laplace is actually the one who publicized, uh, Bayesian methods, and he kind of named it after Bayes. So yeah, you know, and this isn't really a history lesson, I don't really know what happened, but it just is what it is, in terms of
Vaden Masrani: That's great. Yeah, great.
Liron Shapira: Okay. Alright, so, to kick this part off, uh, Ben, how about just give us really briefly, um, like, explain Popperian reasoning, and like, pitch why it's valuable.
Ben Chugg: Uh, sure. So I think the way I like to think about Popperian reasoning at a high level, and then we can go more into the details, is just trial and error, right? So it comes down to how do we learn things? You know, if you ask a kid how they learn how to ride a bike or how they learn to cook or how they learn anything, you try stuff and it doesn't work, you learn from your mistakes, you try again and you slowly reduce the errors in your, in your, uh, thinking and your habits.
Uh, and then Popper just takes that same attitude to epistemology. He says, okay, um, how do we learn things? Well, we conjecture guesses about the world, how the world works, whether it's politics, whether it's science, um, and then we look for ways to refute those guesses. So this is where the critical experiment comes into play for Popper in the realm of science, right?
So we have a theory, that theory makes predictions about how the world works. It says certain things should happen under certain conditions, uh, and that gives us something to test, right? So then we go out, we run that test and, and then again, follows his famous falsification criterion, right? So if that test does not succeed, we say, okay, uh, theory falsified, uh, and then we come up with new guesses.
Um, and so there's of course a lot more to say, but it's really the method of trial and error at work in the realm of epistemology. And so Popper really does away, um, with the notion of seeking certainty. So, you know, he was operating at the time of the Vienna Circle, and people were talking a lot about how do we get certainty out of our science, right, and how do we justify our certainty, um, and also talking about demarcations of, like, meaningfulness versus meaninglessness.
Um, and Popper basically takes a sledgehammer to both of those traditions and says, these are not, uh, useful questions to be asking, and certainty is not achievable, it's not attainable. So let's just subvert that whole tradition, and instead, uh, we're not going to search for certainty, um, but that doesn't mean we can't search for truth.
Um, and that doesn't mean we can't get closer and closer to the truth as time goes on. But there's no way to know for sure if something's true, so we can't be certain about truth. Um, and then this also starts to subvert certain notions of Bayesianism, which wants to, they, you know, Bayesians want to approach certainty, but now via the probability calculus.
Um, and so, you know, that gets us perhaps farther down the line, but that's maybe beyond the scope of the debate for now, and then I'll let Vaden correct anything I've said wrong there.
Vaden Masrani: Great. Um, just one thing to, to add is, um, what Popper says we don't do is just open our eyes and observe evidence getting beamed into our skulls such that the probability of a hypothesis goes up, up, up to some threshold, and then bang, you know it's true, and that's how you get knowledge.
It's not about just opening your eyes and having the evidence beamed into you. It's about conjecturing stuff, and then actively seeking evidence against your view. Trying to find stuff that falsifies your perspective. Not opening your eyes and observing stuff that you want to see. Um,
Liron Shapira: Great. We'll definitely get into that.
So me and the Bayesians, we don't have a problem with taking in a bunch of evidence and then updating your belief on that
evidence, right? So I guess we'll talk more about that. That does sound like an interesting distinction. Let me give the quick pitch for what Bayesianism is, what it means. Uh, so Bayesianism says that you go into the world and in your head you basically have a bunch of different possible hypotheses, some mental models about how the world might be working, right? Different explanations for the world. That's what Bayesianism is. And then you observe something, and your different competing hypotheses, they all say, oh, this is more consistent with what I would have predicted.
This is less consistent with what I would have predicted. And so then, you go and you update them all accordingly, right? You make a Bayesian update. The ones that said, hey, this is really likely, the ones that gave a high prediction, a high probability to what you ended up actually observing, they get a better update after you observe that evidence, and eventually, once you keep observing evidence, you hopefully get to a point where you have some hypothesis in your head which has really high probability compared to the others, and you go out in the world and you use the hypothesis and it steers you in the right direction, like it turns out to keep giving a high probability to things that actually happen. So that's the model of Bayesianism. And it sounds like a lot of what Bayesianism tells you to do is similar to what, uh,
Popper tells you to do.
I mean, Bayes and Popper, they're not like night and day, right? They're not enemies, and they're arguably more similar than different. I mean, there's major differences we're going to get into, but like, when you guys said, Hey, there's no certainty, right? There's just like, doing your best. I mean, I feel like that fully dovetails with what Bayes would tell you, right?
Because you're not supposed to give like a 0 percent or 100 percent probability.
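For readers who want the mechanics behind the updating Liron is describing, here is a minimal sketch in Python. It is my illustration, not anything from the episode: the hypotheses ("fair coin", "two-headed coin") and the numbers are invented purely to show how competing hypotheses get reweighted by Bayes' rule after one observation.

```python
def bayes_update(priors, likelihoods):
    """Return the posterior P(H|E) for each hypothesis H, given priors P(H)
    and likelihoods P(E|H) for a single observed piece of evidence E."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(unnormalized.values())  # P(E), the normalizing constant
    return {h: p / evidence for h, p in unnormalized.items()}

# Toy example: two competing hypotheses about a coin, updated after seeing heads.
priors = {"fair coin": 0.5, "two-headed coin": 0.5}
likelihoods = {"fair coin": 0.5, "two-headed coin": 1.0}  # P(heads | H)

print(bayes_update(priors, likelihoods))
# {'fair coin': 0.333..., 'two-headed coin': 0.666...}
```

Neither hypothesis gets discarded; both survive with shifted weights, which is the "community of hypotheses" picture Liron defends later in the conversation.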
Vaden Masrani: What do you, can I ask a question there? What do you mean by update? So, um, what I do is I change my mind all the time when, uh, stuff that I think, uh, turns out not to be true, or I see new evidence that, um, either confirms my view or disconfirms it. So I'm changing my mind all the time. Um, but you didn't use that phrase, change your mind, you said update.
And so I'm just curious what the difference is between updating and changing your mind.
Liron Shapira: Yeah, so when you talk about, hey, what's on my mind, right? Like, what do I think is the correct hypothesis? Um, like, uh, maybe a good example is the election, even though, you know, politics is such a controversial topic. But I'm just talking about predicting who will win, let's say Trump versus Kamala. If you ask me, Liron, who is going to win?
And I say, um, I don't know, I saw the latest poll, I guess Kamala. And then tomorrow, like, oh, another poll just moved the, moved the win probability one percent. So now it's like, I guess Trump. But it's not like my mind has changed from Kamala to Trump. It's like I, I was always very much entertaining both the hypothesis that says tomorrow Kamala will win, and the hypothesis that says tomorrow Trump will win.
And when I update their probabilities, I'm just like, okay, yeah, if I had to bet, I would now bet slightly higher odds on one than the other. So that's what I mean by changing my mind. It's very much not binary.
Vaden Masrani: No, I didn't ask what you meant by changing your mind, but what you meant by update. Um, so is updating the same as changing your mind, or is it different? Um,
Liron Shapira: So I, I don't really have such a thing as changing my mind because the state of my mind is always, it's a playing field of different hypotheses, right? I always have a group of hypotheses and there's never one that it's like, oh this is my mind on this one. Every time I make a prediction, I actually have all the different hypotheses weigh in, weighted by their probability, and they all make the prediction together.
Vaden Masrani: where did you get the
Ben Chugg: Wait, wait, wait, let's,
Vaden Masrani: Like just for like Yeah, no, it's, it's, can we just have some, no, I just want to have a conversation. Like, um, I, I just don't understand your, your answer. Right. Uh, but Ben had a question first.
Ben Chugg: uh, yeah, like, maybe let's just make this concrete. Um, so, when, if you're designing a satellite, Uh, you're going to send the satellite into space, right? Uh, you're not going to base the mathematics of that satellite, uh, on some combination, some weighted combination of theories of physics. Um, you're going to base it on general relativity, hopefully, otherwise it's not going to work.
Uh, and so in what sense, you know, you're not assigning a probability one to general relativity because we also know it's wrong. In some fundamental way, right? Specifically, it doesn't count for, uh, certain very small subatomic effects that we know to be true. So, yeah, in what sense is like, you know, you're taking a decision there, uh, it's not a weighted average of physical theory, so what's, uh, what's going on there?
Liron Shapira: Great question. If I'm going to go and invest 100 million on an engineering project, it's because whatever combination of hypotheses are going in my head are agreeing with sufficiently high probability on predictions that my engineering is going to work. So, I don't have a hypothesis in my head that has more than 1 percent probability, saying, you're gonna launch that satellite and some other non Newtonian, non Einsteinian force is just going to knock it out of its trajectory.
I don't have a hypothesis like that that's getting sufficiently high probability. So, this is a case where I feel very confident, so my dominant hypothesis about how the physics is going to work has already established itself with more than 99 percent probability.
Vaden Masrani: I don't understand that, but
Ben, did you
Ben Chugg: Uh, yeah, I mean, I, okay, that's, that's fine. I, I, dis, I think, heh, Yeah, we can, we can move on. I mean, I, I, I don't think this is actually what's going on in your head. I don't think you have these explicit theories and you're actually assigning probabilities to them. I think what's going on is you've been swayed by arguments that if you send a satellite into space, it's going to
Liron Shapira: a fair criticism, right?
Ben Chugg: relativity.
So I think Bayesianism in this way is both descriptively, uh, and normatively, we'll get into that later, false. Um, but you know, I can't sit here and examine the context, the contents of your
Liron Shapira: If I understand it, I
think this is an interesting point you're making. You're basically saying, look, Liron, you kind of retconned, right? You retroactively described yourself as using Bayesian epistemology to justify why you funded this satellite project, but realistically, you never even thought of that.
You're just retroactively pretending like you're Bayesian. Is that, like, basically your criticism?
Vaden Masrani: but hold on though, cause Ben's question wasn't about if you have a hundred thousand dollars and you need to allocate it to different engineering projects, it's if you were the engineer. And we don't know how to make a satellite yet, how are you going to do it? And that's a different thing, right? So, we're not talking about assigning probabilities to which project is going to be more or less successful.
We're talking about, like, how do we get a satellite into the sky? Um, and to do that, you need to understand general relativity. And quantum mechanics. And these two things are mutually exclusive. So if you assign probability to one, you have to necessarily assign less probability to the other under the Bayesian framework.
However, that isn't how scientists make satellites, because as we get more evidence for quantum mechanics, that doesn't take away what we know from general relativity, because we have to use both to get the friggin' satellite into the sky. And so you just kind of answered a question that was adjacent to, but not the same as, the one that Ben was asking.
Liron Shapira: To make this specific, is there a particular prediction, like, you're basically
saying, hey, how am I going to resolve the conflict between these different theories, but can you make it specific about which part of the satellite engineering feels tough to resolve for you?
Ben Chugg: Yeah, I, it's.
Vaden Masrani: Uh, does it,
Ben Chugg: It's just more when you were saying like, how do you reason about the world, right? You're not, you're not tied to any specific hypothesis. It sounded like your worldview is not like, okay, uh, for the purposes of achieving this, I'm going to assume general relativity is right. That's anathema to the Bayesian, right?
The analogy there is assigning probability one to general relativity. You're not going to do that because we know general relativity is false in some important way. Um, and so you said what you're doing, you know, what you're thinking and the actions you're taking, correct me if I'm wrong, of course, are some, you know, weighted average of these hypotheses that you have about how the world works.
But that just doesn't comport with, like, If, you know, if you were to be an engineer, in terms of how you're actually going to design the satellite and send it up into space, um, it's not, you know, you're not relying on a mishmash of physical theories to get the job done. You're relying on general relativity in this case.
Liron Shapira: I mean, there's specific things that I need to model to get the satellite to work, and I don't necessarily need to resolve every contradiction between different physical theories. I just have to say, what are the relevant phenomena that need to follow my models in order for the satellite to perform its function and not fall down to earth? And I probably don't have major conflicts between different theories. I mean, if I'm not sure whether Einstein's Relativity is true, right, if I'm not sure whether time dilation is a real thing or not, then I, as a Bayesian, I don't think that Bayesianism is the issue here, right? If engineers launching a satellite didn't know if time dilation was going to be the issue, I think even as a Popperian, you're like, uh oh, well they better do some tests, right?
I think we're in the same position there.
Ben Chugg: Yeah, for sure. Yeah.
Vaden Masrani: Can I go back to a different thing that you said earlier? Maybe the satellite thing is getting us a bit stuck. Um, you said that you never change your mind because you have a fixed set of hypotheses that you just assign different weights to. First, is that an accurate summary of what you said?
I don't want to
Liron Shapira: If you want to drill down into it, I wouldn't call it a fixed set of hypotheses, in some sense it's a variable set, but there's always a community of hypotheses, right, and they're all getting different weights, and then they're all weighing in together when I make a
Vaden Masrani: so when you said you never changed your mind, just maybe flesh out a bit more what you mean by that, because I don't want to
Liron Shapira: Okay, if, if, I mean, if I walk into a room of strangers and I say, guys, I never changed my mind, I think that's very much sending the wrong message, right,
Vaden Masrani: Totally, totally, which is why I'm not trying to straw man you at all. So maybe just, just, just clarify a
Liron Shapira: Because on the contrary, right, the takeaway is really more like, no, Bayesians, I'm a master at the dance of changing my mind, right? I
don't just see changing my mind as like, oh, hey, I have this switch installed that I can flip.
No, no, I
see myself as like a karate sensei, right, where I can like exactly move to the right new configuration of what my mind is supposed to have as a belief state. So does that answer your question?
Vaden Masrani: Um, so I gotta, I guess, why did you say you'd never changed your mind in the first place? I'm totally understanding that, that you don't mean,
Liron Shapira: I meant,
yeah, I feel like I threw you off track. When I say I don't change my mind, what I meant was that when you use that terminology, change your mind, it seems to indicate that like somebody's mind has like one prediction, right? Or like they've picked like this one favorite hypothesis and then they threw it away and took a different one. And I'm just saying that's, that doesn't describe my mind. My mind is always
this community of different hypotheses. Yeah.
Vaden Masrani: Gotcha. Yeah. So yeah, so that's actually a nice distinction between like a Popperian approach and a Bayesian approach. So for me, once I have enough disconfirming evidence, I do exactly what you said the Bayesian doesn't do. I take that hypothesis and it's gone now. I don't assign less probability.
It's just, it's dead up until the point where there's another reason to think that it's no longer dead, and then I'll revive it again. But um, but so that's just a distinction between how my thought process works and yours, I guess. I'm curious about another thing, though, which is, um, where do you get your hypotheses from in the first place?
Uh, because I understand that under the Bayesian view, you start with hypotheses and then you just assign different weights to them, but, um, but I'm just curious, before that stage, before the reweighting, where do the hypotheses come from in the first place?
Liron Shapira: Uh huh, that's a popular gotcha that people like to throw at the basins, right? They're like, hey, you
guys keep talking about No, no, no, I know, I know, I know. Uh, people like to, you know, Bayesians love to keep updating their probabilities, but if you don't start with a really good probability in the first place, then you still might get screwedup.
For example, like, if my a priori probability that, like, Zeus is the one true god, if it's, if I start out with it at 99. 99%, then even if I see a bunch of evidence that the world is just, like, mechanical and there is no god, I still might come out of that thinking that Zeus has a really high probability. So, you know, this is just kind of fleshing out your, your kind of
Vaden Masrani: no, you misunderstood the question, you misunderstood the question. I'm not talking about, um, how do your prior probabilities work. Where do they come from? Um, I'm, cause when you talk about Bayes Theorem, you have your likelihood and your prior. Um, so P of E given H, P of H over P of E, yeah? Um, so we can talk about the probability for P of H, and that's what you were describing.
I'm not talking about that, I'm talking about H. Where does H come
Liron Shapira: Sure,
Vaden Masrani: So before the probabilities are assigned, just where does H come from?
Liron Shapira: So this is not necessarily a point of disagreement between us. I mean, just like you can generate hypotheses, I'm also happy to generate hypotheses and consider new hypotheses. So the short answer is, you and I can probably go source our hypotheses from similar places. The long answer is, there is an idealization to how a Bayesian can operate called Solomonoff induction. Are you guys familiar with that at all?
Vaden Masrani: Yeah, yes.
Liron Shapira: Yeah, so Solomonoff induction just says like, Hey, there's a way as long as you have infinite computing resources, right? So it's an idealization for that reason, but there is a theoretical abstract way where you can just source from every possible hypothesis and then just update them all, right?
That's the ideal. So I do some computable approximation to that ideal.
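As a concrete (and heavily simplified) illustration of the idealization Liron is gesturing at: Solomonoff induction weighs every program that could have generated your observations, with shorter programs getting exponentially more prior weight, and predicts by weighted vote. The toy sketch below is mine, not from the episode; the three hand-written "hypotheses" and their made-up description lengths stand in for the infinitely many Turing machines the real (uncomputable) ideal would range over.

```python
# Toy Solomonoff-style induction over a tiny, hand-picked hypothesis class.
HYPOTHESES = {
    # name: (made-up description length in bits, rule predicting the next bit)
    "all ones":  (2, lambda history: 1),
    "all zeros": (2, lambda history: 0),
    "alternate": (4, lambda history: 1 - history[-1] if history else 1),
}

def predict_next(history):
    """Weighted vote among hypotheses that reproduce the observed history."""
    weights = {0: 0.0, 1: 0.0}
    for name, (length, rule) in HYPOTHESES.items():
        # Keep only hypotheses consistent with every bit seen so far.
        consistent = all(rule(history[:i]) == bit for i, bit in enumerate(history))
        if consistent:
            weights[rule(history)] += 2 ** -length  # shorter program = higher prior
    return max(weights, key=weights.get)

print(predict_next([1, 0, 1, 0]))  # only "alternate" survives, so it predicts 1
```

The real thing replaces this small dictionary with all possible programs, which is exactly why it can only ever be approximated.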
Ben Chugg: But the approximation, that's where the details are hidden, right? You clearly don't
have every, you're not running every possible hypothesis in your head, right? At some point, you're coming up with new ideas. Like, sometimes you wake up, you have a creative thought that you haven't had before. Um, you know.
Bayesianism can't really account for that. And, you know, if you want to get into the math, it really complicates things. Cause now all of a sudden you're, you're working with a different probability space, right? And so like, what happens with all the probabilities that you assigned to some other fixed hypothesis class?
Now it's like, okay, now I have a new hypothesis. Everything's got to get rejiggered. Um, and so it just doesn't account for idea creation in a satisfying
Liron Shapira: So this is, um, this is how I perceive the current state of the conversation. I'm basically like, hey, my epistemology has an uncomputable theoretical ideal that I'm trying to approximate. And you guys are like, well, that's so fraught with peril, you're never going to approximate it. Like, what you actually do is going to be a shadow of that. Whereas I would make the opposite criticism of you guys, of like, okay, well, you guys haven't even gotten to the point where you have an uncomputable ideal. So I feel like I'm actually farther along, because approximating an uncomputable ideal, we do that all the time, right? This whole idea of like, hey, we're going to go do math.
Well, math is actually uncomputable, right? Like the general task of evaluating a mathematical statement is. So in all areas of life, we're constantly approximating uncomputable ideals. So I, I, I'm not ashamed of approximating an uncomputable ideal.
Vaden Masrani: we do it on this side of the aisle too. If, again, if you want to set this up as a debate, then we can, I guess, do that. Um, the Turing machine is an uncomputable ideal that we approximate with our brain. So we have that on our side too, if that's what you're looking for. Right.
Liron Shapira: And how does that relate to Popperianism?
Vaden Masrani: Um, it doesn't totally, because Popperian or Popper... so it does relate to Deutsch. So the, um, Deutsch, uh, Church-Turing thesis is where he gets his universal explainer stuff. And we can maybe go into that if you want, but, um, but in terms of Popper, it doesn't at all. But in terms of giving you what you said we didn't have, it does.
Because you were saying that on our side of the aisle, we don't have the, uh, uh, uncomputable ideal that we approximate, but we do, because I'm talking to you on it right now, which is a MacBook Pro. And that is an approximation of an uncomputable ideal. So, yeah.
Liron Shapira: Okay, got it. So when I ask you guys, hey, where do Popperians get their hypotheses? You're basically saying, well,
we do at some point consider every possible Turing machine as a
Vaden Masrani: No, no, no, no, no. So this, this is great. So, um, we don't know. So the answer is, I don't know. Um, so Popper starts with trial and error. Um, but the question of, like, where do the conjectures come from, where do the ideas come from? We don't have an answer. I don't know. And I would love to know, and to me answering that question is equivalent to solving AGI.
Um, so I have no idea how the brain comes up with its conjectures in the first place. Popper starts with trial and error. There, it just says, there is some magical mystery process that we'll leave for the neuroscientists and the psychologists to figure out. We're just going to say, dunno, but that's the foundation, the trial and the error.
So that's the answer from our side. Uh, yeah.
Liron Shapira: question that I think you might have answered now, which is, so Popper talks a lot about explanations, right? Like good explanations. It sounds like you're saying that when you think about an explanation, you can formalize what an explanation is as being a Turing machine.
Would you agree?
Yeah.
Ben Chugg: Uh, no, I don't think so. I mean, if we, if we knew how to program a good explanation, presumably that would allow us to generate them computationally, right? If you understood them deep enough at that level. And also I suspect something like that is impossible because then you might be able to litigate what is a better and worse explanation in every circumstance.
And I highly doubt that that's possible, right? This is like the realm of argument and debate, and subjectivity enters the fray here. And like, you're not going to be able to convince everyone with an argument, and so I don't think computation is the right lens to have on something like good explanations.
Vaden Masrani: And, and just to add a metaphor to, to that, so, um, it's kind of like saying, uh, could you automate, um, the proof process? Well, in some sense, absolutely not, no. Like this is what Gödel, uh, like the incompleteness stuff, is about, which is that, like, for different kinds of mathematical problems, you have to entirely invent new kinds of proof techniques, such as, like, Cantor's diagonalization argument, right?
Um, that was completely new, and you can't just take that, uh, uh, and apply it to all sorts of new kinds of problems. That's what mathematicians are doing all day, is coming up with entirely novel kinds of proofs. And so if you grok that with the math space, so too with explanations. I think that different kinds of phenomena will require different modes of explanation, um, such that you can't just approximate them all with a Turing machine.
Liron Shapira: Now in the math space, I think we're now at the point where, you know, we've got set theory and formal proof theory, and I think we're at the point where I can say: what do mathematicians do? They're approximating this ideal of creating this mathematical object, which you can formalize within proof theory as a proof.
Like we, we actually have nailed down the ontology of what a proof is, but it sounds like you're saying okay, but we haven't nailed down the ontology in epistemology of what an explanation is. So, but now you're saying, well compare it to math, but I feel like math is farther
along.
Vaden Masrani: So, uh, can I just jump in here for a sec, Ben, which is that, uh, Ben will never say this about himself, but listeners, just type in Ben Chugg Google Scholar, and look at the proofs that he does, they're brilliant. So you're talking to a mathematician. And so, not me, Ben. Um, and so I just pass that over to Ben, because he is absolutely the right person to answer the question of what do mathematicians do
Ben Chugg: uh, I'm just more curious about what you mean by we've solved the ontology of proofs, um, as a genuinely curious question because this might make my life a lot easier if I could appeal to some sort of book that will tell me if I'm doing something right or wrong.
Liron Shapira: let's say, uh, a mathematician, grad student, goes to his professor and he says, Hey, I'm trying to prove this. I am trying to write up a proof once I figure it out. Well, that thing that he writes up, these days, almost certainly is going to have an analogous thing that could, in principle, might take a lot of effort, but it could be formalized, right, purely symbolically within set theory.
Is that fair? Okay.
Ben Chugg: I mean, yes, I mean, okay, I'm, I'm, I'm, I'm confused. I mean, once you have the proof, the point is that it is, it's, it's logic, right? So you should be able to cache this out in terms of, yes, like going down, down to like ZF set theory, for instance, right? You should, you can cache this out all in terms of like certain axioms.
You don't tend to, uh, descend to that level of technicality in every proof. You stay at some abstract level. But yeah, the whole point of a proof is that it's written in tight enough logic that it convinces the rest of the community. That doesn't mean it's certain. That doesn't mean we're guaranteed truth.
That just means everyone else is convinced to a large enough degree that we call this thing published and true. Okay, great. The hard part is coming up with the proof. What the hell the proof is in the first place, right? So once you have a proof, yeah, we can start doing things like running proof checkers and stuff on it.
The hard part is, you know, proving the thing in the first place.
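As a small concrete example of the kind of machine-checkable object being discussed: below is a trivial theorem written in Lean, a proof assistant. This is my illustration, not anything from the episode; the point is just that once a proof is formalized at this level, a proof checker can verify it mechanically, while coming up with the proof in the first place remains the hard, creative part Ben describes.

```lean
-- A trivial formal proof: addition of natural numbers is commutative.
-- Lean's kernel checks this mechanically once it is written down.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```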
Liron Shapira: You're right. So the reason I'm bringing it up is, you know, I don't even want to talk about the hardness of finding the proof yet. I just want to talk about the ontology, right? This idea that when you ask a mathematician, What are you doing? The mathematician can reply, I am conducting a heuristic search.
My human brain is doing a computable approximation of the uncomputable ideal of scanning through every possible proof in the set of formal proofs and plucking out the one that I need. That proves what
Vaden Masrani: don't know a single mathematician who would say that, and you just asked a mathematician and his reply wasn't that. This isn't a hypothetical, you're
Liron Shapira: isn't, I'm not making a claim about what mathematicians say, right? I'm just making a claim about this, uh, the ontology of what is, uh, right? So, so an informal, informally written English language mathematical paper containing a proof maps to a formal object. That's all I'm
Ben Chugg: Sure. Yeah, yeah. I mean, you're, I mean, math, math is a formal
Liron Shapira: I, yeah, go ahead.
Ben Chugg: Insofar as proofs are about manipulations in this formal language, then sure. Yep.
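To make the "uncomputable ideal" Liron is describing concrete, here is a minimal sketch of brute-force proof search over a toy string-rewriting system. The axiom, rules, and rule names are invented for illustration; a real formalization would target something like ZF set theory with a mechanical proof verifier.

```python
from itertools import count, product

# Toy string-rewriting system standing in for a real formal calculus.
# A "proof" of a string is a sequence of rule applications starting from the axiom.
AXIOM = "MI"
RULES = {
    "append_U": lambda s: s + "U" if s.endswith("I") else None,
    "double":   lambda s: s + s[1:] if s.startswith("M") else None,
}

def check_proof(proof, goal):
    """Mechanically verify a candidate proof: apply each named rule in turn,
    starting from the axiom, and see whether we land exactly on the goal."""
    current = AXIOM
    for rule_name in proof:
        current = RULES[rule_name](current)
        if current is None:  # rule didn't apply; not a valid derivation
            return False
    return current == goal

def search_for_proof(goal, max_len=10):
    """The uncomputable-in-general ideal, made finite for this toy system:
    enumerate every sequence of rule applications, shortest first, and return
    the first one the checker accepts. The search space grows exponentially."""
    for length in count(1):
        if length > max_len:
            return None
        for proof in product(RULES, repeat=length):
            if check_proof(proof, goal):
                return proof

print(search_for_proof("MIUIU"))  # ('append_U', 'double')
```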
Liron Shapira: So the reason I brought that up is because when I'm talking about Bayes and you're asking me, Hey, uh, where do you get hypotheses or what is a hypothesis? I'd be like, Oh, a hypothesis is a Turing machine that outputs predictions about the world if you also encode the world, you know, in bits, right? So I have this ontology that is formalizable that grounds Bayesian reasoning.
But when you guys talk about Popperian reasoning, it sounds like you haven't agreed to do that, right? You haven't agreed to take this idea of an explanation and have a formal equivalent for it.
Vaden Masrani: False analogy, because a hypothesis is in a natural language, not in a formal language. So the analogy doesn't work, because the ontology
Liron Shapira: so is a mathematical paper, right? So is a research paper.
Vaden Masrani: uh,
Ben Chugg: Step outside of
Vaden Masrani: saying that what you just,
Ben Chugg: Yeah. Go to, go to physics or chemistry or something.
Vaden Masrani: Yeah, like I'm just saying that the stuff you were just asking about, the ontology of a mathematical proof, using that as an analogy to the hypothesis, the H in Bayes' theorem, the analogy is broken because the hypothesis is some natural-language expression.
It's not a formal language. So it's just, the analogy just doesn't work. That's all I'm saying.
Liron Shapira: Yeah.
I'm not saying that a hypothesis is a proof. What I'm saying is, when I talk about a hypothesis using natural language, when I'm saying, hey, my hypothesis is that the sun will rise tomorrow, there's a corresponding formal thing, which is: if you take all the input to my eyes, all of what I'm seeing, and you codify that into bits, and you look at the set of all possible Turing machines that might output those bits, my hypothesis about the sun rising tomorrow is one of those Turing machines.
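A toy illustration of the ontology Liron is describing: hypotheses as programs whose output matches the observed bit string. Real Solomonoff induction ranges over all programs for a universal Turing machine, weighted roughly by 2^-length, and is uncomputable; the hypothesis names and generators below are made up purely for illustration.

```python
# A tiny, hand-picked hypothesis class of bit-string generators, standing in
# for "the set of all Turing machines."
OBSERVED_BITS = "010101"  # e.g. a sense history encoded in bits

HYPOTHESES = {
    "alternate_01": lambda n: ("01" * n)[:n],
    "all_zeros":    lambda n: "0" * n,
    "all_ones":     lambda n: "1" * n,
}

def consistent_hypotheses(data):
    """Keep exactly the generators whose output reproduces the observed bits;
    the 'hypothesis' in Liron's sense is one of these surviving programs."""
    n = len(data)
    return [name for name, gen in HYPOTHESES.items() if gen(n) == data]

print(consistent_hypotheses(OBSERVED_BITS))  # ['alternate_01']
```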
Ben Chugg: Sure, I mean, okay, so let me just, let me just try and restate your critique of us just so I make sure I'm on the same page. I think you want to say, you know, in theory Bayesianism has this way to talk about the generation of new hypotheses. Right? As abstract and idealized as this is, we've put in the work in some sense to try and formalize what the hell is going on here.
You Popperians are sitting over there, you know, you're critiquing us, you're making fun of us. You haven't even tried to put in the effort of doing this. Where are your hypotheses coming from? You can't criticize us for doing this. You don't even have a formalism, for God's sakes. You just have words and stuff.
You, you know, is that kind of, that's kind of where you're coming from? Without the snark. I added that
Liron Shapira: It's rough, yeah, it's roughly accurate because I do think that formalizing the theoretical ideal of what you're trying to do does represent epistemological progress.
Vaden Masrani: Only if the theoretical philosophy assumes that a formalism is required. So part of Popper's view is that formalisms are useful sometimes, in some places, but most of the time you don't want to have a formalism, because having a formalism is unnaturally constraining the space of your conjectures.
So, the theory on our side is that formalisms are sometimes useful in some places. Not always useful in all places. And so I, I totally accept your critique from your view. Because your view is that a formalism is always better. And we don't have one. Thus, we're worse. But our view is that formalisms are sometimes useful in some places.
Not always in every place.
Liron Shapira: What would be the problem with you just saying, okay, I can use a Turing machine as my formalism for an explanation, because when we look at the actual things that you guys call explanations, it seems like it's pretty straightforward to map them to Turing machines. Okay.
Vaden Masrani: And, yeah, I guess you
could,
oh, go ahead,
Ben Chugg: Well, I think it just doesn't help you try and figure out the question of like, really where these things are coming from, right? So if you're interested at the end of the day of trying to figure out, uh, philosophically and presumably neuroscientifically how humans are going about generating hypotheses, mapping them to the space of all possible Turing machines is not helpful.
Like, sure, the output of your new idea could be run by some Turing machine. Great. The question is, you know, what, you know, there's an entire space of possibility as you're pointing out, you know, like vast combinations, endless combinations, in fact, of possible ideas. The human mind somehow miraculously is paring this down in some subconscious way and new ideas are sort of popping into our heads.
How the hell is that happening? I don't see how the Turing machine formalization actually helps us answer that question.
Liron Shapira: It's because we're talking about the ideal of epistemology. It might help to think about, Hey, imagine you're programming an AI starting from scratch. Isn't it nice to have a way to tell the AI what a hypothesis is or what an
Vaden Masrani: But the ideal of your epistemology is that a formalism is required. Not our epistemology.
Liron Shapira: Right? But so what I'm saying is, okay, you're saying a formalism isn't required, but let's say I take out a white sheet of paper and I'm just starting to write the code for an intelligent AI, right? So what you call a formalism, I say is like, hey, I have to put something into the
Ben Chugg: yeah, yeah.
Liron Shapira: how
do I teach the AI.
Ben Chugg: I mean, I agree, like, this would be awesome if you could answer this question, but I just don't think you're answering it by appealing to, like, one thing I don't quite understand about your answer is, you're appealing, for a process that is taking place in our fallible human brains,
As an explanation, you are appealing to this idealized system. By definition, we know that can't be what's going on in our heads. So how is this helping us program an AGI? Which I totally take to be a very interesting question. And I, and we're, you know, we'll get into this when we start talking about LLMs and deep learning.
I don't think this is the right path to AGI. And so a very interesting question from my perspective is, what is the right path? Like, if we could have some notion of how the human brain is actually doing this. I agree that, you know, once we figured that out, we could sit down and presumably write a program that does that.
Uh, and that's a very, that's a very interesting question. I just don't think we, we know the answer to that.
Liron Shapira: Yeah, so I agree that just because I have a formalism that's an uncomputable ideal of Bayesian epistemology doesn't mean I'm ready to write a superintelligent AI today. And by analogy, you know, they understood chess when the first computers came out, and it was pretty quick that somebody said, hey look, I could write a chess program that basically looks ahead through every possible move, and this is the ideal program.
It will beat you at chess, it'll just take longer than the lifetime of the universe, but it will win. So I agree that your criticism is equally valid, uh, to me as for that chess computer. My only argument is that the person who invented that chess computer did make progress toward solving,
uh, you know, uh, superhuman chess ability, right?
That was a good first step.
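The chess analogy can be sketched as exhaustive minimax search. Running it on chess would outlive the universe, so the toy below uses a tiny Nim-style game instead; the game and function names are invented for illustration.

```python
def legal_moves(stones):
    """Toy Nim-style game: take 1, 2, or 3 stones; whoever takes the last stone wins."""
    return [m for m in (1, 2, 3) if m <= stones]

def minimax(stones, maximizing):
    """Exhaustive game-tree search, the 'ideal but intractable' player.
    Instant on this toy game; on chess the same recursion would outlive the universe."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move with the best guaranteed outcome for the player to move."""
    return max(legal_moves(stones), key=lambda m: minimax(stones - m, maximizing=False))

print(best_move(10))  # 2: leaves 8 stones, a losing position for the opponent
```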
Ben Chugg: Yeah. Yeah. That's fair. Can I, um, can I just pivot slightly and ask you to clarify whether you're talking. Do you think Bayesianism is true descriptively of the human brain, or are you making a normative claim about how rational agents ought to act?
Liron Shapira: Right, yeah, yeah, you said this a few times, I'm glad we're getting into this because this is definitely one of the key points, and remember like what you said before, like, okay, you're telling me now that this engineering program, you used Bayesian reasoning to say you were 99 percent confident of the theory, but it sounds like you're retconning,
right, like that's kind of the same, the same style of question.
So I, retconning,
Vaden Masrani: retconning? What's retconning? I don't
Liron Shapira: It's like retroactively rewriting history, basically, right? Like, oh yeah, I was totally Bayesian.
Vaden Masrani: Okay, cool, I'm sorry, I didn't know that term. Sorry, sorry to interrupt you.
Liron Shapira: No, that's good, yeah. Um, So
okay, it's totally true that, like, as I go about my day, right, why did I open the refrigerator? What was my community of 50,000 hypotheses, right, that told me different things were going to be in my fridge, or that there wasn't going to be a black hole in the fridge, right? What were all the different hypotheses? And the answer is, look, I just opened the fridge, right, because it's muscle memory, right, I was thinking about something else. So I'm not pretending like I'm actually running the Bayesian algorithm. What I'm claiming is: to the extent that I can reliably predict my future and navigate my world, to the extent that my brain is succeeding at helping me do that, whatever I'm doing, the structure of what I'm doing, the structure of the algorithm that I'm running, is going to have Bayes structure.
Otherwise, it just won't work as well.
Ben Chugg: Uh, okay.
Vaden Masrani: descriptively you're saying.
Ben Chugg: you're saying descriptively if you do something else you'll fall short of perfect rationality. Like you'll have worse outcomes.
Liron Shapira: I'm saying is like, sometimes my muscle memory will just get me the can of coke from the fridge, right, even without me even thinking about Bayes Law, but to the extent that that was true, it's because it dovetails with what Bayesian epistemology would have also had you do.
Like, Bayesian epistemology is still the ideal epistemology, but if you're doing something else that ends up happening to approximate it, then you can still succeed.
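For concreteness, here is the update rule that "Bayes structure" refers to, applied to a deliberately silly fridge example; the numbers are made up for illustration.

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses via Bayes' theorem:
    P(H | E) = P(E | H) * P(H) / sum over H' of P(E | H') * P(H')."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# Made-up numbers: how confident am I there's a coke in the fridge, given
# the evidence that I remember buying a six-pack yesterday?
prior = {"coke_in_fridge": 0.5, "no_coke": 0.5}
likelihood = {"coke_in_fridge": 0.9,  # P(that memory | coke is there)
              "no_coke": 0.2}         # P(that memory | it's already gone)
print(bayes_update(prior, likelihood))  # coke_in_fridge ~0.82, no_coke ~0.18
```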
Vaden Masrani: According to you, sure. Um, yeah, it's not Bayes law, it's Bayes theorem, first of all. Uh, but, sure, yeah. Um, that's, that's, that's the worldview that we are saying we disagree with. But, sure.
Liron Shapira: Yeah, I mean, look, similarly to you guys, right? Like, when you're opening your fridge, right, you're not, you don't have the, the one Popperian model with a good explanation, and what's that, right? You're just, like, thinking about something else, most likely.
Vaden Masrani: Are you conjecturing that you're thirsty? Or, I guess I don't entirely know what the question is. If you're asking, what is the Popperian approach to getting something from the fridge? It's probably pretty simple. You have an idea that you're hungry and you go there and you open the fridge and you get it.
Um, but if the claim is something deeper, which is like, does the Popperian view say something about Bayes being the ideal, et cetera, et cetera, then it definitely says that that is not the case. Um, so we can go into reasons why that's like, not the case, but it, your answer is assuming the very thing that we're disagreeing about is the point.
Um,
Liron Shapira: Mm hmm. Okay, a couple things,
Vaden Masrani: Yeah. No,
Ben,
Ben Chugg: was just gonna, yeah, just, I, we're somewhat in the weeds, so I just maybe wanted to say to people how I, how I envision the Bayesian debate often is like there's two simultaneous things often happening. One is like the descriptive claims that you're making about like how humans do and how brains do in fact work and that they're doing something approximating, uh, Bayesian reasoning.
Um, and Vaden and I both think that's wrong for certain philosophical reasons. You know, we can get into empiricism and stuff, but I don't think observations come coupled with numbers, or that those numbers are being represented explicitly by your brain, which is updating via Bayes' theorem. Um, so there's this whole descriptive morass that we've sort of entered.
But then where the rubber really meets the road is, like, the normative stuff. Right, so Bayesians want to assign numbers to everything, like you wanted to do at the beginning of this episode, right? You'll assign numbers to geopolitical catastrophes and, you know, P(Doom), and then you'll compare those to numbers that are coming from, you know, robust statistical models backed by lots of data.
And I think, Vaden, correct me if I'm wrong, I think Vaden's and my core concern is really with this second component of Bayesianism, right? I think the descriptive stuff is philosophically very interesting, but it's sort of less important in terms of actual decision making and real-world consequences. Like, if you want to sit there and tell me that you're doing all this number manipulation with your brain that helps you make better decisions, and that's how you think about the world, then, you know, honestly, that's fine to me. But where this stuff really starts to matter is, I'll just steal Vaden's favorite example, because I'm sure it'll come up at some point, which is, uh, you know, Toby Ord's book of probabilities in The Precipice, right?
So he lists the probability that humans will die by the end of the century, I forget, correct me if I'm wrong, um, and he gives this probability of one sixth. Where does this one sixth come from? It comes from aggregating all the different possibilities that he's, that he analyzes in that book. So he does AI and he does, um, uh, bioterrorism and he
Vaden Masrani: Volcanoes and asteroids
Ben Chugg: does all this stuff. And this is an illegal move from Vaden's and my perspective, and this is the kind of stuff we really want to call out, that we think really matters and really motivates most of our critique of Bayesianism, and it sort of goes beyond this descriptive-level Turing machine stuff that we've been arguing about now. So anyway, I guess I just wanted to flag that for the audience.
Like, I think there's more at stake here in some sense than just deciding how to open the fridge in the morning, which is fun and interesting to talk about, but I just wanted to maybe frame things.
Vaden Masrani: Yeah.
May I just, yes, I just want to add something to what Ben said. Beautiful. Exactly right. I think it's so important to continuously remind the listener, the viewer, why we're arguing in the weeds so much. We're arguing so much about this because of exactly this high-level thing that you said, which is: it is illegal, it is duplicitous, and it is misleading the reader when someone says the probability of superintelligence is 1 in 10, and they compare that to the probability of volcanic extinction, which is 1 in 1 million.
Because you can look at the geological history to count volcanoes and make a pretty rock-solid estimate. But you are just making shit up when you're talking about the future, and then you're dignifying it with math and a hundred years of philosophy. And so why Ben, er, I can't speak for you actually on this one, but why I like to and need to argue in the weeds so much is that I have to argue on the opponent's territory.
And so when I'm getting all annoyed by this 1 in 10 to 1 in 1 billion comparison, to argue against that I have to go into the philosophy of the Turing machines and the this and that and the whatever, and we get super in the weeds. But the reason I'm in the weeds there is because Toby Ord has been on multiple podcasts and probably blasted this number into the ears of over 10 million people, if you can fairly assume that Ezra Klein and Sam Harris, who both swallowed this number uncritically, have a listenership somewhere around there. I think it's one in six for the aggregate of all extinctions and then one in ten for the superintelligence one, if I'm remembering The Precipice correctly.
And that was compared against, um, I don't remember the numbers for volcanoes and supernovas and stuff, but one in one million, one in ten million, that order of magnitude, yeah.
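The arithmetic move being argued about, sketched with clearly made-up placeholder numbers (these are not Toby Ord's actual figures): per-risk probabilities for a century, aggregated into one headline number by assuming independence.

```python
from math import prod

# Placeholder numbers for illustration only; these are NOT Toby Ord's figures.
risks_per_century = {
    "unaligned_ai": 0.10,
    "engineered_pandemic": 0.03,
    "nuclear_extinction": 0.001,
    "supervolcano": 1e-6,
    "asteroid": 1e-6,
}

def aggregate(risks):
    """Probability that at least one risk materializes, assuming the risks are
    independent: 1 - product of (1 - p_i). This is the aggregation step that
    puts gut hunches and counted frequencies side by side."""
    return 1 - prod(1 - p for p in risks.values())

print(round(aggregate(risks_per_century), 3))  # ~0.128 with these made-up inputs
```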
Liron Shapira: Yeah, and then, so, you're making the case why we're getting into the weeds, why epistemology is so high stakes, because basically the upshot in this particular example is that humanity should be able to do better than this Bayesian guy Toby Ord, because it's kind of a disaster that Toby Ord is saying that, like, nuclear extinction, for instance, might have a probability of, just to oversimplify what he actually says, something in the ballpark of 10%, right?
Which gets to what we were discussing earlier. So you consider it kind of a failure mode that people like myself and Toby Ord are making claims like, hey guys, there's a 10 percent chance that we're going to nuclear annihilate ourselves in the next century. You think it's a failure mode because you think something better to say is, hey, we don't know whether we're going to get annihilated, and nobody should, say, quantify that.
Vaden Masrani: That's not the claim. Um, so I didn't use nuclear annihilation, intentionally, because I think that is also in the camp of we don't really know what the numbers are here. I used volcanoes, and I used supernovas, and I used asteroids. I did not use
Ben Chugg: No, that's what he's saying. That's what
Liron Shapira: and I think we're all on the same page that those things are unlikely in any
given century, right? But so, so why don't we talk about the, the thing that's like the more meaty claim, right? The
Vaden Masrani: No, no, but my claim is not that we can't reason about nuclear annihilation. I think that's very important. I'm just saying that if I talk about the probability of volcanoes and then I talk about the probability of nuclear annihilation, when I say the word probability, I'm referring to two separate things.
I should talk about like probability one and probability two or probability underscore S and probability underscore O or something. They're just different and we can't use the same word to compare
Liron Shapira: you might label it frequentist probability, right, would that be a
Vaden Masrani: No,
uh, no, no, frequentism, yeah, frequentist is a philosophical interpretation. I've been using objective probability, but just probability based on data, probability based on counting stuff, data, but frequentist is not, right, no,
Liron Shapira: Okay. Yeah, maybe you could call it statistical probability.
Vaden Masrani: um, let's just call it probability that's based on data,
Ben Chugg: Or stitch.
Vaden Masrani: CSVs, Excel, JSON, yeah,
Ben Chugg: Yeah, that just works fine for the purpose of this conversation, honestly. Um, and yeah, just to maybe answer the question you asked a minute ago, it's certainly not that we can't talk about the risk of nuclear annihilation, right? What we're saying is, let's skip the part where we all give our gut hunches and, like, scare the public with information that no one can possibly have.
Uh, and so I would just turn it on you. Like, say you're very worried about nuclear annihilation, you give a probability of 1 over 10 in the next 50 years, and then someone comes up to you, some geopolitical analyst, say John Mearsheimer, comes up to you and he says, my probability is 1 out of 50, okay?
What's your next question? You're gonna ask, why is your probability 1 in 50? And he's gonna say, why is your probability 1 in 10? What are you gonna do? You're gonna start descending into the world of arguments, right? You're gonna start talking about mobilization of certain countries, their nuclear capacity, their, you know, incentives, right?
You're going to have like a conversation filled with arguments and debates and subjective takes and all this stuff. Uh, you're going to disagree. You're going to agree. Maybe you'll change his mind. Maybe he'll change your mind. Great. Uh, and then at the very end of that, the Bayesian wants to say, okay, now I'm going to put a new number on this.
Um, but Vaden and I are just saying the number is totally irrelevant here and it's coming out of nothing. Let's just skip the number part and have arguments, right? And that's not saying we can't think about future risks, or that we can't prepare for things. It's not throwing our hands up in the air and, you know, claiming that we absolutely can't take action with respect to anything in the future.
It's just saying, let's do what everyone does when they disagree about things. Let's take arguments very seriously. Arguments are primary, is a way to say it on our worldview. Numbers are totally secondary, and only useful when they're right for the problem at hand.
And they're certainly not always useful. Yeah.
Vaden Masrani: Typically when you have a data set is when it's
useful to use numbers. Yes.
Liron Shapira: Imagine none of us were Bayesians and we just had the conversation behind closed doors about the risk of nuclear annihilation, and we come out and we're like, okay, we all agree that the likelihood is worrisome. It's too close for comfort. It's still on our minds after this conversation. We didn't dismiss the possibility as being minimal, right?
So that, that'd be one kind of non Bayesian statement that normal people might say, right? Okay, and, or alternately you can imagine another hypothetical where people, maybe it's in the middle of the Cuban Missile Crisis and people walk out of the room, which I think actually something like this did happen in the Kennedy administration where people walked out of the room saying like, I think this is more likely than not.
Like, this looks really, really bad.
So where I'm going with this is, I think that there's a, a, you could bucket a number of different English statements that people, normal people often say after leaving these kinds of meetings. And it's pretty natural to be like, okay, well in the first place where they said too close for comfort, maybe the ballpark probability of that is 1 percent to 20%.
Vaden Masrani: Hold on. Hold on. That's the move. That's the move that I want to excise. So I think it's completely legitimate, 100 percent, to bucket degrees, like strengths of your beliefs. I think this is done all of the time when you answer survey questions. So like a 1-to-10 scale is very useful. How much do you agree with this proposition?
Sometimes it's like strongly disagree, disagree, neutral, agree, strongly agree. So that's,
um, a five point scale that indicates strength of belief. Uh, sometimes it's useful to go to ten. Uh, I think for like certain mental health questions I do that. All great, I'm so on board with that, that's important.
Where I say, hey, hold on people, is calling it a probability. Okay, you don't have to do that.
You could just say, how strongly do you believe something? But as soon as you start calling it a probability, now we are in philosophically dangerous territory, because of the arguments to assign probabilities to beliefs, and then equating probabilities that are just subjective gut hunches about beliefs with, like, counting fricking asteroids.
That's where all the difficulties come from. So I am totally in favor of quantizing, discretizing strengths of belief, and I think it's about as useful as a 10-point scale. But that's why doctors don't use, like, 20-point scales very often, and only when I'm answering surveys from, like, the LessWrong people, or the frickin' Bostrom people, do they give me a sliding scale, 1 to 100. It's the only time I've ever been given a survey with a sliding scale, and it's when I know that they want to take that number, because I'm an AI researcher, and turn it into the probability of blah, blah, blah.
But most people don't think that granularity beyond 10 is very useful. That's why doctors don't use it.
Yes,
Liron Shapira: It's surprising to me that people get really worked up about this idea that, like, yeah, we're just trying to approximate an ideal. Maybe if there was a superintelligent AI, it might be able to give really precise estimates. As humans, we often say something like, hey, an asteroid impact, we've got a pretty confident reason to think that it's less than one in a million in the next century, because it happens every few hundred million years, statistically, and we don't have a particular view of an asteroid that's heading toward us. So, roughly, that's going to be the ballpark. And then, I can't confidently tell you the probability of nuclear war in the next century, right? Maybe it's 1%, maybe it's 5%, maybe it's 90%. But I feel confident telling you that nuclear war in the next century is going to be more than 10 times as likely as an asteroid impact in the next century.
Am I crazy to claim that?
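The asteroid figure Liron cites is base-rate arithmetic of this kind; a minimal sketch, assuming a Poisson model and the "every few hundred million years" interval he mentions.

```python
from math import exp

def per_century_probability(mean_years_between_events, horizon_years=100):
    """Poisson approximation: P(at least one event within the horizon)
    = 1 - exp(-horizon / mean_interval), which for rare events is roughly
    horizon / mean_interval."""
    return 1 - exp(-horizon_years / mean_years_between_events)

# "It happens every few hundred million years" -> order of one in a million per century.
print(per_century_probability(300_000_000))  # ~3.3e-07
```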
Ben Chugg: let's just descend into the level of, back to the weeds of philosophy for one second. What do you mean by approximating ideal? What's the ideal here? Like, is the world,
Vaden Masrani: thank you. Yeah, and
Ben Chugg: Well, no, no, not even normative, not even a normative idea. When you say, like, you know, "am I crazy", okay, correct me if I'm wrong.
You're saying there is a right probability, and I'm trying to approximate that with my degrees of belief. So there is an X percent chance, for some X, that there's a nuclear strike on the US in the next hundred years. Do you think that?
Liron Shapira: Yeah, I mean, Solomonoff induction is going to give you the ideal Bayesian probabilities to make decisions
Ben Chugg: Okay, okay, okay, but that's different. Okay, so that's, that's a claim about rationality. I'm asking you, is there a probability attached to the world? Is the world like, is the world stochastic in your, for you?
Liron Shapira: No, probability is in the mind of the model maker, right? So, um, the universe, you might as well treat the universe
as being deterministic because you don't, there's actually no ontological difference when you build a mental model. There's no reason to take your uncertainty and act like the uncertainty is a property of the universe.
You can always just internalize the
Ben Chugg: Okay, good.
Liron Shapira: Or,
Vaden Masrani: That's one of the good Bayesian critiques of frequentism that I like. So I totally agree with you that the world is deterministic, non-stochastic, and randomness doesn't actually occur in nature. I agree. But
Liron Shapira: Or we might say there's just no epistemic value to treating the universe as ontologically, fundamentally non-deterministic, and the strongest example I've seen of that is in quantum theory, like the idea that quantum collapse is ontologically fundamental to
the universe, and the probabilities are ontologically fundamental, instead of just saying, hey, I'm uncertain what my quantum coin is going to show. You know, to me, that seems like the way to go. And by the way, I bounced this off Eliezer, because it's not officially part of the Eliezer canon, but Eliezer says he thinks what I just said is probably
Ben Chugg: Yeah, nice. Um, I think, yeah. So for the purposes of this, I think we're all comfortable agreeing the world's deterministic. So, yeah, so now the question is, when you say, ideal, now you're, you're appealing to a certain, uh, normative claim about how rational agents ought to behave, right? And so now we need to descend into like, by whose lights is it rational to put probabilities on every single proposition?
Um, but I just wanted to check, because it sounded, when you were talking, like you were saying, you know, there is an X percent probability that some event happens, and we're trying to figure out what that X is. That's not true, right? So, you know, the world is
Vaden Masrani: the, um, the, yeah,
the ideal,
Liron Shapira: hmm.
Vaden Masrani: it's the ideal Bayesian reasoner, right? It
is what the ideal means.
Liron Shapira: Let me give you more context about the Bayesian worldview, or specifically the Solomonoff induction worldview. So the game we're playing here is, we're trying to get calibrated probabilities on the next thing that we're going to predict. And this ideal of Solomonoff induction is: I take in all the evidence that there is to take in, and I give you a probability distribution over what's going to happen next, and nobody can predict better than me in terms of, like, you know, scoring functions, like the kind that they use on prediction markets, right?
Like, I'm going to provably get the highest score on predicting the future. And that's the name of the game. And remember the stakes, the one reason we're having this conversation is because we're trying to know how scared we should be about AI being about to extinct us. And a lot of us Bayesians are noticing that the probability
seems high. So the same way we would, if there was a prediction market that we thought would have a reliable counterparty, we would place like a pretty high bet that the world is
going to
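The "scoring functions, like the kind they use on prediction markets" are proper scoring rules; a minimal sketch of one of them, the Brier score, with made-up forecasts and outcomes.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; it is a 'proper' scoring rule, so a calibrated and sharp
    forecaster minimizes it in expectation."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

# Made-up forecasts for a handful of yes/no questions, and what actually happened.
confident = [0.9, 0.8, 0.1, 0.95]
hedging = [0.5, 0.5, 0.5, 0.5]
outcomes = [1, 1, 0, 1]

print(brier_score(confident, outcomes))  # ~0.016: rewarded for being right and confident
print(brier_score(hedging, outcomes))    # 0.25: never badly wrong, but uninformative
```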
Ben Chugg: good. We're getting into the meat of it. Um, I just have a, uh, a historical question. Is Solomonoff induction tied to the objective Bayesian school or the subjective Bayesian school? Or do you not know?
Liron Shapira: I, I don't really know, right? So this is where maybe I pull a David Deutsch and I'm like, look, I don't necessarily have
to represent the Bayesians, right? I think that I'm, uh, faithfully representing Yud, Eliezer Yudkowsky. I think you can consider me a stochastic parrot for his position because I'm not seeing any daylight there.
But I can't trace it back to, you know, what Eliezer wrote about Solomonoff induction. He indicated that part of it was original. So this could just be
Eliezer only at this
Ben Chugg: Yeah, that wasn't supposed to be
Vaden Masrani: Yeah. Solomonoff induction is, um, it is induction, like philosophical induction, the stuff that we've been railing against, except with a Bayes' theorem interpretation on top of it. So all of the critiques that we've made
about
Ben Chugg: no, I know, but I was just curious because, um, you know, there are two schools of Bayesianism, the objective Bayesians and the subjective Bayesians. Jaynes comes from the objective school, um, and Solomonoff induction,
Vaden Masrani: Oh, he comes from the
Ben Chugg: that's what the ideal rational agent is about. Like, he thinks there is a correct prior, there are correct probabilities to have in each moment.
And it sounds like Sol, no, sorry. Within Bayesianism, which is still a subjective interpretation of probability, there's an objective, or call it logical, probability versus
subjective Bayesianism. These are different
things, right? So, subjective Bayesians, I think, wouldn't sign off on the Solomonoff induction.
This is a total tangent. You can cut this out if you want. But they, I don't think they'd sign off on Solomonoff induction because they're, they're, like, for them, probability is completely individual. And there's no way to litigate that I have a better probability than you because it's totally subjective.
Then the logical or objective Bayesians want to say, no, there is a way to litigate who has a better credence in this proposition. But they're both still Bayesian in the sense that they're putting probability distributions over propositions and stuff, right? Like, there's still, yeah.
Um, anyway, sorry.
Vaden Masrani: think you should keep that in. That was helpful for me. Yeah, yeah. You should keep that in. Yeah, yeah.
Liron Shapira: You know, Ray Solomonoff came a couple centuries after Laplace, I think, so there was a long time when people were like, hey, Bayesian updating is really useful, but where do the priors come from? I'm not really sure, but if you have priors, this is a great way to update them. And then Solomonoff came along and was like, hey, look, I can just idealize even the priors, right?
I can get you from 0 to 60, from having no beliefs to provably having the best beliefs.
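A sketch of the "idealize even the priors" idea: shorter descriptions get exponentially more prior weight. The descriptions here are arbitrary strings standing in for programs; real Solomonoff priors are defined over programs for a universal Turing machine and are uncomputable.

```python
def length_prior(descriptions):
    """Assign each description weight 2^-length, then normalize, so shorter
    descriptions start with exponentially more prior probability."""
    weights = {d: 2.0 ** -len(d) for d in descriptions}
    z = sum(weights.values())
    return {d: w / z for d, w in weights.items()}

print(length_prior(["spin", "spin+angels"]))
# {'spin': ~0.992, 'spin+angels': ~0.008}: the shorter description dominates the prior
```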
Ben Chugg: Okay. Yeah. So probably the objective school. Cool.
Vaden Masrani: Yeah, but, yeah, can I say for the listeners, all this, like, ideal, provably, blah, blah, blah, it all rides on Cox's theorem. And so just, you know, Google my name and type in "the credence assumption", and then you can see the three assumptions that underlie Cox's theorem. The first one, the second one, and the third one are all something that you have to choose to assume.
And this is what Yudkowsky never talks about, when he talks about laws and you have to be rational, blah, blah, blah. All of that is only if you voluntarily decide to assume the credence assumption. I don't, because that assumption leads to a whole bouquet of paradoxes and confusion and nonsense about superintelligence and yada, yada, yada.
Um, but just for the listeners, when you hear that there's Bayes' law and the law of rationality, all of that is only if you voluntarily want to assume the credence assumption. And if you don't, like myself, then none of this stuff applies to you. So just take that
Liron Shapira: maybe, maybe we'll get into that, um, but I, I got
Vaden Masrani: That was more for the listeners than for, than for you.
Liron Shapira: okay, okay, okay.
Vaden Masrani: sure. Yeah.
Liron Shapira: Um, where I'd like to try next is, so, you guys, uh, just put in a good effort, which I appreciate, uh, zooming into some potential nitpicks or flaws of Bayesianism. So let me turn the tables, let me zoom into something in Popperianism that I
Vaden Masrani: Yeah,
please.
Liron Shapira: I might be able to collapse a little bit.
Uh, let's see, so, okay, so we talked about how you're not really liking the idea of, let's formalize the definition of what an explanation is. It's just like, look, we do it as humans, we do our best, right? It's a little bit informal. One thing Popperians say about explanations is that better explanations are hard to vary, right?
Certainly, Deutsch says that. Do you want to like elaborate a little bit on that claim? Yeah, yep.
Vaden Masrani: That's from Deutsch. That's one of the things that he kind of built on Popper's stuff with. And all he means there is, just consider two theories for why the sun rises in the morning. Theory one is that there's a god which, if they're happy that day, will make the sun rise. And the other theory is heliocentrism, where you have the sun in the center of the solar system, the earth orbits around it, the earth is on a bit of a tilt, and it's the rotation of a spherical earth which causes the sun to rise the next morning.
So the first explanation, the gods one, is completely easy to vary and arbitrary, because you could ask, why is it when the god is happy, why is it one god, why is it six gods, and just whatever you want to justify in the moment can be justified under that theory. So too, actually, with superintelligence, but we'll come to that later.
Um, with the heliocentrism theory, that one is very difficult to vary, because if you change any detail in it, so why spherical, let's switch it to cubic, well, now all of a sudden the predictions are completely different, because the sun is going to rise in a different fashion. That's what Deutsch is getting at with the hard-to-vary stuff.
Some critiques of this, though, are that it's not like you can just naturally categorize theories into those which are hard to vary and those which are easy to vary. And so I'm assuming you're about to say something like, well, this is a difference in degree, not in kind, because everything is kind of easier or harder to vary, and you can't naturally bucket them into one camp or the other.
To which I'd say, I agree. That is true. You can't. The hard-to-vary criterion, I think, is rather useless as a critique of other people's theories. You could try to tell astrologers, and homeopathy people, and all these people that their theories are not hard to vary and thus wrong.
They're not going to listen to you. It's not a very good critique for other people. It's a great internal critique, though. And so if you take this on yourself and subject your own thought process to, is my explanation easy to vary here? Like, is the explanation that the superintelligence can just create a new reality whenever it wants, is that easy to vary? Is that hard to vary? Then you can start to weed out different kinds of theories in your own thinking. So it just adds to what Deutsch said, which is that it's a difference in degree, not kind. And it's a kind of useless critique of other people, but it's a great internal critique.
Um, I don't know, Ben, if you'd want to add anything to, to
Ben Chugg: Maybe the only thing I'd add is that while this might sound, perhaps, philosophically in the weeds a bit, this is precisely the kind of thing that people do on a day-to-day basis, right? If you drop your kid off at kindergarten and you go to pick them up, there's many theories that, you know, they could have been replaced by aliens while they were there.
Now they're a different person or they've completely changed their personality over the course of the day. Like many possible predictions you could make about the future. What are you doing? You're saying those are totally unlikely because like, if that was to happen, you know, you have no good explanation as to like why that would have happened that day.
So this also just comports well with how we think about, you know, reality day to day. Like, why do I not think my tea is going to all of a sudden start levitating? Precisely for this sort of reason. Even if people don't really think of it like that, I think that's sort of what's going
on.
Vaden Masrani: And maybe a little plug for our conversation with Tamler Sommers, because we go into this in much greater detail, so just for people who want a more fleshed-out version of what we just said, check out that episode, yeah.
Liron Shapira: so personally, I do see some appeal in the particular example you chose. Like, I think there, it's, you know, I get why people are using it as a justification for their epistemology. Because, like, if somebody is, like, reading
Vaden Masrani: It's not a justification for the epistemology, just to be clear. It's, it's more of a consequence of the epistemology. It's a, it's a heuristic and a criterion, not a justification for it, but yes.
Liron Shapira: Do you think it's a corollary, or do you think it's one of the pretty foundational rules of thumb on how to apply the epistemology?
Vaden Masrani: No, it's not foundational. No, it's um, it's a corollary, yeah.
Liron Shapira: Interesting, because I feel like it might be hard to derive from the rest of Popperianism.
Vaden Masrani: Nothing is derivable in Popperianism, um, and it's not a foundational. No.
Liron Shapira: But you're saying nothing is derivable, but you're also saying it's not foundational and
Vaden Masrani: Oh, sorry, sorry, good, good claim. If by derivable you mean, like, formally, logically derivable, then no, nothing is derivable, it's conjectural, conjecture. If by derivable you just mean it in the colloquial sense, like, oh yeah, I derived the, yeah, so just to be clear there, just because the formal/natural distinction seems to be important in this conversation.
Liron Shapira: haven't gone that deep on preparedness and so I am actually curious, like, so this, this rule that Deutsch brings out a lot or heuristic or whatever it is, right? That, that a good explanations are hard to vary. Did Deutsch infer that from something else that Popper says?
And if so, what's the inference? Okay.
Ben Chugg: Yeah, Vaden, correct me if I'm wrong here. Doesn't this come somewhat from Popper's notion of content? Like, empirical content of theories, right? If you want theories with high empirical content, that's
Vaden Masrani: Uh,
yeah, yeah, yeah,
yeah.
Ben Chugg: want things that are hard to vary.
Mm,
Vaden Masrani: Just because there is an important distinction between the way that Ben and I, and you, think about stuff, which is formal systems compared to natural language, so words like derivable, infer, et cetera, I just feel like we need to plant a flag on those, because of translational difficulties there.
So just because of that, yes, it is absolutely colloquially derivable from his theory of content, absolutely, but Deutsch just kind of renewed it. So it's consistent with Popper for sure, but it's just like a rebranding, it's a, what is it, a concept handle? It's like
a concept handle, um, yeah,
Liron Shapira: Ben, do you want to elaborate on that? I'm curious to learn a little bit more. Because, I mean, look, I find some merit or some appeal to this concept. So, can you tell me more about the connection to the content,
whatever Popper's
Ben Chugg: Yeah, yeah, I'll let Vaden go, because he
loves this stuff,
Vaden Masrani: yeah. Do you, do you want to hear a full thing about content? I could spiel about that for like an hour, but Ben, maybe
Liron Shapira: Can you just tell me the part that grounds the "explanations should be hard to vary" claim? I don't
Vaden Masrani: Yeah, so this, yeah, I'd love to talk about content, but I need to explain what it is. Like, do you know what Popper's stuff
on content is?
Um,
Liron Shapira: hmm. Mm
Vaden Masrani: okay, so content is a really interesting, um, concept. So the content of a statement is the set of all logical consequences of that statement. Okay? Yeah, so, um, and I'm going to expand upon this a little bit because, um, it's actually going to lead somewhere and it's going to connect nicely to what we've been discussing.
So far. Um, so just to give an example, so the content of the statement, uh, today is Monday, would be, um, a set of all things that are logically, um, derivable from that. So today is not Tuesday, today is not Wednesday, today is not Thursday, et cetera. Um, the content of the statement, um, it is raining outside, would be it is not, um, sunny outside, there are clouds in the sky, that, that kind of thing.
Um, so that's what the content is. Uh, and then there's different kinds of content. So there's. There's empirical content and there's metaphysical content. So um, empirical content is a subset of all the content and that is things which are derivable that are empirically falsifiable. So if, for example, I say, um, uh, what's the content of the statement that all swans are white?
Um, well, one derivable conclusion from that would be: there is not a black swan in Times Square on Wednesday in 2024. That would be an empirically derivable claim. The content of a metaphysical statement would be something like, the arc of progress bends towards justice, or what's that quote from MLK.
Um, so, and then the content of that would be something like, the future will be more just than the past. Okay. If you let me elaborate a bit further, I promise this is going to connect to what we're discussing. So now we can talk about how you compare the content of different kinds of statements. So, with the exception of tautologies, essentially every statement has infinite content,
because you can derive an infinite number of statements from "today is Monday". You can just go, today is not Tuesday, et cetera. So it's infinite, but you can do class-subclass relations. So the content of Einstein's theory is strictly greater than the content of Newton, because you can derive Newton from Einstein.
So Einstein is a higher-content theory than Newton, precisely because anything you can derive from Newton, you can derive from Einstein. You can't compare the content of, say, Einstein and Darwin, for example, because they're just infinite sets that can't be compared. So, going a bit further now, and this is where it's going to connect really nicely to what we've been discussing so far.
So let's talk about the content of conjunctions. Say we have two statements, today is Monday and it is raining. The content of the conjunction is going to be strictly greater than or equal to the content of either statement on its own. The content of a tautology
is zero, if you want to put a measure on it, if you want to put numbers on it, it's zero, because nothing can be derived from a tautology. The content of a contradiction is infinite, or one, because from the, what's it, the principle of explosion or whatever, from a contradiction anything can be derived.
So it's infinite, but because it's infinite, you can immediately derive an empirical falsifier that would show that the contradiction is false. So now we're going to connect. Let's talk about the probability of a conjunction. The probability of a conjunction, today is Monday and it is raining, strictly goes down:
the probability is less than or equal to that of either statement. The probability of a tautology is one. The probability of a contradiction is zero. So if you want, in science and in thought, to have high content, you necessarily must have low probability. If you want your theories to be bold and risky, then they necessarily have to have low probability.
So on this side of the aisle, we claim that the project of science is to have high-content propositions, theories that are bold and risky, and that's necessarily low probability. On your side of the aisle, you want high probability. So if you just want high probability, just fill your textbooks with tautologies.
If you want low probability, fill them with contradictions. From our perspective, we want high content, so we want low probability, so we are completely inverted. And I would claim, and Ben I think would claim, and Popper, this is, I'm just ventriloquizing Popper entirely, that the goal of science is to have high-content, risky, bold, empirical theories, such as Newton, Einstein, Darwin and DNA, et cetera, et cetera, and that means low probability, which means that Bayesianism is wrong. Please.
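Vaden's trade-off can be written compactly, where Ct denotes Popper's content (the set of logical consequences) and P a probability function; this is a paraphrase, not a quotation of Popper:

```latex
% Content grows under conjunction while probability shrinks:
\[
  \mathrm{Ct}(a) \subseteq \mathrm{Ct}(a \wedge b) \supseteq \mathrm{Ct}(b),
  \qquad
  P(a \wedge b) \le \min\bigl(P(a),\, P(b)\bigr).
\]
% Limiting cases: a tautology has probability 1 and (essentially) no content;
% a contradiction has probability 0 and maximal content.
\[
  P(\top) = 1, \qquad P(\bot) = 0.
\]
```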
Liron Shapira: Yeah, thanks for that. Let me make sure I fully understand here, because in the example we talked about, we see the sun rising and setting, and one person says, I think this is because the earth is spinning, right? So we see the sun coming up and down. And another person says, I think this is because I believe in the Greek gods, and this is clearly just Helios, right, as said in the Greek mythology.
And you're saying, well, look, we prefer a higher-content theory. And so when you talk about Helios, because it's easy to vary, that makes me think it's using fewer logical conjunctions, which would make it lower content. Am I understanding you correctly?
Vaden Masrani: Great. Yes, I actually didn't connect those two. Um, and there's a nice relationship between complexity, which is about conjunctions of statements, and simplicity. And what we look for in science is simple statements with high content, because those are the ones which are the easiest to falsify.
And so if we have certain statements from which a lot can be derived, such as you can't travel faster than the speed of light, then it makes a lot of falsifiable predictions, and thus touches reality much more, and it's harder to vary, because if you change any part of it, then you're falsified, you're falsified, you're falsified.
So there is directly a relationship there, yeah,
Liron Shapira: Okay, but what if the ancient Greek pushes back and he's like, Oh, you need logical conjunctions, eh? Let me tell you about Helios. Okay, Helios rides his chariot around in the sun, and he wears these sandals made of gold, and he's friends with Zeus, right? So he gives you
like 50 conjunctions. He's like, I actually think that my theory is very high content.
Vaden Masrani: Yeah, and so this is where there's a difference between content and ease of varying, right? So, all the conjunctions that he just made up, he could just make up a different set. And that's why it's so easy to
Ben Chugg: But,
Liron Shapira: But,
but what if, okay, just playing along, I
Vaden Masrani: Oh yeah, yeah, yeah, no, but
Liron Shapira: Let me just push the bumper car here, right, so what if he's like, okay, but I'm specifically just telling you all the conjunctions from my text, right, and we haven't varied the text for so
long.
Ben Chugg: I think there you'd want to talk about the consequences of his view. You want to look at the conjunctions in the content, which are, in this case, since it's supposed to be an empirical theory, the empirical consequences. So you ask him, okay, like, given all these details about your theory, that's fine.
But like, what do you expect to see in the world as a result of this theory? And there it's very low content, right? Because it's going to be able to explain anything that can happen. War, no war. Clouds, no clouds. Um, I don't know. I don't know. I don't actually know what chariots
Liron Shapira: I see what you're saying, and, you know, I'm playing devil's advocate, right, I'm not even necessarily expecting to beat
you in this argument, but I'm really just pushing, just to see if I can,
right,
Vaden Masrani: I mean it's
Liron Shapira: so imagine then,
Vaden Masrani: it's not win or losing, it's just trying to learn, learn from each
Liron Shapira: yeah,
Yeah, imagine that he says, um, okay, but I have this text. It's been around for a thousand years and it specifically says every day Helios comes up and then down, right?
And it can vary a little bit, but it's always going to be, like, up and down in an arc pattern in the sky. So I'm not varying it, right? And it has like 50 conjunctions. So like, why does this
not beat out the earth is spinning theory? No,
Vaden Masrani: Are you going to, like, induction and stuff, and "I've seen it a thousand times in the past", is that where you're going with
Liron Shapira: no,
I'm not moving. I'm actually still, I'm actually still trying to prod at this idea of being
hard to vary. Right. Right. So.
Vaden Masrani: sorry. Sorry.
Liron Shapira: Is it a critique that the Helios going around the
Ben Chugg: So it's not, um, oh, I see. Okay. So I'd rather talk about that. That's why I think content is actually sort of the more primal concept here, the more primitive concept rather, because there you can talk about, it's not that that has no predictive power or no content, right? As you said, it's going to predict that the sun rises and sets.
But then you start asking, like, what's beyond that prediction? Like, what else does this say about the world? Well, the theory of heliocentrism says a lot, right? It says things about seasons, it posits a very rigid structure of the world, and we can go and test this structure. Like, you know, the tilt theory of the world comes to mind.
It's related to this, I guess, you know, and this comports with other theories we have of the world, which together make this web of things, and that's when the hard-to-variness comes in: all of that together is very hard to vary. So it's true that it makes some predictions and has some empirical content, right?
That's presumably why they thought it was a useful predictive theory in the first place. But you ask, okay, what does heliocentrism have on and above that? And it's got many more posits, way more content. And so we prefer it as a theory. Did that answer your question, or
Liron Shapira: Okay, and just to make sure, let me try to summarize, I may or may not have understood
you correctly. You're saying like, look, the, the, the earth spinning model, it can also make a bunch of other predictions that we can even go test. And so just by virtue of doing that, it's, it's kind of like you're getting more bang for the buck.
It's kind of like a, it's a compact theory. It's getting all these other, it's constraining the world. But it almost sounds like hard to vary might not even be the main argument here, but it's more like, hey, look, there's a bunch of different types of evidence and it's compact. I feel like those are the attributes you like about it.
Vaden Masrani: Well, so the hard to variance again is not like the core thing that if you refute this, you destroy all of Popperianism, right? It's a, it's a heuristic as a way to think about stuff. It's related to content. Content is a bit more of a fleshed out theory. All of this is related to falsification. So content is part of the way that you connect to falsification.
Um, and it's related to, like, Occam's razor and stuff with the compactness. So compactness connects you to simplicity. But again, it's not this, like, ah, gotcha, man. It's like, yeah, sometimes I think about hard-to-variness and other times I think about empirical content. And what Ben just said was beautiful and perfect, which is the rigidness of the theory and how it's, like, locked and tightly fit on top of reality.
And then it gives you extra things that you can think about that you hadn't realized, that if this is true, this leads to this other thing. So for example, heliocentrism leads pretty quickly to the idea that, oh shit, these little things in the sky that we see, they're lights, they're stars. Maybe they're just far away, and maybe there's other planets there too, and maybe, on these other planets, there's other people
contemplating how the world works. And so it's not like it's derivable from it, but your thought just leads there, right? And that's part of the content of the theory. So
that's not, yeah, so
Liron Shapira: So my perspective is, you know, if you were to come into my re-education camp and I wanted to reprogram you into Bayesianism, what I'd probably do is, like, keep pushing on, like, okay, what do you mean by hard to vary? What do you mean by, like, following Occam's razor? I feel like if I just keep pushing on your kind of heuristic definitions of things, I'll make you go down the slippery slope and you're like, okay, fine, Solomonoff induction perfectly formalizes what all our concepts really mean.
Vaden Masrani: But it's not making us go down the slippery slope. Like, Ben is a statistician, he understands Bayes. I grew up in a Bayesian machine learning lab. We understand this stuff. We've read it all. I've read a lot of Yudkowsky. Like, I know the argument. Like, there's maybe a deep asymmetry
Liron Shapira: So
Vaden Masrani: which is that we know your side of the argument, but you don't totally know our side.
And so it's like the re-education has already happened, because I started as a Bayesian and I started as a LessWrong person, as did Ben. And so we have been re-educated out of it. And so you're talking about re-re-educating us, but you wouldn't be telling us new things. You wouldn't be telling us
Liron Shapira: I actually haven't heard my side represented well on your podcast, so let's see how much you know my side by the end of this, okay?
Vaden Masrani: Sure, sure. Yeah.
Liron Shapira: All right, so here I've got another related question on the subject of hard to vary, and I think you mentioned this yourself: yes, technically, when somebody says it's Helios's chariot... sorry, wrong way. When somebody says, hey, the Earth is spinning on its axis, and that seems kind of hard to vary, technically it's not hard to vary, because you could still come up with infinitely many equivalent explanations. So what I mean is, okay, the Earth spins on its axis, and there are angels pushing the earth around, right? You can just keep adding random details, or even make equivalent variations, like build it out of other concepts, whatever.
So there's this infinite class, but the problem is you're wasting bits, right? It's just not compact, I think, is the main issue.
Vaden Masrani: No, no, no. Good sir, it's not about bits. No. So the problem with that is, well, you can take any theory and, I actually gave a talk about this at a high school once and I called it the tiny-little-angels theory. You can take everything you know about physics and just say it's because of tiny little angels, and the tiny little angels are doing all of it.
The problem there is not that you have to add extra bits. It's that as soon as you posit tiny little angels, you are now positing a completely different universe that we would have to be living in, one that would rewrite everything we know. It's the same with homeopathy and stuff: if the more you dilute, the stronger something gets, that rewrites the whole periodic table.
Liron Shapira: They're just there, but they're inert.
Vaden Masrani: That is the hard-to-vary stuff.
Ben Chugg: How do the angels interact? What is their mechanism?
Vaden Masrani: And why angels? Why not devils? The very way that you are varying the explanation as we speak is what we're talking about, right? So it's easy for you to vary.
Liron Shapira: What if I say the Earth turns on its axis, but there's just one extra atom that's just sitting there? Can't I just posit that? Isn't that an easy variation?
Ben Chugg: But then take that seriously as a theory, right? Is that extra atom interacting with anything? If not, then what use is it? If so, then it's going to have effects. So why haven't we witnessed any of those effects? Where is it in our theories?
Vaden Masrani: Also, heliocentrism is not a theory of "there are this many atoms." It's not a theory at that level, right? Who's counting atoms in heliocentrism?
Ben Chugg: Right, it's not that kind of theory, yeah.
Liron Shapira: So let me summarize my point here. I think you guys do have a point when you talk about hard to vary, and I think it maps to what Bayesians, and Occam's razor, would claim: let's try to keep the theory compact. A theory has a higher a priori probability if it's compact.
So if you just add a million angels that are inert, you're violating Occam's razor, which I think maybe both worldviews can agree on. But if you're saying, no, no, we don't care about Occam's razor, we care that it doesn't make extra predictions, or that it makes other predictions that are falsified, I feel like now you're starting to diverge into a different argument, right? So I do feel like the hard-to-vary argument kind of seems equivalent to the Occam's razor argument.
Vaden Masrani: Homeopathy probably could be represented with many fewer bits than the periodic table, but I still prefer the periodic table, even though it's more complex, right? So it's not just Occam's razor and a low number of bits. Quantum field theory would take up a lot of bits. There are many simpler kinds of theories you could use, but they don't explain anything. They don't explain the experimental results that we have.
Liron Shapira: But the problem with homeopathy is, doesn't it...
Vaden Masrani: Simplicity itself is not valuable. Okay, well, now we're back to content. But I'm saying that if the only criterion is a small number of bits, being compact, and simplicity, there are so many theories which are complex, use a lot of bits, are not simple, but I still prefer them.
And that's my point.
Liron Shapira: If you're trying to set up an example though, you have to make sure that it's an example where two different models make the same prediction. So when you brought up homeopathy versus the periodic table, I wasn't clear on what was the scenario where they're both making the same prediction.
Vaden Masrani: They are both predicting that if you take my drug, it will make your cold go away faster. Buy my product at Whole Foods, and they will both claim to address your cold.
Liron Shapira: But in this scenario, doesn't the homeopathy remedy not work? Okay, but I mean...
Vaden Masrani: But the homeopathy people think that they're predicting that it's medicine, right? And there's a reason why people skip traditional medicine and go to homeopathy: both are making predictions that if you take this, you're going to feel better, right?
Liron Shapira: But the kind of example you're trying to set up is one where we actually have the same phenomenon, right? Like in the other example it was, hey, the Sun is going to come up and go in an arc across the sky. So it's the same phenomenon, you have two theories, and we're comparing them.
Vaden Masrani: No, the type of example I was trying to set up here is much simpler, which is just that simplicity and Occam's razor aren't sufficient. They're just modes of criticism. They're useful heuristics sometimes, but they're not primary. And if all we care about is small numbers of bits and simplicity and compactness, then I can give you a bunch of theories that meet that criterion that I don't like very much.
Liron Shapira: This is actually interesting, by the way. I wasn't really expecting you to say that you basically don't think Occam's razor is useful. What's your position on Occam's razor?
Vaden Masrani: I think Occam's razor is good sometimes. It's one way to criticize stuff, but it's not the only thing. Sometimes a theory is super complicated and has a bunch of superfluous assumptions that you need to shave off. That's when I'll pull out Occam's razor. Sometimes I'll pull out Hitchens's razor.
Hitchens's razor says that that which can be asserted without evidence can be dismissed without evidence. That's also a useful criticism and a useful heuristic. There's a whole toolkit of different razors that one can pull out, and none of them are at the base level. They're all just kinds of, you know, shaving equipment that shaves off shitty arguments.
Liron Shapira: What do you guys think of the Bayesian view of Occam's razor?
Vaden Masrani: I think it's as fallacious and mistaken as Bayesianism itself. Or... that's cheap. Ben, give me a less cheap answer.
Ben Chugg: Yeah, so, I mean, you want to say that theories that are simpler should have higher prior probabilities, right? That's the view of Bayesianism with respect to Occam's razor.
Liron Shapira: Right, and that's what Solomonoff induction does. It basically orders all the different possible Turing machines that could ever describe anything, and it puts the ones that are shorter earlier in the ordering. Which means that if you have one Turing machine that says there's a million angels doing nothing, that's going to be deprioritized compared to the Turing machine that says, okay, there are no angels. It's just simpler.
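A minimal sketch of the length-weighted prior Liron is gesturing at here. The bit lengths below are invented for illustration, and real Solomonoff induction sums over all programs and is uncomputable; this only shows how a 2^(-length) weighting formalizes "wasting bits":

```python
# Toy illustration of a length-based prior over hypotheses.
# The "program lengths" are made-up stand-ins, not actual Turing machines.

hypotheses = {
    # description                          : assumed program length in bits
    "earth spins on its axis":               120,
    "earth spins + a million inert angels":  500,
}

def universal_prior_weight(length_bits: int) -> float:
    """Weight proportional to 2^(-length): shorter programs come 'earlier'."""
    return 2.0 ** (-length_bits)

weights = {h: universal_prior_weight(n) for h, n in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

for h, p in priors.items():
    print(f"{h}: prior ~ {p:.3g}")
# The angel-laden program is penalized by a factor of 2^380 relative to the
# shorter one, which is the formal version of "you're just wasting bits."
```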
Vaden Masrani: Can I step back just one second? If in your view I'm dodging the question, feel free to re-ask it, but I just want to frame things a little bit first. Please criticize me if it seems like a dodge, but... Solomonoff?
Ben Chugg: Solomonoff.
Vaden Masrani: Solomonoff induction?
Solomonoff induction and Bayesianism and all this stuff tends to fiddle with the probabilities enough to come up with a justification that Popperians already have. There are reasons why we like simplicity. One of the reasons is that simple theories have higher content, and thus they're more powerful and easier to refute. That, in my view, is why we like simplicity. However, you could come up with a Bayesian story about why we like simplicity. You could talk about it in terms of induction, you could talk about it in terms of Solomonoff induction, and you could fiddle with the math and say, this is how we get this conclusion from Bayes.
And you can do this all over the place. This is like the Bayesian-epistemology hypothesis-testing stuff where you can come up with a post hoc story about why Bayes' theorem is what led us to the discovery of the double helix. But it just doesn't give us anything new. We've already discovered that, and then you come up with a story after the fact. The double helix is another nice example of why I like content. And I'll stop repeating myself there. But when you ask these kinds of questions, yeah, you can always tell a story from Bayes' perspective about why we value this stuff.
And we're giving you an alternate story, and ultimately it's something the listeners are going to have to decide. But the starting point, that Solomonoff induction is right, is wrong, and we can argue about that. If you start with the assumption that it's right and you don't want us to argue about that, then yeah, of course you'd come up with a story from Solomonoff's perspective about why we value simplicity.
But then we're talking completely at cross purposes, because I reject Solomonoff induction because I reject induction. Any kind of induction just is wrong. You can listen to us argue about this for hours and hours if you like. But you're starting with the assumption that it's right, and that already means we're not talking properly to one another.
Liron Shapira: Yeah, and by the way, I do want to hit on the Hume problem-of-induction stuff. I know you guys have talked about it, and I find it interesting, so I want to get there. But first, I think I've got a little more red meat to throw at you on this topic of how we actually apply Popperian reasoning to judge different hypotheses that seem like they could both apply. I've got an example for you. Let's say I take a coin out of my pocket. We're just at a party, right? I'm not an alien or whatever, it's just normal people. I take out a quarter, it looks like a totally normal quarter, and you don't suspect me of anything. And I flip it ten times, and it comes up with a pretty random-looking sequence, say: heads, heads, heads, tails, heads, tails, heads, heads, heads, tails.
So it doesn't look like a particularly interesting sequence. And you say, okay, this just seems like the kind of thing I'd expect from an ordinary fair coin. And then I say, I've got a hypothesis for you: this is a coin that always gives this exact sequence when you flip it ten times. You always get heads, heads, heads, tails, heads, tails, heads, heads, heads, tails. So if I flip it again ten times, I'm going to get that exact same sequence again. That is my hypothesis, and I'm drunk, right? So I don't even seem like a credible person, but I'm throwing out the hypothesis anyway. You think it seems to be a fair coin unless I'm doing an elaborate trick on you. So my question for you is: what do Popperians think about contrasting these two hypotheses, fair coin versus that exact sequence every time you flip it ten times? Which of the hypotheses seems more appealing to you in terms of being, I don't know, harder to vary or just better?
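For reference, here is the arithmetic a Bayesian would run on Liron's two hypotheses. It's not the hosts' framing, just a sketch of the likelihoods, which is where the whole disagreement about priors comes from:

```python
# Likelihoods of the observed ten flips under each hypothesis.
observed = "HHHTHTHHHT"

# P(observed | fair coin): each flip is 1/2, independent.
p_fair = 0.5 ** len(observed)   # = 1/1024 ~ 0.000977

# P(observed | "always exactly this sequence"): predicted outright.
p_exact = 1.0

print(p_fair, p_exact)
# The "exact sequence" hypothesis fits the data 1024x better, so under Bayes
# everything hinges on how much prior weight that hypothesis deserves.
```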
Vaden Masrani: I would just do it again and quickly find out. You flip the coin again: do you get the same sequence? No? Then you've found out.
Liron Shapira: What if, whatever you predict, you have to bet a thousand bucks? So I'll let you do it again, but we've got to gamble on it.
Vaden Masrani: I would say I don't want to bet, I just want to do it again. If I'm at a party and someone tells me about a magic coin, first I'd be like, that's crazy, how is that even possible? Can you actually do that, man? Are you doing a magic trick, or can you control your thumb well enough to get the same sequence?
So it strikes me as prima facie completely implausible. And so, yeah, I would take the money. But... no, I wouldn't take the bet, because there's probably some trick going on.
Ben Chugg: He's asking you to bet for a reason.
Vaden Masrani: Yeah, exactly. So I would say no, I'm not going to bet.
Liron Shapira: Imagine this really is just, like, a random drunk guy who has no incentive and doesn't want to bet you. It really does just seem like somebody's dicking around, and you have no reason to suspect anything.
Vaden Masrani: I wouldn't, I wouldn't...
Liron Shapira: Forget about the bet, okay? My question for you is just this: you brought up "let's flip it again," and I'm saying, for whatever reason, before you flip it again, just between you and me, you say, "Liron, I'm about to do some Popperian reasoning."
Vaden Masrani: This is fundamentally a great example of the difference between Popperians and Bayesians. Ben brought up a great example about this a long time ago. The big difference is that a Popperian would just do it. A Bayesian would go off into their room and spend six hours writing a 20-page blog post about how they can formalize the probability space of their beliefs in this particular circumstance, and then they would post it to LessWrong, and then spend another 20 hours arguing about the probabilities.
Liron Shapira: So Popperians are wearing Nike.
Ben Chugg: So...
Vaden Masrani: That's it. They would just say, oh, okay, that's interesting. Let's just do it. Let's try. Oh, it's wrong? Okay, good. Move on.
Ben Chugg: So maybe to make that answer slightly more globally applicable, and remove a tiny bit of the snark: I think a good way to coherently talk about the differences in worldview, which I think is actually very interesting, is that Bayesians are extremely focused on having accurate beliefs, put aside exactly how we define accurate, but accurate beliefs given the information you have right now.
Popperians are very interested in generating new hypotheses and figuring out what we can do to grow our information. So, very interested in: if we have multiple competing good hypotheses for some phenomenon, how do we go about discriminating between them? And that's where the crucial test and stuff comes up, right?
So there is...
Liron Shapira: Yeah, so I get that your mindset is to just do it, and I get that you want to find new information, but what do you make of the challenge?
Ben Chugg: No, no, sorry, I'll stop dodging and answer it. But the reason Vaden is having trouble answering your question is that there is this extreme difference in emphasis between these two worldviews, so much so that I have recently started struggling to call Bayesianism "Bayesian epistemology," because it's not really about epistemology in the sense of growing knowledge.
It's about epistemology in the sense of justifying current credences for certain hypotheses. So the emphases of these two worldviews are different, and they still conflict in important ways, as we've discussed.
Liron Shapira: I mean, Solomonoff induction does grow its knowledge and grow its predictive confidence, right? So I think you're going out on a limb to say that Bayesians shouldn't have a right to the term epistemology.
Ben Chugg: No, no, I...
Vaden Masrani: No, that's not what he said. You just said that.
Ben Chugg: If you want to call it epistemology, fine. I'm just saying, honestly, this is me trying to give a boon to your side and say: I think we're often arguing at slightly cross purposes, because the Bayesians are extremely focused on uncertainty quantification, right? They want to say exactly what your credence should be given the current information.
I think they're less focused on, I mean, I haven't heard many Bayesians talk about generating these new hypotheses with infinitely many Turing machines and stuff, right?
Liron Shapira: I mean, I can tell you, I personally don't spend a lot of time trying to precisely quantify hypotheses. When I'm just manually doing things, like, hmm, this seems like a good move to try, when I'm thinking things through roughly, I just know in my head, on a meta level, that I'm approximating the numerical ideal.
And that's it. I just live my life with an approximation.
Vaden Masrani: You're assuming in your head that that's true. But sure. Let me not dodge the question. Can you ask it again?
Ben Chugg: Yeah, I'll just...
Liron Shapira: Yeah, sure. So there's this weird sequence of flips...
Ben Chugg: Yeah, let me just answer and say: all else being equal, I think I would take the bet or whatever. I would say this is probably implausible, because if I can see how they're flipping it, it seems extremely implausible that there's a mechanism by which they can control the coin. There's no string in the air or something. So the only plausible mechanism by which this sequence had to happen is basically their finger, right?
Because you're seeing the same coin. And if the coin is memoryless, which seems like a reasonable assumption, it wouldn't know that it has flipped a head. It wouldn't know its own history. So it has to be the thumb.
Liron Shapira: Exactly. And by the way, just on my intent: I'm not going to trick you. I'm not going to be like, psych! That's not where I'm going with this.
Ben Chugg: No, no, I know. But yeah, I'd probably say it's very implausible that it's exactly that sequence.
And, you know, if I were in the mood to bet against it and had the spare income to do so (I'm a PhD student, so I don't have much spare income), if it's 30 cents, then maybe I'll take the bet.
Liron Shapira: Right. So this is my question for you: this seems to me like a great toy example of how you guys actually operate Popperian reasoning on a toy problem, right? Because you're saying it's a fair coin, but it seems to me like the "always exactly this HHHTHTHHHT sequence" hypothesis is a very rich hypothesis, right?
It has more detail and it's harder to vary, because when you say "fair coin," I'm like, wow, fair coin? That's such an easy-to-vary hypothesis. You could have said it's a 60-40 coin, you could have said it's a 70-30 weighted coin. In fact, in my example, you got seven heads. So why did you say fair coin instead of 70-30 weighted coin?
You're the one who's picking a hypothesis that's so arbitrary, so easy to vary.
Vaden Masrani: You just put a thousand words in our mouths that we did not say. We didn't use those words.
Ben Chugg: Yeah, let me just... I think I can resolve this quickly. You're right that saying "you can flip exactly that sequence of coins" is an extremely strict hypothesis that is very rich and has lots of content. The content says: every time I do this, I'm going to get this exact sequence.
What is content good for? It's good for discriminating between different theories. And how would we do that? We'd try and flip the coin again. So Vaden wasn't trying to dodge the question by saying flip the coin again. Content is inherently tied to how we test.
Liron Shapira: Okay, but for the sake of argument, you don't get to flip the coin again. You have to just give me your best guess.
Vaden Masrani: Hold on, hold on. But Liron, you're saying, okay, how would a Popperian deal with this circumstance, right? That's your question.
Liron Shapira: Okay, and I get that you really want to flip the coin again, but can't you just assume that you have to give me your best guess without flipping?
Ben Chugg: But I just gave you a bunch of reasons.
Vaden Masrani: Yeah, you're simultaneously saying, how would a Popperian deal with this? I tell you how we would deal with it, and you say, okay, but assume you can't deal with it the way that you want to deal with it. Then how would you deal with it?
Liron Shapira: Okay, so you're basically saying you have nothing to tell me before flipping again. Nothing at all.
Vaden Masrani: Where are you trying to lead us? We would run another experiment, or we would take the bet, because it's so implausible that a coin could do this. Either the guy has mastered his thumb mechanics in such a way that he can make it happen, or there's some magic coin that somehow knows how to flip itself in the exact sequence that is being requested, and both of these things seem completely implausible. So I would take the bet.
I wouldn't count the probabilities and then come up with some number in my head. But yeah, I've given you a couple of answers.
Liron Shapira: Yeah, so one reason I wanted to bring up this example, what originally inspired me to make it up, is to show a toy example where hard to vary seems to flip, to become counterintuitive, right? Because I do think I've successfully presented an example where the 50-50 hypothesis, the fair-coin hypothesis, actually is easy to vary.
Ben Chugg: You're thinking only in terms of statistics, though, right? In terms of explanations of the underlying physics and stuff, then it's like, is he magically doing this with his magic thumb? That's what's easy to vary.
Vaden Masrani: That's the part that's easy to vary. The part that's hard to vary would be: this is not possible, the guy is wrong, and I'll take the bet. The easy-to-vary part would be the magic thumb. Is he a super-being? Is he telekinetic? Are we living in a simulation? I can come up with a thousand ideas like that all day, and that's what's easy to vary, and I just reject all of them because I'm just making them up as I go.
That's all rejection, and that's how a Popperian deals with it.
Liron Shapira: So it sounds like the resolution, I don't know if it's a paradox, but it sounds like the resolution has to zoom out and appeal to the broader context: look, we live in a physical world, and coins are physically hard to make be this tricky, to always come up in this exact sequence.
As a Bayesian, I would call that my prior probability. What would you call that?
Vaden Masrani: Being a common-sense person.
Ben Chugg: Yeah, just knowledge about how the world works.
Vaden Masrani: Thinking about how the world works. It's just implausible. Anyone who doesn't study any philosophy would come up with approximately the same answer: that doesn't seem plausible.
Liron Shapira: If I wanted to challenge you more, I think I would probably have to put in some work and invent a whole toy universe that doesn't have as much common knowledge about the laws of physics, so that it's not super obvious that the coin is fair. Because I do think there's some essence here that I would distill as a Bayesian, a profound lesson worth learning. Imagine a million flips in a row, right? Even if the laws of physics made it really easy to make unfair coins, it just looks like there was no setup, and it could even be your own coin, a coin you just got from a random Kmart or a 7-Eleven, right?
You flip it, and it's your own coin, and it came up a hundred times with a totally random-looking sequence, and you're like, I have a great theory: it always does this exact sequence. Yeah, I do find it convincing that the laws of physics make that a priori unlikely, but I feel like a big advantage of the fair-coin hypothesis is also that it's a priori much more likely than the hypothesis of this exact sequence.
That's kind of a ridiculous hypothesis. Where did I get that hypothesis without actually flipping the coin? Do you see there may be something there?
Vaden Masrani: Well, you can always tell a Bayesian story after the fact. Yeah, it's a priori less likely; we all agree on that. Ben and I would want to say it's less likely because it's a bad explanation, so we just reject it. And you'd want to say it's less likely and ask, okay, how much precisely less likely is it than the other? Let's come up with a count. Okay, there are eight heads and two tails, let's come up with a probability. And we just say: you don't need that. It's a ridiculous thing to assume.
Ben Chugg: Let me turn it around: what is your prior probability on the coin being fair?
Vaden Masrani: Yeah, great question. That's a great question. What is your number?
Liron Shapira: Yeah, I mean, generally, if somebody does a party trick and I don't judge them as somebody who could actually be doing some pretty fancy magic, if it just seems like a random drunk friend, then I'd probably say, okay, there's probably a 97 percent chance this is just a regular fair coin, and the other 3 percent is, okay, this drunk guy actually got access to a pretty good magic trick.
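For the curious, here is how Liron's 97/3 split would move under Bayes if the drunk guy's second run of ten flips came out identical. The likelihoods are our assumption for illustration (a "trick" coin reproduces the sequence with probability ~1, a fair coin with 2^-10), not something stated on the show:

```python
# Illustrative Bayes update using Liron's stated 97% / 3% prior split.
prior_fair, prior_trick = 0.97, 0.03
lik_fair  = 0.5 ** 10    # fair coin repeats the exact 10-flip sequence: 1/1024
lik_trick = 1.0          # assumed: a trick coin repeats it by construction

post_fair  = prior_fair * lik_fair
post_trick = prior_trick * lik_trick
total = post_fair + post_trick

print(post_fair / total)   # ~0.031 -> fair coin drops to about 3%
print(post_trick / total)  # ~0.969 -> trick coin jumps to about 97%
# One exact repeat is enough to almost exactly invert the 97/3 split.
```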
Ben Chugg: Okay. So what you gave is a bunch of reasons and then a number, right? We're just giving you the reasons.
Liron Shapira: Yeah, and the number is a summary of the reasons.
Ben Chugg: I know, exactly. We're just giving you the reasons and no number.
Vaden Masrani: It's such a ballpark that we just don't need the number. What's the number for?
Liron Shapira: Yeah, I mean, I get that, right? So the standard Bayesian response is to say: well, look, what if there were a market, right? Like a betting market, or a prediction market, or even a stock market? And actually, this gets to another section that I was going to hit you with, which is expected value.
What do you think of this idea of calculating expected value?
Ben Chugg: Oh boy.
Vaden Masrani: I think it's like the Pythagorean theorem: useful in some circumstances and not useful in others. It's a banal mathematical fact that statisticians use all the time, and for whatever reason it's also this crazy, philosophically metaphysical thing that the Oxford philosophers like to invoke.
Like William MacAskill and Toby Ord and Hilary Greaves, and Eliezer Yudkowsky, and some day traders too. I know what you're about to say. That's where the problems come in, and we could talk about this a lot. But to a first approximation, that's what I think.
Liron Shapira: Take the example of, hey, we're at this party. Somebody just did something with a coin and seems to be trying to gaslight me that it's a coin that always comes up with this exact ten-flip sequence. But then some trader overhears the conversation and walks by, and he's not a confederate.
He's just honestly somebody who likes trading and likes markets, and he's like, hey, let me make you guys a market on this. What odds do you want to give for this bet? And that's where, okay, yes, I pulled a number out of my butt, but this guy wants to make odds, right? So you have to plug something in.
Ben Chugg: Yeah, if you're willing to bet.
Liron Shapira: Yeah, and you could say, well, Popperians would just walk away. We wouldn't participate, right? But...
Vaden Masrani: No, we would just run the experiment again. We'd run the experiment again and find out. Okay, we would run the experiment again, and then decide if it's so ambiguous as to require doing the experiment a hundred thousand times, collecting data on it, building a statistical model, and then using that statistical model to figure out what's actually happening. Because, yeah, there are many experiments that are really challenging to run, you get differences every time you do them, and that's where data and statistics come in. That's where they're applicable.
But that's different than just saying the expected value of this is going to be big. Where are you getting this stuff from? You're just making it up, and it's useless in most cases unless you have data. And yeah, we can talk about the cultural stuff of traders, and like Sam Bankman-Fried: if you read Michael Lewis's book, they talk about the culture at Jane Street and how they put expected values on everything.
And that is a cultural thing which people do, and we can talk about the culture there too, but it's just very different from how most people use it.
Liron Shapira: An interesting example from that book: if you look at Jane Street, or, you know, the famous Medallion Fund, there are funds that are placing bets that they perceive to be positive expected value in various markets.
Ben Chugg: I mean, using a huge amount of data and statistics, and a boatload of assumptions about how the last five days of the market reflect something about the next day of the market, right?
Vaden Masrani: So at the beginning of this conversation, I started by saying: Bayesian statistics, all good. Bayesian epistemology, bad, boo. The Bayesian statistics part is what you're asking about there, because you have 50 years of financial data and you can run trials and do simulations with your data and see what gives you a slightly better return. And that is just a completely different thing from what the longtermists and the Yudkowsky-style Bayesians are doing. So let's just make sure that distinction stays clear in this conversation.
Ben Chugg: Yeah, we're not anti-statistics.
Vaden Masrani: If you have data, all good. And we're not anti-expected-values either: if you have data, all good.
You could do it wrong, of course, but I'm assuming for the purposes of this conversation that we're doing it well. And yeah, Jane Street and all these big hedge funds, their whole life is trying to get slightly better odds using supercomputers and so on. So that's fine, yeah.
Liron Shapira: For the audience watching the podcast, you guys know I recently did an episode where I was reacting to Sayash Kapoor, and I think he made a lot of similar claims. I don't know if he subscribes to Karl Popper, but he had similar claims: look, probabilities are great when you're doing statistics, but when you're just trying to reason about future events that don't have a good statistical data set, then just don't use probabilities.
Do you guys know Sayash Kapoor, and does that sound right?
Ben Chugg: No, but that sounds great. Yeah, it sounds like we should know him.
Liron Shapira: Okay.
Vaden Masrani: I don't know where he's coming from, but that, yeah, totally. Actually, can I riff on that a little bit? Popperianism bottoms out into common sense. So it does not surprise me at all that someone who doesn't read Popper, know Popper, know Deutsch, is saying similar things. People say Popper's a cult too, so we're both in cults. But if you're not in the Popperian cult or the Yudkowsky cult, and you're just thinking about how the world works, you'll likely come up with a bunch of Popperian stuff, because it just bottoms out into common sense, typically.
Liron Shapira: I would also like to claim that Bayesianism bottoms out into common sense.
Vaden Masrani: That's fair, yeah, we'll let the audience decide, yeah,
Liron Shapira: Okay, two can play at that game. But back to what I was saying.
Liron Shapira: So a while ago I was saying, look, when we do Solomonoff induction, I claim that that's the theoretical ideal of what I do in practice, which is often just using my muscle memory. Similarly with expected value, I would make a similar claim: realistically, in my life, I don't make that many quantitative bets.
I'm usually just coasting, not really doing that much math. But I do think expected value is the theoretical ideal of what I do. For instance, I think we can probably all agree that if you had to bet $100 on something, and you either had to bet that the sun will rise tomorrow or that Kamala will win the election, the sun rising tomorrow is going to be a much stronger bet, right?
So that would be some primitive form of the expected value calculation.
Ben Chugg: I mean, we have a very good explanation as to why the sun will rise tomorrow.
Vaden Masrani: And we don't really know too much about the election.
Liron Shapira: So, okay, let me ask you about your MO. Let's say I'm asking you to bet $10, and you have to give me odds at which you'd be willing to bet that $10. With Kamala, let's say, I'm sure you guys don't have a good explanatory model of whether she's definitely going to win or not, right? Because it's too hard to know. But if I said, look, it's your $10 to my $1,000, just take either side, wouldn't you take a side, because it seems pretty appealing?
Ben Chugg: Sure. It's $10. Yeah.
Vaden Masrani: I mean, if you're guaranteeing that we'll get paid, yeah.
Liron Shapira: When I ask you this question, you're not like, "No, Liron, run the election, run the election." You're not stonewalling, right? You're saying, sure, I'll put down $10 to your $1,000. That seems pretty appealing, right? So didn't you just imply that you think the expected value of betting on Kamala is more than $10 in that situation?
Ben Chugg: Yeah. So you're sort of defining expected value after the fact. All I'm saying is, I'll take this bet because it seems like a good deal. I have $10 of disposable income and, you know...
Vaden Masrani: What I'm not doing, to be very clear...
Liron Shapira: If it seems like a good deal, I think you're approximating the mathematical ideal of expected value.
Vaden Masrani: ...because you can use that framework to describe whatever the hell you want to describe, right? So, expected value. Okay, for the listeners: what is expected value? Let's just talk about the discrete case. It's a summation, yeah? A summation of the probability of a thing happening times the utility of that thing happening, and then you sum it all up.
Okay, great. So what are you going to put in that sum? Well, you have to make that up. Whatever you want to make up, you can do it. You have to make up the utilities, and then you have to make up the probabilities. So what you're doing is taking one made-up number, multiplying it by another made-up number, doing this a bunch of times, adding up all these made-up numbers, and getting a new made-up number, and then you're making a decision based on that new made-up number.
If you want to do that, and make your decision based on that, go nuts. You just don't need to. Instead of putting up all these made-up numbers and coming up with a new made-up number, you could just start with the final made-up number. And you could also just start with the realization that you don't actually need these numbers in the first place, because the election is like a knife edge.
And if someone is offering, you know, thousand-to-one odds or something, then you can take money off of them, because they are mistaken in their knowledge about what's going to happen; they're falsely confident about something. So you don't need expected value.
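Here is Vaden's discrete definition written out, applied to Liron's $10-against-$1,000 bet from a moment ago. The 50 percent win probability is a placeholder for illustration, not anyone's stated credence:

```python
# Discrete expected value as Vaden defines it: sum over outcomes of
# (probability of outcome) x (utility of outcome).
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * u for p, u in outcomes)

p_win = 0.50                      # placeholder probability that the bet pays off
ev = expected_value([(p_win, +1000), (1 - p_win, -10)])
print(ev)                         # 495.0: wildly positive at these odds

# Break-even point: the bet is positive-EV whenever p > 10 / (1000 + 10) ~ 1%,
# which is why both hosts say yes without doing any arithmetic.
print(10 / 1010)
```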
Liron Shapira: Knife edge? The term "knife edge" is such a loaded term. You're implying that it's equally likely to go either way. How are you making that claim? You don't know the future, Vaden.
Vaden Masrani: Because we have a data set here, which is the 330 million people who are going to vote, and the polls that are trying to approximate that. Polls are a sample of a population. This is statistics; this is what I'm going based off of. And this is why Bayesian statistics is fine: we know how polls work, we know how counting works, and we have hundreds of millions of repeated trials of people going to do this. This is where statistics makes sense.
Liron Shapira: But there's a dark art where all of these pollsters are building their models, right? Because the election is so close that the way you build your model is going to really define which candidate you think wins.
Vaden Masrani: And that's a huge problem. Yeah. I could rephrase that problem by saying there is way too much subjectivity injected into these equations.
Liron Shapira: It sounds like we all really agree, though, in our gut, that the election is pretty close to 50-50 on who's going to win. Do you want to push back on that, that it's not roughly 50-50 who's going to win?
Vaden Masrani: I have a pet theory about this, but it's not worth taking seriously. So I'll just go with the polls, though I think polls are inaccurate.
Liron Shapira: And I want to bring up the prediction markets, right? So now there's PolyMarket, Kalshi, Manifold. These markets have gotten a lot of action in recent weeks and months. And it's pretty sweet, right? Because you can watch these markets, and a lot of people betting on them have a Bayesian interpretation of what that fluctuating number means.
I mean, how else... or do you think it's crazy to give a Bayesian interpretation of those numbers, as good odds that you could use if you're placing bets?
Vaden Masrani: Yeah. I pretty much just ignore prediction markets, but that's my personal choice. I mean, what have you learned from the prediction markets that you haven't learned from polls? Out of curiosity.
Liron Shapira: Oh, what have I personally?
Vaden Masrani: Yeah. Just, what value have you gotten from the prediction markets that you haven't gotten from polls?
Liron Shapira: A lot of times it's redundant. I see prediction markets as being a little bit finer-grained than a poll, so when the polls are kind of ambiguous, sometimes I'll look at a prediction market and I'll see more signal. Maybe with Biden dropping out, I guess there was no direct poll about that.
There was probably no single representative poll that was the "Biden dropping out" poll, but I just want to invoke that there are times when I see something happen that isn't fully captured by the ontology of a poll, but there's a specific prediction market for it, and it spikes, and then I really do think, oh man, that spike seems like I should update my expectation of what's going to happen.
Ben Chugg: There's a lot going on here. Let me just touch on a few points. So I think your initial thrust with betting at 10-to-1,000 odds: these are really good odds, you'd probably take that, and if I keep decreasing the 1,000, you're going to hit a point where you don't want to take that bet anymore, right?
Sure. So if you want to then do some arithmetic on this and come up with the odds at which I'm willing to bet on Kamala versus Trump, and you want to call that expected value, sure, you can do that. We just want to emphasize that this is an extremely different quantity from what statisticians are doing when you have a well-defined data set, you're looking at a well-defined outcome, and you're counting things or making very rigorous statistical assumptions.
And calling them both "expected value" is unhelpful at best and actively harmful at worst, because these are not the same sort of quantity. Now, I'm not disputing that people think one person is going to win versus another, more or less, or that they have different risk tolerances. Some people like to bet, some people don't like to bet.
They have different utility functions. And so if you want to press them on that and make them bet at certain odds and call that expected value, fine. But this is a different thing from statistical expected value, which is why I said "statistical" at the beginning. Okay, fine.
That's one point. The second point is: yeah, prediction markets are interesting as an aggregate of knowledge about how people, sometimes few people, sometimes many people, with money, are willing to bet on an election. It's a summary of information in that sense, right? And again, you can now talk about the expected values of different people in this market.
The whole point is that their expected values are different. That's why you see differential outcomes in markets, right? And there's no way to adjudicate that precisely, because it's subjective. And this is, again, why this is different from statistical expected value, where we have well-defined statistical models.
Vaden Masrani: Can I ask you a question before you respond, which is: how do you deal with the fact that you have many different prediction markets and they all say different things?
Liron Shapira: I mean, usually arbitrage brings them pretty close together, no?
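For context on the arbitrage Liron is referring to, here is a sketch of why persistent gaps between two markets on the same question tend to close. The prices are hypothetical, not real quotes from any platform:

```python
# If market A sells YES at 0.52 and market B sells NO at 0.43, buying one of
# each locks in a profit no matter how the event resolves.
yes_price_A = 0.52   # cost of a YES share on market A (pays $1 if event happens)
no_price_B  = 0.43   # cost of a NO share on market B (pays $1 if it doesn't)

cost = yes_price_A + no_price_B   # 0.95 to buy both
payout = 1.00                     # exactly one of the two shares pays $1
profit = payout - cost            # 0.05 guaranteed, ignoring fees

print(profit)
# Traders doing this push the YES price up on A and down on B until the
# combined cost approaches $1, which keeps the two markets within a narrow spread.
```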
Vaden Masrani: But it doesn't, because they are still saying different things, right? I may be factually wrong about this.
Liron Shapira: Where there's a persistent spread? I mean, on Trump versus Kamala it's always within plus or minus 3%.
Vaden Masrani: Yeah, actually, I'll back off on this and let the listeners or the viewers check, because I haven't checked it recently.
Liron Shapira: If you're correct, that means arbitrage is possible. I mean, that's not even necessarily a statement about epistemology. That's just weird. Why isn't anyone closing the gap?
Ben Chugg: I think maybe a...
Liron Shapira: I mean, I guess that could potentially be a statement about epistemology, right? If you're saying...
Vaden Masrani: I want to fact-check myself here. I would love for the commenters to just, whatever time they're looking at this, pull up four prediction markets, take screenshots, and put them underneath. And let's see how they compare.
Ben Chugg: Or maybe a better example here is just the discrepancy between Nate Silver's models and PolyMarket. PolyMarket, I think, is giving a way bigger edge to Trump at this point.
Liron Shapira: Yeah. So PolyMarket is allowed to have more models, right? Nate Silver constrains which models he allows himself to use. Other people are like, oh, I also want to weight this potential model. So it's not that surprising that Nate Silver hasn't captured every possible model that you might want to weight into your prediction.
Vaden Masrani: But my question is, if you are willing to grant for the sake of argument that they are different, how do you decide which one to follow? That's my question. Because if they're all saying different things...
Liron Shapira: That's the crux of our disagreement, right? I'm happy to take a hit if you're actually correct that prediction markets have persistent disagreements about the probability of something. Given that they're liquid, assuming it's easy to make money by buying one and shorting the other, that absolutely would be evidence for the meaninglessness of Bayesian probability.
And conversely, the fact that, I claim, this isn't factually true, that you actually have very narrow spreads, I think is evidence for the meaningfulness of Bayesian probability. And I actually have further evidence along those lines, which is: have you ever checked their calibration?
Vaden Masrani: Market calibration? Yeah, okay, let's go down that road.
Liron Shapira: Manifold is the one I saw. They looked through a bunch of past markets, right? And these are Bayesian markets; they're not doing statistics. They're just predicting an uncertain future about random questions. And when the market says, at a given randomly sampled time, that there's, say, a 70 percent chance the market is going to resolve yes, whatever it's predicting, like "will Russia invade Ukraine," all these random questions, they went back and checked the calibration. I can put a graph in the show notes, but it's ridiculously accurate. For example, the data point for 70 percent comes out at something like 68 percent.
This is across a random sample of Manifold markets.
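For readers who want to see what a calibration check like the one Liron cites actually computes, here is a minimal sketch. The sampled market data below is invented for illustration, not Manifold's numbers:

```python
# Bucket market-quoted probabilities and compare each bucket's quoted
# probability with the observed frequency of YES resolutions.
from collections import defaultdict

# (market probability at a sampled time, did the market resolve YES?)
samples = [(0.70, True), (0.72, True), (0.69, False), (0.71, True),
           (0.18, False), (0.21, False), (0.22, True), (0.19, False)]

buckets = defaultdict(list)
for prob, resolved_yes in samples:
    buckets[round(prob, 1)].append(resolved_yes)   # group into 10% bins

for p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"markets near {p:.0%}: resolved YES {observed:.0%} of the time")
# A well-calibrated market shows observed frequencies close to the quoted
# probabilities in every bin (e.g. ~68% for the 70% bin Liron mentions).
```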
Vaden Masrani: There's a post on the EA Forum that says that's only true with a time horizon of less than five years, and it might even be one year.
Liron Shapira: So we can use Bayesian probability to predict events one year into the future. That seems like a pretty big win for Bayesianism.
Vaden Masrani: No, hold on, because the whole superintelligence thing is not one year into the future. Okay, we're going to go down this path, let's do it.
Hold on, let me say a few things. If you want to talk about superforecasting and Philip Tetlock and stuff, you have to read the appendix of his book, where he says that any prediction beyond ten years is a fool's errand. You shouldn't even try it, and you'll embarrass yourself if you do.
That's point number one. Point number two is that on the EA Forum, someone who is very sympathetic to Bayesians did an analysis of the calibration of, I think it was Manifold, and when you look at these scores, you have to account for how far into the future they are. And so, yeah, it's interesting. It's totally possible to make predictions successfully within a year, but the thing that you're predicting matters a lot.
If you're going to predict that next year is going to be kind of similar to this year, that's a default prediction. It's going to get you pretty high calibration, but it's also completely boring and uninteresting. It's not a huge concession at all.
Liron Shapira: So you're basically saying these Bayesians, who think they can take something that doesn't have a mass of statistical data and slap a quantified probability on it, as given by a prediction market, yes, as long as the time horizon is less than one year, they can expect near-perfect calibration.
Vaden Masrani: Yeah, I can do that too. I predict that in a year, on Christmas, there will be a lot of flights. It depends on the predictions you're making. If the predictions are simple, anyone can do it and get great calibration. If the predictions are really complicated...
Liron Shapira: But look at the predictions: they're complicated, right? They're things like "Russia will invade Ukraine."
Ben Chugg: These are things that the people betting have differential knowledge about with respect to everyone else, right? It's not the same people always betting.
Liron Shapira: To paraphrase what you guys are telling me: say there's a market saying Russia will invade, and let's say it's January 2022, so we've got like a one-month time horizon. Say there's a market saying, hey, Russia will invade Ukraine by the end of the quarter, because I think that's when they did.
Vaden Masrani: There's not a...
Ben Chugg: Yeah, no hypotheticals needed. Superforecasters gave this 15 percent before Russia invaded Ukraine, and they gave COVID a 3 percent chance of over 100,000 cases in the U.S. by March.
Liron Shapira: Yeah, and remember, we're talking about calibration here, right? So I'm not saying the market gave it a 99 percent chance and then it happened. I'm saying if they gave it a 15 percent chance, then it falls into a class of markets that were saying 15 percent. And what I'm saying is the calibration data shows that 15 percent of those markets do resolve yes: a market is generally well calibrated. So it sounds like you guys might be conceding that, with a small timeframe under one year, there is such a thing as a well-calibrated Bayesian probability.
Vaden Masrani: Oof. A concession that's completely worthless.
Liron Shapira: I mean, I just think that, in the context of a debate, that concession is almost conceding everything.
Vaden Masrani: It's not. I mean, there's a complete difference between making predictions in the...
Liron Shapira: Because, by the way, this is unexpected, right? It's not like you came in saying: okay, you Bayesians are so cocky because you have this amazing tool called the prediction market where you can nail calibration for things within a year, but let me tell you how badly Bayesians choke after one year. That's your position?
Ben Chugg: Wait, wait...
Vaden Masrani: What are you talking about? What are you talking about here? Let's just zoom out for a sec. No, Ben, you go first.
Ben Chugg: I'm confused about the claim you're making. What prediction markets are not is a consensus on probabilities. A prediction market would converge to 50 percent, for instance, if half the people thought there was a 25 percent probability of something and half the people thought there was a 75 percent probability of it. What's not going on is a bunch of Bayesian updating where you have a consensus of people all updating their prior probability.
Vaden Masrani: You don't have to be a Bayesian to bet on that. Yeah. You don't have to be a Bayesian to play in a prediction market.
Liron Shapira: And by the way, I'm not using prediction markets as an example of somebody running Solomonoff induction. I'm using prediction markets as an example of having a number. You know, Sayash Kapoor's whole thing, I know you guys don't know him, but it's related to your point: you guys are basically asking, where do these probability numbers come from? You can't do expected value unless you're doing statistics. Well, it seems like you could very successfully do Bayesian probability and expected value calculations if you simply refer to the numbers being output by prediction markets.
Ben Chugg: I don't know, but you're already selecting...
Vaden Masrani: It's a different phenomenon with a prediction market. We know how prediction markets work. Sorry, Ben.
Ben Chugg: I mean, you're already selecting for people who are choosing to bet on these markets, which are people who think they have better information than the average person. They think they have an edge, hence they're willing to bet, meaning they think they have a good explanation of whatever you're betting on, okay?
Do we agree there? So we're already in a very restricted class of people who are taking bets because they think they have advantageous information about something. That's what betting is all about: you bet when you think other people are wrong about something and you have an explanation as to why they're wrong. So you put money down on it. And what a market does is aggregate all this information. People think other people are wrong, so they bet on the other side of that, et cetera. I'm a little confused about how this relates to a win for you. What Bayesianism says, and I think the claim you're making, is that you should walk around at all times putting probabilities on every hypothesis you conceive of and constantly updating on new information.
I fail to see how that follows from people betting on very specific questions.
Liron Shapira: So let me test your claim here, okay? Would you claim that humanity as a whole, as a team, using the technology of prediction markets, can be a really great Bayesian? Humanity can just list a bunch of hypotheses, run prediction markets on them, and then plug in those probabilities as Bayesian updates.
Ben Chugg: Uh, no. Not for anything long-term, not for anything meaningful. Sometimes the future resembles the past for very important...
Liron Shapira: And to be precise, it's not just Bayesian updates, it's for betting, right? For expected value, for policy. And by the way, what I just described, I think, is Robin Hanson's concept of futarchy. You make prediction markets telling you the probabilities of different outcomes, and then you just maximize expected value.
You choose the policy that's going to maximize expected value according to the probabilities that you now know pretty well.
Vaden Masrani: This is an interesting thing. Okay. So your question is: if we could get all of humanity to make predictions about stuff... we still have to wait for the time to pass and then see if each prediction was right. A prediction will either be right or wrong. And if the prediction is more than a year or two out, then all of the predictions are eventually just going to be 50-50, because we have no frickin' idea what's happening. And then we have to...
Liron Shapira: I'm happy that you've conceded one year, so let's just talk about that.
Vaden Masrani: It's not a concession, because we understand how this works. There are certain predictions that can absolutely be made within a one-year time horizon. It depends on what's being predicted. I can predict all sorts of stuff.
Liron Shapira: So would you support futarchy for one-year time horizons?
Vaden Masrani: No, of course not. Futarchy sounds insane. Why would I support this? Because... so I don't... If what you just said is that you make a decision based on the whole planet's probability about what's going to happen in a year, is that what we're doing?
Liron Shapira: Kind of, yeah. The idea of futarchy, and we'll limit it to one year, is that anytime there's a policy proposal that's going to yield returns within the next year, say you want to make sure GDP grows 2 percent this calendar year, and there are different policies, like, would a tax cut improve GDP...
Vaden Masrani: All, all this is telling you, it's not telling you what is going to happen. It's telling you what people believe is going to happen and what people believe is going to happen can be completely wrong all the time. So,
Liron Shapira: Because, you know, you basically asked me. So futarchy would be like, you know, there's two different, should we do a tax cut? Should we do a tax increase? Should we, uh, cut interest rates, right? So there's a few different policy proposals where everybody agrees there's a target growth rate for the economy, right?
Like, that's the policy outcome you want. And so futarchy would say, great, run prediction markets on all the different proposals saying, conditioned on this proposal, you know, you're allowed to do conditional prediction markets, what would then be the resulting change in GDP? And that way voters could see, ah, okay, this policy is the one that has the best change in GDP, and then you implement that. And I think you were starting to push back, saying, like, look, this isn't a prediction, but may I remind you, these prediction markets have shown themselves to be very well calibrated, meaning you could use them as predictions, and your expected value formula would be very, you know, it'd yield good
Ben Chugg: But not everyone is voting in these, I guess, right? You're just restricting it to the people who, like, want to bet on these markets. Cause the whole point about prediction markets
Liron Shapira: Let's, let's say you're literally running it on, like, Polymarket, right, using the same rules as Polymarket, where it's literally just a financial incentive to participate if you think you know something. Um,
Ben Chugg: Okay. So you're going to delegate democratic decision-making to just, like, an expert class of people betting on Polymarket. I would definitely not support this.
Liron Shapira: I mean, let's say we keep the same system, but we vote in politicians who are like, look, if there's some weird emergency, I won't do futarchy, but, like, we get that we're all smart people, we all get the value of futarchy. So I will be setting up these prediction markets and you guys can help advise on my policy that way.
Vaden Masrani: So you want to find an elite class of super smart people that will be included in the prediction markets, and you want to get rid of all the dummies because
Liron Shapira: No, no, no, no, but that, but that's a solved problem. Like whatever current prediction markets are doing, the data is showing that they're yielding calibrated predictions. So you just, you, you just amplify what's working.
Vaden Masrani: So now you're talking about predictions. If you're seriously talking about this as a policy proposal, I would want to see the set of all predictions that were made and figure out, okay, are these kind of trivially easy predictions, or are they like, holy shit, that is impressive?
So first of all, I'd want to look at the kinds of predictions that are being made. And then I want to see, like, which ones were right and which ones were wrong, and for what reason. Um, but just to zoom out for a sec, like, this is very analogous to a question of, like, direct democracy. If my car is broken, um, I could do one of two things.
I can talk to, like, a few knowledgeable mechanics and ask them what they think is wrong, and they can tell me this, and I can get a couple different opinions. Or I could average the opinions of 330 million people in the population and just do whatever the average says. And you're saying that the second camp, just averaging the opinions of a bunch of people, is preferred to, like, domain knowledge about what's going to happen.
And I would, in every case, take domain knowledge, and sometimes that domain knowledge is going to be in the minds of, um, people in the Bay Area, in particular, who are extremely online and like to bet on all sorts of different things. And, depending on the question, that may or may not be a good source of information.
But there's no, like, massive, "ah, you just destroyed the whole Popperian approach because some predictions are possible within a year." It's like, we have to think about what's going on here. Um, and certain predictions are definitely possible within a year, yeah.
Liron Shapira: So say you're the president, and you ran on a platform of, like, I will pay attention to the prediction markets because I'm Bayesian and I understand the value of paying attention to prediction markets. And you're considering a tax cut, right, a generous tax cut across the board.
And the prediction market says, um, if this tax cut is implemented, GDP growth will increase more than 1 percent compared to what it would have been otherwise. Markets are saying 70 percent chance, right? And now, just to repeat back what you just said, you said, okay, yeah, sure, the president could listen to that prediction market, but he hired Larry Summers, right, or just, like, some famous economist, who's telling him, Mr. President, I give a 30 percent chance that
Vaden Masrani: No. He would give an explanation. He would give an explanation as
Liron Shapira: and it would come with an explanation, yeah. So, so you would say, because it comes with an explanation, and because this guy is trusted by the president, the president should just listen to him, and not the prediction market.
Vaden Masrani: they should listen to the explanation and maybe get a couple different ones and see what makes more sense and maybe get the people to debate a little bit
Ben Chugg: Also, I mean, yeah, if there's an explanation as to, like, why the prediction market might be accurate in this case, like, say you have all these expert economists betting on this market, right? So in some sense the market is reflecting the view of, uh, some giant class of people who we have some reason to expect know what they're talking about.
Then yeah, I would take that information on board. But I'm still confused about the Bayesian aspect here, right? So there are certain questions where we want to use statistics. We've said that all along, right? Statistics is valuable insofar as it helps us with prediction, right? Especially when there are huge, uh... okay, so prediction markets can reflect that in some sense.
Um, the Bayesian picture for me comes in, like, at the individual level. And at the individual level, I'm super skeptical of the ability of people to make, like, quote-unquote superforecasts, right? I think the literature there has been, like, very overblown. So there was a good review actually written by, um, I'm going to blank on their names.
Gavin Leech and Misha Yagudin, maybe? Right? And they were, uh, I think rationalists of some flavor, so very sympathetic to the superforecasting project. Um, and they took a look at Tetlock's literature, um, and found that these initial claims of, like, 30 percent more accurate than expert pundits were way overblown.
First of all, they were being measured by different metrics. And so once you correct for this, it's more like a 10 percent difference. Secondly, this 10 percent difference didn't even reach statistical significance. Uh, and so, yeah, okay,
Liron Shapira: Right? I mean, I think this is absolutely a crux. So if I'm wrong about this kind of data, then I'm absolutely open to, um, downgrading my assessment of the usefulness of Bayesianism. But the data that I would point to is, if you look at Manifold Markets, for instance, the one that published the data about the extremely good calibration: there's no one user on Manifold who has this kind of consistent calibration, right? It's the market's calibration,
Ben Chugg: no one
Liron Shapira: so yeah,
no,
Ben Chugg: Okay, so I think we're getting somewhere, right? Like, there's no one user with good calibration. Okay, so this is saying, like, doing a bunch of, okay,
Liron Shapira: If they were forced to vote on everything, right? Maybe there are some users that have good calibration on the bets they choose to make.
Vaden Masrani: Can I just add one thing?
Ben Chugg: Uh, yeah, the other point, just on the individual thing, was the actual Brier scores that superforecasters are getting. So, like, you know, 0.25 is the Brier score you get from a flat 50 percent. So if you just bet 50 percent on everything, um, assuming there's an equal number of yeses and nos in the answer set, you're going to get 0.25. The sort of Brier score superforecasters are getting is typically something around, like, 0.2. Okay, this corresponds to, like, 60 to 65 percent accuracy. So what we're saying is, superforecasters, who I guess do this for a living, right, they bet on stuff, um, when they're maximally incentivized to truth-seek, right?
Um, they can get, like, 60 to 65 percent accuracy on questions. Um, if you want to call that a gotcha, that they're seeing clairvoyantly into the future, that's fine, I'll just acknowledge that. Um, but I don't view 60 to 65 percent accuracy as some huge win for putting probabilities on everything. I basically view it as, like, they're running into hard epistemological limits of how easy it is to see the future.
If you have very good domain knowledge of an area, and are incentivized in the right way to actually care about outcomes, as opposed to, like, political punditry, it doesn't surprise me that you can beat a coin flip, literally random guessing.
Um, and so that's where all my
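To put rough numbers on the Brier-score point above, here is a small simulation with invented data (not Tetlock's, and the accuracy-to-Brier mapping is looser in practice than in this idealized case): a flat 50 percent forecast on balanced yes/no questions scores 0.25, while a calibrated forecaster at 70 percent confidence lands near 0.21.

```python
import random

def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

random.seed(0)
n = 100_000

# "Know nothing" baseline: always answer 50% on questions that resolve yes half the time.
outcomes_5050 = [1 if random.random() < 0.5 else 0 for _ in range(n)]
print(brier([0.5] * n, outcomes_5050))  # ~0.25

# A calibrated forecaster who says 70% and whose 70% calls come true 70% of the time.
outcomes_70 = [1 if random.random() < 0.7 else 0 for _ in range(n)]
print(brier([0.7] * n, outcomes_70))    # ~0.21, roughly the range being discussed
```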
Liron Shapira: Yeah. Let me tell you what I'm claiming here, though. Okay, why are we even talking about prediction markets, right? You bring it back to, like, look, an individual human is much weaker than a prediction market. That's what you say.
Fine. But let me tell you why I'm bringing this up. It's because I think a lot about AI and the powers that AI is going to have, if it's programmed correctly, right? If we keep on this progress of putting the right code into an AI, what's possible? Well, a single AI could take on all of humanity. Like, yes, there's a lot of different humans making a lot of different models, but you could also just copy the AI's code and run a bunch of instances of it and have them wire up to each other. It literally is, in my mind, a question of one AI versus all of humanity. And so for me, when I see the prediction market aggregated across all of humanity's experts, the way prediction markets know how to aggregate information, I see that as a lower bound for what one AI, if programmed correctly, could do in its own head and come up with its own Bayesian probabilities.
So when I imagine an AI functioning in the world, I imagine it putting probabilities on things, having those probabilities be well calibrated, using the expected value formula, and then placing very well calibrated bets.
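A toy version of the expected-value calculation being gestured at, assuming a calibrated belief and a simple contract that pays $1 if the event happens; the 70 percent belief and 55-cent price are invented for illustration:

```python
def ev_per_yes_share(p_yes: float, price: float) -> float:
    """Expected profit from buying one YES share that pays $1 if the event happens,
    at the quoted price, given your own probability p_yes."""
    return p_yes * (1 - price) + (1 - p_yes) * (0 - price)  # simplifies to p_yes - price

# Made-up numbers: the reasoner believes 70%, the market is priced at 55 cents.
print(ev_per_yes_share(0.70, 0.55))  # +0.15 per share: positive EV, so bet
print(ev_per_yes_share(0.50, 0.55))  # -0.05 per share: negative EV, so pass
```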
Vaden Masrani: Can I ask a question? Can I ask a quick question? Um, what do prediction markets say about the likelihood of superintelligence?
Liron Shapira: So, currently they're saying, I think, AGI is coming around 2032. I think that was Metaculus last I checked.
Vaden Masrani: Uh, no, the probability. So what's the probability of, like, the scenarios that you're describing, that, um, the, um, forecasters on these markets, uh, what do they assign? What probability?
Liron Shapira: Uh, what's the question exactly?
Vaden Masrani: Um, for, uh, the doomsday apocalyptic scenario that Ord gives a 1 in 10 probability to, um, that you're really worried about. Uh, I'm not asking when superintelligence is going to arrive, because you can define superintelligence in a thousand different ways. I'm asking, for the doomsday nightmare scenario that keeps you up at night, what probability is assigned to that?
Liron Shapira: So, I don't know which prediction market I would go check for that, because the problem is
Vaden Masrani: I thought you said they're all the same.
Liron Shapira: Or I don't know which prediction market even has enough volume on a question that corresponds to what you asked, because the reason is, prediction markets are a powerful methodology, but they do have the issue of, you know, counterparty risk and platform risk, right?
So if you're saying, hey, what are the chances that everything is going to essentially go to zero, that human value is going to go to zero, right? How am I going to collect on that? If I think it's 90 percent likely, why would I bet on that? I'm just losing my money today for something I can't collect on.
Vaden Masrani: I see, so you'll follow their predictions up until the point that you have a reason to think that they're wrong and then you'll ignore them, is that right?
Liron Shapira: Well, this is a systematic failure, right? It's like saying, will prediction markets still work if somebody hacks their server? Well, wait a minute, there are some
Vaden Masrani: No, no, right now, there's certainly some prediction market that says some apocalyptic doomsday scenario, and I think Scott Alexander has blogged about this, and it's very, very low. Um, I can find the source, I think it's something like 3 to 5 percent. Ben, if you recall this, please let me know.
Liron Shapira: Prediction markets are a way to aggregate information by financially incentivizing their participants. There's no financial incentive for a doom prediction.
Ben Chugg: Then why can we be confident in like your doom predictions or anything like that? Like, what, like, why should we, why should we, why should
Liron Shapira: My doom predictions come from just, yeah, yeah, yeah, I'm just actually using Bayesian epistemology, right? So everything we've been talking about now, I haven't been saying we're doomed because prediction markets say we're doomed. I'm saying, no, I have a strong epistemology. It's called Bayesian epistemology.
It's called approximations to Solomonoff induction. You can see how strong this epistemology is when you go and look at the calibration of prediction markets like Manifold, who are not using statistics, to get great estimates. This helps you see that my epistemology is strong. Now, as a reasoner with a strong epistemology, let me tell you how I got to a high P(Doom), right? That would be the shape of my
Ben Chugg: okay. One,
Vaden Masrani: I just wanna, I just
wanna, yeah, sorry, this is really quick. So, your answer about, it doesn't make sense to bet on these, um, particular questions because we'll all be dead if they turn out to be true. So, that would just mean that there aren't those questions on these markets, right? Like, people aren't betting on them, they just aren't on there.
Um,
Liron Shapira: putting on the
question,
Vaden Masrani: I think that that's not true, though. But all I'm curious about is, there is some number that they're giving that you have some reason to ignore, because yours is much higher. And why do you, um, support prediction markets in every case except when they disagree with you, at which point you don't support them anymore?
Liron Shapira: Okay, so why do I trust prediction markets besides just their track record, right? Because it sounds like you're modeling me as like, look, if you like prediction markets track record, why don't you just extrapolate that no matter what the prediction is, you should expect it to be calibrated. But yeah, I do take a structural fact that I know about prediction markets, I take that into account.
For instance, if I knew for a fact that Bill Gates was going to spend all his money to manipulate a prediction market, right? Like there are some facts that could tell me for some periods of
Vaden Masrani: Yeah, so you have insider knowledge. You have insider knowledge as to say that these prediction markets are wrong and so you're presumably like leveraging that and making some
Liron Shapira: My point is just, you can't be quite so naive as to be like, okay, no matter what the prediction market says, you have to trust it. There are some boundaries, and this isn't an ad hoc limitation, the idea that the whole prediction market shuts down under certain, uh, under certain bets you make.
I mean, it's called platform risk. Like, this is a known thing in trading. Like, you're basically just, you know, you're coming at me for just doing a standard thing about trading where you look
Vaden Masrani: No, no, no. You, you were saying you have insider knowledge here, um, you have justifications and reasons to assume that the probabilities that are being assigned to these particular questions are wrong. Um, and you should make a lot of money while the apocalypse is coming.
I get that when the apocalypse comes, we're all going to be
Liron Shapira: It only pays out during the apocalypse, right? The presumption is you make money because when the apocalypse happens, you get paid out. That's like a contradictory model. It's almost like betting, like, you know, it's like a logical paradox to have a prediction
Ben Chugg: Can't you just do end of the
Vaden Masrani: I saw, this
Ben Chugg: end of the world bets here? Like, Yudkowsky-Hanson style. Where the person
Liron Shapira: But the, the problem. Yeah, Yeah,
Vaden Masrani: And, and I,
Liron Shapira: do make, which I, uh,
So there is a bet I can make, but I can't make it on Polymarket or Manifold. But I can make it informally, I can make it with you guys if you want, where it's like, if you guys give me $1,000 today, so I can have it while the world still exists, I could do something like, in 20 years, which is when I think there's like a 50 percent chance that the world will have ended, I can pay you back 2x plus 5 percent interest or whatever, right?
So it's like you will have made a very attractive return over those 20 years if I get to use your money now, because I place a significantly higher value on your money today.
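The arithmetic behind that informal offer, under one reading of the loosely stated terms ("2x plus 5 percent interest") and an assumed 7 percent per year alternative investment, both of which are guesses rather than anything agreed on air: the counterparty only collects if the world survives, so whether the deal looks attractive depends entirely on the survival probability each side assigns.

```python
principal = 1_000                                          # paid to the doomer today
years = 20
payout_if_world_survives = 2 * principal * 1.05 ** years   # one reading of "2x plus 5% interest"

benchmark = principal * 1.07 ** years                      # assumed ~7%/yr alternative investment

print(round(payout_if_world_survives))   # ~5307: beats the benchmark if you expect to collect
print(round(benchmark))                  # ~3870
print(round(0.5 * payout_if_world_survives))  # ~2653: the doomer's own 50%-survival expected payout
```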
Vaden Masrani: So, yeah, I just want to say for the listeners that I just might be mistaken about the internal mechanics of how prediction markets work. Like, I thought you could get paid out before it resolves, but maybe that's just not
Liron Shapira: You can get paid out, you can buy out if you find somebody else. Like, let's say the probability of doom keeps creeping up, so you could sell your contract to somebody else and you could cash out early. But the problem is, why would somebody come in with a higher bid than you, even if they thought doom was a higher probability? They'd be being stupid, because they should know that they're not going to get paid out unless they find another sucker. It just becomes a Ponzi scheme, essentially.
Vaden Masrani: I agree prediction markets can, no, that was a bit cheap,
Liron Shapira: Yeah, you heard it here first guys. Doomers just, uh, are pulling a Ponzi scheme on everybody.
Vaden Masrani: Yeah, um, I, I didn't say it, but,
um,
Uh, yeah, I think we should, we should pivot off of this because I, I just don't understand the mechanics enough to, um, uh, to adjudicate and I'll take your word for it, but it seems like you have insider knowledge that you should leverage somehow.
If you're right, there should be some way to just make a bunch of bank.
Liron Shapira: You're conflating insider knowledge with platform risk, right? These are two distinct concepts.
Vaden Masrani: yeah, no, no, totally. Yeah, I'm totally acknowledging that I'm missing some of the details. I'd love for commenters underneath to, um, to clean this up for us. Yeah.
Liron Shapira: Okay. All right. Cool. So, um, yeah, I guess we're starting to come close to the end of our time. Um, so let me just open it up to you guys. If you want to throw out a few topics to make sure we hit them before we end, I can write them down and we can, uh, plan the rest of the talk.
Vaden Masrani: Well, so we, um, we've done three hours on Bayesian epistemology, and I think this is a good place to pause, and then let's do another three hours on superintelligence. This has been a blast. Um, like, uh, we haven't even talked about superintelligence yet, and, uh, and like, this is kind of why, when we had initially talked about this, I'm like, let's just extend the time, because we are not going to make it past the first, uh, the first set of questions.
Liron Shapira: All right, guys. So we've been talking quite a lot, and I talked with Ben and Vaden offline, and we all agree that there's so much more interesting stuff to talk about that we're gonna do a part two. It's also gonna be pretty long. Check out some of these coming attractions. We're gonna be talking about Ben's blog post called You Need a Theory for That Theory,
and we're gonna be talking about Pascal's mugging. We're gonna be talking about Hume's paradox of induction, talking about, um, utility maximization as an attractor state for AIs. Then we're going to have a whole David Deutsch section talking about his AI claims, like creating new knowledge, and, uh, a certain CAPTCHA that I invented based on David Deutsch's arguments. We're going to talk about what is creativity, how will we know when AIs are truly creative. Talk about intelligence, what is intelligence, can we talk about general intelligence. What separates humans from all other life forms? Is there much headroom above human intelligence? Is AGI possible eventually? What about in the next hundred years? How powerful is superintelligence relative to human intelligence? Can there be such a thing as thousands of IQ points? What's a fundamental capability that current AI doesn't have? And then also AI doom topics: we're going to talk about agency, the orthogonality thesis, instrumental convergence, AI alignment, and maybe even unpack some of Elon Musk's claims about AI. So all of those, it sounds like we might need ten episodes, but most of those I think we'll hit in part two. So how about, let's go through everybody and we'll just summarize: where do we stand, what did we learn about the other person's position, did we change our mind about anything, uh, starting with Ben.
Ben Chugg: Sure, yeah. So I'm slightly worried we verbosely circled the disagreement without precisely getting to the key differences, perhaps, between Popperianism and Bayesianism. But hopefully I'm just being a little negative and the differences did shine through. Um, to be fair to you, I think the biggest challenge to Popperianism comes in the form of betting.
If people are doing, like, you know, significantly better than random, what the hell's going on there, right? And if probability is the only way to do that, then presumably that justifies some sort of probability, um, epistemologically speaking. Um, I remain skeptical that that's true, because at the individual level I just haven't seen the statistics that superforecasters, like I said, are doing much better than, like, 60 to 65 percent, which I think can be explained by incentivizing truth and limiting thinking to questions where you have very, uh, good domain expertise.
Um, but that would definitely be, I think that's, like, a good crux to label. And I think actually Vaden and I discussed this in some episode, this is sounding familiar as it comes out of my mouth. Like, if we start to see superforecaster accuracy really just keep going up over time and start hitting 70, 75 percent, 80, 85 percent, then that's going to start, uh, you know, verging on falsifying my claims, right?
If people just become, like, more and more omniscient, if they just become smarter and better, better able to
Liron Shapira: Wait, wait, why do you need an individual? Can I just clarify here? So does, when you say they have to get more and more accuracy, do they specifically have to give like 99 percent probability or something? Because normally we look at calibration, right? Like they'll say 60 percent chance and it happens 60 percent of the time.
So are you talking about calibration?
Vaden Masrani: No, accuracy as well, like, so calibration is one metric, but accuracy is another completely valid metric to look at, right?
Liron Shapira: When you say accuracy, do you mean like confidence? Like
high probability?
Vaden Masrani: So any machine learning person who's listening to this will know what I'm talking about. You can look at calibration, which is comparing the probabilities over a set of stuff, but you also just have a bunch of questions and whether or not they happened, right?
And then you can just count the number of successful predictions and,
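A small illustration of the calibration-versus-accuracy distinction being drawn here, on made-up forecasts: calibration checks whether, say, the 90 percent bucket comes true about 90 percent of the time, while accuracy just counts how often the directional call was right.

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """For each 10%-wide bin of forecasts, compare the bin's mean forecast
    to the observed frequency of 'yes' outcomes in that bin."""
    bins = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        bins[min(int(p * 10), 9)].append((p, o))
    return {
        f"{b * 10}-{b * 10 + 10}%": (
            round(sum(p for p, _ in items) / len(items), 2),  # mean forecast
            round(sum(o for _, o in items) / len(items), 2),  # observed frequency
        )
        for b, items in sorted(bins.items())
    }

def accuracy(forecasts, outcomes):
    """Fraction of questions where the over/under-50% call matched what happened."""
    return sum((p > 0.5) == bool(o) for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up forecasts: a cluster of near-coin-flip 55% calls and a cluster of confident 90% calls.
forecasts = [0.55] * 6 + [0.90] * 10
outcomes  = [1, 0, 1, 0, 1, 0] + [1] * 9 + [0]
print(calibration_table(forecasts, outcomes))  # both bins look roughly calibrated
print(accuracy(forecasts, outcomes))           # 0.75, driven almost entirely by the confident bin
```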
Ben Chugg: Yeah. Like, if I see Brier scores
Vaden Masrani: has
Liron Shapira: Outputting a bunch of probabilities, okay. So Brier score does depend, the only way you can have a good Brier score is by often having high probabilities as your answer, right? You can't just punt and be like, oh, 51 percent, like you
Ben Chugg: But that's the epistemologically relevant thing, right? If really you're using probabilities to reason about the world and updating your probabilities, um, in such a way as to really be able to predict the future, then yeah, you're going to be predicting the future with high confidence.
That's the claim, right? The whole point about my 0.25 comment was that you can get a low, quote-unquote, Brier score quite easily by just predicting 50 percent. So that's not interesting, right? What you want to do is start pushing,
Liron Shapira: But the universe is chaotic, right? If I give you, hey, here's your test, it's a hundred, like, three-body problems, right, like super chaotic stuff, then you're going to fail the test, even if you're a
Ben Chugg: yeah, in other words, the universe is fundamentally, there are epistemological limits about how much we can know
Liron Shapira: fair. You're saying you're never going to
be convinced.
Ben Chugg: What I'm saying is probabilities
Liron Shapira: You're giving me a false test
Ben Chugg: No, no. What I'm saying is probability is not the best tool to reason about the future precisely because the future is chaotic and unpredictable, right?
The best thing we can do is just, like, argue about details, not put random probabilities on things that, by your own lights, it sounds like you just admitted, are inherently unknowable. So when Ord says things like there is a one-in-six probability of some crazy event happening in the next hundred years, yeah, I want to appeal, exactly like you said, to the chaotic nature of the universe to say this is a totally unjustifiable move to make, and it's doubly unjustifiable to start comparing this to the probability of asteroid collision.
Liron Shapira: Okay, but what if the current Manifold prediction for asteroid impact in the next year, let's say, and for some reason that wasn't a world-ending event, so like a small asteroid impact, right, a non-world-ending asteroid impact happening in the next 11 months, right? What if the prediction market was saying 1 in 6?
You wouldn't think that 1 in 6 was a trustworthy probability,
Ben Chugg: We wouldn't need to look at a prediction market in that case. Like, we would have a theory of, like, this asteroid is coming towards Earth. We'd talk to astronomers. Like, this is not the place
Liron Shapira: Yeah, but to the extent that that theory was good, wouldn't the prediction
Ben Chugg: Yeah, I'm sure it would, and precisely because we have a good theory, right? But this is the whole disagreement between us.
We're saying, yeah, sure, prediction markets are useful sometimes. They're not useful most of the time, especially in the far future, because there are things that are inherently unknowable. In those realms, probability is a totally meaningless mathematization to put on things, an attempt to quantify ignorance.
The Bayesian position, um, and maybe you're not trying to argue this, I'd be surprised if so, but the Bayesian position is you should always put numbers on your uncertainty, for close events and far events. We're just trying to say, predicting what's happening in the next 5, 10, 15 years is not the same thing as predicting, like, you know, the election tomorrow.
Um, yeah.
Vaden Masrani: Can I
Liron Shapira: I like that you were starting to give us a good test of what would change your mind. But then the test proved to be, like, kind of impossible, right? Like, what do you need to see from prediction markets to change your mind?
Ben Chugg: Yeah, I would,
Vaden Masrani: that the accuracy absolutely improves. Like, that things get better over time, right?
Liron Shapira: But, well, why isn't your standard calibration? Why is your standard accuracy? Because accuracy is impossible. If we put questions that are hard to have higher than 51 percent confidence on, then for sure,
Ben Chugg: well there's a reason, I
Liron Shapira: right? So, you know. you're
giving an
Ben Chugg: there's a reason, like you're begging the question, right? There's a reason it's hard to get a high
Liron Shapira: Okay, okay, but you gotta admit it's not really a good faith test if you're just saying this is logically
Vaden Masrani: Well, so, okay, no, let me, let me rephrase it. So, um, if, this is a hilarious closing
Ben Chugg: we're right back,
Vaden Masrani: Uh, it clearly indicates that we have much more to discuss, um, which is fine, and which is good, but let's just try to wind things down, and we'll leave the audience with, um, with a tease that we clearly have much more to discuss.
But, um, okay, let's just use calibration. Fine. Let's say that it gets more, and more, and more, and more calibrated over
Ben Chugg: And for, and for more and more
Vaden Masrani: Then
Ben Chugg: like we bet on everything, say.
Vaden Masrani: and for more and more events.
Surely that would have some significance, surely. Unless you want to just handle the boring case where the calibration's 50 percent. If you're getting more and more calibrated, then that should improve your accuracy as well, right?
It won't be exactly the same, but you should get better accuracy, because that's why we care
Liron Shapira: Why isn't the test just this? Just be like, hey, I'm going to filter prediction markets for only the data points where there's more than 70 percent or less than 30 percent probability. I'm only going to use those data points, and then I'm going to measure the calibration of that, and if the calibration stays good, then I'm going to keep being impressed that Bayesian epistemology has a lot to offer.
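A sketch of the filtered test being proposed, on invented prices and resolutions; a real version would plot the full calibration curve on the filtered subset, but the simple hit rate gives the flavor.

```python
def confident_only(prices, outcomes, lo=0.30, hi=0.70):
    """Keep only the market prices that stick their neck out beyond the 30-70% band."""
    kept = [(p, o) for p, o in zip(prices, outcomes) if p < lo or p > hi]
    return [p for p, _ in kept], [o for _, o in kept]

# Hypothetical closing prices for a handful of markets and how they resolved.
prices   = [0.10, 0.25, 0.50, 0.55, 0.80, 0.90, 0.95, 0.65, 0.20, 0.85]
resolved = [0,    0,    1,    0,    1,    1,    1,    1,    0,    0]

conf_p, conf_o = confident_only(prices, resolved)
hits = sum((p > 0.5) == bool(o) for p, o in zip(conf_p, conf_o))
print(f"{hits}/{len(conf_p)} confident calls resolved the way the market leaned")
```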
Vaden Masrani: Because we aren't impressed, and we aren't going to keep being impressed. We're talking about that which would falsify our view and force us to be impressed, because the standards for ourselves are different than yours. And we're just trying to say, like, I would be super impressed if the accuracy started going up because the calibration started going down. It wouldn't have to be, like, perfect accuracy, just showing that over time people get more knowledge and then they can predict better. And that's one falsifiable test, which you don't need because you are already convinced, but the question is what would change our mind.
Ben Chugg: And let me concede, like, actually, something Vaden said earlier: if you just did more and more events, like, if you had prediction markets for everything, and we were predicting everything from the weather to, like, uh, who's gonna get A-pluses on their tests, so, like, if we're predicting everything and everything's perfectly calibrated, all these markets are perfectly calibrated, that would be amazing.
I claim that's impossible, especially as the fidelity of these events gets, like, more and more precise, smaller, I don't know what word I'm looking for here. But, um, anyway, I think that was, uh, not a very coherent closing statement, but you understand what I'm trying to say: if we used prediction markets to literally predict everything, and they were always perfectly calibrated, label me impressed, and I'm definitely gonna reformulate my thought.
I'm still slightly confused about the relationship in your mind between Bayesianism, which is an individual thing for me, and prediction markets. But I think we're not going to resolve that right now. So maybe we'll relitigate
Liron Shapira: Yeah,
Vaden Masrani: We should save that for next time, yeah.
Liron Shapira: Sweet.
Vaden Masrani: I agree with everything Ben said. Um, the only comment I want to make is for the listeners who are, um, overwhelmed right now, because there's a lot of various things. Like, the way that I think about learning the difference between Popperianism and Bayesianism is, um, do you remember in elementary school where you put a leaf under a piece of white paper and then you take charcoal and you keep doing passes, and over time the image underneath starts to become clearer and clearer and clearer, but any one particular pass doesn't totally give you the resolution?
That's the metaphor I have with regards to the difference in these two kinds of methodologies, because any one conversation will be interesting, but it's not going to fully, boom, here's the difference. It's more about listening to a set of conversations with different people, um, and listening to our podcast, but listening to other podcasts as well.
Um, and just seeing the difference. I only say that not in a self-serving way, but because this is where, like, a lot of stuff, um, is being put into direct, um, comparison. But over time, you'll just start to see different methodological differences, different emphases,
different, um, cognitive tools for how to think through problems. Obviously we disagree on a lot of object-level things, but the underlying difference is just how we think about the world. Um, and that's not going to be made clear in any one particular conversation. It's something that's going to gradually become clearer and clearer over time.
As with the leaf and the charcoal. Um, and so just to the listener, who's like, Whoa, there is a lot of stuff and I still don't totally understand what the differences here are. You're not expected to, it's not possible to understand it in one conversation, but over time you'll start to see differences in approach and methodology.
And that's what I want to say.
Liron Shapira: Awesome. Yeah, thanks for the summary, guys. Uh, and you guys have been such great sparring partners, you know, as fellow podcast hosts, right? You're old pros at this, and it shows. I think this was a really fun conversation. I think, you know, we didn't pull any punches, right? We were both going at it pretty strong, which, you know, I think we all enjoyed. Um, yeah, it's all good-natured. I mean, uh, like, you know, there's no hard feelings, right? Just
Ben Chugg: No, this was great. This was so much
Vaden Masrani: Well, for the listeners, every time we go off pod, every time, like, the recording cuts, there's just great vibes. It's fantastic. I'm loving it. Yeah, totally. That's great. Yeah.
Liron Shapira: Yeah. It's not, like, tribal or whatever. Um, and also, I'll take a quick stab at being interfaith, right? It probably won't work, but I'll try to do a compatibilist solution here. What if we say that, uh, Solomonoff induction is like a nice theoretical ideal, the same way that, you know, the chess player that searches every move is a good ideal, but as humans, right, when you're occupying a human brain and you just have to be, like, totally limited, you can't even get close to approximating Solomonoff induction.
If you follow Popper's recommendations, then by your own metric of trying to approximate Solomonoff induction, you're going to do well. How's that?
Vaden Masrani: Nope. Popperianism was, Popperianism was born via the fight against all induction, of which Solomonoff induction is one. So if you want to understand Popperianism, read literally any of his books, with the exception of, like, maybe All Life is Problem Solving. And every one of them has some attack on induction.
And induction is a much deeper concept than Solomonoff induction. And so once you kill induction, you kill any derivations or derivatives of induction. So, for that reason, we will leave the listener on a cliffhanger. Or maybe check out our episode with, with, uh, Tamler Sommers, from Very Bad Wizards, where we talk about induction for two hours.
Liron Shapira: So my last-ditch effort to try to broker a ceasefire has failed. So we're going to have to continue in a part two.
Vaden Masrani: Yeah, induction. Saying "induction" was the kryptonite.
Liron Shapira: Okay, great. So yeah, listeners, just stay tuned. Hopefully in the next few weeks, part two is coming. And, uh, yeah, stay tuned. We got other great debates coming up right here on Doom Debates.
Vaden Masrani: this is great. Honestly, I had a complete blast.
Liron Shapira: Everyone check out increments podcast. It's a ton of interesting content. I'm enjoying it. So to set the stage, we're going to start by talking about epistemology. And as viewers probably know, my own background is I'm a software engineer. I'm a small tech startup founder. I'm a lifelong student of computer science. A theory of computation, that kind of thing. Uh, and I'm an AI doomer since reading Eliezer Yudkowsky in 2007. Pretty much got convinced back then, and can't say I've changed my mind much, seeing how things are evolving. So that's my background, and we're gonna kick off, uh, talking about epistemology, and just to get the lay of the land of your guys position, I guess I would summarize it as kind of what you said, where you're not really fans of Bayesian epistemology, you don't think it's very useful, Your epistemology is more like Karl Popper style, more or less. And you just think the AI doom argument is like super weak and you're not that worried. Is that a good summary?
Vaden Masrani: exception of, um, There's a lot of things about AI to be worried about. autonomous weapons, uh, face recognition technology, um, that kind of, uh, stuff I am worried about. And I think it's a huge problem. Um, and like other forms of technology, uh, it absolutely needs to be worked on.
And if we don't talk about it, it's going to become very problematic. So I'm not naive that there are certain, um, huge difficulties that we have to overcome. The stuff that I'm not worried about is super intelligence, paper clips, Bostrom, Simulation, Brokos Basilisk, all that stuff. That, to me, is all just, um, science fiction nonsense, basically.
However, the caveat is, um, I haven't read all of Yudkowsky, and at some point in the conversation, I'd love for you to just take me through the argument as if I hadn't heard it, because it could be that we're operating with asymmetric information here, and so I'm completely open to having my mind changed, and, uh, we don't have to do it now, but at some point I'd love to hear just, like, From the step one, step two, step three, what the full argument is, because I could have just missed some stuff that Youkowski has written that would change my mind.
So that's the caveat.
Liron Shapira: Okay.
this is a question I like to ask every guest.
Here it comes!
Robot Singers: P(Doom), P(Doom), what's your P(Doom)? What's your P(Doom)? What's your P(Doom)?
Liron Shapira: Ben, what is your P(Doom)?
Ben Chugg: I'm almost, I'd almost be unwilling to even give you a number because whatever number I gave you would just vary so wildly from day to day and would just be based on some total gut hunch that, um, I'd be unwilling to defend or bet on. And so I think it's much more fruitful in these conversations to basically just talk about the object level disagreements.
and not try and pretend knowledge that we have about the future and come up with random guesses and put numbers on those guesses and then do calculations with those numbers as if those numbers had some sort of actual epistemological relevance. So, I'm sorry to break the game here, but, uh, yeah, it would be silly of me to even say,
I think.
Liron Shapira: Vaden, wanna add to that?
Vaden Masrani: um, I completely agree with everything Ben said. Yeah, I have a deontological principle to not put numbers on my beliefs. However, if by Pidoum, you simply mean just like, what do I believe? Um, then I would categorize it in the same place as, um, the Rapture or the Mayan Apocalypse or Roko's Basilisk.
That's my conclusion.
Liron Shapira: What if we zoom out and we're not just talking about AI, right? So like nuclear war, pandemics, even asteroid impacts, although those seem unlikely in a hundred years. But just, yeah, looking at everything together, just the probability that humanity goes extinct or gets reduced to cavemen in the next hundred years, any ballpark estimate for that?
Vaden Masrani: Meaningless question. Um, I won't give you a number. I don't think that we can know the probability. Uh, if you want to know my beliefs about stuff, that's a different question. So I can tell you how much I believe, but I won't give you a number. No.
Liron Shapira: Would you tell me if you think that it's more than one in a million chance?
Vaden Masrani: Numbers are meaningless.
Ben Chugg: Also, I mean,
Vaden Masrani: I, I can, I can, yeah, I can compare it to other stuff though. So that's, I, I will give you a comparative thing. So the reason why people ask for numbers is because I want to compare. Um, so I can give you things to compare it against. And one thing I would compare it against is Roko's Basilisk.
Solid.
Liron Shapira: obscure topic, right? So it's, and the question is pretty straightforward of humanity's going extinct, so maybe we can compare it to, like, an asteroid impact, right? So, compare the chance of humans going extinct to the chance of a large asteroid the size of the dinosaur one coming in the next century.
Ben Chugg: so, yeah, I think, I think it's much better to just take these one topic at a time, right? So when we talk about asteroid impacts, this is a very different class of event than something like AI or nuclear war. In particular, we have models of asteroids, right? We have both counts and we have physical explanations of how often, uh, asteroids, uh, Uh, enter our orbit, and, you know, uh, we have a sense of our deflective capabilities with respect to asteroids.
So there's lots of, like, there's lots of knowledge we actually have about the trajectories of asteroids. And, uh, and then we can use some statistics to start putting numbers on those risks. That's completely unlike the situation of geopolitics, for instance. We have no good statistical models to model the future of
Liron Shapira: yeah, no, I hear ya. Well, I'll just explain where I'm coming from with this kind of question. So, as I walk through life, I feel like I'm walking on a bridge, and the bridge is rickety. Like, it's very easy for me to imagine that in the next hundred years, like, the show is gonna end, right? It's gonna be game over for humanity.
And to me, that feels like a very salient possibility. Let's call it beyond 5 percent probability. That's how I would normally talk about that. And then, so the reason I'm asking you guys is, you know, we don't even have to get that deep into the epistemology. I'm really just asking you, like, hey, In your mind is the bridge like really really solid or is it rickety?
Ben Chugg: So, uh, I, uh, Yeah, I would argue, um, DeVayden and I might disagree about certain object level things, right? There's very, there's geopolitical risks that I'm certainly worried about and, you know, I think nuclear war, uh, is a possibility in the next hundred years and I'm worried about nuclear deterrence and I'm worried about, uh, the U.
S. getting involved in certain geopolitical conflicts that increase the likelihood of nuclear war. So all of that we can talk about. When you say the words, put, you know, what's the probability of this? You're already bundling in a lot of assumptions that we're operating in some well defined probability space here.
Probability is this technical tool that we use, that, you know, mathematicians use sometimes to solve certain problems. It has a domain of applicability. Uh, machine learning is one of those domains, right? We use statistics a lot to reason about algorithmic performance and reason about how to design algorithms to accomplish certain goals.
Uh, when you start talking about the probability of nuclear war, we're totally outside the realm of probability as a useful tool here. And this is You know, now we're sort of getting to the heart of the matter about the critique of Bayesian epistemology. It, it views, you know, it has this lens on the world where everything can be boiled down to a number, and those numbers can be compared, uh, with one another in coherent ways.
And those are premises that I and Vaden reject.
Vaden Masrani: Wholeheartedly
Liron Shapira: guys you guys are being pretty quick to throw my question back at me But
I feel like I'm asking about something that you can probably interpret meaningfully for
instance just to help you perhaps answer more easily I mean Vaden did answer saying that he feels like the next hundred years are solid in terms of Probability of human extinction or in terms of
fear.
Let's say subjective fear of human extinction,
right? It's
Vaden Masrani: say probability,
but yeah.
Liron Shapira: Okay, so it's solid in some sense that maybe you could describe as your subjective sense, right? When you say solid, it's
the sense that you
Vaden Masrani: But
to be clear, subjective sense is different than probability,
Liron Shapira: Yeah. Okay. Fair. So, I can make the question, um, perhaps even, uh, more meaningful by saying like, hey, imagine this is the peak of the Cold War crisis, peak of the Cuban Missile Crisis. And people are saying like, man, this blockade, the U. S. is going to do the blockade around Cuba. The Soviets have threatened to respond. They might even use their missiles before they lose them. So imagine it's like that evening, right, where a lot of people are like, I sure hope they don't launch those missiles. During that evening, if I ask you, Hey, does the next century of humanity's future seem to you solid or rickety, would you still give that same answer, solid?
Vaden Masrani: Not in that circumstance, no.
Liron Shapira: Okay, and so from our vantage point today, could you imagine that sometime in the next few decades, like what's going, happening right now with Ukraine and Russia, or Israel and Iran, could you perceive yourself entering another type of evening like that, when you're like, oh, maybe I don't feel solid anymore.
Vaden Masrani: I can imagine all sorts of things, for sure.
Liron Shapira: So generally when we're imagining the future and we're thinking about the past and we're just, we're not sure what's going to happen, that's generally a time when a lot of people would be like, well, there seems to be some significant probability that things are going to go really bad.
Vaden Masrani: A lot of people would, but we don't. Totally.
That's what we're
Liron Shapira: you would rather dismiss those people and just be like, nope, my answer is solid.
Vaden Masrani: No, you're misunderstanding the claims that we're making. Um, I don't dismiss those people. I ask for their reasons for the number, because the number itself is next to meaningless. It's not entirely meaningless. But it's close to it. Um, if you want to know my subjective belief about something, I will absolutely tell you.
If you want to know how strongly I believe that, I'll tell you that too. What I won't do is put numbers on it, because putting numbers on it allows for fallacious comparisons between things that should not be compared. talk about subjective
Liron Shapira: now I'm just
Vaden Masrani: and your answer. yeah, yeah, but I won't put numbers on it.
If you want
Liron Shapira: when you.
Vaden Masrani: if you want me to put numbers on it, then we're going to stalemate here. But if you want to have something else, then we can go. Yeah.
Liron Shapira: Right now I'm just pushing on your answer when you said solid. Do you want to perhaps revise solid, or do you still want to go with
Vaden Masrani: No, uh, no, I'm an optimist. Um, yeah, I'm an optimist about the future. I think that there's definitely things to be worried about, but there's also many things to be excited about. Um, technology is awesome. Um, we can talk about Ukraine and Israel and Iran, and those are things to be worried about. We can also talk about, um, the mitigation of poverty.
We can talk about, uh, getting to Mars. We can talk about the amazing things that, um, uh, a, uh, diffusion models are making and how that is going
Liron Shapira: yeah.
but none of those things are directly irrelevant to my question of the risk of something like the Cuban Missile Crisis really coming
to a
Vaden Masrani: that wasn't your question. That wasn't the question, if I recall it. The question was about, do we think we're standing on a solid bridge or a rickety bridge? And then you use the Cuban Missile Crisis as an example, right?
Liron Shapira: So, if a lot of good things are happening on the bridge, right, like there's a candyland happening on the bridge, but the bridge still might collapse, that's really more of what I'm asking about, is the risk of collapse.
Vaden Masrani: Yeah, I don't think we're going to collapse. No.
Liron Shapira: Okay. All right. Um, so yeah, you guys mentioned a lot of different topics that I definitely want to drill down into. Um, I think, yeah, a good starting point is like to zoom all the way out and like, let's talk about epistemology, right? Epistemology is like the study of how people are allowed to know things.
Is that basically right?
Ben Chugg: Sure, yeah. The study of, you know, how we know what we know, I think, is usually how it's phrased.
Vaden Masrani: Yeah, yeah. Knowledge about knowledge. Knowledge about knowledge.
Liron Shapira: Why is epistemology important? And what are the stakes for getting epistemology right? Ben.
Ben Chugg: So, I mean, epistemology is at the center, perhaps, of this mystery of how humans. have come to do so much in so little time, right? So for most of even human history, let alone world history or let alone universal history, not much was going on and humans weren't making tons of progress. And then all of a sudden, a couple hundred years ago, we started making huge leaps and bounds of progress.
Um, and that is a question of epistemology. Right. So now we're asking questions, how are we making so much progress? Why do we know that we're making progress? Can we actually say that we're making progress? We seem to understand the world around us much, much better. We're coming up with theories about how the world works.
Everything from, you know, cellular biology to astronomy. Um, and how is this mystery unfolding? And epistemology is. A key question at the center of that, right? To be able to say, um, how and why we're making progress. And also to start analyzing, uh, the differences between how people go about making progress and how that differs maybe across cultures.
Are there better and worse ways to generate progress? Are there ideas that stultify making progress? Uh, you know, these are all important questions when it comes to future progress and, you know, just human, human welfare in general.
Vaden Masrani: Yeah, one thing I would
maybe add to that is, I like to sign off on everything Ben said. I'd also say epistemology is like the grand unifier. So if you like science, and you like, um, literature, and you like, um, If you like journalism, and you like art, and you like thinking about the future, epistemology is the thing that underlies all of that, which is why our podcast just keeps branching out into new subjects every other episode, because epistemology is the center of the Venn diagram.
So for that reason, and for Ben's reason, yeah, I like it. Mm
Liron Shapira: a major breakthrough in popular epistemology, right? This idea of like, hey, if you want to know what's true, instead of just like arguing about it and getting stoned and deferring to whoever is higher status, why don't we go outside and conduct an experiment and let reality tell us what's right, right?
Vaden Masrani: Exactly.
Liron Shapira: Yeah, so epistemology is powerful in that sense. And then also, as we stand today, I think we argue over epistemology as it relates to how we're going to predict the future, right? I mean, you saw it a few minutes ago, and I'm like, hey, is the next century, are we all going to die? And it sounds like we're kind of on the same page that we can't even agree on whether or not we're all likely to die because of a conflict that's going to trace to our epistemologies.
Right?
Vaden Masrani: Mm hmm. 100%. Exactly. Yeah.
Liron Shapira: okay, great. So I just wanted to set that up because I think a lot of viewers of the show aren't epistemology nerds like we are, but now we've raised the stakes, right? So the rest of the conversation is going to be more interesting. Okay, so my first question about your epistemology is, would you describe yourself as Popperians, right, in the style of Karl Popper?
Vaden Masrani: Um, I only say reluctantly because I don't like labels and I don't like a lot of obnoxious frickin Popperians on Twitter who also identify as Popperians, and every time you label yourself, now you are associating. Now other people with that label, their bad behavior or their annoying tendencies maps onto you.
So that's why I don't like the label, but I have to just say, yes, I'm a Popperian through and through. He's the one who's influenced me the most. Um, and every other utterance of mine either has his name cited directly or is just plagiarizing from him. So yeah, I'm a Popperian, definitely.
Liron Shapira: I think you said on your podcast you spent like hundreds of hours studying all of Popper. Is that your background?
Vaden Masrani: Yeah. Um, that was what I was doing while I was in a Bayesian machine learning, uh, research group. Yeah. Um, so it was, uh, Bayesian in the day and Popper at night. And uh, that was, uh, exactly. Yeah. Yeah.
Liron Shapira: Okay. Ben, how about you?
Ben Chugg: Um. Probably more reluctantly than Vaden, if only because I don't know Popper's stuff as well. So, you know, I've read some of Popper's works in, in great detail, uh, and argued with Vaden almost endlessly about much, much of Popper's views. So, you know, it'd be cheap to say that I don't understand Popper. Uh, but you know, I haven't read all of his work, and I've become extremely allergic to labeling myself with any particular view, but yeah, if you press me at the end of the day, I would say that I think Popper and his critical rationalism makes the most sense of any sort of epistemology that I've come across previously.
So I'd have to adopt that label.
Liron Shapira: Okay.
Vaden Masrani: And, and you came from an EA background. I think that's important for the listeners to, to know.
It's not as if you were totally neutral. And they should listen, yeah, they should listen to our first 10 episodes, because that's where the battle began. So you were familiar with the EA stuff, and it's been a long, slow battle, which this two, three hour conversation is not going to resolve at all.
Um, but hopefully the conversation will spark some sort of interest in the viewers, and those who want to explore this more can listen to our 70 plus episodes where we gradually explore all of this stuff. So no minds are going to be changed in this particular debate, which is why I don't like debates too, too much, but if it kindles some sort of interest and people actually do want to explore this slowly, then there's a lot of stuff to discover.
Um, so
Liron Shapira: Great. Okay. And, uh, as people may know, I'm coming at this from the Bayesian side, uh, people who read Less Wrong and Eliezer Yudkowsky. That whole framework of rationality and AI Doom argument, it does tend to come at it from Bayesian epistemology, and it explains why Bayesian epistemology is so useful from our perspective.
And in this conversation, I'll put forth some arguments why it's useful, and you guys will disagree, right? So that's kind of where we're going with this, is kind of a Popper versus Bayes epistemology debate. Is that fair, Vaden?
Vaden Masrani: let's do it.
Liron Shapira: And then before we jump in further, when I think about Popper today, I feel like David Deutsch has really popularized it in the discourse.
So I feel like most people, myself included, haven't read almost any Popper directly, but we have read or seen indirectly a good amount of David Deutsch. And when David Deutsch was on your podcast, he was a great speaker. I think he said he's not an official spokesman, right? He's not a Popper surrogate.
He's just somebody who's read a lot of Popper and been highly influenced by Popper, but he doesn't claim to speak exactly like Popper would speak. But from your perspective, isn't David Deutsch very closely aligned with Popper?
Vaden Masrani: Uh, yes, if you don't know Popper's work very well, if you do know Popper's work very well, then you start to see major distinctions and differences between the two. Um, so, from an outsider perspective, I think understanding Deutsch's work is a great entry point. It's more approachable than Popper, for sure.
But, um, but there's no substitute. Reading Deutsch is not like, So actually, let me take one step back. Um, for about five years, I read The Beginning of Infinity and The Fabric of Reality, and I just thought to myself, ah, you know what? I basically get Conjectures and Refutations, I get the point. Wrong. You do not get the point.
You have to read Conjectures and Refutations. There is so much more in that book than, um, you have learned in The Beginning of Infinity, and it is not like a surrogate at all. You have to read, uh, Conjectures and Refutations at least, um,
to start to have the picture, uh, filled in. Well,
Liron Shapira: Well, maybe the same is true about Bayes, that maybe there's some deep stuff that you guys don't get yet, right? So maybe we'll bring out some of the deep stuff in this
conversation.
Vaden Masrani: so, just to add, so, um, in Logic of Scientific Discovery and Realism and the Aim of Science, about three quarters of both those books is discussing probability and Bayes. So, it's math and it's equations and, um, everything that I know about Bayes comes from Popper, and that's not in the book. So if you want to really understand Bayes and probability, then you have to read Popper.
Um, it's not enough to read Yudkowsky, because Yudkowsky is coming from the Jaynes line. Um, so E. T. Jaynes is the famous Bayesian, and so Jaynes is, uh, Yudkowsky's Popper. But, um, Jaynes just gives one glimpse into how probability works. Um, and so if you actually want to understand it at the root, you can't just read, um, Yudkowsky or, uh, Jaynes.
You have to go down to Popper and then branch out from there. Um, so just add that.
Liron Shapira: And just to tie David Deutsch into this argument a little more directly, I heard when he was on your podcast, you were talking about how you're not a fan of Bayes these days, and you spend a lot of the time on your podcast telling people why Bayes is wrong, or the arguments are weaker than they look, and David Deutsch was really nodding along.
I think he gave you like an attaboy. So he basically supports your mission of being kind of anti Bayes, right?
Vaden Masrani: Our mission was because of, like, one page in The Beginning of Infinity, and that got my little cogs turning, and then being in a Bayesian machine learning reading group or research lab, coupled with reading Popper, is what made the whole argument start to become very interesting to me.
But
Liron Shapira: Okay. So our debate today, it's a little bit of a proxy two on two, where you've got this team of Karl Popper and now David Deutsch, who's actually still alive and well. And then on my side, we've got, uh, you know, the Reverend Thomas Bayes, or, you know, the group who actually invented, uh, Bayesian reasoning. Um, and, and Eliezer Yudkowsky, right, who's been highly influential to a lot of people like me, teaching us most of what we know about Bayes. So yeah, so Eliezer, uh, as a successor to Bayes, versus David Deutsch as a successor to Popper, all battled through us random podcasters. Sound
Ben Chugg: With the caveat, yeah, there's always a bit of trepidation, I think, at least on my part, and I'm sure on Vaden's part as well, to speak for anyone in particular. I mean, David Deutsch has his own lines of thought and, you know, I, I, I would be very hesitant to label myself, uh, a Deutsch or Popper expert.
And so, you know, I always prefer it if we just keep the debates at the object level. Um, of course, in the background, there's going to be these Bayesian versus Deutschian, Popperian dynamics. And, you know, that's inevitable given how we've all been influenced, but just to put it out there, I'm, I'm not comfortable saying that my views, uh, comport precisely to someone else's views.
Vaden Masrani: Yeah, just to, uh, clarify for the, uh, listeners, um, the Reverend Thomas Bayes is not equivalent to Bayesianism, and the guy, Thomas Bayes, is legit and he's fine, and that's just, like, where Bayes' theorem came from or whatever, but, uh, Bayesians I think of as E. T. Jaynes and I. J. Good and Eliezer Yudkowsky.
And so these are the people who, um, I would put on the opposite side of the ledger.
Liron Shapira: Great, and also, the other correction I would make is that, uh, I think Pierre-Simon Laplace is actually the one who publicized, uh, Bayesian methods, and he kind of named it after Bayes. So yeah, you know, this isn't really a history lesson, I don't really know what happened, but it just is what it is, in terms of
Vaden Masrani: That's great. Yeah, great.
Liron Shapira: Okay. Alright, so, to kick this part off, uh, Ben, how about just give us really briefly, um, like, explain Popperian reasoning, and like, pitch why it's valuable.
Ben Chugg: Uh, sure. So I think the way I like to think about Popperian reasoning at a high level, and then we can go more into the details, is just trial and error, right? So it comes down to how do we learn things? You know, if you ask a kid how they learn how to ride a bike or how they learn to cook or how they learn anything, you try stuff and it doesn't work, you learn from your mistakes, you try again and you slowly reduce the errors in your, in your, uh, thinking and your habits.
Uh, and then Popper just takes that same attitude to epistemology. He says, okay. Um, How do we learn things? Well, we conjecture guesses about the word, uh, about the world, how the world works, whether it's politics, whether it's science, um, and then we look for ways to refute those guesses. So this is where the critical experiment comes into play for Popper in the realm of science, right?
So we have a theory, that theory makes predictions about how the world works. It says certain things should happen under certain conditions, uh, and that gives us something to test, right? So then we go out, we run that test and, and then again, follows his famous falsification criterion, right? So if that test does not succeed, we say, okay, uh, theory falsified, uh, and then we come up with new guesses.
Um, and so there's of course a lot more to say, but it's really the method of trial and error at work in the realm of epistemology. And so Popper really does away, um, with the notion of seeking certainty. So, you know, he was operating at the time of the Vienna Circle, and people were talking a lot about how do we get certainty out of our science, right, and how do we justify our certainty, um, and also talking about demarcations of, like, meaningfulness versus meaninglessness.
Um, and Popper basically takes a sledgehammer to both of those traditions and says, these are not, uh, useful questions to be asking, and certainty is not achievable, it's not attainable. So let's just subvert that whole tradition, and instead, uh, we're not going to search for certainty, um, but that doesn't mean we can't search for truth.
Um, and that doesn't mean we can't get closer and closer to the truth as time goes on. But there's no way to know for sure if something's true, so we can't be certain about truth. Um, and then this also starts to subvert certain notions of Bayesianism, which wants to, they, you know, Bayesians want to approach certainty, but now via the probability calculus.
Um, and so, you know, that gets us perhaps farther down the line, but that's maybe, uh, the scope of the debate, and then I'll let Vaden correct anything I've said wrong there.
Vaden Masrani: Great. Um, just one thing to, to add is, um, what Popper says we don't do is just open our eyes and observe evidence getting beamed into our skulls such that the probability of a hypothesis goes up, up, up to some threshold, and then bang, you know it's true, and that's how you get knowledge.
It's not about just opening your eyes and having the evidence beamed into you. It's about conjecturing stuff, and then actively seeking evidence against your view. Trying to find stuff that falsifies your perspective. Not opening your eyes and observing stuff that you want to see. Um,
Liron Shapira: Great. We'll definitely get into that.
So me and the Bayesians, we don't have a problem with taking in a bunch of evidence and then updating your belief on that
evidence, right? So I guess we'll talk more about that. That does sound like an interesting distinction. Let me give the quick pitch for what Bayesianism is, what it means. Uh, so Bayesianism says that you go into the world and in your head you basically have a bunch of different possible hypotheses, some mental models about how the world might be working, right? Different explanations for the world. That's what Bayesianism is. And then you observe something, and your different competing hypotheses, they all say, oh, this is more consistent with what I would have predicted.
This is less consistent with what I would have predicted. And so then, you go and you update them all accordingly, right? You make a Bayesian update. The ones that said, hey, this is really likely, the ones that gave a high prediction, a high probability to what you ended up actually observing, they get a better update after you observe that evidence. And eventually, once you keep observing evidence, you hopefully get to a point where you have some hypothesis in your head which has really high probability compared to the others, and you go out in the world and you use the hypothesis and it steers you in the right direction, like it turns out to keep giving a high probability to things that actually happen. So that's the model of Bayesianism. And it sounds like a lot of what Bayesianism tells you to do is similar to what, uh,
Popper tells you to do.
I mean, Bayes and Popper, they're not like night and day, right? They're not enemies, and they're arguably more similar than different. I mean, there's major differences we're going to get into, but like, when you guys said, Hey, there's no certainty, right? There's just like, doing your best. I mean, I feel like that fully dovetails with what Bayes would tell you, right?
Because you're not supposed to give like a 0 percent or 100 percent probability.
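[A minimal sketch, in Python, of the update-and-vote loop Liron describes above. The hypotheses, priors, and likelihood numbers below are invented for illustration and are not taken from the conversation; the point is just that each hypothesis gets reweighted by how probable it said the observation was, and predictions are a probability-weighted vote across all of them.]

```python
# Toy illustration of Bayesian updating over a small set of rival hypotheses.
# All hypotheses and numbers here are made up for the example.

def bayes_update(priors, likelihoods):
    """Reweight each hypothesis by how likely it said the observation was."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())  # P(evidence), the normalizer
    return {h: w / total for h, w in unnormalized.items()}

def weighted_prediction(weights, predictions):
    """Every hypothesis 'weighs in', weighted by its current probability."""
    return sum(weights[h] * predictions[h] for h in weights)

# Two rival models of a coin, standing in for any competing hypotheses.
priors = {"fair coin": 0.5, "70%-heads coin": 0.5}
p_heads = {"fair coin": 0.5, "70%-heads coin": 0.7}  # what each predicts

posterior = bayes_update(priors, p_heads)       # after observing one heads
print(posterior)                                # the 70%-heads model gains weight
print(weighted_prediction(posterior, p_heads))  # mixture probability of heads next
```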
Vaden Masrani: What do you, can I ask a question there? What do you mean by update? So, um, what I do is I change my mind all the time when, uh, stuff that I think, uh, turns out to not to be true or I see new evidence that, um, either confirms my view or disconfirms it. So I'm changing my mind all the time. Um, but you didn't use that phrase, change your mind, you said update.
And so I'm just curious what the difference is between updating and changing your mind.
Liron Shapira: Yeah, so when you talk about, hey, what's on my mind, right? Like, what do I think is the correct hypothesis? Um, like, uh, maybe a good example is the election, even though, you know, politics is such a controversial topic. But I'm just talking about predicting who will win, let's say Trump versus Kamala. If you ask me, Liron, who is going to win?
And I say, um, I don't know, I saw the latest poll, I guess Kamala. And then tomorrow, like, oh, another poll just moved the, moved the win probability one percent. So now it's like, I guess Trump. But it's not like my mind has changed from Kamala to Trump. It's like I, I was always very much entertaining both the hypothesis that says tomorrow Kamala will win, and the hypothesis that says tomorrow Trump will win.
And when I update their probabilities, I'm just like, okay, yeah, if I had to bet, I would now bet slightly higher odds on one than the other. So that's what I mean by changing my mind. It's very much not binary.
Vaden Masrani: No, I didn't ask what you meant by changing your mind, but what you meant by update. Um, so, update is the same as changing your mind, or is it different? Um,
Liron Shapira: So I, I don't really have such a thing as changing my mind because the state of my mind is always, it's a playing field of different hypotheses, right? I always have a group of hypotheses and there's never one that it's like, oh this is my mind on this one. Every time I make a prediction, I actually have all the different hypotheses weigh in, weighted by their probability, and they all make the prediction together.
Vaden Masrani: where did you get the
Ben Chugg: Wait, wait, wait, let's,
Vaden Masrani: Like just for like Yeah, no, it's, it's, can we just have some, no, I just want to have a conversation. Like, um, I, I just don't understand your, your answer. Right. Uh, but Ben had a question first.
Ben Chugg: uh, yeah, like, maybe let's just make this concrete. Um, so, when, if you're designing a satellite, Uh, you're going to send the satellite into space, right? Uh, you're not going to base the mathematics of that satellite, uh, on some combination, some weighted combination of theories of physics. Um, you're going to base it on general relativity, hopefully, otherwise it's not going to work.
Uh, and so in what sense, you know, you're not assigning a probability one to general relativity, because we also know it's wrong in some fundamental way, right? Specifically, it doesn't account for, uh, certain very small subatomic effects that we know to be true. So, yeah, in what sense is, like, you know, you're taking a decision there, uh, it's not a weighted average of physical theory, so what's, uh, what's going on there?
Liron Shapira: Great question. If I'm going to go and invest 100 million on an engineering project, it's because whatever combination of hypotheses are going in my head are agreeing with sufficiently high probability on predictions that my engineering is going to work. So, I don't have a hypothesis in my head that has more than 1 percent probability, saying, you're gonna launch that satellite and some other non Newtonian, non Einsteinian force is just going to knock it out of its trajectory.
I don't have a hypothesis like that that's getting sufficiently high probability. So, this is a case where I feel very confident, so my dominant hypothesis about how the physics is going to work has already established itself with more than 99 percent probability.
Vaden Masrani: I don't understand that, but
Ben, did you
Ben Chugg: Uh, yeah, I mean, I, okay, that's, that's fine. I, I, dis, I think, heh, Yeah, we can, we can move on. I mean, I, I, I don't think this is actually what's going on in your head. I don't think you have these explicit theories and you're actually assigning probabilities to them. I think what's going on is you've been swayed by arguments that if you send a satellite into space, it's going to
Liron Shapira: a fair criticism, right?
Ben Chugg: relativity.
So I think Bayesianism in this way is both descriptively, uh, and normatively, we'll get into that later, false. Um, but you know, I can't sit here and examine the context, the contents of your
Liron Shapira: If I understand it, I
think this is an interesting point you're making. You're basically saying, look, Liron, you kind of retconned, right? You retroactively described yourself as using Bayesian epistemology to justify why you funded this satellite project, but realistically, you never even thought of that.
You're just retroactively pretending like you're Bayesian. Is that, like, basically your criticism?
Vaden Masrani: but hold on though, cause Ben's question wasn't about if you have a hundred thousand dollars and you need to allocate it to different engineering projects, it's if you were the engineer. And we don't know how to make a satellite yet, how are you going to do it? And that's a different thing, right? So, we're not talking about assigning probabilities to which project is going to be more or less successful.
We're talking about, like, how do we get a satellite into the sky? Um, and to do that, you need to understand general relativity and quantum mechanics. And these two things are mutually exclusive. So if you assign probability to one, you have to necessarily assign less probability to the other under the Bayesian framework.
However, that isn't how scientists make satellites, because as we get more evidence for quantum mechanics, that doesn't take away what we know from general relativity, because we have to use both to get the frigging satellite into the sky. And so you just kind of answered a question that was adjacent to, but not the same as, the one that Ben was asking.
Liron Shapira: To make this specific, is there a particular prediction, like, you're basically
saying, hey, how am I going to resolve the conflict between these different theories, but can you make it specific about which part of the satellite engineering feels tough to resolve for you?
Ben Chugg: Yeah, I, it's.
Vaden Masrani: Uh, does it,
Ben Chugg: It's just more when you were saying like, how do you reason about the world, right? You're not, you're not tied to any specific hypothesis. It sounded like your worldview is not like, okay, uh, for the purposes of achieving this, I'm going to assume general relativity is right. That's anathema to the Bayesian, right?
The analogy there is assigning probability one to general relativity. You're not going to do that because we know general relativity is false in some important way. Um, and so you said what you're doing, you know, what you're thinking and the actions you're taking, correct me if I'm wrong, of course, are some, you know, weighted average of these hypotheses that you have about how the world works.
But that just doesn't comport with, like, if, you know, if you were to be an engineer, in terms of how you're actually going to design the satellite and send it up into space, um, it's not, you know, you're not relying on a mishmash of physical theories to get the job done. You're relying on general relativity in this case.
Liron Shapira: I mean, there's specific things that I need to model to get the satellite to work, and I don't necessarily need to resolve every contradiction between different physical theories. I just have to say, what are the relevant phenomena that need to follow my models in order for the satellite to perform its function and not fall down to earth? And I probably don't have major conflicts between different theories. I mean, if I'm not sure whether Einstein's Relativity is true, right, if I'm not sure whether time dilation is a real thing or not, then I, as a Bayesian, I don't think that Bayesianism is the issue here, right? If engineers launching a satellite didn't know if time dilation was going to be the issue, I think even as a Popperian, you're like, uh oh, well they better do some tests, right?
I think we're in the same position there.
Ben Chugg: Yeah, for sure. Yeah.
Vaden Masrani: Can I go back to a different thing that you said earlier? Maybe the satellite thing is getting us a bit stuck. Um, you said that you never change your mind because you have a fixed set of hypotheses that you just assign different weights to. First, is that an accurate summary of what you said?
I don't want to
Liron Shapira: If you want to drill down into, I wouldn't call it a fixed set of hypotheses, in
some sense it's a variable set of, but it's always, there's a community of hypotheses, right, and they're all getting different weights, and then they're all weighing in together when I make a
Vaden Masrani: so when you said you never changed your mind, just maybe flesh out a bit more what you mean by that, because I don't want to
Liron Shapira: Okay, if, if, I mean, if I walk into a room of strangers and I say, guys, I never changed my mind, I think that's very much sending the wrong message, right,
Vaden Masrani: Totally, totally, which is why I'm not trying to straw man you at all. So maybe just, just, just clarify a
Liron Shapira: Because on the contrary, right, the takeaway is really more like, no, Bayesians, I'm a master at the dance of changing my mind, right? I
don't just see changing my mind as like, oh, hey, I have this switch installed that I can flip.
No, no, I
see myself as like a karate sensei, right, where I can like exactly move to the right new configuration of what my mind is supposed to have as a belief state. So does that answer your question?
Vaden Masrani: Um, so I gotta, I guess, why did you say you'd never changed your mind in the first place? I'm totally understanding that, that you don't mean,
Liron Shapira: I meant,
yeah, I feel like I threw you off track. When I say I don't change my mind, what I meant was that when you use that terminology, change your mind, it seems to indicate that like somebody's mind has like one prediction, right? Or like they've picked like this one favorite hypothesis and then they threw it away and took a different one. And I'm just saying that's, that doesn't describe my mind. My mind is always
this community of different hypotheses. Yeah.
Vaden Masrani: Gotcha. Yeah. So yeah, so that's actually a nice distinction between like a Popperian approach and a Bayesian approach. So for me, once I have enough disconfirming evidence, I do exactly what you said the Bayesian doesn't do. I take that hypothesis and it's gone now. I don't assign less probability.
It's just, it's dead up until the point where there's another reason to think that it's no longer dead, and then I'll revive it again. But um, but so that's just a distinction between how my thought process works and yours, I guess. I'm curious about another thing, though, which is, um, where do you get your hypotheses from in the first place?
Uh, because I understand that under the Bayesian view, you start with hypotheses and then you just assign different weights to them, but, um, but I'm just curious, before that stage, before the reweighting, where do the hypotheses come from in the first place?
Liron Shapira: Uh huh, that's a popular gotcha that people like to throw at the Bayesians, right? They're like, hey, you
guys keep talking about No, no, no, I know, I know, I know. Uh, people like to, you know, Bayesians love to keep updating their probabilities, but if you don't start with a really good probability in the first place, then you still might get screwed up.
For example, like, if my a priori probability that, like, Zeus is the one true god, if I start out with it at 99.99%, then even if I see a bunch of evidence that the world is just, like, mechanical and there is no god, I still might come out of that thinking that Zeus has a really high probability. So, you know, this is just kind of fleshing out your, your kind of
Vaden Masrani: no, you misunderstood the question, you misunderstood the question. I'm not talking about, um, how do your prior probabilities work. Where do they come from? Um, I'm, cause when you talk about Bayes Theorem, you have your likelihood and your prior. Um, so P of E given H, P of H over P of E, yeah? Um, so we can talk about the probability for P of H, and that's what you were describing.
I'm not talking about that, I'm talking about H. Where does H come
Liron Shapira: Sure,
Vaden Masrani: So before the probabilities are assigned, just where does H come from?
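[For readers following along, the formula Vaden is spelling out is Bayes' theorem, with H the hypothesis and E the evidence; his question is about where H itself comes from, before any of these probabilities get assigned.]

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```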
Liron Shapira: So this is not necessarily a point of disagreement between us. I mean, just like you can generate hypotheses, I'm also happy to generate hypotheses and consider new hypotheses. So the short answer is, you and I can probably go source our hypotheses from similar places. The long answer is, there is an idealization to how a Bayesian can operate called Solomonoff induction. Are you guys familiar with that at all?
Vaden Masrani: Yeah, yes.
Liron Shapira: Yeah, so Solomonoff induction just says like, Hey, there's a way as long as you have infinite computing resources, right? So it's an idealization for that reason, but there is a theoretical abstract way where you can just source from every possible hypothesis and then just update them all, right?
That's the ideal. So I do some computable approximation to that ideal.
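[As a reference point, here is a standard textbook statement of the idealization Liron is invoking; this is an editor's gloss, not a quote from anyone in the conversation. Solomonoff induction weights every program p that a universal Turing machine U could run to reproduce the data x seen so far, with shorter programs weighted more heavily, and predicts by conditioning that mixture.]

```latex
M(x) = \sum_{p \,:\, U(p) \text{ outputs } x} 2^{-|p|},
\qquad
P(\text{next bit is } 1 \mid x) = \frac{M(x1)}{M(x)}
```

[Here |p| is the program's length in bits. M itself is uncomputable, which is why any real reasoner can only approximate it.]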
Ben Chugg: But the approximation, that's where the details are hidden, right? You clearly don't
have every, you're not running every possible hypothesis in your head, right? At some point, you're coming up with new ideas. Like, sometimes you wake up, you have a creative thought that you haven't had before. Um, you know.
Bayesianism can't really account for that. And, you know, if you want to get into the math, it really complicates things. 'Cause now all of a sudden you're, you're working with a different probability space, right? And so, like, what happens with all the probabilities that you assign to some other fixed hypothesis class?
Now it's like, okay, now I have a new hypothesis. Everything's got to get rejiggered. Um, and so it, it just doesn't account for idea creation in a satisfying
Liron Shapira: So this is, um, this is how I perceive the current state of the conversation. I'm basically like, hey, my epistemology has an uncomputable theoretical ideal that I'm trying to approximate. And you guys are like, well, that's so fraught with peril, you're never going to approximate it. Like, what you actually do is going to be a shadow of that. Whereas I would make the opposite criticism of you guys, of like, okay, well, you guys haven't even gotten to the point where you have an uncomputable ideal. So I feel like I'm actually farther along here, because approximating an uncomputable ideal, we do that all the time. Right? This whole idea of like, hey, we're going to go do math.
Well, math is actually uncomputable, right? Like the general task of evaluating a mathematical statement is. So in all areas of life, we're constantly approximating uncomputable ideals. So I, I'm not ashamed of approximating an uncomputable ideal.
Vaden Masrani: we do it on this side of the aisle too. If, again, if you want to set this up as a debate, then we can, I guess, do that. Um, the Turing machine is an uncomputable ideal that we approximate with our brain. So we have that on our side too, if that's what you're looking for. Right.
Liron Shapira: And how does that relate to Popperianism?
Vaden Masrani: Um, it doesn't, totally, because Popperian, or Popper... so it does relate to Deutsch. So that, um, the Church-Turing thesis is where he gets his universal explainer stuff. And we can maybe go into that if you want, but, um, but in terms of Popper, it doesn't at all. But in terms of giving you what you said we didn't have, it does.
Because you were saying that on our side of the aisle, we don't have the, uh, uh, uncomputable ideal that we approximate, but we do, because I'm talking to you on it right now, which is a MacBook Pro. And that is an approximation of an uncomputable ideal. So, yeah.
Liron Shapira: Okay, got it. So when I ask you guys, Hey, where do Popperians get their hypothesis? You're basically saying, Well,
we do at some point consider every possible Turing machine as a
Vaden Masrani: No, no, no, no, no. So this, this is great. So, um, we don't know. So the answer is, I don't know. Um, so Popper starts with trial and error. Um, but the question of, like, where do the conjectures come from, where do the ideas come from? We don't have an answer. I don't know. And I would love to know, and to me, answering that question is equivalent to solving AGI.
Um, so I have no idea how the brain comes up with its conjectures in the first place. Popper starts with trial and error. There, it just says, there is some magical mystery process that we'll leave for the neuroscientists and the psychologists to figure out. We're just going to say, dunno, but that's the foundation, the trial and the error.
So that's the answer from our side. Uh, yeah.
Liron Shapira: Here's a question that I think you might have answered now, which is, so Popper talks a lot about explanations, right? Like good explanations. It sounds like you're saying that when you think about an explanation, you can formalize what an explanation is as being a Turing machine.
Would you agree?
Yeah.
Ben Chugg: Uh, no, I don't think so. I mean, if we, if we knew how to program a good explanation, presumably that would allow us to generate them computationally, right? If you understood them deep enough at that level. And also I suspect something like that is impossible because then you might be able to litigate what is a better and worse explanation in every circumstance.
And I highly doubt that that's possible, right? This is like the realm of argument and debate and subjectivity enters the fray here. And like, you're not going to be able to convince everyone with. with an argument, and so I don't think computation is the right lens to have on something like good explanations.
Vaden Masrani: And, just to add a metaphor to, to that, so, um, it's kind of like saying, uh, could you automate, um, the proof process? Well, in some sense, absolutely not, no. Like, this is what Gödel, uh, like the incompleteness stuff is, is about, which is that, like, for different kinds of mathematical problems, you have to entirely invent new kinds of proof techniques, such as, like, Cantor's diagonalization argument, right?
Um, that was completely new, and you can't just take that, uh, uh, and apply it to all sorts of new kinds of problems. That's what mathematicians are doing all day, is coming up with entirely novel kinds of proofs. And so if you grok that with the math space, so too with explanations. I think that different kinds of phenomena will require different modes of explanation, um, such that you can't just approximate them all with a Turing machine.
Liron Shapira: Now in the math space, I think we're now at the point where, you know, we've got set theory and formal proof theory, and I think we're at the point where I can say, what do mathematicians do? They're approximating this ideal of creating this mathematical object, which you can formalize within proof theory as a proof.
Like we, we actually have nailed down the ontology of what a proof is, but it sounds like you're saying okay, but we haven't nailed down the ontology in epistemology of what an explanation is. So, but now you're saying, well compare it to math, but I feel like math is farther
along.
Vaden Masrani: So, uh, can I just jump in here for a sec, Ben, which is that, uh, Ben will never say this about himself, but listeners, just type in Ben Chugg Google Scholar, and look at the proofs that he does, they're brilliant. So you're talking to a mathematician. And so, not me, Ben. Um, and so I just pass that over to Ben, because he is absolutely the right person to answer the question of what do mathematicians do
Ben Chugg: uh, I'm just more curious about what you mean by we've solved the ontology of proofs, um, as a genuinely curious question because this might make my life a lot easier if I could appeal to some sort of book that will tell me if I'm doing something right or wrong.
Liron Shapira: let's say, uh, a mathematician, grad student, goes to his professor and he says, Hey, I'm trying to prove this. I am trying to write up a proof once I figure it out. Well, that thing that he writes up, these days, almost certainly is going to have an analogous thing that could, in principle, might take a lot of effort, but it could be formalized, right, purely symbolically within set theory.
Is that fair? Okay.
Ben Chugg: I mean, yes, I mean, okay, I'm, I'm, I'm, I'm confused. I mean, once you have the proof, the point is that it is, it's, it's logic, right? So you should be able to cash this out in terms of, yes, like going down, down to, like, ZF set theory, for instance, right? You can cash this out all in terms of, like, certain axioms.
You don't tend to, uh, descend to that level of technicality in every proof. You stay at some abstract level. But yeah, the whole point of a proof is that it's written in tight enough logic that it convinces the rest of the community. That doesn't mean it's certain. That doesn't mean we're guaranteed truth.
That just means everyone else is convinced to a large enough degree that we call this thing published and true. Okay, great. The hard part is coming up with the proof. What the hell the proof is in the first place, right? So once you have a proof, yeah, we can start doing things like running proof checkers and stuff on it.
The hard part is, you know, proving the thing in the first place.
Liron Shapira: You're right. So the reason I'm bringing it up is, you know, I don't even want to talk about the hardness of finding the proof yet. I just want to talk about the ontology, right? This idea that when you ask a mathematician, What are you doing? The mathematician can reply, I am conducting a heuristic search.
My human brain is doing a computable approximation of the uncomputable ideal of scanning through every possible proof in the set of formal proofs and plucking out the one that I need. That proves what
Vaden Masrani: don't know a single mathematician who would say that, and you just asked a mathematician and his reply wasn't that. This isn't a hypothetical, you're
Liron Shapira: It isn't, I'm not making a claim about what mathematicians say, right? I'm just making a claim about, uh, the ontology of what a proof is, right? So, so an informal, informally written English language mathematical paper containing a proof maps to a formal object. That's all I'm
Ben Chugg: Sure. Yeah, yeah. I mean, you're, I mean, math, math is a formal
Liron Shapira: I, yeah, go ahead.
Ben Chugg: as proofs are about manipulations in this formal language, then sure. Yep.
Liron Shapira: So the reason I brought that up is because when I'm talking about Bayes and you're asking me, Hey, uh, where do you get hypotheses or what is a hypothesis? I'd be like, Oh, a hypothesis is a Turing machine that outputs predictions about the world if you also encode the world, you know, in bits, right? So I have this ontology that is formalizable that grounds Bayesian reasoning.
But when you guys talk about Popperian reasoning, it sounds like you haven't agreed to do that, right? You haven't agreed to take this idea of an explanation and have a formal equivalent for it.
Vaden Masrani: False analogy, because a hypothesis is in natural language, not in a formal language. So the analogy doesn't work, because the ontology
Liron Shapira: so is a mathematical paper, right? So is a research paper.
Vaden Masrani: uh,
Ben Chugg: Step outside of
Vaden Masrani: saying that what you just,
Ben Chugg: Yeah. Go to, go to physics or chemistry or something.
Vaden Masrani: Yeah, like, I'm just saying that the stuff that you were just asking about the ontology of a mathematical proof, using that as, um, an analogy to the hypothesis, the H in Bayes' theorem, the analogy is broken, because the hypothesis is some natural language expression.
It's not a formal language. So it's just, the analogy just doesn't work. That's all I'm saying.
Liron Shapira: Yeah.
I'm not saying that a hypothesis is a proof. What I'm saying is, when I talk about a hypothesis using natural language, or when I'm saying, hey, my hypothesis is that the sun will rise tomorrow, there is a formal, there's a corresponding formal thing, which is, hey, if you take all the input to my eyes of what I'm saying, and you codify that into bits, and you look at the set of all possible Turing machines that might output those bits, my hypothesis about the sun rising tomorrow is one of those Turing machines.
Ben Chugg: Sure, I mean, okay, so let me just, let me just try and restate your critique of us just so I make sure I'm on the same page. I think you want to say, you know, in theory Bayesianism has this way to talk about the generation of new hypotheses. Right? As abstract and idealized as this is, we've put in the work in some sense to try and formalize what the hell is going on here.
You Popperians are sitting over there, you know, you're critiquing us, you're making fun of us. You haven't even tried to put in the effort of doing this. Where are your hypotheses coming from? You can't criticize us for doing this. You have no, you don't even have a formalism, for God's sakes. You just have words and stuff.
You, you know, is that kind of, that's kind of where you're coming from? Without the snark. I added that
Liron Shapira: It's rough, yeah, it's roughly accurate because I do think that formalizing the theoretical ideal of what you're trying to do does represent epistemological progress.
Vaden Masrani: Only if the theoretical, uh, philosophy assumes that a formalism is required. So part of Popper's view is that formalisms are useful sometimes in some places, but most of the time you don't want to have a formalism, because having a formalism is unnaturally constraining the space of your conjectures.
So, the theory on our side is that formalisms are sometimes useful in some places. Not always useful in all places. And so I, I totally accept your critique from your view. Because your view is that a formalism is always better. And we don't have one. Thus, we're worse. But our view is that formalisms are sometimes useful in some places.
Not always in every place.
Liron Shapira: What would be the problem with you just saying, okay, I can use a Turing machine as my formalism for an explanation, because when we look at the actual things that you guys call explanations, it seems like it's pretty straightforward to map them to Turing machines. Okay.
Vaden Masrani: And, yeah, I guess you
could,
oh, go ahead,
Ben Chugg: Well, I think it just doesn't help you try and figure out the question of like, really where these things are coming from, right? So if you're interested at the end of the day of trying to figure out, uh, philosophically and presumably neuroscientifically how humans are going about generating hypotheses, mapping them to the space of all possible Turing machines is not helpful.
Like, sure, the output of your new idea could be run by some Turing machine. Great. The question is, you know, what, you know, there's an entire space of possibility as you're pointing out, you know, like vast combinations, endless combinations, in fact, of possible ideas. The human mind somehow miraculously is paring this down in some subconscious way and new ideas are sort of popping into our heads.
How the hell is that happening? I don't see how the Turing machine formalization actually helps us answer that question.
Liron Shapira: It's because we're talking about the ideal of epistemology. It might help to think about, Hey, imagine you're programming an AI starting from scratch. Isn't it nice to have a way to tell the AI what a hypothesis is or what an
Vaden Masrani: But the ideal of your epistemology is that a formalism is required. Not our epistemology.
Liron Shapira: Right? But so what I'm saying is, okay, you're saying a formalism isn't required, but let's say I take out a, a white sheet of paper and I'm just starting to write the code for an intelligent AI, right? So what you call a formalism, I say is like, hey, I have to put something into the
Ben Chugg: yeah, yeah.
Liron Shapira: how
do I teach the AI.
Ben Chugg: I mean, I agree, like, this would be awesome if you could answer this question, but I just don't, I don't think you're answering it by appealing to, like, one thing I don't quite understand about your answer is, you're appealing for a process, rather, okay, let me say that again, for a process that is taking place in our fallible human brains.
As an explanation, you are appealing to this idealized system. By definition, we know that can't be what's going on in our heads. So how is this helping us program an AGI? Which I totally take to be a very interesting question. And I, and we're, you know, we'll get into this when we start talking about LLMs and deep learning.
I don't think this is the right path to AGI. And so, a very interesting question from my perspective is, what is the right path? Like, if we could have some notion of, like, how the human brain is actually doing this. I agree that, you know, once we figured that out, we could presumably sit down and write a program that does that.
Uh, and that's a very, that's a very interesting question. I just don't think we, we know the answer to that.
Liron Shapira: Yeah, so I agree that just because I have a formalism that's an uncomputable ideal of Bayesian epistemology doesn't mean I'm ready to write a superintelligent AI today. Uh, and by analogy, uh, you know, they understood chess when the first computers came out, and it was pretty quick that somebody's like, hey look, I could write a chess program that basically looks ahead at every possible move, and this is the ideal, uh, program.
It will beat you at chess, it'll just take longer than the lifetime of the universe, but it will win. So I agree that your criticism is equally valid, uh, to me as for that chess computer. My only argument is that the person who invented that chess computer did make progress toward solving,
uh, you know, uh, superhuman chess ability, right?
That was a good first step.
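[A minimal sketch of the "ideal but astronomically slow" program Liron is describing: exhaustive minimax search of the entire game tree. Chess itself is far too large for this to ever finish, so the sketch uses a tiny made-up game, players alternately take 1 or 2 stones and whoever takes the last stone wins, purely to show the shape of the brute-force ideal.]

```python
# Toy sketch of the "ideal" game player: exhaustively search every line of
# play and pick the move that wins under perfect play. The game here is a
# made-up stand-in for chess: take 1 or 2 stones, last stone taken wins.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def minimax(stones, maximizing):
    """Value of the position for the maximizing player under perfect play."""
    if stones == 0:
        # The previous player took the last stone, so the player to move lost.
        return -1 if maximizing else +1
    values = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """The move whose fully searched subtree is best for the mover."""
    return max(legal_moves(stones), key=lambda m: minimax(stones - m, False))

print(best_move(5))  # takes 2, leaving 3, a losing position for the opponent
```

[For this toy game the full tree is tiny; for chess the same exhaustive search is the "longer than the lifetime of the universe" ideal being gestured at.]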
Ben Chugg: Yeah. Yeah. That's fair. Can I, um, can I just pivot slightly and ask you to clarify whether you're talking. Do you think Bayesianism is true descriptively of the human brain, or are you making a normative claim about how rational agents ought to act?
Liron Shapira: Right, yeah, yeah, you said this a few times, I'm glad we're getting into this because this is definitely one of the key points, and remember like what you said before, like, okay, you're telling me now that this engineering program, you used Bayesian reasoning to say you were 99 percent confident of the theory, but it sounds like you're retconning,
right, like that's kind of the same, the same style of question.
So I, retconning,
Vaden Masrani: retconning? What's retconning? I don't
Liron Shapira: it's Like retroactively rewriting history basically, right, like, oh yeah, I was totally Bayesian.
Vaden Masrani: Okay, cool, I'm sorry, I didn't know that term. Sorry, sorry to interrupt you.
Liron Shapira: No, that's good, yeah. Um, So
Okay, it's totally true that, like, as I go about my day, right, like, why did I open the refrigerator? What was my community of 50,000 hypotheses, right, that told me different things were going to be in my fridge, or there wasn't going to be a black hole in the fridge, right? What were all the different hypotheses? And the answer is, look, I just opened the fridge, right, because it's like muscle memory, right? Like, I was thinking about something else, right? So I'm not pretending like I'm actually running the Bayesian algorithm. What I'm claiming is, to the extent that I can reliably predict my future and navigate my world, to the extent that my brain is succeeding at helping me do that, whatever I'm doing, the structure of what I'm doing, the structure of the algorithm that I'm running, is going to have Bayes structure.
Otherwise, it just won't work as well.
Ben Chugg: Uh, okay.
Vaden Masrani: descriptively you're saying.
Ben Chugg: you're saying descriptively if you do something else you'll fall short of perfect rationality. Like you'll have worse outcomes.
Liron Shapira: I'm saying is like, sometimes my muscle memory will just get me the can of coke from the fridge, right, even without me even thinking about Bayes Law, but to the extent that that was true, it's because it dovetails with what Bayesian epistemology would have also had you do.
Like, Bayesian epistemology is still the ideal epistemology, but if you're doing something else that ends up happening to approximate it, then you can still succeed.
Vaden Masrani: According to you, sure. Um, yeah, it's not Bayes law, it's Bayes theorem, first of all. Uh, but, sure, yeah. Um, that's, that's, that's the worldview that we are saying we disagree with. But, sure.
Liron Shapira: Yeah, I mean, look, similarly to you guys, right? Like, when you're opening your fridge, right, you're not, you don't have the, the one Popperian model with a good explanation, and what's that, right? You're just, like, thinking about something else, most likely.
Vaden Masrani: Are you conjecturing that you're thirsty? Or you have a little, like, um, I don't, I guess I don't entirely know what the question is. If, if you're asking what is the Popperian approach to getting something from the fridge, it's probably, um, pretty simple. It's, you have an idea that you're hungry and you go there and you open the fridge and you get it.
Um, but if the claim is something deeper, which is, like, does the Popperian view say something about Bayes being the ideal, et cetera, et cetera, then it definitely says that that is not the case. Um, so we can go into reasons why that's, like, not the case, but your answer is assuming the very thing that we're disagreeing about. That's the point.
Um,
Liron Shapira: Mm hmm. Okay, a couple things,
Vaden Masrani: Yeah. No,
Ben,
Ben Chugg: was just gonna, yeah, just, I, we're somewhat in the weeds, so I just maybe wanted to say to people how I, how I envision the Bayesian debate often is like there's two simultaneous things often happening. One is like the descriptive claims that you're making about like how humans do and how brains do in fact work and that they're doing something approximating, uh, Bayesian reasoning.
Um, and Vaden and I both think that's wrong for certain philosophical reasons. You know, we can get into empiricism and stuff, but I don't think observations come coupled with numbers, or that those numbers are being represented explicitly by your brain, which is updating via Bayes' theorem. Um, so there's this, like, whole descriptive, uh, morass that we've sort of entered.
And, but then there's, there's, you know, where rubber really meets the road, um, is, like, the normative stuff. Right, so Bayesians want to, because they want to assign numbers to everything, uh, like you wanted to do at the beginning of this episode, right? You'll assign new numbers to geopolitical catastrophes and, you know, P Doom, and, and then you'll compare those to numbers that are coming from, you know, robust statistical models backed by lots of data.
And I think, Vaden, correct me if I'm wrong, I think Vaden and I's core concern is really with this second component of Bayesianism, right? I think the descriptive stuff is philosophically very interesting, but it's sort of less important in terms of, like, actual decision making and real world consequences. Like, if you want to sit there and tell me that you're doing all this number manipulation with your brain that helps you make better decisions, and, like, that's how you think about the world, then, you know, like, honestly, that's, that's fine to me. But where this stuff really starts to matter is, I'll just steal Vaden's favorite example, because I'm sure it'll come up at some point, which is, uh, you know, Toby Ord's book of probabilities in, in The Precipice, right?
So he lists the probability that humans will die by the end of the century, I forget, correct me if I'm wrong, um, and he gives this probability of one sixth. Where does this one sixth come from? It comes from aggregating all the different possibilities that he's, that he analyzes in that book. So he does AI and he does, um, uh, bioterrorism and he
Vaden Masrani: Volcanoes and asteroids
Ben Chugg: does all this stuff. And this is an illegal move from Vaden and I's perspective, and this is the kind of stuff we really want to call out, and that we think, you know, really matters and really motivates us and most of the Bayesian critique, and sort of goes beyond this, like, descriptive-level Turing machine stuff that we've been arguing about now. So anyway, I guess I just wanted to flag that for the audience.
Like, I think there's more at stake here in some sense than just deciding how to open the fridge in the morning, which is fun and interesting to talk about, but I just wanted to maybe frame things
Vaden Masrani: Yeah.
May I just, yes. I just want to add something to what Ben said. Beautiful. Exactly right. I think it's so important to continuously remind the listener, the viewer, why we're arguing in the weeds so much. We're arguing so much about this because of exactly this high level thing that you said, which is, um, it is illegal, it is, um, uh, duplicitous, and it is misleading the reader when someone says the probability of superintelligence is 1 in 10, and they compare that to the probability of volcanic, uh, extinction, which is 1 in 1 million.
Because you can look at the geographical, geological history to count volcanoes and make a pretty rock solid estimate. But you are just making shit up when you're talking about the future, and then you're dignifying it with math and a hundred years of philosophy. And so why Ben, er, I can't speak for you actually on this one, but why I like to and need to argue in the weeds so much is that I have to argue on the opponent's territory.
And so when I'm getting all annoyed by this 1 in 10 to 1 in 1 billion comparison, to argue against that I have to go into the philosophy of the Turing machines and the this and that and the whatever. And we get super in the weeds. Um, but the reason I'm in the weeds there is because Toby Ord has been on multiple podcasts and probably blasted this number into the ears of over 10 million people, if you can fairly assume that Ezra Klein and Sam Harris, who both swallowed this number, um, uncritically, uh, if their listenership is somewhere around there. Um, I think it's one in six for the aggregate of all extinctions and then one in 10 for the superintelligence one, if I'm remembering The Precipice correctly.
And that was compared against, um, I don't remember the numbers for volcanoes and supernovas and stuff, but one in one million, one in ten million, that, that order of magnitude, yeah.
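[For concreteness, the aggregation move being criticized looks roughly like the sketch below. The per-risk numbers are placeholders, not Ord's actual table; only the 1-in-10 and roughly 1-in-a-million orders of magnitude come from the conversation, and independence is assumed purely for simplicity. The hosts' objection is that a data-backed estimate and a gut-feel estimate get combined as if they were the same kind of number.]

```python
# Toy illustration of rolling separate risk estimates into one headline number.
# Figures are placeholders except the rough orders of magnitude cited above;
# risks are treated as independent purely to keep the arithmetic simple.

risks = {
    "superintelligent AI": 1 / 10,            # a subjective judgment call
    "engineered pandemic": 1 / 30,            # placeholder
    "supervolcanic eruption": 1 / 1_000_000,  # roughly data-driven (geological record)
    "asteroid impact": 1 / 1_000_000,         # roughly data-driven
}

p_none = 1.0
for p in risks.values():
    p_none *= 1 - p          # P(no catastrophe) under independence
p_any = 1 - p_none           # P(at least one catastrophe)

print(f"aggregate risk: {p_any:.3f}")
# The total is dominated entirely by the subjective entries, which is the
# apples-to-oranges comparison Vaden and Ben are objecting to.
```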
Liron Shapira: Yeah, and then, so, you're making the case why we're getting into the weeds, why epistemology is so high stakes, because basically the upshot in this particular example is that humanity should be able to do better than this Bayesian guy Toby Ord, because it's kind of a disaster that Toby Ord is saying that, like, nuclear extinction, for instance, might have a probability of, just to oversimplify what he actually says, something in the ballpark of 10%, right?
Which gets to what we were discussing earlier. So you consider it kind of a failure mode that people like myself and Toby Ord are making claims like, hey guys, there's a 10 percent chance that we're going to nuclear annihilate ourselves in the next century. You think it's a failure mode, because you think something better to say is, hey, we don't know whether we're going to get annihilated, and nobody should quantify that.
Vaden Masrani: That, that, that's not quite the claim. Um, so I didn't use nuclear annihilation, intentionally, because I think that is also in the camp of we don't really know what the numbers are here. I used, uh, volcanoes, and I used supernovas, and I used asteroids. I did not use
Ben Chugg: No, that's what he's saying. That's what
Liron Shapira: and I think we're all on the same page that those things are unlikely in any
given century, right? But so, so why don't we talk about the, the thing that's like the more meaty claim, right? The
Vaden Masrani: No, no, but, but, but, but my claim is, is not that it's, we can't reason about nuclear annihilation. I think that's very important. I'm just saying that if I talk about the probability of volcanoes and then I talk about the probability of nuclear annihilation, when I say the word probability, I'm referring to two separate things.
I should talk about like probability one and probability two or probability underscore S and probability underscore O or something. They're just different and we can't use the same word to compare
Liron Shapira: you might label it frequentist probability, right, would that be a
Vaden Masrani: No,
uh, no, no, frequentism, yeah, frequentist is a philosophical interpretation. Um, I've been using objective probability, but just probability based on data, probability based on, on counting stuff, data, but frequentist is not, right, no,
Liron Shapira: Okay. Yeah, maybe you could call it statistical probability.
Vaden Masrani: um, let's just call it probability that's based on data,
Ben Chugg: Or stitch.
Vaden Masrani: CSVs, Excel, JSON, yeah,
Ben Chugg: Yeah, that works fine for the purpose of this conversation, honestly. Um, and yeah, just to, just to maybe answer the question you asked a minute ago: it's certainly not that we can't talk about the risk of nuclear annihilation, right? What we're saying is, let's skip the part where we all give our gut hunches and, like, scare the public with information that no one can possibly have.
Uh, and so I would just turn it on you. Like, so if, you know, say you're very worried about, uh, nuclear annihilation, you give a probability of 1 over 10 in the next 50 years, then someone comes up to you, some geopolitical analyst, say John Mearsheimer, comes up to you and he says, my probability is 1 out of 50, okay?
What's your next question? You're gonna ask, why is your probability 1 in 50? And he's gonna say, why is your probability 1 in 10? What are you gonna do? You're gonna start descending into the world of arguments, right? You're gonna start talking about mobilization of certain countries, their nuclear capacity, their, you know, incentives, right?
You're going to have like a conversation filled with arguments and debates and subjective takes and all this stuff. Uh, you're going to disagree. You're going to agree. Maybe you'll change his mind. Maybe he'll change your mind. Great. Uh, and then at the very end of that, the Bayesian wants to say, okay, now I'm going to put a new number on this.
Um, but Vaden and I are just saying the number is totally irrelevant here and it's coming out of nothing. Let's just skip the number part and have arguments, right? And that's not saying we can't think about future risks, that we can't prepare for things. It's not throwing our hands up in the air and, you know, claiming that we, yeah, we absolutely can't take action with respect to anything in the future.
It's just saying, let's do what everyone does when they disagree about things. Let's take arguments very seriously. Arguments are primary, is a way to say it on our worldview. Numbers are totally secondary, and only useful when they're right, when they're right for the problem at hand.
And they're certainly not always useful. Yeah.
Vaden Masrani: Typically when you have a data set is when it's
useful to use numbers. Yes.
Liron Shapira: imagine none of us were Bayesians and we just had the conversation behind closed doors about the risk of nuclear annihilation, and we come out and we're like, okay, we all agree that the likelihood is worrisome. It's too close for comfort. It's still on our minds after this conversation. We didn't dismiss the possibility as being minimal, right?
So that, that'd be one kind of non Bayesian statement that normal people might say, right? Okay, and, or alternately you can imagine another hypothetical where people, maybe it's in the middle of the Cuban Missile Crisis and people walk out of the room, which I think actually something like this did happen in the Kennedy administration where people walked out of the room saying like, I think this is more likely than not.
Like, this looks really, really bad.
So where I'm going with this is, I think that there's a, a, you could bucket a number of different English statements that people, normal people often say after leaving these kinds of meetings. And it's pretty natural to be like, okay, well in the first place where they said too close for comfort, maybe the ballpark probability of that is 1 percent to 20%.
Vaden Masrani: Hold on. Hold on. That's the move. That's the move that I want to excise. So I think it's completely legitimate, 100 percent, to bucket degrees, like strengths of your beliefs. I think that this is done all of the time when you answer survey questions. So, like, a 1 to 10 scale is very useful. How much do you agree with this proposition?
Sometimes it's, like, strongly disagree, disagree, neutral, agree, strongly agree. So that's,
um, a five point scale that indicates strength of belief. Uh, sometimes it's useful to go to ten. Uh, I think for, like, certain mental health questions I do that. All great, I'm so on board with that, that's important.
Where I say, hey, hold on, people, is calling it a probability. Okay, you don't have to do that.
You could just say, you could just say, how strongly do you believe something? Um, and, um, then as soon as you start calling it a probability, now we are in philosophically dangerous territory, because assigning probabilities to beliefs, and then equating probabilities that are just subjective gut hunches with, like, counting fricking asteroids,
that's where all the, the difficulties come from. So I am totally in favor of quantizing, discretizing, um, strengths of belief, and I think it's about as useful as, um, a 10 point scale. But that's why doctors don't use, like, 20 point scales very often, and only when I'm answering surveys from, like, the LessWrong people, or the frickin' Bostrom people, do they give me a sliding scale, uh, 1 to 100. It's the only time I've ever been given a survey with a sliding scale, is when I know that they want to take that number, because I'm an AI researcher, and turn it into the probability of blah, blah, blah.
But, uh, most people don't think that, um, granularity beyond 10 is very useful. That's why doctors don't use it.
Yes,
Liron Shapira: surprising to me that people get really worked up about this idea that, like, yeah, we're just trying to approximate an ideal. Maybe if there was a superintelligent AI, it might be able to give really precise estimates. As humans, we often say something like, hey, an asteroid impact, we've got a pretty confident reason to think that it's like less than one in a million in the next century, because it happens every few hundred million years, statistically, and we don't have a particular view of an asteroid that's heading toward us. So, roughly, that's going to be the ballpark. And then, I can't confidently tell you the probability of nuclear war in the next century, right? Maybe it's 1%, maybe it's 5%, maybe it's 90%. But I feel confident telling you that nuclear war in the next century is going to be more than 10 times as likely as an asteroid impact in the next century.
Am I crazy to claim that?
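For readers who want the arithmetic spelled out, here is a minimal sketch of the base-rate reasoning Liron is gesturing at. The 300-million-year recurrence interval is an illustrative assumption standing in for "every few hundred million years," not a sourced figure.

```python
# Rough sketch of the asteroid base-rate arithmetic above.
recurrence_years = 300_000_000      # assumed gap between extinction-level impacts (illustrative)
century = 100

# If such impacts arrive roughly once per recurrence interval, the chance of
# one landing in any given century is about century / recurrence_years.
p_asteroid_per_century = century / recurrence_years
print(p_asteroid_per_century)       # ~3.3e-07, i.e. well under one in a million

# Even the low end of the nuclear-war range Liron floats (1%) clears the
# "more than 10x as likely" bar by a wide margin.
print(0.01 > 10 * p_asteroid_per_century)   # True
```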
Ben Chugg: let's just descend into the level of, back to the weeds of philosophy for one second. What do you mean by approximating ideal? What's the ideal here? Like, is the world,
Vaden Masrani: thank you. Yeah, and
Ben Chugg: Well, no, no, but, but, no, no, not even normative, not even a normative idea. When you say, like, you know, "am I crazy," okay, correct me if I'm wrong.
You're saying there is a right probability, and I'm trying to approximate that with my degrees of belief. So there is an X percent chance, for some X, that there's a nuclear strike on the US in the next hundred years. Do you think that?
Liron Shapira: Yeah, I mean, Solomonoff induction is going to give you the ideal Bayesian probabilities to make decisions
Ben Chugg: Okay, okay, okay, but that's different. Okay, so that's, that's a claim about rationality. I'm asking you, is there a probability attached to the world? Is the world like, is the world stochastic in your, for you?
Liron Shapira: No, probability is in the mind of the model maker, right? So, um, the universe, you might as well treat the universe
as being deterministic because you don't, there's actually no ontological difference when you build a mental model. There's no reason to take your uncertainty and act like the uncertainty is a property of the universe.
You can always just internalize the
Ben Chugg: Okay, good.
Liron Shapira: Or,
Vaden Masrani: one of the good Bayesian critiques of frequentism that I like. So I, we, I totally agree with you that, that the world is deterministic, non-stochastic, and randomness doesn't actually occur in nature. I, I agree. but
Liron Shapira: we, or we might, we, we, there's, there's just no epistemic value to treating the universe as ontologically, fundamentally non-deterministic. And the strongest example I've seen of that is in quantum theory, like the idea that quantum collapse is ontologically fundamental to
the universe, and like the probabilities are ontologically fundamental, instead of just saying, hey, I'm uncertain what my quantum coin is going to show, you know. To me, that seems like the way to go, and by the way, I bounced this off Eliezer, because it's not officially part of the Eliezer canon, but Eliezer says he thinks what I just said is probably
Ben Chugg: Yeah, nice. Um, I think, yeah. So for the purposes of this, I think we're all comfortable agreeing the world's deterministic. So, yeah, so now the question is, when you say, ideal, now you're, you're appealing to a certain, uh, normative claim about how rational agents ought to behave, right? And so now we need to descend into like, by whose lights is it rational to put probabilities on every single proposition?
Um, but I just wanted to, I just wanted to, because when, it sounds like you're, you know, It sounded, when you were talking, like you were saying, you know, there is an X percent probability that some event happens. We're trying to figure out what that X is. That's not true, right? So, you know, the world is
Vaden Masrani: the, um, the, yeah,
the ideal,
Liron Shapira: hmm.
Vaden Masrani: it's the ideal Bayesian reasoner, right? It
is what the ideal means.
Liron Shapira: Let me give you more context about the Bayesian worldview, or specifically the Solomonoff induction worldview. So the game we're playing here is, we're trying to get calibrated probabilities on the next thing that we're going to predict. And this ideal of Solomonoff induction is, I take in all the evidence that there is to take in, And I give you a probability distribution over what's going to happen next and nobody can predict better than me in terms of like, you know, scoring functions, like the kind that they use on prediction markets, right?
Like I'm going to, provably, get the highest score on predicting the future. And that's the name of the game. And remember the stakes: the one reason we're having this conversation is because we're trying to know how scared we should be about AI being about to make us extinct. And a lot of us Bayesians are noticing that the probability
seems high. So the same way we would, if there was a prediction market that we thought would have a reliable counterparty, we would place like a pretty high bet that the world is
going to
Ben Chugg: good. We're getting into the meat of it. Um, I just have a, uh, a historical question. Is Solomonoff induction tied to the objective Bayesian school or the subjective Bayesian school? Or do you not know?
Liron Shapira: I, I don't really know, right? So, so this is where maybe I pull a David Deutsch and I'm like, look, I don't necessarily have
to represent the Bayesians, right? I think that I'm, uh, faithfully representing Yud, Eliezer Yudkowsky. I think you can consider me a stochastic parrot for his position, because I'm not seeing any daylight there.
But I, I don't, I can't trace it, uh, back to, you know, what Eliezer wrote about Solomonoff induction. He indicated that, uh, part of it was original. So this could just be
Eliezer only at this
Ben Chugg: Yeah, that wasn't supposed to be
Vaden Masrani: Yeah. Solomon, Solomonoff induction is, it's, it's, um, it is induction, like philosophical induction, the stuff that we've been railing against, um, except with a Bayes, uh, theorem interpretation on top of it. So all of the critiques that we've made
about
Ben Chugg: no, I know, but I was just curious because, um, you know, there are two schools of Bayesianism, the objective Bayesians and the subjective Bayesians. Jaynes comes from the objective school, um, and Solomonoff induction,
Vaden Masrani: Oh, he comes from the
Ben Chugg: that's what the ideal rational agent is about. Like, he thinks there is a correct prior, there are correct probabilities to have in each moment.
And it sounds like Sol No,
sorry. Within Bayesianism, which is still a subjective interpretation of probability, there, there's an objective, there's, there's, or call it logical probability versus
subjective Bayesianism. These are different
things, right? So, subjective Bayesians, I think, wouldn't sign off on the Solomonoff induction.
This is a total tangent. You can cut this out if you want. But they, I don't think they'd sign off on Solomonoff induction because they're, they're, like, for them, probability is completely individual. And there's no way to litigate that I have a better probability than you because it's totally subjective.
Then there's a large, the Logical or objective Bayesians want to say, no, there is a way to litigate who has a better, uh, uh, a better, uh, credence in this proposition, but they're both still Bayesian in the sense that they're putting, uh, probability distributions over propositions and stuff, right? Like there's still, yeah.
Um, anyway, sorry.
Vaden Masrani: think you should keep that in. That was helpful for me. Yeah, yeah. You should keep that in. Yeah, yeah.
Liron Shapira: You know, Ray Solomonoff came a couple centuries after Laplace, I think. So there was a long time when people were like, hey, Bayesian updating is really useful, but where do the priors come from? I'm not really sure. But if you have priors, this is a great way to update them. And then Solomonoff came along and is like, hey, look, I can just idealize even the priors, right?
I can get you from 0 to 60, from having no beliefs to having, provably, the best beliefs.
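To keep the two pieces Liron separates here distinct, this is a minimal sketch of the updating step on its own, with the priors simply assumed rather than derived from anything Solomonoff-like. The hypotheses and all the numbers are invented purely for illustration.

```python
# One Bayesian update: posterior is proportional to prior times likelihood.
# The priors and likelihoods below are illustrative assumptions only.

prior = {"fair_coin": 0.5, "biased_coin": 0.5}       # where these come from is the open question
likelihood = {"fair_coin": 0.5, "biased_coin": 0.7}  # P(observed heads | hypothesis)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # {'fair_coin': ~0.417, 'biased_coin': ~0.583}
```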
Ben Chugg: Okay. Yeah. So, probably the objective school.
Vaden Masrani: Yeah, but, Yeah, can I say for the listeners, all this, like, ideal, provably, blah, blah, blah, it all rides on Cox's theorem. And so just, you know, Google my name and just type in the, the credence assumption, and then you can see the three assumptions that underlie Cox's theorem. The first one, the second one, and the third one are all something that you have to choose to assume.
And this is what Yudkowsky never talks about, when he talks about laws and you have to be rational, blah, blah, blah. All of that holds only if you voluntarily decide to assume the credence assumption. I don't, because that assumption leads to a whole bouquet of paradoxes and confusion and nonsense about superintelligence and yada, yada, yada.
Um, but just for the listeners, when you hear that there's Bayes' law and the law of rationality, all of that applies only if you voluntarily want to assume the credence assumption. And if you don't, like myself, then none of this stuff applies to you. So just take that
Liron Shapira: maybe, maybe we'll get into that, um, but I, I got
Vaden Masrani: That was more for the listeners than for, than for you.
Liron Shapira: okay, okay, okay.
Vaden Masrani: sure. Yeah.
Liron Shapira: Um, where I'd like to try next is, so, you guys, uh, just put in a good effort, which I appreciate, uh, zooming into some potential nitpicks or flaws of Bayesianism. So let me turn the tables, let me zoom into something in Popperianism that I
Vaden Masrani: Yeah,
please.
Liron Shapira: I might be able to collapse a little bit.
Uh, let's see, so, okay, so we talked about, uh, how, okay, you, you're not entirely, uh, You're not really liking the idea of let's formalize the definition of what an explanation is. It's just like, look, we do it as humans. We kind of, we do our best, right? It's a little bit informal. One thing Popperians say about explanations is that better explanations are hard to vary, right?
Certainly, Deutsch says that. Do you want to like elaborate a little bit on that claim? Yeah, yep.
Vaden Masrani: from, um, Deutsch. That's one of the things that he, um, kind of built on, uh, Popper's stuff with. And all he means there is, um, just consider two theories for why the sun rises in the morning. Theory one is that there's a god who, if they're happy that day, will make the sun rise. And another theory is heliocentrism, where you have the sun at the center of the solar system, the earth orbits around it, the earth is on a bit of a tilt, and the earth itself is rotating, and it's the rotation of a spherical earth which causes the, the sunlight to rise the next morning.
So the first explanation, the god one, is completely easy to vary and arbitrary, because you could say, why is it when the god is happy, why is it one god, why is it six gods, and just whatever you want to justify in the moment can be justified under that theory. So too, actually, with superintelligence, but we'll come to that later.
Um, with the, uh, with the heliocentrism theory, that one is very difficult to vary, because if you change any detail in it, so why spherical, let's switch it to, um, cubic, well, now all of a sudden the predictions are completely different, because the sun is going to rise in a different fashion. Um, and so that's what Deutsch is getting at with the hard to vary stuff.
Um, some critiques of this, though, are that it's not like, I give you a theory, um, it's not like you can just naturally categorize theories into those which are hard to vary and those which are easy to vary. Um, and so I'm assuming you're about to say something like, um, well, this is a difference in degree, not in kind, um, because everything is, um, kind of easier or harder to vary, and you can't, um, uh, naturally bucket them into one camp or the other.
To which I'd say, I agree. That is true. You can't. Um, the hard to vary criterion, uh, I think is rather useless as a critique of other people's theories. You could try to tell astrologers, and homeopathy people, and all these people that their theories are not hard to vary, and thus wrong.
They're not going to listen to you. It's not a very good critique for other people. It's a great internal critique, though. And so if you take this on yourself, and you, um, subject your own thought process to, is my explanation easy to vary here? Like, is the explanation that the superintelligence can just create a new reality whenever it wants,
is that easy to vary? Is that hard to vary? Then you can start to, um, uh, weed out different kinds of, of theories in your own thinking. So, um, it just adds to, to what Deutsch said, which is that it's, um, it is a difference in degree, not in kind. And it's a kind of useless critique of other people, but it's a great internal critique.
Um, I don't know, Ben, if you'd want to add anything to, to
Ben Chugg: Maybe the only thing I'd add is that while this might sound, uh, perhaps, like, philosophically, uh, in the weeds a bit, this is precisely the kind of thing that people do on a day to day basis, right? If you drop your kid off at kindergarten, uh, you go to pick them up. There's many theories that, you know, they could have been replaced by aliens while they were there.
Now they're a different person or they've completely changed their personality over the course of the day. Like many possible predictions you could make about the future. What are you doing? You're saying those are totally unlikely because like, if that was to happen, you know, you have no good explanation as to like why that would have happened that day.
So this also just comports well with, like, how we think about, you know, reality day to day. Like, why do I not think my tea is going to all of a sudden start levitating? Like, yeah, precisely for this sort of reason. Even if people don't really think of it like that, I think that's sort of what's going
on.
Vaden Masrani: And maybe a little plug for our conversation with Tamler Sommers, because we go into this in much greater detail, um, and so just for people who want a more fleshed out version of what we just said, check out that episode, yeah.
Liron Shapira: so personally, I do see some appeal in the particular example you chose. Like, I think there, it's, you know, I get why people are using it as a justification for their epistemology. Because, like, if somebody is, like, reading
Vaden Masrani: It's not a justification for the epistemology, just to be clear. It's, it's more of a consequence of the epistemology. It's a, it's a heuristic and a criterion, not a justification for it, but yes.
Liron Shapira: Do you think it's a corollary, or do you think it's one of the pretty foundational rules of thumb on how to apply the epistemology?
Vaden Masrani: No, it's not foundational. No, it's um, it's a corollary, yeah.
Liron Shapira: Interesting, because I feel like without it, you might not, it might be hard to derive it from the rest of Popperianism.
Vaden Masrani: Nothing is derivable in Popperianism, um, and it's not a foundational. No.
Liron Shapira: But you're saying nothing is derivable, but you're also saying it's not foundational and
Vaden Masrani: Oh, sorry, sorry, uh, if by, sorry, good, good claim. Uh, if by derivable you mean, like, formally, logically derivable, then no, nothing is derivable, it's, it's conjectural, conjecture. If by derivable you just mean, like, in the colloquial sense, like, um, oh yeah, I derived the, the, yeah. So just to be clear there, just because the formal versus natural-language distinction, yeah, uh, seems to be important in this conversation.
Liron Shapira: haven't gone that deep on Popperianism and so I am actually curious, like, so this, this rule that Deutsch brings out a lot, or heuristic or whatever it is, right? That, that good explanations are hard to vary. Did Deutsch infer that from something else that Popper says?
And if so, what's the inference? Okay.
Ben Chugg: Yeah, Vaden, correct me if I'm wrong here. Can't, uh, doesn't this come somewhat from Popper's notion of content? Like empirical content of theories, right? If you want theories with high empirical content, that's
Vaden Masrani: Uh,
yeah, yeah, yeah,
yeah.
Ben Chugg: want things that are hard to vary.
Mm,
Vaden Masrani: just because the, um, there is a, uh, important distinction between the way that Ben and I, and you, think about stuff, which is, uh, formal systems compared to natural language. So words like derivable, infer, et cetera, I just feel like we need to, to plant a flag on those, because of translational difficulties there.
So just because of that, um, yes, it is absolutely, colloquially, derivable from his, uh, theory of content, absolutely. But, um, Deutsch just kind of renewed it, um, so it's consistent with Popper for sure, but it's just like a, it's a rebranding, it's a, what is the, a concept handle? It's like
a concept handle, um, yeah,
Liron Shapira: Ben, do you want to elaborate on that? I'm curious to learn a little bit more. Because, I mean, look, I find some merit or some appeal to this concept. So, can you tell me more about the connection to the content,
whatever Popper's
Ben Chugg: Yeah, yeah, I'll let Vaden go, because he
loves this stuff,
Vaden Masrani: yeah. Do you, do you want to hear a full thing about content? I could spiel about that for like an hour, but Ben, maybe
Liron Shapira: can you just tell me the part that grounds the "explanations should be hard to vary" claim? I don't
Vaden Masrani: yeah, so this, yeah, I'd love to talk about content, but I need to explain what it is. Like, do you know what Popper's stuff
on content is?
Um,
Liron Shapira: hmm. Mm
Vaden Masrani: okay, so content is a really interesting, um, concept. So the content of a statement is the set of all logical consequences of that statement. Okay? Yeah, so, um, and I'm going to expand upon this a little bit because, um, it's actually going to lead somewhere and it's going to connect nicely to what we've been discussing.
So far. Um, so just to give an example, so the content of the statement, uh, today is Monday, would be, um, a set of all things that are logically, um, derivable from that. So today is not Tuesday, today is not Wednesday, today is not Thursday, et cetera. Um, the content of the statement, um, it is raining outside, would be it is not, um, sunny outside, there are clouds in the sky, that, that kind of thing.
Um, so that's what the content is. Uh, and then there's different kinds of content. So there's empirical content and there's metaphysical content. So, um, empirical content is a subset of all the content, and that is the things which are derivable that are empirically falsifiable. So if, for example, I say, um, uh, what's the content of the statement that all swans are white?
Um, well, one, uh, derivable conclusion from that would be: there is not a black swan in Times Square on Wednesday in 2024. Um, that would be an empirically derivable, um, uh, claim. Uh, the content of, um, a metaphysical statement would be something like, um, uh, the arc of progress bends towards justice, or what's that, um, quote from MLK.
Um, so, and then the content of that would be something like: the future will be more just than the past. Um, okay. If you let me elaborate a bit further, I promise this is going to connect to what we've been discussing. So now we can talk about, um, uh, how do you compare the content of different kinds of statements. So, with the exception of tautologies, essentially everything has infinite content.
Um, because you can derive an infinite number of statements from "today is Monday." You can just go, today is not Tuesday, et cetera. So it's infinite, but you can do class-subclass relations. So, um, the content of Einstein's theory is strictly greater than the content of Newton, because you can derive Newton from Einstein.
So Einstein is a higher content theory than Newton, precisely because anything that you can derive from Newton, you can derive from Einstein. Um, you can't compare the content of, say, Einstein and Darwin, for example, because they're just infinite sets that can't be, can't be compared. Um, so going a bit further now, and where this is going to connect really nicely to what we've been discussing so far.
So let's talk about the content of conjunctions. Um, so the content of a conjunction, um, so we have two statements, today is Monday and it is raining. The content of a conjunction is going to be strictly greater than or equal to the content of either, uh, statement, uh, on its own. Um, the content of a tautology
is zero, if you want to put a measure on it, if you want to put numbers on it, it's zero, because nothing can be derived from a tautology. The content of a contradiction is infinite, or one, because from the law of, um, what's it, the principle of explosion or whatever, from a contradiction anything can be derived.
So it's infinite, but because it's infinite, you can immediately derive an empirical, uh, um, falsifier that would show that the contradiction is, is false. So now we're going to connect. So let's talk about the probability of a conjunction. So the probability of a conjunction, today is Monday and today is not raining, strictly goes down.
The probability is less than or equal to the probability of either statement on its own. The probability of a tautology is one. The probability of a contradiction is zero. So if you want, in science and in thought, to have high content, you necessarily must have low probability. If you want, um, your theories to be bold and risky, then they necessarily have to have low probability.
So on this side of the aisle, we claim that the project of science is to have high content propositions, theories that are bold and are risky, and that's necessarily low probability. On your side of the aisle, you want high probability. So if you just want high probability, just fill your textbooks with tautologies.
Um, if you want low probability, fill them with contradictions. From our perspective, we want high content, um, so we want low probability, so we are completely inverted. And I would claim, and Ben I think would claim, and Popper, this is, I'm just ventriloquizing Popper entirely, that the goal of science is to have high content, risky, bold, empirical theories, such as Newton, Einstein, Darwin, and DNA, et cetera, et cetera, and that means low probability, which means that Bayesianism is wrong, please.
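As a reader's aid, here is the content/probability trade-off Vaden is describing, written out in standard notation. The symbols Ct (for content, the set of logical consequences) and P (for probability) are editorial shorthand, not terms the speakers use verbatim.

```latex
% Content grows under conjunction while probability shrinks.
\begin{align*}
\mathrm{Ct}(a \wedge b) &\supseteq \mathrm{Ct}(a) \cup \mathrm{Ct}(b)
  && \text{a conjunction entails everything either conjunct entails} \\
P(a \wedge b) &\le \min\{P(a),\, P(b)\}
  && \text{probability can only go down under conjunction} \\
P(\text{tautology}) &= 1
  && \text{and a tautology has essentially no content} \\
P(\text{contradiction}) &= 0
  && \text{and a contradiction entails everything}
\end{align*}
% Hence: more content (bolder, riskier theories) means lower probability.
```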
Liron Shapira: Yeah, thanks for that. Let me make sure I fully understand here, because in the example, you know, I think we talked about the sun going around the earth, or like, we see the sun rising and setting, and one person says, I think this is because the earth is spinning, right? So we see the sun coming up and down. And another person says, I think this is because I believe in the Greek gods, and this is clearly just Helios, right, as said in the Greek mythology. And you're saying, well, look, we prefer a higher content theory.
And so when you talk about Helios, because it's easy to vary, that makes me think it's using fewer logical conjunctions, which would make it lower content. Am I understanding you correctly? Mm
Vaden Masrani: great. Yes, I actually didn't connect those two. Um, and there's a nice relationship between, um, complexity, which is about, um, conjunctions of statements and simplicity. And what we look for in science is simple statements with high content because those are the ones which are the easiest to falsify.
Um, and so if we have certain statements from which, um, a lot can be derived, such as you can't travel faster than the speed of light, um, then it makes a lot of falsifiable predictions, and thus touches reality much more, um, and it's harder to vary, because if you change any part of it, then you're falsified, you're falsified, you're falsified.
So there is directly a relationship there, yeah,
Liron Shapira: Okay, but what if the ancient Greek pushes back and he's like, Oh, you need logical conjunctions, eh? Let me tell you about Helios. Okay, Helios rides his chariot around in the sun, and he wears these sandals made of gold, and he's friends with Zeus, right? So he gives you
like 50 conjunctions. He's like, I actually think that my theory is very high content.
Vaden Masrani: yeah, and so this is where there's a difference between content and, um, easy-to-vary-ness, right? So, all the conjunctions that he just made up, he could just make up a different set. And that's why it's so easy to
Ben Chugg: But,
Liron Shapira: But,
but what if, okay, just playing along, I
Vaden Masrani: Oh yeah, yeah, yeah, no, but
Liron Shapira: me just push the bumper car here, right, so what if he's like, but okay, but I'm specifically just telling you all the conjunctions from my text, right, and we haven't varied the text for so
long.
Ben Chugg: think there you'd want to talk about the consequences of his view. You want to look in, in conjunctions of the content, which are the, the, in this case, it's supposed to be an empirical theory, so the empirical consequences. So you ask him, okay, like, given all these details about your theory, that's fine.
But like, what do you expect to see in the world as a result of this theory? And there it's very low content, right? Because it's going to be able to explain anything that can happen. War, no war. Clouds, no clouds. Um, I don't know. I don't know. I don't actually know what chariots
Liron Shapira: what you're saying, and, you know, I'm playing devil's advocate, right, I'm not even necessarily expecting to, to beat
you in this argument, but I'm really just pushing, just to see if I can,
right,
Vaden Masrani: I mean it's
Liron Shapira: so imagine then,
Vaden Masrani: it's not win or losing, it's just trying to learn, learn from each
Liron Shapira: yeah,
Yeah, imagine that he says, um, okay, but I have this text. It's been around for a thousand years and it specifically says every day Helios comes up and then down, right?
And it can vary a little bit, but it's always going to be, like, up and down in an arc pattern in the sky. So I'm not varying it, right. And it has like 50 conjunctions. So like, why does this
not beat out the earth is spinning theory? No,
Vaden Masrani: to like induction and stuff, and I've seen it a thousand times in the past, is that where you're going with
Liron Shapira: no,
I'm not moving. I'm actually still, I'm actually still trying to prod at this idea of being
hard to vary. Right. Right. So.
Vaden Masrani: sorry. Sorry.
Liron Shapira: is a critique that the helios going around the
Ben Chugg: So it's, it's not, um, so, oh, I see. Okay. So I'd, I'd rather talk about that. That's why I think content is actually sort of like the more primal concept here, the more primitive concept rather, because there you can talk about, it's not that that has no predictive power or no content, right? That you're, as you said, it's going to predict that the sun rises and sets.
But then you start asking, like, what's beyond that prediction? Like, what else does this say about the world? Well, the theory of heliocentrism says a lot, right? It says things about seasons, it makes very, it posits a very rigid structure of the world, and we can go and test this structure. Like, you know, the tilt theory of the world comes to mind.
It's related to this, I guess, um, you know, and this comports with other theories we have of the world, which together make this web of things, and that's when the hard-to-vary-ness comes in: all of that together is very hard, hard to vary. So it's true that it makes some, uh, predictions and has some empirical content, right?
That's presumably why they thought it was, like, a useful predictive theory in the first place. But you ask, okay, what does heliocentrism have on and above that? And it's got much more, it posits way more content. And so we prefer it as a theory. Did that answer your question or
Liron Shapira: Okay, and just to make sure, let me try to summarize, I may or may not have understood
you correctly. You're saying like, look, the, the, the earth spinning model, it can also make a bunch of other predictions that we can even go test. And so just by virtue of doing that, it's, it's kind of like you're getting more bang for the buck.
It's kind of like a, it's a compact theory. It's getting all these other, it's constraining the world. But it almost sounds like hard to vary might not even be the main argument here, but it's more like, hey, look, there's a bunch of different types of evidence and it's compact. I feel like those are the attributes you like about it.
Vaden Masrani: Well, so the hard-to-vary-ness, again, is not like the core thing that if you refute this, you destroy all of Popperianism, right? It's a, it's a heuristic, a way to think about stuff. It's related to content. Content is a bit more of a fleshed out theory. All of this is related to falsification. So content is part of the way that you connect to falsification.
Um, and it's related to, like, Occam's razor and stuff with the compactness. So compactness connects you to simplicity. Um, but again, it's, it's not this, like, uh, ah, gotcha, man. It's, it's like, yeah, sometimes I think about hard-to-vary-ness and other times I think about empirical content. And, uh, what Ben just said was, was beautiful and perfect, which is the rigidness of the, um, the, the theory and how it's, like, locked and tightly fit on top of reality.
And then it gives you extra things that you can think about that you hadn't realized, that if this is true, this leads to this other thing. So for example, heliocentrism leads pretty quickly to the idea that, oh, shit, these little things in the sky that we see, they're lights, they're stars. Maybe they're just far away, and maybe there's other planets there too, and maybe, on these other planets, there's other people
contemplating how the world works. And so it's not like it's derivable from it, but it's, it just, your thought leads there, right? And that's part of the content of the theory. So
That's not yeah, so
Liron Shapira: So my perspective is, you know, if you were to come into my re education camp and I wanted to reprogram you into Bayesianism, what I'd probably do is, like, keep pushing on, like, okay, what do you mean by hard to vary? What do you mean by, like, following Occam's razor? I feel like if I just keep pushing on your kind of heuristic definitions of things, I'll make you go down the slippery slope and you're like, okay, fine, Solomonov induction perfectly formalizes what all our concepts really mean.
Vaden Masrani: but it's not making us go down the slippery slope. Like, Ben is a statistician. He understands Bayes. I grew up in a Bayesian machine learning lab. We understand this stuff. We've read it all. I've read a lot of Yudkowsky. Like, I know the argument, like, there's maybe a deep asymmetry
Liron Shapira: So
Vaden Masrani: which is that we know your side of the argument, but you don't totally know our side.
And so it's like the reeducation has already happened, because I started as a Bayesian and I started as a LessWrong person, as did Ben. And so we have been reeducated out of it. And so you're talking about re-re-educating, but you wouldn't be telling us new things. You wouldn't be telling us
Liron Shapira: I, I actually haven't heard my side represented well on your podcast,
so, so let's see, let's, let, let, let's
see how much you know my side by the end of this, okay?
Vaden Masrani: Sure, sure. Yeah.
Liron Shapira: All right, so here, I've got another related question here on, on the subject of, uh, harder to vary. And I think you mentioned this yourself, that, like, yes, technically, when somebody says, um, when, when somebody says that it's Helios's chariot, technically, or sorry, wrong way. When somebody says, hey, the Earth is spinning on its axis, and that seems kind of hard to vary, technically, it's not hard to vary, because you could still come up with infinitely many equivalent explanations. Uh, so, like, so what I mean is like, okay, the Earth spins on its axis. And there's like
angels pushing the earth
around, right? Like, so you can just keep adding random details or, or even make like equivalent variation, like build it out of other concepts, whatever. Like,
so there's this infinite class, but the problem is you're wasting bits, right? So it's just not compact, I think is the main
Vaden Masrani: no, no, no, no, no, no, no. Good, sir. It's not about bits. No. So the problem with that is, yeah, you can take any theory and then, I actually gave a talk about this at a high school once, and I called it the, the tiny little angels theory. But you can say, take everything you know about physics and just say it's because of tiny little angels, and the tiny little angels are doing, doing that.
Um, the problem there is not that you have to add extra bits. It's that as soon as you posit tiny little angels, you are now positing a completely different universe that we, um, that we would be having to live in, that would rewrite everything we know about. It's the same with, like, homeopathy and stuff. Like, if the more you dilute, the stronger something gets, that rewrites all of the periodic table
Liron Shapira: They're just there, but they're inert.
Vaden Masrani: uh, so then,
Ben Chugg: did the
Vaden Masrani: That is the hard to vary stuff.
Ben Chugg: how do, what is their
Vaden Masrani: and why, why not angels? Why not devils? And the very way that you are varying the explanation as we speak is what we're talking about, right? So it's easy for you to vary
Liron Shapira: turns on its axis, but there's just, like, one extra atom that's just sitting there. Can't I just posit that, right? Like, isn't that an easy variation?
Ben Chugg: But then take that
seriously as a theory right? Like, take that, so what's that, is that extra atom interacting with anything? Like, if not, then what use is it? If so, then it's gonna have effects. So why haven't we witnessed any of those effects? Like, where is it in our theories? Like, you know, like, um, yeah, like,
Vaden Masrani: also the heliocentrism theory is not a theory of there are this many atoms like it's not a theory of that level, right? Who's counting
atoms in heliocentrism?
Ben Chugg: theory, but, yeah.
Liron Shapira: So I think, I guess, let me, let me summarize my point here.
I think you guys do have a point when you talk about harder to vary, and I think it maps to what Bayesians would claim as like, and Occam's Razor would claim as like, let's try to keep the theory compact. Like it gets, it has a higher a priori probability if it's compact.
So if you just add a million angels that are inert, you're violating Occam's Razor, which I think maybe both worldviews can agree on. But if you're saying, no, no, no, we don't care about Occam's Razor, we care that it doesn't make extra predictions, Or like, that it makes, you know, maybe it'll make other predictions that are falsified.
I feel like now you're starting to diverge into a different argument, right? So I do feel like the hard to vary argument kind of seems equivalent to the Occam's razor argument.
Vaden Masrani: homeopathy probably could be represented with much fewer bits than the periodic table, but I still prefer the periodic table, even though it's more complex, right? So it's not just Occam's razor and low number bits. Like quantum field theory would take up a lot of bits. Um, there's many simpler kinds of theories you could use, but they don't explain anything.
They don't explain the experimental results that we
Liron Shapira: with homeopathy is, doesn't,
Vaden Masrani: simplicity itself is not valuable. The, okay, well, now we're back to content. But I'm saying that if, if the only criterion is small number of bits, being compact, and simplicity, there are so many theories which are complex, use a lot of bits, are not simple, but I still prefer them.
And that's my point.
Liron Shapira: If you're trying to set up an example though, you have to make sure that it's an example where two different models make the same prediction. So when you brought up homeopathy versus the periodic table, I wasn't clear on what was the scenario where they're both making the same prediction.
Vaden Masrani: Um, they are both predicting that if you have my drug, it will make your cold, um, go away faster. If you have my, buy my product at Whole Foods, and they will both address your cold.
Liron Shapira: But in this scenario, doesn't the homeopathy remedy not work? Okay, but I mean,
Vaden Masrani: but the homeopathy people think that they're predicting that it's medicine, right? And there's a reason why people don't use traditional medicine and go to homeopathy, because they're both making predictions that if they take this, they're going to feel better, right?
Um,
Liron Shapira: example you're trying to set up is one where we actually have the same phenomenon, right? Like in the other example, it was, hey, the Sun is going to come up and go in an arc in the sky, right? So it's the same
phenomenon and you have two theories and we're
Vaden Masrani: No, the type of example I was just trying to set up here is much simpler, which is just: simplicity and Occam's razor aren't sufficient. Um, they're just modes of criticism. They're useful heuristics sometimes, but they're not primary. And if all we care about is small numbers of bits and simplicity and compactness, then I can give you a bunch of theories that meet that criterion that I don't like very much.
Liron Shapira: This is actually interesting, by the way, I guess. I wasn't even really expecting you to say that you basically don't think Occam's razor is useful, or like, what's your position on Occam's razor?
Vaden Masrani: Um, I think Occam's razor is good. Sometimes it's, it's a, it's one way to criticize stuff, but it's not the only thing. Um, sometimes a theory is super complicated and has a bunch of superfluous assumptions that you need to shave off. That's when I'll pull out Occam's razor. Sometimes I'll pull out Hitchens's razor.
Hitchens's razor says that that which can be asserted without evidence can be dismissed without evidence. That's also a useful criticism and a useful heuristic. There's a whole toolkit of different razors that one can pull out, and neither of them, none of them, are at the base level. They're all just kinds of, you know, shaving equipment that shaves off shitty arguments.
Liron Shapira: What do you guys think of the Bayesian view of Occam's razor?
Vaden Masrani: I think it's as fallacious and mistaken as Bayesianism itself. Um, or, that's cheap. Ben, give me a less cheap answer.
Ben Chugg: yeah, so the, I mean, so the, you wanna say that, uh, theories that are simpler should have higher prior probabilities, right? That's the view of Bayesianism with respect to
Liron Shapira: Right, and that's what Solomonoff induction does. Uh, right, so it, it just, it, it basically orders all the different possible Turing machines that could ever describe anything, and it puts the, the ones that are shorter earlier in the ordering, which means that if you have one Turing machine that says there's a million angels doing nothing, right, that's going to be deprioritized compared to the Turing machine that's like, okay, there's not angels, right, it's, it's just simpler.
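Here is a toy sketch of the length-weighted prior Liron is gesturing at. Real Solomonoff induction is defined over all programs for a universal Turing machine and is uncomputable; the "program" strings below are placeholders invented for illustration, just to show how a 2^(-length) weighting penalizes the inert-angels variant.

```python
# Toy illustration of an Occam-style prior: shorter descriptions get
# exponentially more weight. The "programs" are made-up strings; real
# Solomonoff induction ranges over all Turing machines.

hypotheses = {
    "earth_spins": "spin(earth)",                                    # hypothetical short program
    "earth_spins_plus_inert_angels": "spin(earth);angels(1000000)",  # same predictions, longer
}

weights = {name: 2.0 ** -len(prog) for name, prog in hypotheses.items()}
total = sum(weights.values())
prior = {name: w / total for name, w in weights.items()}

print(prior)  # nearly all prior mass goes to the shorter description
```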
Vaden Masrani: So I think a lot, can I step just back, like, one second, and if this is, if in your view I'm dodging the question, feel free to re-ask it, but I just want to, like, frame a little bit, and then please criticize me if it seems like a dodge. But, um, Solba, Sol, Solomonoff,
Ben Chugg: Solomonoff.
Vaden Masrani: Solomonoff induction?
Solomonoff induction and Bayesianism and all this stuff tend to, um, fiddle with the probabilities enough to come up with a justification for things that Popperians already have. So there are reasons why we like simplicity. Um, one of the reasons is that simple theories have higher content and thus they're more powerful and
are easier to refute. So that is, in my view, why we like simplicity. However, you could come up with a Bayesian story about why we like simplicity. You could, you could talk about it in terms of induction. You could talk about it in terms of Solomonoff induction, and you could fiddle with the math and say, and this is how we get this conclusion from Bayes.
And you can do this all over the place. You can, this is like the Bayesian epistemology hypothesis testing stuff, where you can come up with a post hoc story about why Bayes' theorem is what led us to the discovery of the double helix. Um, but it's just that it doesn't give us anything new. We've already discovered that.
And then you could come up with a story after the fact. The double helix is another nice example of why I like content. Um, and I'll stop repeating myself there. But, but just, when you ask these kinds of questions, um, yeah, you can always tell a story from Bayes' perspective about why we value this stuff.
And we're giving you an alternate story. And I guess it's just ultimately going to be something that the listeners are going to have to decide. Um, but the starting point that Solomonoff induction is right is wrong, and we can argue about that. But if you grant that, or sorry, if you start with the assumption that it's right, and you don't want us to argue about that, then yeah, of course you'd come up with the story from Solomonoff's perspective about why we value simplicity.
Um, but it's just, we're talking completely at cross purposes here, because I reject Solomonoff induction because I reject induction. And so any kind of induction just is wrong. And we can, and you can listen to us argue about this for hours and hours and hours if you like. Um, but you're starting with the assumption that it's right, and that already means we're not talking properly to one another.
Liron Shapira: Yeah, and by the way, I do want to hit on the Hume's paradox of induction stuff. I know you guys have talked about it. I find it interesting. Uh, so I want to get there. Uh, but first, I think I've, I've got a little more, uh, red meat to throw at you on this
topic of, uh, you know, how, how do we actually apply, um, uh, Popperian reasoning to judge different hypotheses that
seem like they could both apply. I've got an example for you. Uh, okay. Let's say I take a coin out of my pocket, we're just at a party, right? It doesn't look like it's, uh, I'm not like an alien or whatever, it's just normal people, right? I take out a quarter, it just looks like a totally normal quarter, you don't suspect me of anything. Um, and I flip it ten times. And it just comes up with a pretty random looking sequence, say, Heads, heads, heads, tails, heads, tails, heads, heads, heads, tails. Okay, so it
doesn't look like a particularly interesting sequence. Um, and then you say, Okay, this just seems like the kind of thing I expect from an ordinary fair coin. And then I say, I've got a hypothesis for you. This is just a coin that always gets this exact sequence when you flip it ten times. You just always get heads, heads, heads, tails, heads, tails, heads, heads, heads, tails. So if I flip it again ten times, I'm just going to get that exact same sequence again. That is my hypothesis, and I'm like drunk, right? So I don't even seem like I'm like a credible person, but I'm throwing out the hypothesis anyway. You think it seems to be a fair coin unless I'm doing an elaborate trick on you. So my question for you is, what do Popperians think about contrasting these two hypotheses? Fair coin versus that exact sequence every time you flip it ten times. Which of the hypotheses seems more appealing to you in terms of being like, I don't know, harder to vary or just better?
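Since the disagreement that follows turns on how a Bayesian would actually score these two hypotheses, here is a minimal worked version of the calculation Liron is implicitly appealing to. The likelihoods follow directly from the hypotheses as stated; the prior odds are an invented illustration, not anyone's considered number.

```python
# The two hypotheses in Liron's coin puzzle, scored on the observed flips.
# Prior odds are an illustrative assumption only.

observed = "HHHTHTHHHT"
n = len(observed)                      # 10 flips

p_seq_given_fair = 0.5 ** n            # a fair coin gives any fixed 10-flip sequence 2^-10
p_seq_given_always_this = 1.0          # "always exactly this sequence" predicts it with certainty

likelihood_ratio = p_seq_given_always_this / p_seq_given_fair
print(likelihood_ratio)                # 1024.0: the data alone favor the trick-coin story

# A Bayesian multiplies that by prior odds for such a coin showing up at a party.
prior_odds_trick_vs_fair = 1e-6        # assumed, purely for illustration
posterior_odds = prior_odds_trick_vs_fair * likelihood_ratio
print(posterior_odds)                  # ~0.001: the fair-coin hypothesis still wins easily
```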
Vaden Masrani: I would just do it again and quickly find out, like, you just flip the coin again, do you get
Liron Shapira: Well, what if,
Vaden Masrani: No? Then, yeah.
Liron Shapira: what if
Vaden Masrani: Then, then you've, you've found out,
Liron Shapira: have to bet a
thousand
Vaden Masrani: here?
Liron Shapira: Whatever you predict, you have to bet a thousand bucks. So I'll let you do it again, but it's just, but we gotta gamble on it.
Vaden Masrani: I would say I don't want to, I just want to do it again. And so, like, if I'm at a party and someone tells me a magic coin, first I'd be like, well that's crazy, how is that even possible? And like, obviously, like, can you actually do that, man, are you doing a magic trick or did you just come up with some, like, do you, can you control the, your thumb enough to get the same sequence?
So it strikes me as prima facie completely implausible. And so yeah, I would take the, take the money. But. I, no I, no I wouldn't take the money, I wouldn't take the bet, because they're probably,
Ben Chugg: asking to bet for a
Vaden Masrani: some trick going on, so,
yeah, exactly, so I would say, no I'm not gonna
Liron Shapira: imagine this really is just, like, a random drunk guy who has, like, no incentive, doesn't want to bet you. Like, it really just does seem like
somebody's dicking around and you have no reason to suspect anything. And like,
there's no
Vaden Masrani: I wouldn't, I wouldn't
Liron Shapira: my question for yeah, forget, forget about the bet, okay?
So my question for you is just like, I mean, you brought up like, let's flip it again, and I'm saying like,
well, for whatever reason, right? Before you flip it again, just between you and me, right? You
just
Vaden Masrani: this, but
this
Liron Shapira: Liron, I'm about to do some Popperian reasoning.
Vaden Masrani: This is fundamentally a great example of the difference between Popperians and Bayesians. And Ben brought up a great example a long time ago about this. But the big difference is that a Popperian would just do it. A Bayesian would go off into their room and spend six hours writing a 20 page blog post about how they can formalize the probability space of their beliefs in this particular circumstance, and then they would post it to LessWrong, and then spend another 20 hours arguing about the probabilities.
That's
Liron Shapira: So Popperians are wearing Nike.
Ben Chugg: So,
Vaden Masrani: it. They would just say, oh, okay, that's interesting.
Yeah,
let's just do it. Let's try. Oh, it's wrong? Okay, good. Move on.
Ben Chugg: so maybe to answer, make that answer slightly more globally applicable, and remove a tiny bit of the
snark, um, I think a good way to, to,
coherently talk about the differences in worldview, which I think is actually very interesting is that Bayesians are extremely focused on having accurate beliefs, put aside exactly how we define accurate, but accurate beliefs given the information you have right now.
Popperians are very interested in generating new hypotheses and figuring out what we can do to grow our information. So, very interested in, like, if we have multiple competing good hypotheses for some phenomenon, how do we go about discriminating between those? And that's where the crucial test and stuff comes up, right?
So there is
Liron Shapira: Yeah, so, so I, I get that your mindset is to just do it, and I get that you want to find new information, but like,
Ben Chugg: no, no, sorry, I'll, sorry, I'll stop dodging the
Liron Shapira: do you make of the challenge?
Ben Chugg: answer it. But I, I just, I just, the reason Vaden is having trouble answering your question is because there is this extreme difference in emphasis between these two worldviews, so much so that I, I have recently started struggling to call Bayesianism, Bayesian epistemology, because it's not really about epistemology in the sense of growing knowledge.
It's about epistemology in the sense of, like, justifying current credences for certain hypotheses. So the emphases of these two worldviews are different, and they're still conflicting in important ways, as we've
Liron Shapira: I mean, Solomonoff induction does grow its knowledge and grow its predictive confidence, right? So I, I, I think you're going out on a limb to say that Bayesians shouldn't have a right to the term epistemology.
Ben Chugg: No, no, I,
Vaden Masrani: no, that's not what you said.
Ben Chugg: if you want to call it
Vaden Masrani: you just said that.
Ben Chugg: fine. I'm just saying, I think, honestly, this is me trying to give a boon to your side and say, like, I think we're often arguing at slightly cross purposes because the Bayesians are extremely focused on, uh, uncertainty quantification, right? They want to say like exactly what your credence should be given the current information.
Um, there, I think they're less focused on like, I mean, I haven't heard many Bayesians talk about generating these new hypotheses with infinitely many Turing machines and stuff, right? Like it.
Liron Shapira: I mean, I can tell you, I personally don't spend a lot of time trying to precisely quantify hypotheses. I just know that, when I'm just manually doing things, like, Hmm, this seems like a good move to try, when I'm just like, thinking things through roughly, I just know in my head, on a meta level, that I'm approximating the numerical ideal.
And that's it. I just live my life with an approximation.
Vaden Masrani: you're assuming in your head that that's true, but
sure, um, but let me not dodge the question. Can you ask the question
Ben Chugg: yeah, I'll just,
Liron Shapira: Yeah, yeah, sure. So there's this weird sequence of
Ben Chugg: yeah, let me just answer and say like, yeah, all else being equal, I think I would say, um, I would take the bet or whatever, like, I would say like, yeah, this is probably implausible because if, if, if I can see how they're flipping it, I would say it seems extremely implausible that there's a mechanism by which they can control the coin, you know, there's no string in the air or something and like, yeah, so the only plausible mechanism by which this sequence had to happen is, um, basically their finger, right?
Because you're seeing the same, same coin. And if the coin is, like, memoryless, which seems like a reasonable assumption, it wouldn't know that it has flipped, like, a head. It wouldn't know its own history. Right. So it has to be the
Liron Shapira: exactly. And by the way, just my intent. I'm not gonna trick you. I'm not gonna be like, Psyche, the
Ben Chugg: No, no, no, no. I
Liron Shapira: Like, that's not where I'm
going with
Ben Chugg: but no, like, yeah, so I would just, I'd probably say, yeah, it's very implausible that it's exactly the sequence.
And I would, um, you know, if I was in the mood to bet up against it and I had the spare income to do so, I'm a PhD student, so I don't have much spare income, you know, but if it's 30 cents, then maybe I'll take the bet. Um, yeah.
Liron Shapira: Right. So, so this is my question for you, right? This to me seems like a great toy example of, like, how do you guys actually operate Popperian reasoning on a toy problem, right? Because you're saying it's a fair coin, but it seems to me like the "it always comes up in this exact sequence of heads and tails" hypothesis, that is like a very rich hypothesis, right?
It has, uh, more detail and it's harder to vary. Because when you say fair coin, I'm like, wow, "fair coin," that's such an easy-to-vary hypothesis. You could have said it's a 60-40 coin. You could have said it's a 70-30 weighted coin. In fact, in my example, you got seven heads. So, like, why did you say fair coin instead of 70-30 weighted coin?
You're the one who's picking a hypothesis that's
so arbitrary, so easy to
Vaden Masrani: You just, you just put a thousand words, you just put a thousand words in our mouths that we did not say.
We didn't use
Ben Chugg: also, yeah,
let me just, I think I can resolve this quickly and say, like, you're right that saying you can flip exactly that sequence of coins, that is an extremely strict hypothesis that is very rich and it has lots of content, right? The content says every time I do this, I'm going to get this exact sequence of flips.
What is content good for? It's good for discriminating between different theories. Um, and so how would we do that? We'd try and flip the coin again. So Vaden, like, you know, he wasn't trying to dodge the question by saying flip the coin again. Content is inherently tied to, like, how we
Liron Shapira: Okay, but for the sake of
argument, you don't get to flip the coin again. You
have
to just give me your
Vaden Masrani: on, hold on, but, but Liron, like, you're, you're saying, okay, what, how would a Popperian deal with this circumstance, right? Like, that's, that's your question,
Liron Shapira: Okay, and I get that you really want to flip the coin again, but can't you just assume that you have to give me your best guess without
Ben Chugg: But I just gave you a bunch of
Vaden Masrani: but you're simultaneously, yeah, like, you're, you're saying, how would a Popperian deal with this? Um, I say how we would deal with it, and you say, okay, but assume you can't deal with it the way that you want to deal with it, then how would you deal with it?
Liron Shapira: Okay, so
you're basically saying, like, you, you, so if this, so you
Vaden Masrani: I would run
Liron Shapira: have nothing to tell me before flipping again, like nothing at all.
Vaden Masrani: uh, like, where are you trying to lead us to?
Like, we would run another experiment, or we would, Take the bet because it's so, uh, implausible that a coin could do this, that, like, either the guy has mastered his thumb mechanics in such a way that he can make it happen, or there's some magic coin that somehow knows how to do it. Flip itself in the exact sequence that is being requested and both of these things seem completely Implausible, so I would take the bet.
I wouldn't count the probabilities and then come up with some number in my head But yeah, that's I've given you 20 or a couple
Liron Shapira: Yeah, yeah, so one reason I wanted to bring up this example, what originally inspired me to make up the example, is to, uh, to show a toy example where "hard to vary" seems to flip, you know, to be counterintuitive, right? Because I do think I've successfully presented an example where the 50-50 hypothesis, the fair coin hypothesis, actually is easy to
Ben Chugg: Uh, you're thinking only in terms of
Vaden Masrani: the hard to variance
Ben Chugg: You're thinking only in terms of statistics, though, right? Like, in terms of, um, in terms of, like
Vaden Masrani: Explanations of,
Ben Chugg: like in terms of explanations of the underlying physics and stuff. Then it's like, is he magically doing this with his magic thumb? Right? Like that's easy to vary.
Vaden Masrani: that's the part that's easy to vary. The part that's hard to vary would be, uh, this is not possible and the guy is wrong and I'll take the bet because the easy to vary part would be magic thumb. Is he a super being? Is he, uh, telekinetic? Is he, is, are we living in a simulation? These are, like, I can come up with a thousand ideas all day and that's what's easy to vary and I just reject all of them because I'm just making them up as I go.
That's all rejection and that's how
Liron Shapira: So it sounds like the resolution to, I don't know if it's a paradox, but it sounds like the resolution has to kind of zoom out and appeal to, like, the broader context of, like, look, we live in a physical world. Like, coins are physically hard to make be this tricky, right, to always come up in this exact sequence.
As a Bayesian, I would call that my prior probability, but what would you call that?
Vaden Masrani: Being a common sense
Ben Chugg: Yeah, just knowledge about how the
Vaden Masrani: thinking about how the world works. Just like, it's implausible. And so I would, yeah. Like,
anyone who doesn't study any, like, philosophy would come up with the same answer, approximately. That, like,
that doesn't seem
Liron Shapira: I wanted to challenge you more, I think I would
probably have to put in some work where I, like, invent a whole toy universe that doesn't have as much common knowledge about, like, the laws of physics, so that it's not, like, super obvious that the coin is, is fair. Because I do think there's some essence that I would distill as a Bayesian, like, you know, what I would get out as a Bayesian, what I think is, like, a profound lesson that's worth learning. Like, you know, imagine a million, right? Imagine a million flips in a row, right? So then, even if the laws of physics made it really easy to make unfair coins, the fact that it just looks like there was no setup, and, you know, it could even just be your own coin, right? Your own coin that you just got from, like, a random Kmart or a 7-Eleven, right?
You, you flip it, um, and it's like your own coin, and it came up, you know, a hundred times, like, a totally random thing, and you're like, I have a great theory: it always does this exact sequence. Um, yeah, I mean, I, I do find it convincing that it's like, look, the laws of physics make that a priori unlikely, but I feel like a big advantage of the fair coin hypothesis is also that it, uh, you know, it's a priori much more likely than the hypothesis of, like, this exact sequence.
Like, that's kind of a ridiculous hypothesis. Like, where did I get that hypothesis without actually flipping the coin? You know, like, do you see there may be something there
Vaden Masrani: Well, you can always tell a Bayesian story after the fact. Yeah, it's, it's a priori less likely, we all agree on that. And then Ben and I would want to say it's less likely because it's a bad explanation, so we just reject it, and you'd want to say it's less likely and, okay, how much precisely less likely is it than the other? Let's come up with a count, let's say, okay, so there's eight heads and there's two tails and let's come up with a probability. We just say,
you don't
need that, it's, it's a ridiculous thing to
assume,
Ben Chugg: Let me turn it around, like, how would you, so what is your prior probability on the coin being fair?
Vaden Masrani: Yeah, great question. That's a great question. Yeah. What is your
Liron Shapira: Yeah, I mean, generally, if somebody does a party trick and I don't judge them as somebody who, like, wow, this guy could actually be doing some pretty fancy magic, right? If it just seems like a random drunk friend, then I'd probably be like, okay, there's probably like a 97 percent chance this is just a regular fair coin, and the other 3 percent is like, okay, this drunk guy actually got access to, like, a pretty good magic coin.
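[Editor's note: a minimal sketch of the Bayesian update Liron is gesturing at, using the 97/3 prior he just gave and assuming, purely for illustration, that a trick setup reproduces the exact 10-flip sequence nearly every time. Nothing below was computed in the conversation.]

```python
# A minimal Bayesian-update sketch for the coin example (illustrative numbers only).
# Assumption: a "trick" coin or thumb reproduces the claimed 10-flip sequence with
# probability ~1, while a fair coin matches any specific 10-flip sequence with (1/2)^10.

prior_fair = 0.97          # Liron's stated prior that it's a regular fair coin
prior_trick = 0.03         # prior that the drunk friend has a genuine trick

likelihood_fair = 0.5 ** 10   # chance a fair coin matches one exact 10-flip sequence
likelihood_trick = 1.0        # assumed: the trick reliably reproduces the sequence

evidence = prior_fair * likelihood_fair + prior_trick * likelihood_trick
posterior_trick = prior_trick * likelihood_trick / evidence

print(f"P(trick | saw the exact sequence once) = {posterior_trick:.3f}")  # about 0.97
```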
Ben Chugg: Okay. So what you gave is like a bunch of reasons and then a number, right? We're just giving you the reasons.
Liron Shapira: Yeah, and the number is a
Ben Chugg: I know, exactly. We're just giving you the reasons and no number.
Liron Shapira: Yeah, I
Vaden Masrani: It's such a ballpark that we just don't need the number. Like, what's the number for?
Liron Shapira: Yeah, I mean, I
get that, right? So, I mean, the standard Bayesian response is to be like, Well, look, if there was, like, a market, right? Like a betting market, or a prediction market, or even a stock market, right? And, actually, this gets to another section that I was going to hit you with, which is like, okay, so expected value, right?
What do you think of this idea of calculating expected value?
Ben Chugg: Oh boy.
Vaden Masrani: think it's like the Pythagorean theorem. It's useful in some circumstances and not useful in other circumstances, and it's a banal mathematical fact that statisticians use all the time. And for whatever reason, it's also this, like, crazy, philosophically metaphysical thing that the Oxford philosophers like to say.
Like William MacAskill and Toby Ord and Hilary Greaves and Eliezer Yudkowsky, and some day traders too. I know what you're about to say. Yeah, that's where the problems come in, and we could talk about this a lot. But to a first approximation, that's what I think.
Liron Shapira: the example of, hey, we're at this party. Somebody just did something with a coin who seems to be trying to gaslight me, like it's this coin that always comes up, you know, in this exact sequence of 10 flips. But then some trader overhears the conversation and walks by, and he's not a confederate.
He's just honestly somebody who likes trading and likes markets and he's like, Hey, let me make you guys a market on this, right? Like, what odds do you want to give for this bet? And that's where, okay, yes, I pulled a number out of my butt, but, like, this guy wants to make odds, right? So, like, you have to plug something in.
Ben Chugg: Yeah, if you're willing to bet,
Liron Shapira: Yeah, and you could be like, well, Popperians would just walk away. We wouldn't participate, right? But, like,
Vaden Masrani: No, we would just run the experiment again. We'd run the experiment again and find out. Oh, no, we would, no, okay, we would run the experiment again and then decide if it's so ambiguous as to require us doing the experiment like a hundred thousand times, and then collecting data on it, and then building a statistical model, and then using that statistical model to figure out what's actually happening. Because, yeah, there's many experiments that are really challenging to run, and you get differences every time you do it, and that's where data and statistics come in and are applicable.
Um, but that's different than just saying the expected value of this is going to be big. And where are you getting this stuff from? You're just making it up, and, uh, it's useless in most cases unless you have data. And yeah, we can talk about the cultural stuff of traders and, um, like Sam Bankman-Fried. Like, if you read Michael Lewis's book, they talk about his culture at Jane Street and how they put expected values on everything.
And that is a cultural thing, um, which people do, and we can talk about the culture there too, but that's just very different than how most people use it
Liron Shapira: an interesting example from that book is, I mean, if you look at, uh, Jane Street and, you know, the famous Medallion Fund. So there are funds that are placing bets, uh, you know, that they perceive to be
positive expected value in various markets.
Ben Chugg: I mean, using a huge, a lot of data and statistics and a boatload of assumptions about how the past, you know, the last five days of the market reflects something to do with the next day of the market, right?
Vaden Masrani: It's completely, so at the beginning of this conversation, I started at least by saying Bayesian statistics, all good. Bayesian epistemology, bad, boo. So the Bayesian statistics part is what you're just asking about, because you have like 50 years of financial data and you can run trials and you can do simulations with your data and you can see
what gives you a slightly better return. And that is just a completely different thing than what the longtermists and the Bayesians, like the Yudkowsky-style Bayesians, are doing. So just make sure that we, in this
Ben Chugg: Yeah, we're not anti
statistics.
Vaden Masrani: you have data all good Yeah, yeah,
or anti expected values because we if you have data All good.
Like, you could do it wrong, of course, but I'm assuming that, like, for the purpose of this conversation, we're doing it well. And, like, yeah, Jane Street and all these, like, uh, big hedge funds, like, their whole life is trying to get, like, slightly better odds using, like, supercomputers and blah blah blah, and so, yeah, yeah, so that's fine, yeah.
Liron Shapira: For the audience watching the podcast, you guys know I recently did an episode where I was reacting to Sayash Kapoor, and I think he made a lot of similar claims. I don't know if he subscribes to Karl Popper, but he had similar claims about, like, look, probabilities are great when you're doing statistics, but when you're just trying to reason about future events that don't have a good statistical data set, then just don't use probabilities.
Do you guys know Sayash Kapoor, and does that sound
Ben Chugg: No, but that sounds great. yeah,
Yeah, it
sounds like we should
Liron Shapira: Okay.
Vaden Masrani: know where he's coming from, but that's, uh, yeah, totally. Which, well, actually,
so can I, can I actually riff on that a little bit? Which is that Popperianism bottoms out into common sense. So it does not surprise me at all that someone who doesn't read Popper, know Popper, know Deutsch, is saying similar things. Because if you're just, so people say Popper's a cult too, so we're both in cults.
If you're not in the Popperian cult, or the, um, Yudkowsky cult, and you're just thinking about how the world works, you'll likely just come up with a bunch of Popperian stuff, because it just bottoms out into common sense, typically, yeah.
Liron Shapira: I would also like to claim that Bayesianism bottoms out into common sense.
Vaden Masrani: That's fair, yeah, we'll let the audience decide, yeah,
Liron Shapira: Okay, um, two can play at that game. Uh,
but yeah, so, what I was saying are
Vaden Masrani: yeah,
Liron Shapira: So a while ago I was saying, look, when we do Solomonoff induction, I claim that that's the theoretical ideal of what I do in practice, which is often just using my muscle memory. Similarly with expected value, I would make a similar claim that, like, realistically, right, in my life, I don't make that many quantitative bets, right?
I'm usually just, like, coasting, not really doing that much math. But I do think expected value is the theoretical ideal of what I do. For instance, I think we can probably all agree that, like, if you had to, let's say you had to bet $100 on something, and you either had to bet that, like, the sun will rise tomorrow or that Kamala will win the election.
It's like sun will rise tomorrow is gonna be like a much stronger bet, right?
So like, that would be like some primitive form of the expected value calculation.
Ben Chugg: It's a, I mean, we have a very good explanation as to why the sun will rise tomorrow. Um, yeah,
Liron Shapira: So it,
Vaden Masrani: and we don't really know too much about the, um,
Liron Shapira: So, okay, so,
let me, let me ask you about your MO, right? So, so let's say I'm just, I'm asking you to, you have to bet $10, uh, and you have to give me odds at which you'd be willing to bet that $10. With Kamala, let's say, I'm sure you guys don't have a good explanatory model of whether she's definitely going to win or not, right?
Because it's, like,
too hard to know. But if I said, look, it's your $10 to my $1,000, you know, just take either side, wouldn't you just take a side because it seems pretty appealing?
Ben Chugg: sure.
10.
Yeah.
Vaden Masrani: I mean, like, if you're, yeah, um, I mean, like, if you're guaranteeing that we'll, yeah,
Liron Shapira: I ask you this question, and you're not like, no, Liron, run the election, run the election. No, you're not stonewalling, right? You're saying, sure, I'll put down $10 to your $1,000. That seems pretty appealing, right? So didn't you just imply that you think the
expected value of betting on Kamala is more than, you know, more than $10 in that situation?
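[Editor's note: a small sketch of the arithmetic implicit in the $10-to-$1,000 offer; the break-even probability and the example belief below are illustrative, not numbers the speakers computed.]

```python
# Break-even probability implied by accepting a bet of your $10 against my $1,000.
# If you bet $10 on Kamala and she wins, you gain $1,000; if she loses, you lose $10.
stake, payout = 10.0, 1000.0
break_even_p = stake / (stake + payout)   # roughly 0.0099, i.e. about 1%

# Expected value of taking the bet at some believed probability p of winning:
def expected_value(p: float) -> float:
    return p * payout - (1 - p) * stake

print(f"Break-even probability = {break_even_p:.4f}")
print(f"EV if you think p = 0.5: ${expected_value(0.5):.2f}")  # $495.00
```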
Ben Chugg: Yeah. So you're sort of defining expected value after the fact. All I'm saying is, like, I'll take this bet because it seems like a good deal. I have $10 of disposable income and, you know,
Vaden Masrani: what I'm not doing, so what I'm not doing, to be very clear,
Liron Shapira: like a good deal, I think you're approximating the mathematical ideal of expected value.
Vaden Masrani: uh, because you can use that framework to describe whatever the hell you want to describe, right? So, the expected value, so, okay, like, for the listeners. So, what is expected value? Let's just talk about discrete stuff. It's a summation, yeah? A summation of the probability of a thing happening times the utility of that thing happening, and then you sum it all up.
Okay, great. So what are you gonna put in that sum? Well, you have to make that up. Whatever you want to put in, you have to make up the utilities, and then you have to make up the probabilities. So what you're doing is you're taking one made-up number, uh, multiplied by another made-up number, and then you're doing this a bunch of times, and then you're adding up all these made-up numbers, and then you're getting a new made-up number, and then you're making a decision based on that new made-up number.
If you want to do that, and make your decision based on that, go nuts. You just don't need to do that. So,
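[Editor's note: the discrete expected-value formula Vaden is describing, as a minimal sketch; every probability and utility below is invented, which is exactly his point about where the inputs come from.]

```python
# Discrete expected value: sum of P(outcome) * utility(outcome) over all outcomes.
# The numbers below are made up for illustration; the formula itself is trivial,
# the contested question is where the probabilities and utilities come from.
outcomes = {
    "policy boosts GDP":   {"prob": 0.5, "utility": +10.0},
    "policy does nothing": {"prob": 0.3, "utility":   0.0},
    "policy hurts GDP":    {"prob": 0.2, "utility": -15.0},
}

expected_value = sum(o["prob"] * o["utility"] for o in outcomes.values())
print(f"Expected value = {expected_value:+.1f}")  # = +2.0 with these made-up numbers
```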
Liron Shapira: when you're putting up your
Vaden Masrani: numbers and coming up with a new made up number, you could just start with the final made up number. And you could also just start with the realization that you don't actually need these numbers in the first place because the election is like a knife edge.
And if someone is, you know, like a thousand to one odds or something, then you can take money off of them because they are mistaken in their knowledge about what's going to happen because they're falsely confident about something. And so you don't need expected
Liron Shapira: Knife edge. The term knife edge is such a loaded term. You're implying that it's equally likely to go either way. How can you make that call? You don't know the future, Vaden.
Vaden Masrani: so because we have a data set here, which is the 330 million people who are going to vote, and the polls that are trying to approximate that. So polls are a sample of a population. This is statistics. This is what I'm going based off of. And this is why Bayesian statistics is fine, because we know how polls work and we know how counting works, and we have 330 million repeated trials.
Well, people are going to do this. This is where statistics makes sense.
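[Editor's note: a minimal sketch of the sampling statistics behind Vaden's point that polls are samples of a population; the poll size here is an assumed round number, and real polls add weighting and house effects on top of this.]

```python
import math

# Margin of error for a simple random-sample poll (95% confidence, normal approximation).
# Assumed numbers: 1,000 respondents, 50% support.
n = 1000
p_hat = 0.50
standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
margin_of_error = 1.96 * standard_error

print(f"95% margin of error = +/- {margin_of_error * 100:.1f} points")  # about 3.1 points
```

With a near-50/50 race inside that margin, the polls alone cannot separate the candidates, which is the sense in which the election looks like a "knife edge."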
Liron Shapira: But there's a dark art where all of these pollsters are building their models, right? Because
the election is so close that the way you build your model is going to
really define which candidate you
think wins.
Vaden Masrani: and that's a huge problem. Yeah.
And I could rephrase that. I could rephrase that problem by saying there is way too much subjectivity injected into these equations.
Liron Shapira: It sounds like we all really agree though, in our gut, that like, the election is pretty close to 50 50 of who's going to win. Like, am I, do you want to push back on that, that it's not 50 50 roughly who's going to win?
Vaden Masrani: Um, I have, I have a pet theory about this, but it, uh, it's, it's not, uh, worth taking
seriously. So I'll just go with the polls, but I think polls are inaccurate, but
Liron Shapira: And I want to bring up the prediction markets, right? So now there's like, PolyMarket, Kalshi, Manifold. So these markets have gotten a lot of action in recent weeks and months. And it's pretty sweet, right? Because you can watch these markets and like, A lot of people betting on these markets have a Bayesian interpretation of what that fluctuating number means, right?
I mean, how else, or like, do you think that it's crazy to give a Bayesian interpretation of those numbers as like, good odds that you could use if you're placing bets?
Vaden Masrani: Yeah. I pretty much just ignore prediction markets, but that's my personal choice. Yeah. Uh, Ben. Um, I mean, what have you learned from the prediction markets that you haven't learned from polls? Out of curiosity.
Liron Shapira: Oh, what have I personally? Um,
Vaden Masrani: Yeah. Just like, what value have you got from the prediction markets that you haven't got from polls?
Liron Shapira: A lot of times it's redundant. I just, I see prediction markets as
being a little bit finer grained than a poll, so when the polls are kind of ambiguous, sometimes I'll look at a prediction market and I'll see more signal. Um, I think with, maybe with Biden dropping out, that, I guess that's not, there's no direct poll about that.
I mean, there was probably no single representative poll that was the Biden dropping out poll, but like, I guess I just want to invoke, like, there's some times where I see something will happen where it's not fully captured by, like, the ontology of a poll, but, like, there's a specific prediction market for it, and it spikes, and then I really do think, like, oh man, that spike seems like I should update my expectation of what's going to happen.
Ben Chugg: there's a lot going on here. Uh, let me just touch on a few points. So I think your initial thrust with like the betting 10 to 1, 000 odds, like these are really good odds, you'd probably take that, and if I keep decreasing the 1, 000, you're gonna hit a point where you don't wanna take that bet anymore, right?
Um, sure, so if you wanna then do some arithmetic on this and come up with like the expected value that, where I think, uh, I'm willing to bet on Kamala, uh, versus Trump, and you wanna call that expected value, Sure, you can do that. Um, we, we just want to emphasize that this is an extremely different quantity than what statisticians are doing when you have a well defined data set and you're looking at a well defined outcome and you're counting things or making very rigorous statistical assumptions.
And calling them both expected value is unhelpful at best and actively harmful at worst, because these are not the same sort of quantity. Now, I'm not disputing that people, you know, think someone's gonna win versus another person, more or less, or have different risk tolerances. Some people like to bet, some people don't like to bet.
Uh, they have different utility functions. And so if you wanna, you know, if you wanna press them on that and make them bet at certain odds and call that expected value, fine. But this is a different thing than statistical expected value, uh, which is what I was calling it at the beginning. Okay, fine.
That's one point. The second point is, yeah, prediction markets are interesting as an aggregate, uh, an aggregate of knowledge about how people, sometimes few people, sometimes many people, with money, are willing to bet on an election. It's a summary of information, in that sense, right? Um, and again, so you can now talk about different people in this market's expected value.
Uh, the whole point is that their expected values are different. That's why you see differential outcomes in markets, right? Um, and there's not, there's no way to adjudicate that precisely because it's subjective. And this is, again, why this is different than statistical expected value when we have well defined statistical models.
Vaden Masrani: I ask you a question before you, before you respond, which is how do you deal with the fact that you have many different prediction markets and they all say different things?
Liron Shapira: I mean, usually arbitrage brings them pretty close together, no?
Vaden Masrani: Uh, but that doesn't because they are still saying different things, right? Like, um, I may be factually
Liron Shapira: where there's a persistent, yeah, I mean, like I know on
Trump versus Kamala, it's always plus or minus 3%.
Vaden Masrani: yeah, actually I'll, I'll, I'll back off on this and just, um, let the listeners or the viewers check because I haven't checked it recently, but the
Liron Shapira: you're correct, that arbitrage is possible. I mean, that's not even necessarily a statement about epistemology. That's just like weird. Why aren't
Ben Chugg: I
think maybe a,
Liron Shapira: I mean, I guess that could
potentially be a
statement about epistemology, right?
If you're saying
Vaden Masrani: want to fact check myself and
fact check myself and also fact, like, I would love for the, the, the commenters on here to just, whatever time they're looking at it, just pull up four prediction marks, take screenshots and put them underneath. And let's see if, how
Ben Chugg: or maybe a better example here is just the discrepancy between, like, Nate Silver's models, right, and, like, Polymarket. So Polymarket, I think, is showing a way bigger,
uh, edge for, uh, Trump at this
Liron Shapira: yeah. So, so Polymarket is allowed to have more models, right? Nate Silver constrains which models he's allowing
himself to use, right? Other people are like, oh, I also want to weight this potential model, right? So it's not that surprising that Nate Silver hasn't captured every possible model that you might want to weight into your prediction.
Vaden Masrani: but my, um, My question is, if you are willing to grant for the sake of argument that they are different, how do you decide which one to follow? That's my question. Because if they're all saying different things, and
Liron Shapira: it's the crux of our disagreement, right? I think that
if you were correct, so I'm happy to take a hit if you're actually correct, that prediction markets have persistent disagreements in the probability of something, that
absolutely would be, and given that they were liquid, right, assuming it's
easy to make
money by like buying one and shorting the other, um, that absolutely would be evidence for the meaninglessness of Bayesian probability, right?
And then conversely, the fact that I claim that this isn't factually true, that you actually have
very narrow spreads, Right? I think is evidence for the
meaningfulness of Bayesian probability. And I actually have further evidence along those
lines, which is, have you ever checked their calibration? Um,
Vaden Masrani: market, calibration is,
yeah, okay, yeah, let's go down
Liron Shapira: So Manifold, Manifold is the one I saw. So
they look through a bunch of past markets, right? And these are Bayesian markets. These are not like doing statistics. They're just predicting an uncertain future about random questions. And when the market says at a given randomly sampled time that there's, let's say a 70 percent chance that the market is going to resolve, yes, like whatever it's predicting, right?
Like will Russia invade Ukraine, right? Like all these random
questions, if the market is saying at any given time, 70%, they went back and they checked the calibration. And, uh, I can put a graph in the show notes, but it's ridiculously accurate. Like, uh,
for example, the data point for 70 percent is it's like 68%.
This
is across a random sample of, of Manifold data points.
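[Editor's note: a minimal sketch of how a calibration check like the one Liron describes works; the data below is simulated from a perfectly calibrated forecaster, not Manifold's actual data.]

```python
import random

# Calibration check: group predictions into probability bins and compare each bin's
# average stated probability with the fraction of those markets that resolved YES.
random.seed(0)
predictions = [random.random() for _ in range(10_000)]             # stated probabilities
outcomes = [1 if random.random() < p else 0 for p in predictions]  # simulated resolutions

bins = {}
for p, y in zip(predictions, outcomes):
    b = int(p * 10) / 10            # bins 0.0, 0.1, ..., 0.9
    bins.setdefault(b, []).append((p, y))

for b in sorted(bins):
    ps, ys = zip(*bins[b])
    print(f"bin {b:.1f}-{b + 0.1:.1f}: stated {sum(ps)/len(ps):.2f}, "
          f"resolved YES {sum(ys)/len(ys):.2f}")
```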
Vaden Masrani: there's a post on the EA forum that says that that's only true with a time horizon of less than five years and it might even be one year.
So the, um, the thing
Liron Shapira: so we can use Bayesian probability to predict all
events one year into the future. That seems like a pretty big win for
Vaden Masrani: No, hold on though, because the whole super intelligence thing is not one year into the future, and let's talk about, okay, we're gonna go down this path, let's do it.
Hold on, wait, no, no, let me say a few things, um, let me say a few things. Which is, if you want to talk about superforecasting and Philip Tetlock and stuff, um, you have to read the appendix of his book, where he says that any prediction beyond ten years is a fool's errand. You shouldn't even try it, and you'll embarrass yourself if you do.
So point number one. Point number two is that on the EA forum, someone who is very sympathetic to Bayesians did an analysis on the calibration of, I think it was Manifold, and when you look at these scores, you have to account for how far into the future they are. And so yeah, it's, it's interesting. It's totally possible to make predictions successfully within a year, but the thing that you're predicting matters a lot.
So if you're going to predict that the next year is going to be kind of similar to this year, that's like a default prediction. It's going to get you pretty high calibration, but that's also completely boring and uninteresting. It's not a huge concession at
Liron Shapira: you're basically saying, you know, these Bayesians who think that they can take something that doesn't have a mass of statistical data and slap a quantified probability on it, as given by a prediction market, yes, as long as the time horizon is less than one year, they can expect near-perfect calibration.
Vaden Masrani: Yeah, I can do that too. I predict that in a year, on Christmas, there will be a bunch of flights. It depends on the predictions you're making. If the predictions are simple, anyone can do it and get great calibration. If the predictions are really complicated,
Liron Shapira: look at the predictions, they're complicated, right? They're, they're
things like Russia will invade
Ben Chugg: There are things that these people are willing to bet on that they have differential knowledge about with respect to the rest of people, right? It's not the same people always betting
Liron Shapira: to paraphrase what you guys are telling me, you're basically saying if there's a market saying Russia will invade, let's say it's, you know, January 2022. So we've got like a one month time horizon or let's, right, let's say there's a market saying, Hey, Russia will invade Ukraine by end of the quarter, right?
Like, cause I think
that's when they
Vaden Masrani: there's, there's not a
Ben Chugg: Yeah, there's no hypotheticals. Like, there
Liron Shapira: right? So in that scenario,
Ben Chugg: there's, superforecasters gave this 15 percent before Russia invaded Ukraine, and they gave COVID 3 percent that there'd be over 100,000 cases in the U.S. by March.
Liron Shapira: Yeah, and remember, we're talking about calibration here, right? So I'm not saying the market gave it a 99 percent chance and then it happened. I'm saying if they gave it a 15 percent chance, then it would fall in a class of markets that were saying 15%. And what I'm saying is calibration data shows us that 15 percent of those markets do resolve yes.
Like, a market is generally well calibrated. So it sounds like you guys might be conceding that with a small timeframe, under one year, there is such a thing as a well-calibrated Bayesian
probability. Oof,
Vaden Masrani: completely worthless.
Liron Shapira: I mean, I just think that that concession is, you know, as a, uh, in the context of a debate, I do. I feel like that concession is almost conceding everything because
Vaden Masrani: it's not, I mean, there's a complete difference
between making predictions in, uh, the, yeah, yeah, what,
Liron Shapira: Because it didn't, by the way, this is unexpected, right? it's not,
like you came in being like, Okay, you Bayesians are so cocky because you have this amazing tool called the
prediction market where you can nail calibration for things in a year, but let me tell you how bad Bayesians choke after one year.
Like,
that's your
position?
Ben Chugg: I'm, just, wait, wait,
Vaden Masrani: are you talking about? Yeah. What are you talking about here? Uh, let's just zoom out a little sec. Like, um,
no, Ben, you go first, but I
Ben Chugg: I'm, I'm confused. Um, yeah, I'm just confused about the claim you're making. So, what prediction markets are not is consensus on probabilities. Right, so a prediction market, for instance, would converge to 50 percent if half the people thought there was a 25 percent probability of something, and half the people thought there was a 75 percent probability of something. What's not going on is, like, a bunch of Bayesian updating where, like, you have a consensus of people all updating their prior probability. So, like, I just,
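[Editor's note: a tiny illustration of the aggregation Ben describes, under the strong simplifying assumption that the price settles at the average belief of equally funded traders; real market dynamics are messier than this.]

```python
# Simplified view of Ben's example: half the traders believe 25%, half believe 75%.
# Under the assumption that the price settles at the average belief of equally-funded
# traders, the market converges to 50% even though nobody in it believes 50%.
beliefs = [0.25] * 50 + [0.75] * 50
market_price = sum(beliefs) / len(beliefs)
print(f"Market price = {market_price:.2f}")  # 0.50
```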
Vaden Masrani: You don't have to
be a Bayesian to bet that. Yeah. So you don't have to be a Bayesian to play a
Liron Shapira: and I'm not, and by the way, I'm not using
prediction markets as an example of somebody running Solomonoff induction. I'm using prediction markets as an example of having a number, uh, you know, Sayash Kapoor's whole thing. I know you guys don't know him, but it's related to your point of, like, you guys are basically saying, where do these probability numbers come from?
Wow, you can't do expected value unless you're doing statistics. It seems like you could very successfully do Bayesian probability and expected value calculations if you simply refer to the numbers being outputted
Ben Chugg: I don't know, but you're already selecting
Vaden Masrani: phenomenon for prediction markets. We know how prediction markets work. Sorry, Ben. No, it's just, it's different,
Ben Chugg: But I mean, so you're already selecting for people who are like choosing to bet on these markets Which are people who think they have better information than the average person They think they have an edge, hence they're willing to bet, right? Meaning they think they have like a good explanation of whatever you're betting on, okay?
Right? Do we agree there? Okay, so we're already in a very restricted class of people, um, who are, you know, taking bets because they think they have advantageous information about something. Uh, that's what betting is all about. You bet when you think other people are wrong about something; you have an explanation as to why they're wrong. Um, and so you, you put money down on it. And then what a market is is, like, aggregating all this information. Uh, and people think other people are wrong.
So they bet on the other side of that, uh, et cetera. Um, I'm a little confused how this relates to like a win for you. Each person, what Bayesianism says, and I think the claim you're making is you should walk around at all times, putting probabilities on every hypothesis that you conceive of, right, and constantly updating on new information.
Uh, I fail to see how this, like, you know, people are betting on very specific
Liron Shapira: So, so let me restate something you might, let me test your claim here, okay? So would you claim that humanity as a whole, as a team, using the technology of prediction markets, humanity can be a really great Bayesian because humanity can just list a bunch of hypotheses, run prediction markets on them, and then plug in those probabilities as Bayesian updates.
Ben Chugg: Uh, no, like, not for anything long-term, not anything meaningful. Like, sometimes the future resembles the
past for very important
Liron Shapira: And to be precise, it's not Bayesian updates, but it's like for betting, right? For expected value, for policy. And by the way, what I just described I think is
Robin Hanson's concept of futarchy, right? You make
prediction markets telling you the different probabilities of different outcomes and then you just maximize expected value.
You choose the policy that's going to maximize expected value according to the probability that you now know pretty well.
Vaden Masrani: Uh. You. This is an interesting thing. Okay. So your question is, if we could get all of humanity to make predictions about stuff, uh, then we still have to wait for the time to pass and then see if it was right. And a prediction will either be right or wrong. And if the prediction is greater than, like, a year or two, then all of the predictions are eventually just gonna be 50 50 because we have no frickin idea about what's happening.
And then we have to
Liron Shapira: I'm happy that you've conceded one year, so let's just talk
about
Vaden Masrani: not, it's not a con it's not a concession because we understand how this works. Like, so there are certain predictions that can absolutely be made within a year time horizon. It depends on what's being predicted. So I can predict all sorts of stuff
Liron Shapira: for one year.
time horizons?
Vaden Masrani: I wouldn't, no, of course not.
Futarchy sounds insane. Why would I predict, why would I support this?
Um,
because, uh, so I don't, futarchy, if what you just said is that you make a decision based on the whole planet's probability about what's gonna happen in a year, is that what we're doing?
Liron Shapira: Kind of, yeah, the
idea of futarchy, and we'll limit it to one year, is anytime there's a policy proposal that's going to yield returns within the next year, like let's say you want to make sure that GDP
grows 2 percent this year,
right, this calendar year,
so, and there's different policies, right, like would a tax cut
improve
GDP,
Vaden Masrani: All, all this is telling you, it's not telling you what is going to happen. It's telling you what people believe is going to happen and what people believe is going to happen can be completely wrong all the time. So,
Liron Shapira: because, you know, you, you basically asked me. So
futarchy would be like, you know, there's, there's two different, should we do a tax cut? Should we do a tax increase? Should we, uh, cut interest rates, right? So there's a few different policy proposals where everybody agrees there's a target growth rate for the economy, right?
Like, that's the policy outcome you want. And so futarchy would say, great, run prediction markets on all the different proposals saying, conditioned on this proposal, you know, you're allowed to do conditional prediction markets, what would then be the resulting change in GDP? And that way voters could see, ah, okay, this policy is the one that has the
best change in GDP, and then you implement that, and I think you were starting to push back, saying like, look, this isn't a prediction, but may I remind you, these prediction markets have shown themselves to be very well calibrated, meaning you could use them as predictions, and your expected value formula would be very, you know, it'd
yield good
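[Editor's note: a minimal sketch of the futarchy decision rule Liron is describing; the conditional-market prices below are invented for illustration, and a real implementation would also need counterfactual market design and a way to void the markets for non-adopted policies.]

```python
# Futarchy-style decision rule (sketch): for each policy, read the conditional
# prediction market "P(GDP target met | this policy is adopted)" and pick the argmax.
conditional_markets = {
    "tax cut":            0.70,   # made-up prices
    "tax increase":       0.55,
    "cut interest rates": 0.62,
}

chosen_policy = max(conditional_markets, key=conditional_markets.get)
print(f"Adopt: {chosen_policy} "
      f"(market-implied P(GDP target met) = {conditional_markets[chosen_policy]:.2f})")
```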
Ben Chugg: not everyone is
voting in these, I guess. Right. You're just restricting it to the people who like want to bet on these markets. Cause the whole point about prediction markets
Liron Shapira: Let's, let's say you're literally running it on, like, Polymarket, right, using the same rules as Polymarket, where it's literally just a financial incentive to participate if you think you know something. Um,
Ben Chugg: Okay. So you're going to delegate democratic
decision making to just, like, an expert class of people betting on Polymarket. I would definitely not support this.
Liron Shapira: I mean, let's say, let's say we keep the
same system, but we vote in politicians who are like, look, if there's like a weird emergency, I won't do futarchy, but like, I get that we were all smart people. We all get the value of futarchy. So I will be setting up these prediction markets and you guys can help advise on my policy that way.
Vaden Masrani: So you want to find an elite class of super smart people that will be included into the prediction markets and you want to get rid of all the dummies because
Liron Shapira: No, no, no, no, but that, but that's a solved problem. Like whatever current prediction markets are doing, the data is showing that they're yielding calibrated predictions. So you just, you, you just amplify what's working.
Vaden Masrani: So, you're talking about predictions. If you're seriously talking about this as a policy proposal, I would want to see the set of all predictions that were made, and I want to figure out, okay, are these, like, kind of trivially easy predictions, or are they like, holy shit, that is impressive.
So first of all, I'd want to look at the kinds of predictions that are being made. And then I want to see, like, which ones were right and which ones were wrong, and for what reason. Um, but just to zoom out for a sec, like, this is very analogous to a question of, like, direct democracy. And if I, if my car is broken, um, I could do one of two things.
I can talk to, like, a few, like, uh, well-knowledgeable mechanics, and ask them what they think is wrong, and they can tell me this, and I can get a couple different opinions. Or I could average the opinions of 330 million people in the population and just do whatever the average says. And you're saying that the second camp, just averaging the opinions of a bunch of people, is preferred to, like, domain knowledge about what's going to happen.
And I would, in every case, take domain knowledge, and sometimes that domain knowledge is going to be in the minds of, um, people in the Bay Area, in particular, who are extremely online and like to bet on all sorts of different things. And, depending on the question, that may or may not be a good source of, of information.
But, the, there's no, like, massive, ah, you just destroyed the whole Popperian approach because some predictions are possible within a year. It's like, we have to think about what's going on here. Um, and certain predictions are definitely possible within a year, yeah.
Liron Shapira: So say you're the president and you ran on a platform of like, I will pay attention to the prediction markets because I'm Bayesian and I understand the value of paying
attention to prediction
markets. And then you're, you're considering a tax cut, right, a generous tax cut across the board.
And the prediction market says, um, if this tax cut is implemented, GDP growth will increase, uh, more than 1 percent compared to what it would have been. Markets are saying 70 percent chance. Right? And now you're saying, just to repeat back what you just said now, you said, okay, yeah, sure, the president could listen to that prediction market, but he hired Larry Summers, right, or just, like, some famous economist, right, who's telling
him, Mr.
President, I give a 30 percent chance that
Vaden Masrani: No. He would give an explanation. He would give an explanation as
Liron Shapira: and it would come with an explanation, yeah. So, so you would say, because it comes with an explanation, and because this guy is trusted by the president, the president should just listen to him, and not the prediction market.
Vaden Masrani: they should listen to the explanation and maybe get a couple different ones and see what makes more sense and maybe get the people to debate a little bit
Ben Chugg: also, I mean, if, yeah, if there's an explanation as to, like, why the prediction market might be accurate in this case, like, say you have all these expert economists betting on this, on, on this market, right? So in some sense, the market is reflecting the view of, uh, some giant class of people who we have, for some reason, to expect know what they're talking about.
Then yeah, I would take that information on board. But I'm still, I'm confused about the Bayesian aspect here, right? So there are certain questions where we want to use statistics. We've said that all along, right? So statistics is valuable insofar as it helps us with prediction, right? Especially when there are huge, uh, okay, so markets, uh, prediction markets can reflect that in some sense.
Um, I'm, the Bayesian picture for me comes in, like, at the individual level. And at the individual level, I'm super skeptical of the ability to, for people to make, like, quote unquote super forecasts, right? So I think the literature there has been, like, very overblown, right? So there was this, there was a good review actually written by, um, I'm going to blank on their name.
Gavin Leech and Misha Yagudin, maybe? Right? So they, um, they were, uh, I think rationalists of some flavor, so very sympathetic to the superforecasting project. Um, and they took a look at Tetlock's literature, um, and found that these initial claims of, like, 30 percent more accurate than expert pundits were way overblown.
First of all, they were being measured by different metrics. And so once you correct for this, it's more like a 10 percent difference. Secondly, this 10 percent difference didn't even reach statistical significance. Uh, and so I, yeah, Okay,
Liron Shapira: that, right? I mean, I think this, this is absolutely a crux. I mean, so if I'm wrong about this kind of data, then I'm absolutely open to, um, to downgrading my assessment of the usefulness of Bayesianism. But the data that I would point to is, if you look at Manifold Markets, for instance, the one, the one that published the data about the extremely good calibration: there's no one user on Manifold who has this kind of consistent calibration, right? It's the market's
calibration,
Ben Chugg: no one
Liron Shapira: so yeah,
no,
Ben Chugg: is okay, so I think we're getting somewhere, right? Like, there's no one user with good calibration. Okay, so this is saying like, doing a bunch of, Okay,
Liron Shapira: If they were forced to vote on everything, right? Maybe there are some users that have good calibration on the bets they choose to make.
Vaden Masrani: I just add one thing?
Ben Chugg: Uh, yeah, the other point, just on the individual thing, was, like, the actual Brier scores that superforecasters are getting. So, like, you know, 0.25 is the Brier score you get for just saying 50 percent. So if you just bet 50 percent on everything, um, assuming there's an equal number of yeses and nos in the answer set, you're going to get 0.25.
The sort of Brier score superforecasters are getting is typically something around, like, 0.2. Okay, this corresponds to, like, 60 to 65 percent accuracy. So what we're saying is superforecasters, who I guess do this for a living, right? They bet on stuff. Um, when they're maximally incentivized to truth-seek, right?
Um, they can get, like, 60 to 65 percent accuracy on questions. Um, if you want to call that, as, like, a gotcha, that they're seeing clairvoyantly into the future, that's fine. I'll just acknowledge that. Um, but I don't view 60 to 65 percent accuracy as some huge win for putting probabilities on everything. I basically view it as, like, they're running into hard epistemological limits of how easy it is to see the future.
If you have very, if you have very good domain knowledge of an area, it doesn't surprise me that you can beat a coin flip, literally random guessing, when you have expert knowledge in an area and are, are incentivized in the right way to actually care about outcomes, as opposed to, like, political punditry, for instance.
Um, and so that's where all my
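[Editor's note: the Brier score Ben cites is just the mean squared error between stated probabilities and 0/1 outcomes; a minimal sketch with invented forecasts.]

```python
# Brier score: mean squared error between forecast probabilities and binary outcomes.
# Lower is better; always saying 0.5 scores 0.25, and around 0.2 is in the range of
# the superforecaster results Ben mentions (the forecasts below are invented examples).
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]
print(brier_score([0.5] * 5, outcomes))                   # 0.25: the "always say 50%" baseline
print(brier_score([0.8, 0.3, 0.7, 0.6, 0.2], outcomes))   # 0.084: lower, i.e. sharper and calibrated
```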
Liron Shapira: yeah. Let me tell you what I'm claiming here, though. Okay, why are we even talking about prediction markets, right? Let me tell you what I'm claiming here. And you bring it back to, like, look, individual humans, uh, an individual human is much weaker than a prediction market. That's what you say.
Fine. But let me tell you why I'm bringing this up. It's because I think a lot about AI and the powers that AI is going to have, if it's programmed correctly, right? If we keep on this progress of putting the right code into an AI, what's possible? Well, a single AI could take on all of humanity. Like, yes, there's a lot of different humans making a lot of different models, but you could also just copy the AI's code and run a bunch of instances of it and have them wire up to each other, and it literally is, in my mind, a question of one AI
versus all of humanity. And so for me, when I see the prediction market aggregated across all of humanity's experts, the way prediction markets know how to aggregate information, I see that as a lower bound for what one AI, if programmed correctly, could do in its own head and come up with its own Bayesian probabilities.
So when I imagine an AI functioning in the world, I imagine it putting probabilities on things and having those probabilities be well calibrated, using the expected value formula, and then placing very well-calibrated bets.
Vaden Masrani: I ask a question? Can I ask a quick question? Um, what do prediction markets say about the likelihood of superintelligence?
Liron Shapira: So, currently they're saying, I think, AGI is coming around 2032. I think that was Metaculus last I checked.
Vaden Masrani: Uh, no, the probability. So what's the probability of, like, the scenarios that you're describing? Um, the superforecasters on these markets, uh, what do they assign? What probability?
Liron Shapira: Uh, what's the question exactly?
Vaden Masrani: Um, for, uh, the doomsday apocalyptic scenario that Ord gives a 1 in 10 probability to, um, that you're really worried about. Uh, I'm not asking when superintelligence is going to arrive, because you can define superintelligence in a thousand different ways. I'm asking, for the doomsday nightmare scenario that keeps you up at night, what's the probability assigned to that?
Liron Shapira: So, I don't know which prediction market I would go check for that, because the problem is
Vaden Masrani: I thought you said they're all the same.
I thought you said they're all the same.
Liron Shapira: Or I don't know which prediction market even has enough volume on a question that corresponds to what you asked because the reason is Prediction markets are a powerful methodology, but they're they do have the issue of you know, counterparty risk and platform risk, right?
So if
you're saying, hey, what are the chances that everything is going to essentially go to zero, that human value is going to go to zero, right? How am I going to collect on that? If I think it's 90 percent likely, why would I bet on that? I'm just losing my money today for something I can't collect on.
Vaden Masrani: I see, so you'll follow their predictions up until the point that you have a reason to think that they're wrong and then you'll ignore them, is that right?
Liron Shapira: Well, this is a systematic failure, right? It's like saying, will prediction markets still work if somebody hacks their server? Well, wait a minute, there are some
Vaden Masrani: No, no, right, no, no, right now, there's certainly some prediction market that says some apocalyptic doomsday scenario, and I think Scott Alexander has blogged about this, and it's very, very low. Um, I can find the source, I think it's something like 3 to 5 percent. Ben, if you recall this, please let me know.
Liron Shapira: markets are a way to aggregate information by financially incentivizing their participants. There's no financial incentive to a Doom prediction.
Ben Chugg: Then why can we be confident in like your doom predictions or anything like that? Like, what, like, why should we, why should we, why should
Liron Shapira: My doom predictions come from, just, yeah, yeah, yeah. I'm, I'm just actually using Bayesian epistemology, right? So everything we've been talking about now, I haven't been saying we're doomed because prediction markets say we're doomed. I'm saying, no, I have a strong epistemology. It's called Bayesian epistemology.
It's called approximations to Solomonoff induction. You can see how strong this epistemology is when you go and look at the calibration of prediction markets like Manifold, who are not using statistics, to get great estimates. This helps you see that my epistemology is strong. Now, as a, as a reasoner with a strong epistemology, let me tell you how I got to a high P(Doom), right?
That would be the shape
of my
Ben Chugg: okay. One,
Vaden Masrani: I just wanna, I just
wanna, yeah, sorry, this is really quick. So, your answer about, it doesn't make sense to bet on these, um, particular questions because we'll all be dead if they turn out to be true. So, that would just mean that there aren't those questions on these markets, right? Like, people aren't betting on them, they just aren't on there.
Um,
Liron Shapira: putting on the
question,
Vaden Masrani: not true. I think that that's not true. True end. But all I'm curious about is there is some number that they're giving that you have some reason to ignore because yours is much higher and why do you, um, support prediction markets in every case except when it disagrees with you, at which point you don't support them anymore.
Liron Shapira: Okay, so why do I trust prediction markets besides just their track record, right? Because it sounds like you're modeling me as like, look, if you like prediction markets track record, why don't you just extrapolate that no matter what the prediction is, you should expect it to be calibrated. But yeah, I do take a structural fact that I know about prediction markets, I take that into account.
For instance, if I knew for a fact that Bill Gates was going to spend all his money to manipulate a prediction market, right? Like there are some facts that could tell me for some periods of
Vaden Masrani: Yeah, so you have insider knowledge. You have insider knowledge as to say that these prediction markets are wrong and so you're presumably like leveraging that and making some
Liron Shapira: my, my point is just prediction, it's, it's not, you can't be quite so naive to be like, okay, no matter what the prediction market says, you have to trust it. There are some boundaries and I didn't, this isn't an ad hoc limitation. The idea that the whole prediction market shuts down under certain, uh, under certain bets you make.
I mean, there's, it's called
platform risk. Like this is a known thing in trading. Like you're, you're basically just,
you know,
you're,
coming at me for, for just doing a standard thing about trading where you look
Vaden Masrani: No. No, no, no. You, you, you, you were saying you have insider knowledge here to, um, for, you, you have justifications and reasons to assume that the probabilities that are being assigned to these particular questions are wrong. Um, and you should make a lot of money while the apocalypse is coming.
I get that when the apocalypse comes,
we're all going to be
Liron Shapira: only pays out during the apocalypse, right? The presumption is you make money because when the apocalypse happens, you get paid out. That's like a contradictory model. It's almost like betting, like,
you know,
it's, it's like a, it's like a logical paradox to have a prediction
Ben Chugg: can't you just do end
of the
Vaden Masrani: I saw, this
Ben Chugg: end of the world bets here? Like, Yudkowsky-Hanson style. Where the person
Liron Shapira: But the, the problem. Yeah, Yeah,
Vaden Masrani: And, and I,
Liron Shapira: do make, which I, uh,
So there is a bet I can make, but I can't make it on Polymarket
or Manifold, but I can make it informally. I can make it with you guys if you want, where it's like, if you guys give me $1,000 today, so I can have it while the world still exists, I could do something like, in 20 years, which is when I think there's like a 50 percent chance that the world will have ended by then, I can pay you back 2x plus 5 percent interest or whatever, right?
So it's like you will have made a very attractive return over those 20 years if I get to use your money now, because I do place a significantly higher value on your money today.
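[Editor's note: a sketch of the arithmetic behind the informal doom bet Liron proposes, in the Yudkowsky-Hanson style mentioned above; the payback terms are as he stated them, and the interest convention is an assumption made here for illustration.]

```python
# Liron's informal doom bet: he takes $1,000 now; if the world still exists in 20 years,
# he pays back 2x plus 5% interest (interpreted here as simple interest on the doubled
# amount, which is an assumption). The lenders only collect in the "no doom" branch.
principal = 1000.0
years = 20
payback = principal * 2 * (1 + 0.05 * years)   # $4,000 under this interpretation

p_doom = 0.5   # Liron's stated probability that the world has ended by then
expected_repayment_to_lenders = (1 - p_doom) * payback
print(f"Payback if the world survives: ${payback:,.0f}")
print(f"Lenders' expected repayment at P(doom)=0.5: ${expected_repayment_to_lenders:,.0f}")
```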
Vaden Masrani: So, yeah, I just want to say for the listeners that I just might be mistaken about the internal mechanics of how prediction market works, works. Like, I thought you could get paid out before it resolves, but maybe that's just not
Liron Shapira: can get paid out, you can buy out if you find somebody else. Like, let's say the probability of doom keeps creeping up, so you could sell your contract to somebody else and you could cash out early. But the problem is, why would somebody come in with a higher bid than you, even if they thought doom was a higher probability? Right, because they're being stupid, because they should know that they're not going to get paid out unless they find another sucker. It just becomes a Ponzi scheme, essentially.
Vaden Masrani: I agree prediction markets can, no, that was a bit cheap,
Liron Shapira: Yeah, you heard it here first guys. Doomers just, uh, are pulling a Ponzi scheme on everybody.
Vaden Masrani: Yeah, um, I didn't say it, but, uh, I think we should pivot off of this, because I just don't understand the mechanics enough to adjudicate, and I'll take your word for it. But it seems like you have insider knowledge that you should leverage somehow.
If you're right, there should be some way to just make a bunch of bank.
Liron Shapira: You're conflating insider knowledge with platform risk, right? These are two distinct concepts.
Vaden Masrani: yeah, no, no, totally. Yeah, I'm totally acknowledging that I'm missing some of the details. I'd love for commenters underneath to, um, to clean this up for us. Yeah.
Liron Shapira: Okay. All right. Cool. So, um, I guess we're starting to come close to the end of our time. So let me just open it up to you guys: if you want to throw out a few topics to make sure we hit them before we end, I can write them down and we can plan the rest of the talk.
Vaden Masrani: Well, so we, um, we've done three hours on Bayesian epistemology, and I think this is a good place to pause, and then let's do another three hours on superintelligence. This has been a blast. Um, like, uh, we haven't even talked about superintelligence yet, and, uh, and like, this is kind of why, when we had initially talked about this, I'm like, let's just extend the time, because we are not going to make it past the first, uh, the first set of questions.
Liron Shapira: All right, guys. So we've been talking quite a lot, and I talked with Ben and Vaden offline, and we all agree that there's so much more interesting stuff to talk about that we're gonna do a part two. It's also gonna be pretty long. Check out some of these coming attractions. We're gonna be talking about Ben's blog post called You Need a Theory for That Theory,
and we're gonna be talking about Pascal's mugging, Hume's problem of induction, and utility maximization as an attractor state for AIs. Then we're going to have a whole David Deutsch section talking about his AI claims, like creating new knowledge, and a certain CAPTCHA that I invented based on David Deutsch's arguments. We're going to talk about what creativity is and how we'll know when AIs are truly creative. We'll talk about intelligence: what is intelligence, can we talk about general intelligence, what separates humans from all other life forms, is there much headroom above human intelligence? Is AGI possible eventually? What about in the next hundred years? How powerful is superintelligence relative to human intelligence? Can there be such a thing as thousands of IQ points? What's a fundamental capability that current AI doesn't have? And then also AI doom topics: agency, the orthogonality thesis, instrumental convergence, AI alignment, and maybe even unpacking some of Elon Musk's claims about AI. So all of those, it sounds like we might need ten episodes, but most of those I think we'll hit in part two. So how about we go through everybody and summarize: where do we stand, what did we learn about the other person's position, did we change our mind about anything? Starting with Ben.
Ben Chugg: Sure, yeah. So I'm slightly worried we verbosely circled the disagreement without precisely getting to the key differences between Popperianism and Bayesianism. But hopefully I'm just being a little negative and the differences did shine through. To be fair to you, I think the biggest challenge to Popperianism comes in the form of betting.
If people are doing significantly better than random, what the hell's going on there, right? And if probability is the only way to do that, then presumably that justifies some sort of probability, epistemologically speaking. I remain skeptical that that's true, because at the individual level I just haven't seen statistics showing that superforecasters, like I said, are doing much better than 60 to 65 percent, which I think can be explained by incentivizing truth and limiting yourself to questions where you have very good domain expertise.
But that would definitely be a good crux to label, and I think Vaden and I discussed this in some episode, this is sounding familiar as it comes out of my mouth: if we start to see superforecaster accuracy really just keep going up over time and start hitting 70, 75, 80, 85 percent, then that's going to start verging on falsifying my claims, right?
If people just become more and more omniscient, if they just become smarter and better able to...
Liron Shapira: Wait, wait, why do you need an individual? Can I just clarify here? So does, when you say they have to get more and more accuracy, do they specifically have to give like 99 percent probability or something? Because normally we look at calibration, right? Like they'll say 60 percent chance and it happens 60 percent of the time.
So are you talking about calibration?
Vaden Masrani: No, accuracy as well, like, so calibration is one metric, but accuracy is another completely valid metric to look at, right?
Liron Shapira: When you say accuracy, do you mean like confidence? Like high probability?
Vaden Masrani: so any machine learning person who's listening to this will know what I'm talking about. You can look at calibration, which is comparing the probabilities over a set of stuff, but you also just have a bunch of questions and whether or not they happened, right?
And then you can just count the numbers of successful predictions and,
Ben Chugg: Yeah. Like, if I see the Brier score is...
Liron Shapira: Outputting a bunch of probabilities, okay. So the Brier score does depend, the only way you can have a good Brier score is by often having high probabilities as your answer, right? You can't just punt and be like, oh, 51%...
Ben Chugg: But that's the epistemologically relevant thing, right? If you're really using probabilities to reason about the world and updating your probabilities in such a way as to really be able to predict the future, then yeah, you're going to be predicting the future with high confidence.
That's the claim, right? The whole point about my 0.25 comment was that you can get a low, quote unquote, Brier score quite easily by just predicting 50%. So that's not interesting, right? What you want to do is start pushing...
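For readers who want the two metrics being contrasted here side by side, here is a minimal sketch, using made-up coin-flip data, of how a Brier score and a calibration table are computed, and of why the always-answer-50% forecaster Ben mentions is perfectly calibrated yet stuck at the uninformative 0.25:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; always answering 0.5 scores exactly 0.25."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

def calibration_table(probs, outcomes, bins=10):
    """For each probability bin, compare the average stated probability
    to the observed frequency of the event, along with the bin's count."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi) if hi < 1.0 else (probs >= lo)
        if mask.any():
            rows.append((float(probs[mask].mean()), float(outcomes[mask].mean()), int(mask.sum())))
    return rows

# A forecaster who answers 50% on coin-flip-like questions is perfectly
# calibrated but earns the uninformative Brier score of 0.25.
rng = np.random.default_rng(0)
flips = rng.integers(0, 2, 10_000)
half = np.full(10_000, 0.5)
print(brier_score(half, flips))        # 0.25
print(calibration_table(half, flips))  # one bin: mean forecast ~0.5, observed frequency ~0.5
```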
Liron Shapira: But the universe is chaotic, right? If I give you a test, hey, here's your test, it's a hundred three-body problems, right, super chaotic stuff, then you're going to fail the test even if you're a...
Ben Chugg: yeah, in other words, the universe is fundamentally, there are epistemological limits about how much we can know
Liron Shapira: Fair. You're saying you're never going to be convinced. You're giving me a false test.
Ben Chugg: No, no. What I'm saying is that probability is not the best tool to reason about the future, precisely because the future is chaotic and unpredictable, right?
The best thing we can do is just argue about details, not put random probabilities on things that, by your own lights, it sounds like you just admitted, are inherently unknowable. So when Ord says there is a one-in-six probability of some crazy event happening in the next hundred years, yeah, I want to appeal, exactly like you said, to the chaotic nature of the universe to say this is a totally unjustifiable move to make, and it's doubly unjustifiable to start comparing it to the probability of asteroid collision.
Liron Shapira: Okay, but what if the current Manifold prediction for an asteroid impact in the next year, let's say, and for some reason it wasn't a world-ending event, so a small, non-world-ending asteroid impact happening in the next 11 months, right? What if the prediction market was saying 1 in 6?
You wouldn't think that 1 in 6 was a trustworthy probability,
Ben Chugg: We wouldn't need to look at a prediction market in that case. Like, we would have a theory that this asteroid is coming towards Earth. We'd talk to astronomers. This is not the place...
Liron Shapira: Yeah, but to the extent that that theory was good, wouldn't the prediction
Ben Chugg: Yeah, I'm sure it would, and precisely because we have a good theory, right? But this is the whole disagreement between us.
We're saying, yeah, sure, prediction markets are useful sometimes. They're not useful most of the time, especially far into the future, because there are things that are inherently unknowable. In those realms, probability is a totally meaningless mathematization, an attempt to quantify ignorance.
The Bayesian position, and maybe you're not trying to argue this, though I'd be surprised, is that you should always put numbers on your uncertainty, for near events and far events alike. We're just trying to say that predicting what's happening in the next 5, 10, 15 years is not the same thing as predicting, like, the election tomorrow.
Um, yeah.
Vaden Masrani: Can I
Liron Shapira: I like that you were starting to give us a good test of what would change your mind. But then the test proved to be kind of impossible, right? Like, what do you need to see from prediction markets to change your mind?
Ben Chugg: Yeah, I would,
Vaden Masrani: that the accuracy absolutely improves. Like, that things get better over time, right?
Liron Shapira: But wait, why isn't your standard calibration? Why is your standard accuracy? Because accuracy is impossible: if we put in questions that are hard to have higher than 51 percent confidence on, then for sure, right? So, you know, you're giving an...
Ben Chugg: There's a reason, like, you're begging the question, right? There's a reason it's hard to get a high...
Liron Shapira: Okay, okay, but you gotta admit it's not really a good faith test if you're just saying this is logically
Vaden Masrani: Well, okay, no, let me rephrase it. This is a hilarious closing,
Ben Chugg: We're right back...
Vaden Masrani: It clearly indicates that we have much more to discuss, which is fine, and which is good. But let's just try to wind things down, and we'll leave the audience with a tease that we clearly have much more to discuss.
But, um, okay, let's just use calibration. Fine. Let's say that it gets more and more and more calibrated over time,
Ben Chugg: And for more and more events, like we bet on everything, say.
Vaden Masrani: And for more and more events. Surely that would have some significance, surely. Unless you want to just handle the boring case where the calibration sits at 50 percent. If you're getting more and more calibrated, then that should improve your accuracy as well, right? It won't be exactly the same, but you should get better accuracy, because that's why we care...
Liron Shapira: Why can't the test just be this? Just, hey, I'm going to filter prediction markets down to only the data points where the probability is above 70 percent or below 30 percent. I'm only going to use those data points, and then I'm going to measure the calibration of that, and if it stays high, then I'm going to keep being impressed that Bayesian epistemology has a lot to offer.
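A sketch of the filtered-calibration test Liron proposes, run on synthetic data (the forecasts, outcomes, and noise level below are assumptions; real market histories would be needed to actually run it): keep only forecasts above 70 percent or below 30 percent and check whether the confident side happens about as often as the market says.

```python
import numpy as np

def filtered_calibration(probs, outcomes, low=0.3, high=0.7):
    """Keep only confident forecasts (>= high or <= low), fold the low ones
    into confident forecasts of the complement event, and compare the mean
    stated confidence with how often the predicted side actually happened."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    keep = (probs >= high) | (probs <= low)
    p, y = probs[keep], outcomes[keep]
    confidence = np.where(p >= high, p, 1 - p)  # confidence in the predicted side
    hits = np.where(p >= high, y, 1 - y)        # 1 if the predicted side happened
    return float(confidence.mean()), float(hits.mean()), int(keep.sum())

# Synthetic stand-in for a calibrated market: forecasts are noisy versions of
# the (hypothetical) true chances that generate the outcomes.
rng = np.random.default_rng(1)
true_p = rng.uniform(0, 1, 5_000)
outcomes = rng.binomial(1, true_p)
forecasts = np.clip(true_p + rng.normal(0, 0.05, 5_000), 0, 1)

print(filtered_calibration(forecasts, outcomes))
# roughly (mean confidence ~0.85, hit rate ~0.85, ~3000 kept):
# on this synthetic data, calibration holds even on the confident subset
```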
Vaden Masrani: Because we aren't impressed, and we aren't going to keep being impressed. We're talking about what would falsify our view and force us to be impressed, because the standards for ourselves are different than yours. And we're just trying to say, I would be super impressed if the accuracy started going up, even if the calibration started going down. It wouldn't have to be perfect accuracy, just showing that over time people get more knowledge and then they can predict better. That's one falsifiable test, which you don't need because you are already convinced, but the question is what would change our mind.
Ben Chugg: And let me concede, actually, something Vaden said earlier: if you just did more and more events, if you had prediction markets for everything, and we were predicting everything from the weather to who's going to get A-pluses on their tests, and all these markets were perfectly calibrated, that would be amazing.
I claim that's impossible, especially as the fidelity of these events gets more and more precise, smaller, I don't know what word I'm looking for here. Anyway, I think that was not a very coherent closing statement, but you understand what I'm trying to say: if we used prediction markets to literally predict everything, and they were always perfectly calibrated, label me impressed, and I'm definitely going to reformulate my thought.
I'm still slightly confused about the relationship in your mind between Bayesianism, which is an individual thing for me, and prediction markets. But I think we're not going to resolve that right now. So maybe we'll relitigate...
Liron Shapira: Yeah,
Vaden Masrani: We should save that for next time, yeah.
Liron Shapira: Sweet.
Vaden Masrani: I agree with everything Ben said. Um, I, the only comment I want to make is for the listeners who are, um, overwhelmed right now, because there's a lot of various things, though, like the way that I think about learning the difference between Popperianism and Bayesianism is, um, do you remember like in elementary school where you put like a leaf under a piece of white paper and then you take charcoal and you keep doing passes and then over time the image underneath starts to become clearer and clearer and clearer but any one particular pass doesn't totally give you the resolution?
That's the metaphor I have for the difference between these two kinds of methodologies, because any one conversation will be interesting, but it's not going to be, boom, here's the full difference. It's more about listening to a set of conversations with different people, listening to our podcast, but listening to other podcasts as well.
Um, and just seeing the differences. I say that not in a self-serving way, but because this is a place where a lot of this stuff is being put into direct comparison. Over time, you'll just start to see different methodological differences, different emphases, different cognitive tools for how to think through problems. Obviously we disagree on a lot of object-level things, but the underlying difference is just how we think about the world. And that's not going to be made clear in any one particular conversation. It's something that's going to gradually become clearer and clearer over time.
As with the leaf and the charcoal. Um, and so just to the listener, who's like, Whoa, there is a lot of stuff and I still don't totally understand what the differences here are. You're not expected to, it's not possible to understand it in one conversation, but over time you'll start to see differences in approach and methodology.
And that's what I want to say.
Liron Shapira: Awesome. Yeah, thanks for the summary, guys. And you guys have been such great sparring partners,
you know, as fellow podcast hosts, right? You're old pros at this, and it shows. I think this was a really fun conversation. We didn't pull any punches, right? We were both going at it pretty strong, which I think we all enjoyed. And it's all good-natured, I mean, there are no hard feelings, right? Just...
Ben Chugg: No, this was great. This was so much
Vaden Masrani: Well, for the listeners, every time we go off pod, every time the recording cuts, there are just great vibes. It's fantastic. I'm loving it. Yeah, totally. That's great. Yeah.
Liron Shapira: Yeah, it's not like tribal or whatever. Um, and also I'll take a quick stab at being interfaith, right? It probably won't work, but I'll try to do a compatibilist solution here. What if we say that Solomonoff induction is a nice theoretical ideal, the same way that the chess player who searches every move is a good ideal, but as humans, when you're occupying a human brain and you're totally limited, you can't even get close to approximating Solomonoff induction.
If you follow Popper's recommendations, then by your own metric of trying to approximate Solomonoff induction, you're going to do well. How's that?
Vaden Masrani: Nope. Popperianism was born via the fight against all induction, of which Solomonoff induction is one form. So if you want to understand Popperianism, read literally any of his books, with the exception of maybe All Life is Problem Solving, and every one of them has some attack on induction.
And induction is a much deeper concept than Solomonoff induction. So once you kill induction, you kill any derivations or derivatives of induction. For that reason, we will leave the listener on a cliffhanger. Or maybe check out our episode with Tamler Sommers from Very Bad Wizards, where we talk about induction for two hours.
Liron Shapira: My last-ditch effort to broker a ceasefire has failed. So we're going to have to continue in a part two.
Vaden Masrani: Yeah. Induction. Saying "induction" was the kryptonite.
Liron Shapira: Okay, great. So yeah, listeners, just stay tuned. Hopefully in the next few weeks, part two is coming. And, uh, yeah, stay tuned. We got other great debates coming up right here on Doom Debates.
Vaden Masrani: This was great. Honestly, I had a complete blast.