233 – AI Policy in D.C., with Dave Kasten

Dave Kasten joins us to discuss how AI is being talked about in the US government, and gives a rather inspiring and hopeful take.

LINKS
Narrow Path
Center for AI Policy
Dave Kasten’s Essay on the Essay Meta on his Substack
Prospective donors for CAIP can donate here, and potential major donors ($5k+) are encouraged to email Jason at jason@aipolicy.us
TBC episode 202 – we talk with the Center for AI Policy
Kickstarter link for Thomas Eliot’s new game: The Singularity Will Happen In Less Than A Year

00:00:05 – AI Policy in D.C.
01:20:07 – Fundraising for CAIP and The Singularity Will Happen in Less Than a Year
01:24:28 – Guild of the Rose
01:29:16 – Thank the Supporter!


Our Patreon, or if you prefer, Our Substack

Hey look, we have a discord! What could possibly go wrong?
We now partner with The Guild of the Rose, check them out.


Rationality: From AI to Zombies, The Podcast

LessWrong Sequence Posts Discussed in this Episode:

None

Next Sequence Posts:

None


4 Responses to 233 – AI Policy in D.C., with Dave Kasten

  1. David Duvenaud says:

    Please post a transcript!

    • BayesianAdmin says:

      We don’t have a decent automated way of adding transcripts yet (I know, we’re like 3 years behind the curve). The Substack post includes one, but it’s embedded in a way that’s actually kind of cool – you can click on it and it’ll start playing wherever you clicked – but it was hellish to extract it and clean it up. Let’s see if it’ll let me paste the whole thing in a comment.

    • BayesianAdmin says:

      Well I feel silly for having done all of that work. I can just link to the Substack post since it’s public today. I’ll find a way to streamline transcripts going forward – you’re not the first to ask for them. 🙂

  2. BayesianAdmin says:

    0:05
    Welcome to The Bayesian Conspiracy. I’m Eneasz Brodski. I’m Steven Zuber.

    0:08
    And we have a guest with us today. Dave, please introduce yourself.
    Hi, my name is Dave Kasten. I am based in Washington, D.C., which is
    where I’ve lived for almost all of my adult life. By background, I’m
    someone that’s done a lot of work in the government contracting and
    consulting worlds,

    0:24
    working mainly with sort of executive branch organizations, you
    know, sort of defense, national security, regulatory kind of stuff.
    And I’ve also done a couple of things in my life. I’ve worked in the
    video game industry for a couple of years doing corporate strategy
    for that. But nowadays I work on AI policy stuff.

    0:44
    You know, I work full-time with Control AI, which is a London-based nonprofit that is working on policy proposals and policy advocacy to reduce the risk of AI for humanity, particularly with regards to the risk of artificial superintelligence.

    0:59
    Awesome. Yeah, I’m more interested in the second thing, but I’ve got to ask about the first as a quick drive-by. Did you work on any video games that we might’ve heard of?

    1:07
    Yeah, so I worked on the corporate strategy side of things, right?
    So many, not all, but most video game studios have a pretty thick
    division between the business side of things and the creative side
    of things, which is not the, you know, sort of public perception.

    1:21
    I think in general, people think like, ah, the suits ruin everything and run everything. And factually, that’s just not the case. Even where I worked, which was Activision Blizzard, which I think is often broadly perceived as being especially an example of that, I actually didn’t find that to be the case at all. There was a pretty

    1:36
    good wall of separation internally between the two; each side did what it was good at and tried to stay out of the other side’s way as much as possible. While I was there, I helped them spin up a couple of new businesses and did broad corporate strategy stuff as well.

    1:52
    When I joined the team, it was the very tail end of Activision Blizzard acquiring King, who make probably the most famous game your listeners would be familiar with: Candy Crush. So I helped a very little bit with that, and then did a lot of work related to

    2:06
    helping Activision Blizzard think through strategy as the mobile transition and the free-to-play transition were happening, and particularly, you know, helping them think through their esports strategy, which, quite frankly, was a brilliant strategy that then got dashed on the rocks of COVID, among other things. But, you know,

    2:23
    I got to work with a lot of really great, really smart folks on that effort, as well as a couple of other things.

    2:28
    Awesome. Well, I’m glad I asked. Thanks for indulging me.

    2:31
    Yeah, of course. It’s always fun to talk about. Like listeners, if
    you can get paid to think full time about video games, go do that.
    It’s great.

    2:39
    Really? As long as you’re not an actual programmer, though, because I’ve heard being a programmer for them can be miserable.

    2:44
    Yeah. So I think there is very much a cycle of crunch, which is really problematic. Sorry, for those listeners not familiar, crunch is just working really crazy hours in the video game industry. And the problem is that a lot of video games, for various reasons, have deadlines that are really difficult to move.

    3:02
    Those used to be basically impossible to move when you were shipping
    a physical disc. Nowadays, as things have gone more and more online,
    even if you have a physical disc that you’re selling in Walmart or
    whatever, these deadlines are a little more flexible,

    3:14
    but there still very much is, much as there is in the movie industry, this sense of: hey, we’ve got to hit our deadlines, and we also have to create a product that’s creatively compelling. And sometimes the combination of those two things can mean really crazy hours for the programmers.

    3:27
    I will say that the flip side is that if you end up in a really exciting place in the industry, it can be a lot of fun, even if the hours are crazy. And also, I would not be revealing any confidences to say that if you happen to

    3:42
    walk through the parking lots of many of the video game companies in
    America… you will see that the programmers are driving very nice
    cars and they can afford them. Not all of them. There certainly are
    pluses and minuses, but I think there are certainly very fun places
    to be in that industry.

    3:59
    Right on. That’s awesome. You were brought to our attention by John Bennett. When he was talking with us, he said that you are a great person to talk to about all things AI policy in D.C. and how government and AI interact, which is why we are talking with you right now. I have…

    4:15
    Boy, I got a number of places I want to go. Steven, did you want to
    start us off with anything before I jump in?

    4:22
    I mean, it’s always tempting just to jump in and start asking the fun, hard-hitting questions. But I find that if there’s time at the end, I try to get the backstory, and that’s sometimes really interesting. So I guess I’m curious about what got you,

    4:35
    you talked a bit about your work history, but what got you into the specific, you know, AI policy work that you’re focused on now?

    4:42
    Yeah, so I think for me, you know, AI policy and sort of AI broadly
    had always been an area of interest. You know, I think like probably
    a lot of folks, I really enjoyed reading Less Wrong and Slate Star
    Codex and, you know, a bunch of other things in that vein back in
    the day.

    4:59
    And of course, even before that, I was a huge science fiction fan, right? You know, Vernor Vinge and all those other folks. And so these were always really exciting concepts for me back when I thought they were really far away. Like, I imagine, a lot of other folks,

    5:14
    I know, like a lot of other folks, I sort of had a wake-up call a couple of years ago when I realized: oh, this is coming very soon now. This is not going to be a hypothetical that far-future people will deal with one day. This is something that, you know,

    5:26
    we, in our own lifetimes, are going to have to navigate in a really substantial way. For me, it was when GPT-2 came out. At the time, I had made a move to an internal role at McKinsey, on their cybersecurity and data protection team.

    5:41
    And during COVID we were trying to just keep spirits up, so we had a monthly contest on the team to do something creative: paint a picture, build a gingerbread house, cook a meal and, you know, most innovative recipe or whatever wins. And then one month,

    5:57
    basically whoever won in month N got to set the challenge for month
    N plus one. And, you know, the winner one month was like, hey, I
    want to do a cocktail competition. And then they were like, well,
    Dave, of course, wins because he used to work as a bartender and
    he’s a huge cocktail nerd, which is true.

    6:11
    Hmm. And so, therefore, Dave is automatically disqualified from
    competition. He has already won. Everyone else is just competing for
    second place, which I thought was a little flattering of them. And I
    decided, therefore, to go nerdy. And I was like, well,

    6:25
    I’m going to train an instance of GPT-2 on my laptop in order to have it produce a cocktail recipe for me. Then I’m going to make that cocktail and, good or bad, still present it anyways. And, you know, I think that experience made me realize, hey, this is really nascent technology.

    6:42
    Obviously, it’s nothing like the models we have access to now. But, like, wow, there really is something here that is going to be really impressive. And I think that for me was a big moment of putting it on my radar. And then, you know,

    6:55
    when I was looking for my next opportunity, you know, about a year and a half ago, I tweeted out: hey, I’m looking for my next opportunity. And a couple of folks very kindly, particularly Zvi Mowshowitz and Patrick McKenzie, retweeted that tweet and, you know,

    7:12
    caused me to come to the attention of a bunch of folks, including Control AI, which is where I now work. We had a conversation and it kind of all went from there. So, you know, I think for me, my origin story has a good dash of luck in it,

    7:27
    but also I think really is a testament to how in the AI policy
    space, people are really willing to, you know, bring in folks from a
    wide variety of different backgrounds that, you know, have
    perspectives that can be useful. And certainly that’s what I try to
    be.

    7:42
    Awesome. You mentioned LessWrong. How long had you been reading LessWrong before the GPT-2 incident?

    7:49
    Oh, gosh, I don’t know. I think, you know, I read it a lot in the LessWrong 1.0 era, as people call it, right? Sort of its initial flourishing: the Sequences, Harry Potter and the Methods of Rationality, all that good stuff.

    8:03
    And then, like a lot of other folks, I got busy with other stages of life and other things going on and sort of stopped checking it as frequently. And then, as the LessWrong 2.0 era emerged, I ended up going back to the site.

    8:17
    Certainly what the folks at Lightcone have done with rejuvenating the site has been really impressive. And I started finding myself, in my random internet wanderings, drawn there more and more. And, you know, now it’s pretty much on my daily reading list again.

    8:34
    Fantastic. All right. I have been feeling really disheartened, I
    guess, by the state of how politics does not seem to be paying
    attention to AI at all. For example, not once in any of the debates,
    in the presidential debates, any of the politicking leading up to
    our last election, did anyone really say anything about AI.

    8:56
    I think there was… The most notable thing I recall was somebody,
    some senator speaking and saying, and when you talk about the
    existential risk of AI, you mean, of course, the existential risk to
    our jobs. Wow. What is going on in D.C.? How many people out there
    actually have some idea of what is happening and what is

    9:19
    their understanding of the situation out there?

    9:21
    Yeah. So I think, you know, the answer is that it is really mixed and inconsistent. I think there are pockets that, you know, sort of feel the AGI, as it were, to a degree similar to anyone in the Bay Area.

    9:37
    I think there are other folks who are deeply focused on problems of AI systems that exist already today: things like algorithmic bias, or the use of them for creating deepfakes for purposes of business fraud or sexual abuse and things like that. That last one, actually,

    9:55
    as of this taping, is a big issue right now that is getting a lot of attention on Capitol Hill. And then, of course, there are those folks who either think it’s just like the next NFTs or crypto, where there’s a lot of grifters and not a lot of, you know,

    10:09
    sort of real business value being delivered; it’s kind of just hype. You know, I don’t want to litigate whether or not that’s true about crypto, but that’s certainly how they would describe it. And then, you know, there are some folks who are just kind of like: hey, it’s not even on my radar screen.

    10:21
    I’m busy working on healthcare, or federal emergency management, or one of the 17 other things that Americans get every day without realizing the federal government delivers them. And they just haven’t really paid attention. This is shifting, though.

    10:37
    I think you are seeing people moving in a more concerned direction broadly. So, for example, the senator who made that very awkward transition of “you said existential risk; you mean, of course, the risk to jobs” – that individual since then has not gotten nearly as much attention for actually

    10:56
    holding another series of follow-up hearings where both he and colleagues from the other side of the aisle have really been asking some tough questions, and have been talking to OpenAI whistleblowers and things like that in session. So, you know, I totally get why that was just a really compelling anecdote of, hey,

    11:13
    you know, it doesn’t feel like DC gets us. I suspect it was just a senator trying to transition on to the next question he wanted to ask who, you know, fumbled the handoff to himself a little bit.

    11:26
    But I do think that you are actually seeing some folks who are paying attention. What you’re not particularly seeing is shared common knowledge of this being an okay thing to talk about, in terms of: it’s what I believe,

    11:41
    and what I think a lot of folks believe to be the likely future. Right. So, just the other day I was talking with a particular congressional staffer. And this does not in any way reveal any confidences, because I’ve had many of these conversations with several different staffers. You know,

    11:54
    this particular staffer from a particular office was saying, hey,
    you know, I am concerned about this. You know, I know that my boss
    is as well, but I, you know, I can’t really talk about it in these
    terms in public yet.

    12:06
    It’s just kind of too easy to get attacked on it, too easy to be made fun of. So instead, we’re going to talk about it through, you know, a slightly different framing. And I’ve had several of those conversations, right? So it’s a common one.

    12:17
    What needs to happen so that people will actually be able to talk
    about this in a normal way?

    12:22
    So the honest answer is that I don’t know. This is a lot of what I spend my time trying to figure out right now. And I don’t think I’ve cracked it, and I don’t think any of us have. There’s a classic answer that you hear a lot of people say,

    12:35
    and indeed, John, I think, articulated this perspective, and he and I argue about this a lot, and I think there is a lot of truth in it: that the easiest way for things to change is some sort of attention-catching incident that moves the policy window really substantially, right? Whether that’s a new,

    12:53
    exciting tech demo or a dangerous tech demo, kind of a Sputnik moment, or whether it’s, God forbid, people en masse losing jobs or their lives in the event of something really disastrous happening. I don’t know, but, you know, certainly I hope that if it

    13:09
    does happen for any of those reasons, it’s one of the first ones. But I think that’s the sort of obvious theory of the case that everyone has: okay, something will happen, and this will suddenly rise from being a “yeah, I’ll get to it one day” to the top of the priority list for, you know,

    13:22
    100 senators and 435 House representatives all at once, and then suddenly you’ll see action happen. Actually, as we’re taping this, I have a bookshelf to the left of me that has a bunch of books on the history of national security in the United States, both before and after 9/11. And broadly,

    13:39
    the way I’d characterize them is that the books on the shelf from before 9/11 all say: hey, there’s this set of issues coming – information sharing, the internet, non-state actors such as terrorist groups – and all of those are going to be important. And then the books from afterward are like: so, all of that turned out to be,

    13:55
    unfortunately, very important. And then we proceeded to just grab off the shelf the answers that the books from the ’90s and the ’80s had, lightly tweak those answers in a bunch of ways, and then implement them, because we didn’t have time for anything else.

    14:08
    So if you were asking me to bet, that’s unfortunately what I’d bet is the most likely. But I think there are other ways. The one that I really have been experimenting with a lot, and have gotten some good success with, is that, as I mentioned before, those, you know,

    14:22
    the last three buckets – the folks who don’t feel the AGI but care about current AI systems, the folks who think it’s a scam, and the folks who are just disengaged – if you tell them, hey, I’m not asking you to come to a conclusion on this just yet, right?

    14:35
    You just met me at a cocktail party or whatever; it’d be really weird if you changed your whole view of the world based on some random person talking to you. I’m not asking for that. But I do want you to know, as kind of just a piece of gossip, honestly,

    14:49
    that it is the case that when you talk to people who work at the AI companies – talk to them regularly, one-on-one, in large groups, on the record, off the record, coffee first thing in the morning, three beers in at the end of the night, doesn’t matter –

    15:01
    they really do think that they are building intelligence that will be smarter than humans real soon now, and that does scare them and keep them up at night. That is actually a surprising update. A lot of people actually just don’t even know that. And they make that “huh” noise exactly like you just did, for different reasons, right?

    15:18
    Because they literally… There’s no one in their social circle who
    tells them, hey, this is what these people believe, right? There’s
    sort of just too much distance between the Bay Area and DC on this,
    literal and metaphorical. And, you know, I’ve often found that if
    you sort of just approach it from that angle of like, look,

    15:33
    I’m not telling you what to believe, but I am telling you what they believe, it opens up the conversation in a way that it otherwise wouldn’t open up, right? Because then you’re no longer asking someone to accept on blind faith a really big change in their worldview, right? You are just telling them,

    15:48
    hey,

    15:49
    you didn’t know it, but there’s already this group of people over here who, broadly speaking, and to various degrees that many listeners are no doubt familiar with, believe this to some extent. And that’s just an actual surprise to them. They’ve never met anyone before who’s told them. So I suspect things like that,

    16:05
    things that bypass the “I’m asking you, right at the start, as an ambush surprise, to make a big life update” are going to be more successful. And I think, hopefully, the path on communications – I know that we’re working on it, and I know that some others are as well – is to try to find a way to, you know,

    16:22
    find that more gentle and gradual path to enable people to grapple with the issue.

    16:27
    How many people do you think don’t even have any clue that there’s a
    lot of people worried about this in the Bay?

    16:32
    I think a lot of folks, actually. I don’t think I have good numbers
    on this. I probably should try to get us to run polling on it,
    actually. But I think a lot of folks feel very much like the tech
    industry, and this is me reporting, not endorsing,

    16:50
    but they feel like the tech industry broadly does not care about them at all and is uninterested in how what tech builds affects their lives.

    16:59
    Now, which folks: do you mean people in policy and government in D.C., or are you speaking more broadly? Who are you talking about?

    17:08
    Both. So both folks who work in the policy world, and folks in what you might call, like, the people-who-listen-to-Ezra-Klein’s-podcast crowd – and of course its counterparts on the right wing – who are curious about policy and care about it a lot, but aren’t necessarily policymakers.

    17:28
    Both groups, I would say, kind of feel like tech doesn’t care that
    much about their perspectives. I think there is a broad belief in
    both parties in Washington… that the tech industry generally has
    not thought enough about the consequences of the sort of societal
    change that it’s enabled.

    17:45
    I think there is a very low degree of trust that has developed. You know, the above-the-waterline version of that you see is – well, we haven’t had one, I guess, in a while, but there was a very long time where, you know,

    17:58
    sort of the heads of all the social media companies would get hauled
    into Congress to get yelled at about every six months or so. And I
    think, you know, that was sort of the above the waterline part you
    could see of the much bigger iceberg below the waterline of like,

    18:11
    everyone feeling like, hey, tech isn’t really caring about its impact on my constituents. And – I think this is not true any longer, but it was for a good long time – the tech industry for a very long time didn’t really have that much representation in Washington, contrary to popular belief.

    18:28
    Now they do quite a large amount of lobbying and have gotten a lot better at knowing how to communicate with policymakers, but, you know, for a very long time they had virtually no lobbyists in Washington, or any sort of policy experts. And that, I think, really

    18:41
    led to this feeling of disconnect. That’s good to know. I did not understand for a long time why, uh, media generally – like leftist media – and why a lot of people with power in government seem to hate tech and want to destroy it. I’m like, why are there

    18:57
    these memos going out at the New York Times that we need to take down tech now? It was just like, are you randomly malicious for no reason? And this is a good update; I feel like I have learned a new thing just now. And I think, like, part of –

    19:09
    You know, so I think there are two ways you can think of this, right? The, call it, more negative way of thinking about this is that the New York Times views tech as a competitor for eyeballs and for monetization, which I think is probably true to some extent.

    19:23
    I think the other is that, in general, both the journalistic world, broadly speaking, and the political world, broadly speaking, tend to do their sensemaking of the world by thinking about power dynamics, and particularly thinking about how those power dynamics harm the

    19:38
    most vulnerable, right? And I think listeners who might have spent less time talking to, broadly, Washington people might think that this is more of a left-versus-right thing, the left, you know, maybe being caricatured as caring

    19:51
    more about power than the right. I think they both actually care quite a lot about this, just in different ways. And indeed, I think one of the reasons that caring about deepfakes has popped a lot into the bipartisan consensus, both in the US and also in other Western liberal democracies,

    20:05
    is that a lot of people have had a lot of constituents come to them
    and say, hey, let me tell you the story about the really awful thing
    that happened to my son or daughter, or the really awful thing that
    happened to my business, or the really awful thing that happened to
    someone I know in my community,

    20:18
    as a result of a malicious deepfake being made in order to harm them. And similarly, a decade before that, you saw the same thing with social media. And so people are like: hey, it seems like there’s all this bad stuff. That seems really problematic. What are you guys doing about it?

    20:33
    And I think for a long time, the tech industry didn’t know how to have a conversation of, you know: hey, yes, those things are awful, here’s what we’re trying to do to stop it, but also, here’s the good that technology enables. They just didn’t know how to have that conversation.

    20:47
    But I think every CEO has gotten a lot more sophisticated on that over the past decade, you know, looking across the long arc of history, and that’s really changed how they engage with Washington.

    20:59
    There’s a lot I wanted to kind of unpack there. I mean, I guess just broadly, you know, you mentioned the different camps of people in DC and where they stand on these issues. I’m not going to try and pin it down to, you know, three significant digits,

    21:12
    but is it your estimation that it’s like one in 10 people who chalk it up to the “NFTs” or “this isn’t that interesting” category? Or is it like one in two?

    21:21
    Huh. That’s a good question. I think among people who think about it at all, probably I’d say like one in three. But, you know, maybe half the population isn’t paying any attention to it at all – “no significant attention” is maybe more fair –

    21:37
    because they’re working on something else, busy with something else. So I think there are a lot of people who probably just are focused on other things. And then, let’s say, a third of folks are kind of like: ah, this is nonsense. And then, of the rest,

    21:53
    you know, call it the remaining fraction, probably a good chunk are thinking: yeah, this is important, this is something I really have got to be on top of for my career. But most of them are thinking about it in, call it, a mundane AI capability sense.

    22:07
    Very few of them are yet really aware of the risks in a meaningful sense – maybe no more than a sixth of all of it. And I’m probably doing my fractions wrong, actually, because I’m thinking out loud. But, you know,

    22:20
    I think very few folks are actually, you know, really feeling the AGI and really worried about it. Now, that being said, I think a lot of folks who would totally be dismissive of a catastrophic or extinction-risk scenario for AI are actually very interested in engaging in

    22:35
    meaningful regulation of AI, and in being very aggressive in terms of thinking about its social harms and trying to maximize its social benefits. They’re just doing it for different reasons, right? Like, for example, there was a particular Republican House member who gave a speech that I saw, who repeatedly said:

    22:55
    hey, I don’t want to stifle innovation – said in various ways, with some really, frankly, great metaphors for it. But then at the end of the conversation he was like: but also, I’m still a regulator, right? I still, you know, exercise oversight in Congress over a bunch of regulatory agencies, and in those agencies,

    23:13
    I want to treat AI no differently than anything else. You shouldn’t be able to get a get-out-of-jail-free card on the issues I’ve cared about for 20 years by stapling the word “AI” on, right? And so I think that’s where a lot of the attention right now is very focused. And, you know,

    23:27
    it’s a not uncommon thing for me that when I go to a think tank event about AI stuff, you know, the people up on stage are actually more willing to talk about these sort of big, scary issues. Whereas when you talk to people in the audience about, okay, what are you working on, you know,

    23:39
    for your boss in Congress or for, you know, in some regulatory
    agency or in some, you know, other context, it’s usually much more
    of the sort of like mundane day-to-day AI harms kind of stuff, like
    algorithmic bias, discrimination, you know, the ability to sort of
    look under the hood and tell whether or not an algorithm is

    23:56
    being fair to you. You know, a lot of these are real concerns that people have – just not the same concerns as, you know, the classical LessWrong-style concerns.

    24:05
    Right. Yeah. It’s funny talking about these issues in public, because people will at first seem like they’re on the same page – like, oh yeah, I’m very concerned about AI stuff. And then you dive deep and they’re like, yeah, I mean,

    24:18
    what if it’s representing the news wrong or something? And it’s like, yeah, that would be a problem, but that’s not the biggest problem. So, I mean, you know, I think we’re all on the same page there, and I totally get where they’re coming from too. They just –

    24:27
    it’s, I don’t know, I’m trying to think – maybe a really tortured analogy would be the discovery of, you know, refined uranium, and it’s like, “man, but isn’t radiation dangerous?” And it’s like, “yeah, you know, it’s really dangerous: nuclear bombs.”

    24:41
    Do people generally, I recently saw somebody posting that most
    people who haven’t read some significant amount about the tech think
    that these AIs are basically the same as programs that we had made
    in the past, that there’s code somewhere that people can go into and
    alter line by line,

    25:00
    and that we actually have control over what is happening, and that most people do not realize that these are basically synthetic brains that were grown using a gradient descent algorithm. How widespread is the idea that these are things that we understand and can tinker

    25:17
    with, as opposed to intellects that were grown, where we have no idea what’s going on under the hood?

    25:23
    So I think it varies, in ways that map very poorly onto, you know, your willingness to think that an AGI or recursive self-improvement scenario is coming. In general, I think people in D.C. do not understand this. But there are definitely plenty of experts, both in congressional staff, you know,

    25:42
    actual elected members of Congress themselves, as well as a lot of folks in the various executive branch agencies, who do understand that very clearly. For example, the Biden administration’s AI executive order could not have been written without an understanding of that. So too, it is very clear that in the new administration,

    25:56
    both David Sacks and Sriram Krishnan understand these issues pretty well in that regard, as do others – I don’t want to give an exhaustive list of all the Trump officials; I’m just using shorthand here. But in general, no, people think it’s like a human-interpretable program.

    26:13
    The weird counterexample is that folks who have worked on autonomous driving and financial regulation stuff actually do kind of get this, because they’ve already had to talk about machine learning in a day-to-day context of, hey, what kinds of safety guarantees are we comfortable with

    26:29
    with regards to deploying machine learning in these very highly regulated, safety-critical contexts, right? So they are aware that, hey, to some degree, computer vision or anything else is, you know, a machine learning problem, and it’s just kind of: how good do you think it is?

    26:45
    Maybe staple some safety controls on top, but to some degree, they get that it’s a blob of numbers. They just haven’t generalized that thought – probably because that’s just not their job; they’re not necessarily someone working on that issue set – to then think about: oh, where does that lead as you scale up these models infinitely?

    27:00
    But, you know, there definitely are individuals who do fully
    understand that. But I’d say in general, mostly not.

    27:06
    Okay, so how big of a problem do you think that’s going to be? Are we going to be able to have anything that can actually steer the future to a better place before people understand that? Or is this one of the communication issues?

    27:18
    I think it’s helpful to know that AIs are grown, not written. I
    think that’s certainly a useful thing for people to know. I’m not
    convinced that it’s the ultimate load-bearing piece of the puzzle. I
    think for me, I at least could conceive of, and I’m sure right after
    this publishes,

    27:39
    someone will write me a tweet explaining why this is wrong, but at least to me, it seems conceivable that you could have a model that was human-interpretable, like, you know, a list of 30 trillion if statements, right? Hypothetically, right? Or whatever. And in principle, humans can read every one of those if statements.

    27:55
    And in principle, you could map every single path; you could go through the code. But in practice, it could nonetheless still have things that look kind of like instrumental convergence, or like not generalizing out of

    28:08
    the quote-unquote distribution of all the if statements, and things like that. Those concerns – can you really trust that a model is going to do what you want it to do? – feel plausible to me even if you have, call it, fairly deterministic, you know,

    28:21
    processes that you can, at least in theory, read with a human standing there. In large part because, in other domains, we have things that are in theory interpretable but in practice cause industrial accidents, right? So, you know, at the Chemical Safety Board or the Nuclear Regulatory Commission,

    28:39
    a meaningful chunk of their labor is around the problem that, as these systems get very complicated, even if in theory you can read the blueprints, understand how it all works, and analyze it, it’s the nonlinearities that you weren’t predicting or the weird edge cases

    28:54
    that you weren’t thinking of that end up really biting you. And so I think people can get the notion that, hey, when you wire up something complex enough, no matter how it was wired up, weird stuff happens. I think it’s useful to further strengthen the point by making that argument.

    29:10
    But I’m not convinced that if you somehow didn’t tell anyone that, it would prevent them from getting the point, if they were sold on all the rest.

    29:18
    Okay. These regulations – a little bit over a year ago, we talked with some people from the Center for AI Policy, and they had a number of regulations that they were proposing, including strict liability back on the originators for anything that a model does in the real world with negative effects,

    29:39
    whether it be OpenAI or Anthropic or whatever. Do you think these sorts of regulations would actually be enough to stop what we are seeing as the coming general intelligence, a fully thinking AI agent? Like, could this actually make a difference?

    29:57
    So I think you asked a couple of different questions in there, right? Will it make a difference? Yes. Is it sufficient in and of itself? Probably not, is my gut answer. I’ll admit that liability structures themselves are an issue I haven’t especially studied in any depth. And to some extent,

    30:15
    what I have is probably downstream of talking to folks at places like CAIP, as well as folks like Gabriel Weil, who’s a law professor and good friend who has done a lot of thinking on this kind of question. I think what I would say is that if you’ve never been inside a company that is,

    30:33
    call it, an ordinary, boring company – so not a really cool tech startup, not a nonprofit that operates in a tech-inflected space like AI, but just a random company like a Fortune 500 company or something like that –

    30:48
    I think it’s really easy to underweight the extent to which everyone is afraid of pissing off the general counsel, right? Because they are often one of the closest counselors to the CEO. They have the ability to tell the organization, hey, I really think this is a problem.

    31:04
    They are usually very, very, very smart people. And as a result, if the general counsel is going around saying – hey, look, a new law just got passed, I’ve got, insert your thing here, a strict liability requirement, a negligence-level liability requirement,

    31:20
    a disclosure requirement in terms of my risk factors in my annual corporate filings – pick your mechanism to cause that Eye of Sauron to focus on these issues, right, and have some teeth behind it, then I think that is something that would drive meaningful impact in terms of affecting the behavior of these organizations,

    31:41
    certainly the ones that are public companies – so, for example, Meta and Google, right? And one thing that I personally expect to see at some point over the next two to three years – this is not based on any sort of insider knowledge, it’s just based on, call it, broad pattern recognition –

    31:54
    is that if these models continue to scale up and there’s some sort of bad incident that happens – let’s say, arbitrarily, that Meta had an issue –

    32:06
    a bunch of lawyers are going to go look at Meta’s risk disclosures in the annual corporate filings that they have to do, which are public, and Control-F through and see what they said about AI. And if they didn’t have something that they

    32:17
    said in there that looks like, you know, the incident that, God forbid, might have happened, then they will sue the company on behalf of the stockholders. And, in the immortal words of Matt Levine of Money Stuff, “everything is securities fraud”: they will be like, hey,

    32:32
    it seems to us that you failed to disclose that there was a meaningful risk of the AI models that you’re investing shareholders’ money into causing, God forbid, a chemical, biological, radiological, or nuclear attack. What up, guys? And, you know, to some degree they’ll have a pretty decent chance of

    32:50
    prevailing. And indeed, I would expect that as you see these models getting more powerful, you will see companies – because, for those less familiar, basically in every public corporation’s annual filings there’s a very, very long list of basically every risk under the sun that they could possibly think

    33:06
    of that might affect the corporation, which you’re required to disclose. And so I would anticipate that at some point in the next couple of years, as these models get very powerful, you will start to see disclosures in there saying, you know,

    33:16
    our models might be very powerful and might cause bad things that we don’t intend; it’s a risk to the corporation, right? In part because they want to avoid the scenario where, if God forbid something happens, they have the ability to get sued – or the risk of getting sued, I should say.

    33:32
    And so I predict that, returning to your original question, if we have something that has teeth, it will shape corporate behavior somewhat. Do I think that is likely to be enough? Probably not. Because, for one, with the corporations that aren’t public, like Anthropic and OpenAI… if you’re a private investor,

    33:50
    your ability to complain after the fact if something bad happens is a lot less. As a matter of general practice, you tend to sign much more substantial waivers of certain considerations as part of how you invest than you would if you were an investor in Topeka investing in a public company like Google or whatever. And also,

    34:13
    to some degree, some of these companies exist solely to drive
    towards superintelligence, as in the case of OpenAI, as in the case
    of Anthropic. And so it’s hard to tell them to just stop altogether.
    And that’s why I think it’s unlikely on its own to be enough, even
    if it might have some meaningful positive impact.

    34:29
    And I certainly do think that folks at CAIP and elsewhere are absolutely right to keep on pressing on this point.

    34:35
    What do you think would be enough? I was just going to ask a variant of the same question. And I like your open-ended version, Eneasz.

    34:43
    So I think the flippant answer is that folks can go to http://www.narrowpath.co to read the 80 pages that I and some others co-wrote on this for Control AI. The short version of my answer, though, in all seriousness, is that there are multiple different ways we could get

    35:05
    through this by the skin of our teeth. Other than it just turning out that the universe loves us and AI models really want to be aligned by default – I’m not expecting that to happen, but who knows – I think the most likely ones are things like, you know,

    35:20
    establishing a really robust national-level regulatory system that not only engages in substantial oversight of the companies that are really pushing at the frontier, but also bans certain types of risky behavior altogether. Right? So, you know, recursive self-improvement: I think coming up with a good legal definition of that is very hard.

    35:41
    People are working on it, but it’s very hard. But if you could just
    sort of have, you know, a magic wand that told you, yes, this is
    recursive self-improvement. No, this is not. And you could magically
    tie that into a legal process.

    35:52
    I think you certainly would want to, just to sort of say: look, let’s go more slowly. Let’s avoid a situation where things run away and, you know, we sort of wake up dead one morning. There’s a variety of other behaviors like that – you know, the ability to escape your own environment,

    36:04
    the ability to be agentic in certain ways – that kind of thing that I think you might want to think about. Once you’ve done that national-level regulation – and, by the way, if you structure it right, you’re not actually harming the ability of the nation-state to get lots of things that it wants,

    36:17
    because all kinds of narrow tool AI are still totally doable, right? You know, protein folding, advanced research capabilities that are still controlled by a human: it’s very plausible that they could still use those. But you’re not building the really dangerous stuff that could kill us all.

    36:34
    And then you sort of say, okay, nationally, we’ve locked ourselves
    down. Let’s go and have some sort of international cooperation on
    this question, right? Whether that looks like a treaty, whether that
    looks like something that’s more like an informal deal, whether it
    looks like, you know, no deal at all,

    36:47
    but it’s just kind of folks squaring up across the table from each
    other and saying, if you start building it, we’re going to launch
    the missiles. And the other side saying back, same back to you,
    buddy. I don’t know. I think it’s probably better if there’s real
    formal mechanisms behind it that enable international cooperation,
    like

    37:01
    the International Atomic Energy Agency. The IAEA hasn’t been perfect, but it certainly has delayed proliferation to a much greater degree than people give it credit for. When you look back at people’s estimates of how many countries would become nuclear powers versus how many actually did – you know,

    37:20
    we were thinking it was going to be dozens; it was less than a dozen. And we were thinking nuclear weapons would be used in war a bunch; nuclear weapons have not been used in war since 1945. So, you know, if you think that, with sort of 80 years’ additional wisdom,

    37:35
    we can slightly beat the record that we have on nuclear nonproliferation – go from using it twice and never again to using it zero times and never again – then I think you have real reason to believe that we could establish some sort of both national and international effort that meaningfully reduces risk.

    37:52
    And I don’t think it would last forever. If you establish that, eventually someone clever enough and evil enough can find a way around it. But I think it gives you more than enough time, hopefully decades,

    38:03
    in order to figure out how do you actually build an AI that’s very,
    very smart in a way that is safe for humanity.

    38:10
    So the… The current administration rescinded Biden’s executive order on AI regulation, I think it was. And then at the AI Safety Summit like a week or two ago – no, sorry, just the AI Summit. The AI Action Summit, I believe. Okay, yeah. They literally removed “safety” from the name of the summit.

    38:28
    J.D. Vance stood up and told the world, basically – my interpretation is – screw all this safety bullshit, full speed ahead on AI. What do you think the actual political will, the possibility of having any sort of regulation of the type you just described, is right now?

    38:45
    I mean, I think there actually is a real possibility of, you know,
    meaningful political will on this topic, right? So a couple
    observations, right? The first is that, like, I’m trying to figure
    out how to say this in a polite way. It is certainly the case that a
    longstanding belief in American politics is that you can,

    39:06
    you know, go to an international summit and bluster, but then
    afterwards do something else. Right. And so I think, you know, it
    was very clear that the vice president was trying to send like a
    very direct message. And it’s not the one that I would have sent. I
    wish he hadn’t. But, you know,

    39:22
    I think there are plenty of times when we rattle the saber in one way or another, and then we end up doing something else. So if there was a listener out there that says – oh, it’s over, J.D. Vance said this, we’re done – don’t believe that. Just on your priors, that doesn’t follow.

    39:36
    Right. But what I would say is that the administration very clearly is trying to take a different path than the Biden administration did. And quite frankly, a lot of what they rescinded – well, Biden actually had multiple executive orders, right? There’s the sort of core AI executive order,

    39:51
    and then separately, there was export control action, right? They haven’t touched the export control action with regards to limiting, you know, chips getting out and things like that. There are some signs they might modify it, but they seem to believe in some degree of controlling exports and ensuring

    40:06
    that chip proliferation doesn’t happen to that much of a degree. I think also, frankly, a lot of the things that were in the Biden executive order were kind of like “thou shalt go write me a report” or “thou shalt go study an issue” – and, you know, those had happened already.

    40:22
    And the remaining ones – I think the most important were largely around companies that are training models over a certain size engaging in certain disclosure requirements to the United States. By and large, companies in the US are doing lots of engagement and disclosure anyways. Is it exactly what I would like? No.

    40:36
    Do I think it has exactly the teeth I would give it? No. But I’m not that sure that that much has changed, other than the symbolic value – which of course is very important, both to the administration and, I think, broadly to the world. But I don’t think removing the Biden executive order necessarily ends our ability to…

    40:51
    I would also say, you know, as of the time of this taping – most likely it will have closed by the time that this airs, but right now you can go on regulations.gov and find it – the White House Office of Science and Technology Policy is openly soliciting perspectives from the general public, academia,

    41:09
    industry, you name it, on what their AI plan should look like. And that was in response to an executive order that was signed by the President of the United States – in the first week, I want to say, at some point; I forget the exact date. And so they’re moving with speed, but they’re considering external perspectives.

    41:24
    I think you would anticipate that a lot of folks are going to
    express their perspectives on why they think that sort of, you know,
    just rushing to superintelligence is actually a fool’s errand and
    can kind of kill us all. You know, it doesn’t give you strategic
    advantage if you are the one that presses the button

    41:39
    that kills us all first. And, you know, I think that people are willing to listen to that to some degree if they can find a path to meaningfully preserve American competitive advantage and strategic advantage while addressing that, which we believe is possible. You know,

    41:53
    Whether or not that is a desirable thing is obviously a question that, you know, different countries might disagree on, but certainly American policymakers care about it a lot, and I think it’s definitely possible to do. And, you know, quite frankly, the United States has a pretty good track record

    42:07
    of backing away from building doomsday weapons, right? We stopped building biological and chemical weapons; we didn’t build a wide variety of nuclear weapons configurations that would have been really, really good at killing civilians at scale but not very good for much else, right? And indeed,

    42:27
    like Dr. Strangelove – if viewers haven’t watched it, by the way, watch it; it’s one of the greatest movies of all time. Quite literally, the core plot impetus of it was that at the time – I guess it was the late ’50s, early ’60s, when the movie was being made –

    42:41
    there were studies being done at the RAND Corporation, among others, about the deterrence value of building a doomsday weapon. What the United States concluded was: you absolutely can build a doomsday device. It is super easy to do.

    42:53
    Just build a very large nuclear weapon and put a jacket of cobalt or
    a variety of other things around it. And if you blow it up, it just
    kind of poisons the entire earth. This is doable, right? We know how
    to do it, unfortunately. And we decided that we weren’t ever going
    to build one,

    43:07
    not even as a sort of last-ditch threat to pull out when all the other threats failed. It just wasn’t something we wanted to build – both because it was really messed up to build, but also because, if you actually explore the strategic logic of it, it’s really destabilizing to even have an effort towards building one. Because the

    43:23
    obvious logical move is: well, if you’re telling me you’re building something that, if you turn it on, kills us all, I’m going to try to stop you any chance I can before you acquire that capability. Because once you have it, that’s leverage over me forevermore, right? And so we decided it’s actually destabilizing to build these.

    43:38
    I think that’s quite possible that we might also conclude that about
    certain types of artificial superintelligence as well.

    43:43
    I think that’s one proposed approach that definitely has some merit.
    Inyash, I missed it. There was a loud truck going by. What was it
    that Vance said?

    43:52
    Oh, that we don’t care about, at least the official bluster was that
    we don’t care about safety and we’re going full speed ahead on AI.
    And we’re not going to regulate any of this. I’m being a little
    hyperbolic, but that was definitely the gist of it.

    44:07
    Yeah, I mean, even if I tone that down by half, it still is
    disconcerting. So, I mean, between that and the guy who had to be
    told multiple times why you can’t just nuke a hurricane, I have to
    just remind myself that these aren’t the only two people calling the
    shots.

    44:21
    And so, like you said, they put out a call for, I guess, input.
    It's interesting that they'd solicit from the general public. I
    wouldn't want them to solicit from me, and I've been an AI
    enthusiast for like 15 years, right? I want them to ask people who
    know what they're talking about. Yeah, they're also asking them.

    44:36
    So, uh, it sounds like not all hope is lost.

    44:39
    Can I actually disagree with that, Steven? Speaking in a purely
    personal capacity, as all these comments are, I think that you
    should actually submit something.

    44:49
    Oh, well, I appreciate that.

    44:51
    I mean, ordinary citizens do this all the time. One of my current
    hobby horses is that I think we have done a really good job of
    making government in America feel really

    45:04
    impressive. We have a very good sort of national mythos around it.
    We take kids on basically a secular pilgrimage in middle school, a
    lot of them, to come visit the national monuments and walk around
    and do all that stuff, right? And as a result,

    45:15
    we make government feel really impressive and cool, and yeah, it
    is really impressive and cool. But at the end of the day, most
    Americans don't realize the power that they have to talk to their
    own government. Only on a couple of issues

    45:29
    do you see Americans at scale engaging on a topic, and as a
    result, those are topics that policymakers care a lot about
    getting right. So one example is all things relating to
    prescription drugs. As a result of that,

    45:45
    you've seen a lot of effort to really make sure that the FDA
    process takes patient advocates' concerns into account. Similarly,
    on other topics: whatever your opinions might be on firearms, it
    is certainly the case that if the NRA historically asked its base
    to provide comments on a firearms regulation issue,

    46:07
    they would, en masse. They would click on the link and they would
    just provide comments. And similarly, there are a couple of other
    issues, on both the left and the right, where that was the case.
    We do not yet see that on AI. I bet if we saw that, in a
    constructive and positive way,

    46:19
    whether that's in terms of providing comments to the Federal
    Register (that's regulations.gov; it has multiple names, it's a
    long story), whether that's providing comments directly to the
    federal government, which is your right as, actually, not even a
    citizen (any human being can do this),

    46:32
    or whether it's calling your representative in Congress or your
    senators, or whether it's writing a letter to the editor. At some
    point along the way, we somehow convinced people, and I really
    don't know how, that this is not something that's easy to do. In
    fact,

    46:47
    one of the things I've been playing around with the notion of is
    that the next time I'm out at one of the Bay Area conferences,
    like LessOnline or Manifest, which have a little bit of an
    unconference structure, I might literally do a session called "We
    Are Going to Call Your Congressman Right Now."

    47:01
    And it's literally going to be a 30-minute session where we're
    going to sit there, and I'm going to explain to you: this is the
    number that you call, the Senate switchboard or the House
    switchboard. We're going to call right now. You call it. You give
    your zip code.

    47:14
    And then it directs you to the… actually, maybe it's… I forget
    exactly how it works; usually you'd call a particular office
    directly. But you call the switchboard, it figures out who your
    correct senators or representative are, moves you straight to
    their office, and you can leave a comment right then and there.

    47:27
    If there are people in the office, they’ll talk to you. If they’re
    not, there’s a voicemail, right? And by the way, if you want a
    meeting with your congressman’s office or your senator’s office,
    like this is eminently doable. You can call them and say,

    47:38
    I would like to talk to you for 30 minutes about why I'm concerned
    about AI safety. And honestly, odds are you probably won't get to
    meet with the congressman or congresswoman or senator, but you
    probably will get to meet with one of their staffers. That is much
    easier than people realize.

    47:52
    And I really think that more people should do that. And I am
    hopeful about one thing: quite frankly, if you've got a community
    of a lot of folks who are really good at writing really long
    essays on the internet,

    48:05
    what if they just took those essays and submitted them to a little
    portal on regulations.gov, copying and pasting and maybe doing 20
    minutes of cleanup so that it's relevant to this topic? I think
    there could be a lot of value in that. And it's something that I'm
    hopeful I can encourage folks to try more.

    48:21
    You make a compelling point. And I think maybe I'm just a bit
    jaded, maybe because it's been so long since civics class. But I
    kind of forget that, yeah, there are ways to talk to people in the
    government, and it's built into the system. I did do a quick
    search.

    48:36
    It's worth noting, and we'll probably put out a notice between
    episodes about this: as of today, there are only nine days left to
    submit your feedback on the Artificial Intelligence Action Plan.
    Yep. Submissions are open through March 15th. We'll probably put
    something out for that, because I love the idea of, yeah, just, I
    mean, you know,

    48:53
    someone submit Yudkowsky's TIME essay. Someone grab Zvi's top
    three blog posts. Or, I mean, I guess the downside is that we
    inundate them with everything Zvi wrote. No one will ever finish
    reading it, but it's all worth reading, right? I mean…

    49:07
    I would actually not be surprised if like the usual way this tends
    to work is that there’s someone whose job is to literally categorize
    every single input that is received. And there’s like a giant
    spreadsheet that’s like, okay, you know, Bob Smith wrote us a letter
    that was three pages long about his personal feelings about AI.

    49:24
    How do we categorize them? They have some sort of categorization
    scheme. And then there’s like, oh, you know, such and such XYZ
    nonprofit sent us a thing. What are the recommendations? How do they
    fit in sort of various buckets? And it’s just like a giant
    spreadsheet exercise of like categorizing all this stuff.

    49:39
    And then they look for patterns. And so, frankly, I would
    encourage folks to feel comfortable speaking in their own voices.
    But if you're like, hey, I'm Steven, I've cared about this for a
    long time, and I really think this essay sums up my feelings
    better than I ever could,

    49:53
    I really would appreciate it if you took a look at this essay,
    here's a copy pasted in, or whatever. Here you go. I think that's
    a very reasonable thing. And indeed, if you look on the Federal
    Register, you can find examples of this pretty easily.

    50:06
    Well, you've convinced me. I'm going to do it. That's awesome. I
    appreciate the encouragement.

    50:09
    Great.

    50:10
    Yeah. I heard from someone who works in the federal government that,
    yeah, since they do have to look at all the public comments and
    categorize them in one way or another, like, first of all, I guess,
    you know, adding your own little voice is kind of nice.

    50:21
    It's like, tally one more into the "really concerned about AI
    existential risk" column, which is good. But apparently the
    government spends literally tens of millions of dollars a year on
    people to read all these things and then put them in these
    spreadsheets and categorize them. And he pointed out that,

    50:38
    we may have the tech right now for an LLM to be able to do this for
    much cheaper. So if someone wants to get that running and sell it to
    the government for a cost that is less than tens of millions of
    dollars a year, you could get a very viable business out of that.
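
    (For the curious, here's a minimal sketch of what that kind of LLM
    comment triage might look like, assuming the OpenAI Python SDK and
    an API key in the environment. The model name, category buckets,
    and sample comments are illustrative placeholders, not anything an
    agency actually uses.)

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # Illustrative buckets; a real triage scheme would come from the agency.
        CATEGORIES = [
            "supports stronger AI safety regulation",
            "opposes new AI regulation",
            "industry position paper",
            "personal story or general concern",
            "off-topic",
        ]

        def categorize_comment(text: str) -> str:
            """Ask the model to file one public comment into exactly one bucket."""
            prompt = (
                "Categorize the following public comment into exactly one of these "
                f"categories: {'; '.join(CATEGORIES)}. Reply with the category name "
                f"only.\n\nComment:\n{text}"
            )
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder; any capable chat model works
                messages=[{"role": "user", "content": prompt}],
                temperature=0,  # keep labels as consistent as possible
            )
            return response.choices[0].message.content.strip()

        # The "giant spreadsheet" step: tally each comment's bucket.
        comments = [
            "I've followed AI for 15 years and urge you to prioritize safety.",
            "Please don't slow down American AI companies with red tape.",
        ]
        tally: dict[str, int] = {}  # Python 3.9+ annotation
        for c in comments:
            label = categorize_comment(c)
            tally[label] = tally.get(label, 0) + 1
        print(tally)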

    50:51
    Yeah, and I wouldn't be shocked, by the way, particularly in this
    new administration, if they're also doing that as well. Like, if
    they sort of say, hey, LLM, look at this entire bucket of files
    and tell me common themes. Wouldn't shock me at all.

    51:04
    I also suspect that they will literally have humans looking at it
    nonetheless, not only because that's the tradition, but because
    regulators, even if they disagree with you, have a series of
    pretty strong legal incentives here. It's not worth getting into
    the weeds, but there's something called the Administrative
    Procedure Act,

    51:21
    and there's a bunch of other stuff that is related to it or takes
    inspiration from it. The net result of that is that even if I am,
    you know, an evil regulator who really wants to do the exact
    opposite of whatever Dave Kasten wants to have passed on some
    issue,

    51:34
    I want to demonstrate very clearly that I read Dave Kasten's
    comment, because it bulletproofs the likelihood that I will be
    able to actually defend what I'm doing in some future court
    proceeding. But in order to demonstrate that I read it, I have to
    write down somewhere a good argument, probably,

    51:52
    if I'm thinking about this in a sharp way, as to why Dave's
    argument doesn't apply. And also, if I can take pieces of Dave's
    argument that I'm excited by and apply them in one way or another,
    or track some sort of path dependency, I have an incentive to do
    so,

    52:06
    to modify my draft proposal to look more like the submissions I
    got from citizens, in order to make it more likely to be able
    to… right? And we sort of created these incentives on purpose, in
    order to make it so that, you know,

    52:20
    at least the theory of the case was that we'd have a government
    that's responsive to its citizens. Whether or not you think that
    succeeded is a very complicated thing that many law review
    articles have been written on. But the net result is, exactly to
    your point a moment ago, that

    52:31
    tens of millions of dollars of staff time are being spent on
    reading all these submissions. And they range from a one-page
    essay to a 30-page position paper to everything in between.

    52:41
    I was surprised by how… Okay, so this is something I… I’m
    hemming and hawing a lot because I don’t want to reveal a lot of
    personal details here, but… I know someone in the Bay who
    contributed the maximum amount you can contribute to

    52:55
    a campaign as a private citizen, to a lot of senators and reps who
    were doing AI things. And the maximum amount is in the low
    thousands, like, yeah. And just… that caught the attention of some
    process, someone, somewhere: this private citizen was making the
    maximum contribution to specific, you know,

    53:16
    a dozen or so people. And somebody reached out, contacted him, was
    like, what's going on here? And he has written extensively about
    AI things before. And due to what turns out to be an investment of
    just several tens of thousands of dollars… he has met with
    literally the people directly below senators.

    53:33
    And I think in one case actually had a lunch or something with a
    sitting senator, which kind of blew my mind that just doing targeted
    maximum contributions like that as a private citizen can get your
    voice heard. Yep.

    53:45
    I think that's right. Certainly, again, speaking purely in a
    personal capacity, I think it is worth considering whether you
    want to donate to particular candidates that you think might
    express your views on any issue. I do want to emphasize that while
    that is certainly helpful for some people,

    54:02
    and there's a reason that people do it, it is very possible to
    have influence with an elected official having never spent a dime,
    right? And part of this is that, at the end of the day, dollars
    are a proxy for votes, right?

    54:16
    So if they have a good reason to believe that talking to you gets
    them votes, on some level they kind of don't fully care whether or
    not it gets them dollars, right? Dollars are just the thing that
    helps them engage in, you know, under the rules we've set up, the

    54:33
    competition for people's votes. And so, certainly a worthwhile
    thing to consider, and anyone who is considering policy influence
    on any issue, I'm sure, would consider it. But I don't want to
    leave listeners with the impression that that is… Though
    certainly, if you go on the,

    54:51
    there are various disclosure sites for this right now, you can
    pr…
