I'm a rising sophomore at the University of Chicago studying philosophy and economics/math. I'm largely interested in formal epistemology, metaethics, formal ethics, and decision theory, with minor interests in a few other areas -- I think LessWrong ideas are heavily underrated in academic philosophy, though I have some contentions. I also have a blog where I post about philosophy (and sometimes other things): https://substack.com/@irrationalitycommunity?utm_source=user-menu.
Thanks! Honestly, I think this kind of project deserves much more appreciation and should be done more often by people who are very confident in their positions and would like to steelman the other side. I also often hear people who are very confident in their beliefs yet truly have no idea what the best counterarguments are -- maybe this is uncommon, but I went to an in-person rationalist meetup just last week, and the people there were really confident but hadn't heard a bunch of these counterarguments, which I thought was not at all in the LessWrong spirit. That interaction was one of my inspirations for the post.
I think I agree, but I'm having a bit of trouble understanding how you would evaluate arguments so much differently than I do now. I would say my method is pretty different from that of Twitter debates (in many ways, I'm very sympathetic to and influenced by the LessWrong approach). I could have made a list of cruxes for each argument, but I didn't want the post to be too long -- far fewer people would read it, which is why I recommended right at the beginning that people first get a grasp on the general arguments for AI being an existential risk. (Adding a credence or range, I think, is pretty silly given that people should be able to assign their own, and I'm just some random undergrad on the internet.)
Different but related point:
I think I largely agree with you on many of the things you've said; I just appreciate the outside view more. A modest epistemology of sorts. Even if I don't find an argument super compelling, if a bunch of people I think are pretty smart do (Yann LeCun has done some groundbreaking work in AI, so that seems like a reason to take him seriously), I'm still gonna write about it. This is another reason why I didn't put credences on these arguments -- let the people decide!
I think Eliezer briefly responds to this in his podcast with Dwarkesh Patel -- "satisfactorily" is pretty subjective. https://youtu.be/41SUp-TRVlg?si=hE3gcWxjDtl1-j14
At about 24:40.
Thanks for the comment; I appreciate the response! One thing: I would say that, generally, people should avoid assuming bad motives or bad epistemics (i.e., motivated reasoning) unless it's pretty obvious (which I don't think is the case here) and can be resolved by pointing it out. Doing so usually doesn't help any party get closer to the truth, and if anything, it creates bad faith among people, leading them to actually develop other motives (which is bad, I think).
I also would be interested in what you think of my response to the argument that the commenter made.
I laughed at your first line, so thank you for that lol. I would love to hear more about why you prefer to collect models over arguments, because I don't think I intuitively get why this would be better -- to be fair, I probably haven't spent enough time thinking about it. Any references you like with arguments for this would be super helpful!
Thanks for the helpful feedback, though!
Tbh, I don’t think what I think is actually so important. The project was mainly to take arguments and compile them in the way I thought was most convincing. These arguments have various degrees of validity in my mind, but I don’t know how much saying so actually matters.
Also, and this is definitely not your fault for not catching this, I write "tell me why I'm wrong" at the end of every blog post, so it was not a statement of endorsement. My previous blog post is entitled "Against Utilitarianism," but I would largely consider myself a utilitarian (as I write there).
Also, I can think the best arguments for a given position are still pretty bad.
I much appreciate the constructive criticism, however.
Good point!
Good call. Thanks!
People: “Ah, yes. We should trust OpenAI with AGI.” OpenAI: https://www.nytimes.com/2024/07/04/technology/openai-hack.html “But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the F.B.I. or anyone else in law enforcement.”
Tyler Cowen often has really good takes (even some good stuff against AI as an x-risk!), but this was not one of them: https://marginalrevolution.com/marginalrevolution/2024/10/a-funny-feature-of-the-ai-doomster-argument.html
Title: A funny feature of the AI doomster argument
If you ask them whether they are short the market, many will say there is no way to short the apocalypse. But of course you can benefit from pending signs of deterioration in advance. At the very least, you can short some markets, or go long volatility, and then send those profits to Somalia to mitigate suffering for a few years before the whole world ends.
Still, in a recent informal debate at the wonderful Roots of Progress conference in Berkeley, many of the doomsters insisted to me that “the end” will come as a complete surprise, given the (supposed) deceptive abilities of AGI.
But note what they are saying. If markets will not fall at least partially in advance, they are saying the passage of time, and the events along the way, will not persuade anyone. They are saying that further contemplation of their arguments will not persuade any marginal investors, whether directly or indirectly. They are predicting that their own ideas will not spread any further.
I take those as signs of a pretty weak argument. “It will never get more persuasive than it is right now!” “There’s only so much evidence for my argument, and never any more!” Of course, by now most intelligent North Americans with an interest in these issues have heard these arguments and they are most decidedly not persuaded.
There is also a funny epistemic angle here. If the next say twenty years of evidence and argumentation are not going to persuade anyone else at the margin, why should you be holding this view right now? What is it that you know, that is so resistant to spread and persuasion over the course of the next twenty years?
I would say that to ask such questions is to answer them.