All of Adam David Long's Comments + Replies

Thanks for this comment. I totally agree and I think I need to make this clearer. I mentioned elsewhere ITT that I realized I need to do a better job distinguishing between "positions" and "people," especially when the debate gets heated.

These are tough issues with a lot of uncertainty, and I think smart people are wrestling with them.

If I can risk the "personal blog" tag: I showed an earlier draft of the essay that became this post to a friend who is very smart but doesn't follow these issues very closely. He put me on the spot by asking me "o... (read more)

Thanks. I think this is useful and I'm trying to think through who is in the upper left-hand corner. Are there "AI researchers" or, more broadly, people who are part of the public conversation who believe (1) AI isn't moving all that fast towards AGI and (2) it's not that risky?

I guess my initial reaction is that people in the upper left-hand corner just generally think "AI is kind of not that big a deal" and that there are other societal problems to worry about. Does that sound right? Any thoughts on who should be placed in the upper left?

Stefan_Schubert
Yeah, I think so. But since those people generally find AI less important (there's both less of an upside and less of a downside), they generally participate less in the debate. Hence there's a bit of a selection effect hiding those people. There are some people who arguably are in that corner who do participate in the debate, though - e.g. Robin Hanson. (He thinks some sort of AI will eventually be enormously important, but that the near-term effects, while significant, will not be at the level people on the right side think.)

Looking at the 2x2 I posted, I wonder if you could call the lower left corner something relating to "non-existential risks". That seems to capture their views. It might be hard to come up with a catchy term, though. The upper left corner could maybe be called "sceptics".

I was not aware of the Collective Intelligence Project and am glad to learn about them. I'll take a look. Thanks.

I'm very eager to find other "three-sided" frameworks that this one might map onto. 

Maybe OT, but I have also been reading Arnold Kling's "Three Languages of Politics"; so far, though, I've been having trouble mapping Kling's framework onto this one.

Thanks. To be honest, I am still wrestling with the right term to use for this group. I came up with "realist" and "pragmatist" as the "least bad" options after searching for a term that meets the following criteria:

  1. short, ideally one word
  2. conveys the idea of prioritizing (a) current or near-term harms over (b) far-term consequences
  3. minimizes the risk that someone would be offended if the label were applied to them

I also tried playing around with an acronym like SAFEr for "Skeptical, Accountable, Fair, Ethical" but couldn't figure out an acronym that I liked... (read more)

Stefan_Schubert
Not exactly what you're asking for, but maybe a 2x2 could be food for thought. 

Thanks for that feedback. Perhaps this is another example of the tradeoffs in deciding "how many clusters are there in this group?" I'm kind of thinking of this as a way to explain a basic idea of what is going on to, e.g., smart friends and family members. For that purpose I tend, I guess, to lean in favor of fewer rather than more groups, but of course there is always a danger of oversimplifying.

I think I may also need to do a better job distinguishing between describing positions vs describing people. Most of the people thinking and writing ... (read more)

lewis smith
Any post along the lines of yours needs a 'political compass' diagram lol. I mean, it's hard to say what Altman would think in your hypothetical debate: assuming he has reasonable freedom of action at OpenAI, his revealed preference seems to be to devote <= 20% of the resources available to his org to 'the alignment problem'. If he wanted to assign more resources to 'solving alignment' he could probably do so. I think Altman thinks he's basically doing the right thing in terms of risk levels. Maybe that's a naive analysis, but I think it's probably reasonable to take him more or less at face value.

I also think it's worth saying that easily the most confusing argument for the general public is exactly the Anthropic/OpenAI argument that 'AI is really risky but also we should build it really fast'. I think you can steelman this argument more than I've done here, and many smart people do, but there's no denying it sounds pretty weird, and I think it's why many people struggle to take it at face value when people like Altman talk about x-risk - it just sounds really insane! In contrast, while people often think it's really difficult and technical, I think Yudkowsky's basic argument (building stuff smarter than you seems dangerous) is pretty easy for normal people to get, and many people agree with the general 'big tech bad' takes that the 'realists' like to make.

I think a lot of boosters who are skeptical of AI risk basically think 'AI risk is a load of horseshit' for various not always very consistent reasons. It's hard to overstate how much 'don't anthropomorphise' and 'thinking about AGI is distracting silliness by people who just want to sit around and talk all day' are frequently baked deep into the souls of ML veterans like LeCun. But I think people who would argue no to your proposed alignment debate would, for example, probably strongly disagree that 'the alignment problem' is even a coherent thing to be solved.

Yes, this has been very much on my mind: if this three-sided framework is useful/valid, what does it mean for the possibility of the different groups cooperating?

I suspect that the depressing answer is that cooperation will be a big challenge and may not happen at all, especially on questions such as "Is the European AI Act in its present form a good start or a dangerous waste of time?" It strikes me that each of the three groups in the framework will have very strong feelings on this question:

  • realists: yes, because, even if it is not perfect, it is at l
... (read more)

Yes, agreed. Indeed, one of the things that motivated me to propose this three-sided framework is watching discussions of the following form:
1. A & B both state that they believe that AI poses real risks that the public doesn't understand. 

2. A takes (what I now call) the "doomer" position that existential risk is serious and all other risks pale in comparison: "we are heading toward an iceberg and so it is pointless to talk about injustices on the ship re: third class vs first class passengers"

3. B takes (what I now call) the "realist" or "pragmati... (read more)