Thanks. I think this is useful, and I'm trying to think through who belongs in the upper-left corner. Are there "AI researchers" or, more broadly, people who are part of the public conversation who believe (1) AI isn't moving all that fast toward AGI and (2) that it's not that risky?
My initial reaction is that people in the upper-left corner generally think "AI is kind of not that big a deal" and that there are other societal problems to worry about. Does that sound right? Any thoughts on who should be placed in the upper left?
Was not aware of this Collective Intelligence Project and glad to learn about them. I'll take a look. Thanks.
I'm very eager to find other "three-sided" frameworks that this one might map onto.
Maybe OT, but I have also been reading Arnold Kling's "Three Languages of Politics," though so far I've had trouble mapping Kling's framework onto this one.
Thanks. To be honest, I am still wrestling with the right term to use for this group. I came up with "realist" and "pragmatist" as the "least bad" options after searching for a term that meets the following criteria:
I also tried playing around with an acronym like SAFEr for "Skeptical, Accountable, Fair, Ethical" but couldn't figure out an acronym that I liked...
Thanks for that feedback. Perhaps this is another example of the tradeoffs in the "how many clusters are there in this group?" decision. I'm thinking of this as a way to explain a basic idea of what is going on to, e.g., smart friends and family members. For that purpose I lean in favor of fewer rather than more groups, though of course there is always a danger of oversimplifying.
I think I may also need to do a better job distinguishing between describing positions vs describing people. Most of the people thinking and writing ...
Yes, this has been very much on my mind: if this three-sided framework is useful/valid, what does it mean for the possibility of the different groups cooperating?
I suspect the depressing answer is that cooperation will be a big challenge and may not happen at all, especially on questions such as "is the European AI Act in its present form a good start or a dangerous waste of time?" It strikes me that each of the three groups in the framework will have very strong feelings on that question.
Yes agreed. Indeed one of the things that motivated me to propose this three-sided framework is watching discussions of the following form:
1. A and B both state that they believe AI poses real risks that the public doesn't understand.
2. A takes (what I now call) the "doomer" position that existential risk is serious and all other risks pale in comparison: "we are heading toward an iceberg and so it is pointless to talk about injustices on the ship re: third class vs first class passengers"
3. B takes (what I now call) the "realist" or "pragmati...
Thanks for this comment. I totally agree, and I think I need to make this clearer. As I mentioned elsewhere ITT, I've realized I need to do a better job distinguishing between "positions" and "people," especially when debate gets heated.
These are tough issues with a lot of uncertainty, and I think smart people are still wrestling with them.
If I can risk the "personal blog" tag: I showed an earlier draft of the essay that became this post to a friend who is very smart but doesn't follow these issues very closely. He put me on the spot by asking me "o...