After reading this article and the Scott/Weyl exchanges, I'm left with the impression that one side is saying: "We should be building bicycles for the mind, not trying to replace human intellect." And the other side is trying to point out: "There are no firm criteria by which we can label a given piece of technology a bicycle for the mind versus a replacement for human intellect."
Perhaps uncharitably, it seems like Weyl is saying to us, "See, what you should be doing is working on bicycles for the mind, like this complicated mechanism design thing that I've made." And Scott is sort of saying, "By what measure are you entitled to describe that particular complicated piece of gadgetry as a bicycle for the mind, while I am not allowed to call some sort of sci-fi exocortical AI assistant a bicycle for the mind?" And then Weyl, instead of really attempting to provide that distinction, simply lists a bunch of names of other people who had strong opinions about bicycles.
Parenthetically, I'm reminded of the idea from the Dune saga that it wasn't just AI that was eliminated in the Butlerian Jihad, but rather, the enemy was considered to be the "machine attitude" itself. That is, the attitude that we should even be trying to reduce human labor through automation. The result of this process is a universe locked in feudal stagnation and tyranny for thousands of years. To this day I'm not sure if Herbert intended us to agree that the Butlerian Jihad was a good idea, or to notice that his universe of Mentats and Guild Navigators was also a nightmare dystopia. In any case, the Dune universe has lasguns, spaceships, and personal shields, but no bicycles that I can recall.
1. Do you see no discernible or meaningful distinction between the operating philosophy of these pairs: Audrey Tang v. Eliezer Yudkowsky
Let's take Audrey Tang, Eliezer Yudkowsky, Glen Weyl and Jaron Lanier.
Audrey and Eliezer both created open spaces for web discussion where people can vote (Eliezer did two such projects), while Glen Weyl decided against doing so for the Radical Markets community, and Jaron Lanier is against online communities to the point of speaking against Wikipedia and the open source movement.
You can make an argument that web discussions where people vote are cybernetic in nature.
The thing that distinguishes Eliezer the most from the others is that Eliezer is a successful fiction author while the others aren't. You can discuss the merits of writing fiction to influence people, but it seems to me to qualify as choosing humanistic ways of achieving goals.
In terms of augmenting human intellect, the entirety of the rationalist community represents an attempt at augmenting human intellect, even if the "bicycles" it creates are more along the lines of concepts, habits, principles, methods, etc.
That sounds to me like a strawman. We are having this discussion on LessWrong, which is a bicycle in the Engelbartian sense. We aren't having this discussion on Facebook or on Reddit, but on software that's designed by the rationalist community to facilitate debate.
The rationalist community does not limit itself to concepts, habits, principles, and methods. I would expect that's much more likely true of Weyl's RadicalxChange.
Jaron Lanier is against online communities to the point of speaking against Wikipedia and the open source movement.
Should have mentioned the first time — Jaron is critical of, but not against, Wikipedia or open source.
I’ve been a Wikipedia editor since 2007, and admit that most of his criticisms are valid. Anyone who has, say, over 1,000 edits on Wikipedia either knows that it sucks to be a Wikipedia editor, or hides how much it sucks because they’re hoping to be an admin at some point and don’t complain about it outside their own heads… in which case they don’t really add new content to articles; they argue about content in articles and modify what other people have already contributed.
His points on open source don’t seem to be any different from what people said in Revolution OS back in 2001: open source and proprietary software can co-exist. E.g., Bruce Perens said his only difference with Richard Stallman was the thought "that free software and non-free software should coexist." https://www.youtube.com/watch?v=4vW62KqKJ5A#t=49m39s
That sounds to me like a strawman. We are having this discussion on LessWrong, which is a bicycle in the Engelbartian sense. We aren't having this discussion on Facebook or on Reddit, but on software that's designed by the rationalist community to facilitate debate.
That's a fair point, I should have included the software that's been written as part of the rationalist movement.
Thank you very much for taking the time to write this. Scott Alexander and Glen Weyl are two of my intellectual heroes; they've both done a lot for my thinking in economics, coordination, and just how to go about a dialectic intellectual life in general.
So I was also dismayed (to an extent I honestly found surprising) when they couldn't seem to find a good-faith, generative dialogue. If these two can't, then what hope is there for the average Red vs Blue tribe member?
This post gave me a lot of context though, so thanks again 😊
[EDIT: I've edited this post based on feedback. The original is on archive.org.]
I've gazed long into the exchanges between Jaron Lanier and Eliezer Yudkowsky, then longer into those between Glen Weyl and Scott Alexander. The two exchanges rhyme in a real but catawampus kind of way. It's reminiscent of an E.E. Cummings poem: if you puzzle over it long enough, you'll see it. Or your English teacher explains it to you.
(Understanding "r-p-o-p-h-e-s-s-a-g-r" is not required for the rest of this post, or even really worth your time. But a hint if you want to try: the line preceding the date is something like a pre-ASCII attempt at ASCII art.)
Anyway, for the duration of this post, I will be your hip substitute English teacher. Like hip substitutes, instead of following the boring lesson plan your real teacher left, we're going to talk about Freud, bicycles and a computer demo from 1968.
Who are Jaron Lanier and Glen Weyl?
Jaron Lanier has written several books, most recently Ten Arguments for Deleting Your Social Media Accounts Right Now, which distills many of the more salient points from his previous books You Are Not a Gadget, Who Owns the Future and Dawn of the New Everything. Eric Weinstein says there are "four stages of Jaron Lanier."
If this is your first encounter with Jaron and he sounds crazy, you're not alone, but maybe give him a second, third and fourth look. Glen works with Jaron closely, and I'm familiar with Glen's work where it overlaps with Jaron's.
(Unrelated, but perhaps interesting to other neurodivergents who share personality traits with me—I strongly suspect that if I saw Jaron's Big Five test results, they would look a lot like mine. Part of my interest in him is to see how someone high in openness and low in conscientiousness—he may have had the most overdue book contract ever in American publishing, for example—succeeds in doing things. FWIW, two strategies he uses are cross-procrastination and the hamster wheel of pain. I wish he would share more of these.)
The easy rhyme: mentor-protégé lineage
In 2008, Eliezer Yudkowsky and Jaron Lanier had a conversation. Then, in 2021, their protégés Scott Alexander and E. Glen Weyl had their own exchange (see Glen's initial essay, Scott's response to Glen and Glen's response to Scott).
In their joint appearance at the San Francisco Blockchain Summit, Glen adulates Jaron as "one of his heroes." I suspect Glen and Jaron first crossed paths while working at Microsoft Research. Glen and Jaron also frequently write articles together; see AI is an Ideology, Not a Technology and A Blueprint for a Better Digital Society as examples.
Scott describes his early relationship with Eliezer with the term "embarrassing fanboy." In The Ideology Is Not The Movement Scott facetiously describes Eliezer as the rightful caliph of the rationalist movement. I know it's a joke, but still.
Humanisms
In his 2010 book, You Are Not a Gadget, Jaron describes the distinction between these two groups of people well, if with uncharitable terminology. Jaron uses the terms "cybernetic totalists" and "digital Maoists" to describe technologists who lack a certain kind of humanism.
(This is a long excerpt, but it's worth it.)
Remember the names from the fifth paragraph quoted above; they'll come up again in a second.
Jaron and Eliezer in 2008
Jaron and Eliezer weren't talking about technocracy in 2008, but they found themselves getting to a base case related to humanism through a discussion on consciousness. Below is an excerpt starting around 26:01. (As a side note, if you're keeping track of cultural changes, this discussion was so much friendlier than its 2021 counterpart.)
Glen and Scott in 2021
I'm not going to quote from either of their essay responses; where it really gets revealing is in the comments.
Comment 1156922 - Glen responding to Scott (only most relevant bit quoted, wiki link is mine):
Comment 1159339 - Scott responding to Glen (again removing some of the more irrelevant parts):
Comment 1162634 - Glen responds to Scott:
Comment 1164478 - Another commenter, jowymax, interjects (quoted in part):
Comment 1164942 - Glen responds to jowymax:
The hard rhyme: philosophical camps taking shape
Pointing out the obvious: the names Glen mentions in these comments look familiar from the 2008 conversation.
Glen's 2021 list of humanist design influences: John Dewey, Paul Milgrom, Douglas Engelbart, Norbert Wiener, Alan Kay, Don Norman, Terry Winograd, Jaron Lanier, Henry George, Danielle Allen, Vitalik Buterin and Audrey Tang.
Jaron's 2010 list of humanist design influences: Joseph Weizenbaum, Ted Nelson, Terry Winograd, Alan Kay, Bill Buxton, Doug Engelbart, Brian Cantwell Smith, Henry Fuchs, Ken Perlin, Ben Shneiderman, Andy van Dam, Randy Pausch and David Gelernter.
The intersection of the two lists includes: Douglas (Doug) Engelbart, Alan Kay, Terry Winograd and Jaron Lanier (assuming Jaron would have put himself on his own list).
People outside this set (I suppose we could call them cybernetic totalists, for lack of a more charitable term) seem to include Eric Horvitz, Marvin Minsky, Sam Altman and... Eliezer?
You can see Glen and Jaron saying that Scott and Eliezer, respectively, are lacking in a kind of humanism and/or a human-centered approach to design. Both Scott and Eliezer respond, effectively, by saying that they're humanists. (In Scott's case, he doesn't quite use the word, but I believe he implies it with statements like "finding ways to blend rationality with honesty and compassion.")
It also seems clear to me that neither side has a good way of defining this distinction, although both sides seem to recognize the need for one (as in comment 1162634 explicitly).
So what is the formal distinction between cybernetic totalists and the people who are not that?
These are likely not the cybernetic totalists you're looking for
I'm going to use the earliest computer-related name that appears on both Glen and Jaron's lists, Doug Engelbart, and some more specific distinctions made in one of Jaron's early essays to draw what I believe is a fair boundary separating cybernetic totalists from everyone else. This isn't science; this is just me having spent a lot of time reading and digesting all of this. Interested parties will probably disagree to an extent, and I'm happy about that so long as I've improved the overall quality of the disagreement.
At one point in time, Jaron attempted to formalize what a cybernetic totalist was, but to get there we'll have to go... all the way back to the year 2000.
In the year 2000...
In the year 2000... Jaron wrote "One Half A Manifesto" and listed beliefs cybernetic totalists were likely to hold. (If you like reading comments, don't skip the ones on edge.org.) I'm thinking of this along the lines of DSM criteria, in the sense that some beliefs are evidence in favor of a "diagnosis" while others are more salient when it comes to "diagnostic criteria." The following three sections contain the concepts I believe are most relevant from his essay, and what I think is a useful way of stating what Glen and Jaron mean by humanism.
Campus imperialism
I'm not sure if Jaron coined this term or not but—purely linguistically speaking—I love it. It's so parsimonious I'm not sure why Glen didn't use it in his original essay on technocracy or any of the follow-ups, because it elegantly describes a common failure mode of the technocratic mindset. This is especially true if you broaden the term "campus" from the first association many of us have (universities) to include, say, corporate campuses, government agency campuses, think-tank campuses, etc. Jaron uses this term twice.
The first appearance:
The second appearance:
The Singularity (or something like it) and autonomous machines
The other important bits are the "belief" in a coming Singularity and an affinity for autonomous machines. Jaron uses "autonomous machines" in this essay as an umbrella term for a category of technology that includes AI.
More on autonomous machines:
Even more on autonomous machines:
Engelbartian humanism (bicycles for the mind)
Let me give you a little background on Doug Engelbart, if you're not familiar. He's perhaps best known for The Mother of All Demos. If you haven't seen it, you may want to stop now and give it a few minutes of your time. It's a demo of the oN-Line System (NLS) that's like... a real desktop-like computer in 1968. I mean, it's not punch cards; you're watching this and you're like, "oh, this is a lot of what my computer does now, but this was in 1968!" NLS was developed by Engelbart's team at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI).
I had thought for most of my adult life that Xerox PARC invented the concept of a GUI with all of the rudimentary accoutrements, and that at some point Apple and Microsoft stole their ideas. It's more like the GUI we use now was invented at SRI, and then a bunch of people who worked at SRI went to Xerox PARC.
I'll call what Glen and Jaron mean by humanism Engelbartian humanism: humanism in the sense of Engelbart's goal of "augmenting human intellect," with the constraint that augmenting human intellect does not mean replacing it. Steve Jobs rephrased and punched up this concept, likening computers to "bicycles for the mind."
Chart from Scientific American, 1973. I don't see condors, but it more or less makes the same point.
At the time, that was probably more of a descriptive statement, but I suspect in 2021 Glen and Jaron would see it as more normative.
Proposed cybernetic totalist diagnostic criteria
I propose four criteria, three as positive indicators and one as a negative. If the score is 2 or above, you are diagnosed with cybernetic totalism!
Scoring Scott and Eliezer on the first criterion
I know (1) isn't true in Scott's case and strongly suspect it's no longer true in Eliezer's. A particularly apt example I recall is from Scott's post "What Intellectual Progress Did I Make In The 2010s?"
(I'll even go as far as to say Scott's take here in principle sounds like Jaron's take on Belief #5 in "One Half A Manifesto." I've already quoted too much, but it's worth reading.)
I'm less familiar with Eliezer, but judging by these two articles, the Singularity section on yudkowsky.net and his 2016 interview with Scientific American, I would say no, Eliezer is not a Singularity ideologue.
Scoring Scott and Eliezer on the remaining criteria
On the question of campus imperialism... I'd invite everyone to judge based on their bodies of work. Do you see them dismissing the rest of the culture, or carefully trying to understand it? Do they think of themselves as better judges of reality than others based on their proficiency with technology? Would they be willing to impose their judgement of better realities on others without sufficient consent? I would answer no to these questions, but of course I'm biased.

In terms of working on AI 50% or more of the time, Eliezer gets a 1 and Scott gets a 0. In terms of augmenting human intellect, the entirety of the rationalist community represents an attempt at augmenting human intellect, even if the "bicycles" it creates are more along the lines of concepts, habits, principles, methods, etc. On the above scale, I would score criteria 1, 2, 3 and 4 as follows (a code sketch of this scoring follows the score lists below):
Going easy on Scott and Eliezer (i.e. negligible campus imperialism)
Scott: 0, 0, 0, -1. Total score: -1
Eliezer: 0, 0, 1, -1. Total score: 0
Going hard on Scott and Eliezer (i.e. sufficient campus imperialism)
Scott: 0, 1, 0, -1. Total score: 0
Eliezer: 0, 1, 1, -1. Total score: 1
In either case, they're both below 2.
For reference here, I would give Ray Kurzweil, for example, a 3. But I may have some bias there as well.
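To make the rubric concrete, here's a minimal sketch of the scoring in Python. The criterion labels and the 0/1-marks-times-weights encoding are my own reconstruction from the discussion above, so treat them as assumptions; only the threshold of 2 and the example totals come from the scores themselves.

```python
# A sketch of the proposed diagnostic rubric, not a formal instrument.
# The criterion labels below are reconstructed from the surrounding text.
CRITERIA = [
    ("Singularity ideologue", +1),
    ("campus imperialism", +1),
    ("works on AI >= 50% of the time", +1),
    ("works on augmenting human intellect", -1),  # the lone negative indicator
]

def total_score(marks):
    """Sum 0/1 marks against each criterion's +1/-1 weight."""
    return sum(mark * weight for mark, (_, weight) in zip(marks, CRITERIA))

def is_cybernetic_totalist(marks, threshold=2):
    """A total score of 2 or above yields the 'diagnosis.'"""
    return total_score(marks) >= threshold

# Reproducing the "going hard" scores above:
print(total_score([0, 1, 0, 1]))             # Scott: 0
print(total_score([0, 1, 1, 1]))             # Eliezer: 1
print(is_cybernetic_totalist([1, 1, 1, 0]))  # Kurzweil at 3 -> True
```

The arithmetic is obviously trivial; the hard and contestable part is assigning the 0/1 marks in the first place.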
Conclusion
There are obvious humanistic intentions driving work on AI, and especially AI alignment. As Eliezer mentions around 49:09, we want AI to do things like cure AIDS and solve aging, but more importantly we want to make sure a hypothetical AGI, at a baseline, doesn't annihilate us as a species. It's easy for me to see the humanist intentions there, even if reasonable people can disagree on the inevitability of AGI.
It also seems obvious to me that we should avoid getting into a campus imperialist mindset. I similarly see how it's easy to get excited about building cool technology and forget to ask, "is this more like a bicycle, or is this more like a Las Vegas gambling machine?"
I don't fault Glen and Jaron for being a bit cranky and bolshy about pushing back. The business models of Facebook and Twitter don't align with ethical humanistic design practices, if that latter phrase is to mean anything. I do, however, see Glen and Jaron's energy as being misdirected at Scott, Eliezer, AI alignment and effective altruism.