JonahSinick comments on Norbert Wiener on automation and unemployment - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The impression that I got is that he had a pretty short time scale in mind (to the point that he was working with labor unions at the time). One could argue either that he believed AI would develop faster than it has, or that he thought that networking with labor unions in the present would be useful for preventing problems 50+ years down the road.
I think that the extent to which the human brain has become replaceable is debatable. A lot of tasks that people at the time would have characterized as cognitively demanding have now been automated (e.g. getting and understanding travel directions used to be more cognitively demanding than it is now, and supermarkets have automated checkout stations).
AFAICT, all he did was write a few letters to two or three union officials, alerting them to this issue. I don't think that really counts as "networking". I also wasn't able to find any place where Wiener gave a specific time scale, but if he did expect a short time scale, I think his error was definitely in expecting that AI would develop faster than it has, rather than in economic theory. If we assume the existence of AIs that are as capable as any human of average intelligence, and can be operated at a cost less than a human's minimum wage (or subsistence wage in the absence of minimum wage laws), then clearly there would be a great deal of unemployment, and "equilibrating influences" aren't going to help. I think the following quotes show that this is what Wiener had in mind:
My impressions are primarily from Some Notes on Wiener’s Concerns about the Social Impact of Cybernetics, the Effects of Automation on Labor, and “the Human Use of Human Beings” (though I did spend some time looking at other sources). Do you think that other sources give a different impression?
This is partially a semantic issue (what do we mean by "networking"?). One of the quotations that I pasted from the article above is
which gives the impression of urgency (a sense that he viewed it as a high priority and time-sensitive issue).
Capable of what? Some tasks that previously required labor of humans of average intelligence have been automated, and others haven't been automated. There's still an abundance of jobs for people of average intelligence that pay above-minimum wage.
The first paragraph that you quote gives the impression that he may have (mistakenly) thought that humans were on the brink of developing robots sufficiently sophisticated to replace physical labor. But robotics alone doesn't suffice to replace all desired labor that humans of average intelligence are capable of.
I was reading Wiener's own writings, here and here.
Wiener's own writings do not seem to give such an impression of urgency, and I note that he didn't do anything beyond contacting a few union leaders, such as lobbying directly to politicians. Here's how he described his contact with union leaders:
Quoting you again:
Capable of any job that a human of average intelligence could perform. I thought that was pretty clear from "However, taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that it is worth anyone’s money to buy."
It seems clear, at least in his later writings (1960, second link above), that he really was thinking of AGI, not just robotics:
Thanks.
Based on your quotation, I agree. I was reporting on what I read, and didn't deep dive the situation, because I came to the conclusion that the case of Wiener and automation doesn't have high relevance.
We have a difference of interpretation. I thought he wasn't talking about AGI because AGI could probably replace people of high intelligence too, while he suggests that people of high intelligence wouldn't be replaced.
I think that he was writing about narrow AI in his earlier writings, and AGI in his later writings.