Doug Engelbart's mouse is like an AI that can modify one line of its source code per week, can't mathematically prove that the change represents an improvement, and just has to try it and see whether it's useful. (Also, some areas of the code are execute-only: neither readable nor writable.)
Farming is like an AI that has improved the efficiency of one of its key algorithms.
This should also come with a get-out clause: these are just analogies, meant as playing with ideas rather than as an attempt to accurately summarize Yudkowsky's viewpoint.
The meta-point is that making analogies seems relatively straightforward when you do it in this direction (but a lot harder in the other direction, as in "Self-improving AI is like farming your own brain"). I'm not sure where Yudkowsky's strong reaction against the analogies comes from.
Today's post, Whence Your Abstractions?, was originally published on 20 November 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Abstraction, Not Analogy, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta-discussions about the Rerunning the Sequences series.