If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Not sure if this is obvious or just wrong, but isn't it possible (even likely?) that there is no way of representing a complex mind that is useful enough to allow an AI to modify itself productively? For instance, if you gave me complete access to my own source code, I don't think I could use it to achieve any goals, as such code would be billions of lines long. Presumably there is a logical limit on how far one can usefully compress one's own mind in order to reason about it, and it seems reasonably likely that such compression will be too limited to allow a singularity.
There are certainly ways you can usefully modify yourself. For example, giving yourself a heads-up display. However, I'm not sure how much that would end up increasing your intelligence. You could get runaway super-intelligence if every improvement increases the best mind current!you can make by at least as much as the improvement itself; if each improvement increases it by less than that, the process won't run away.
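The runaway-vs-fizzle distinction above is just the behavior of a geometric series. Here is a toy numeric sketch (my own illustration, not anything from the comment): each round of self-improvement changes the size of the next round's gain by a factor `r`, so total capability diverges when `r >= 1` and plateaus when `r < 1`.

```python
def capability_after(steps, first_gain=1.0, r=0.9):
    """Total capability gained after `steps` rounds of self-improvement.

    Each round's gain is r times the previous round's gain, so the total
    is a geometric series: it grows without bound when r >= 1 (runaway)
    and converges toward first_gain / (1 - r) when r < 1 (no runaway).
    """
    total, gain = 0.0, first_gain
    for _ in range(steps):
        total += gain
        gain *= r
    return total

# r < 1: gains shrink each round, so capability plateaus near 1 / (1 - 0.9) = 10.
print(round(capability_after(1000, r=0.9), 6))  # ~10.0
# r = 1: each improvement is exactly as large as the last; no plateau.
print(capability_after(100, r=1.0))  # 100.0
```

The parameter `r` stands in for "how much the current improvement enlarges the next one"; the whole argument turns on whether that factor is at or above 1.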