
Larks comments on Superintelligence 6: Intelligence explosion kinetics

Post author: KatjaGrace 21 October 2014 01:00AM 9 points


Comment author: KatjaGrace 21 October 2014 02:01:33AM * 12 points

If you have a human-level intelligence which can read super-fast, and you set it free on the internet, it will learn a lot very quickly. (p71)

But why would you have a human-level intelligence that could read super-fast which hadn't already read most of the internet on its way up, while it was an incrementally better but still stupid intelligence learning how to read?

Similarly, if your new human-level AI project used very little hardware, then you could cheaply buy heaps more. But it would be somewhat surprising if you weren't already using a lot of hardware, given that hardware is cheap and helpful, and can substitute for good software to some extent.

I think there was a third example along similar lines, but I forget it.

In general, these sources of low recalcitrance would be huge if you imagine an AI appearing fully formed at human level without having exploited any of them already. But it seems to me that getting to human-level intelligence will probably involve exploiting every source of improvement we can get our hands on along the way. I'd be surprised if these ones, which don't seem to require human-level intelligence to exploit, were still sitting untouched.
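
(A minimal toy sketch of this argument, in the spirit of Bostrom's schematic growth equation, rate of change of intelligence = optimization power / recalcitrance. Every number below, including the starting capability, the recalcitrance multipliers, and the human-level threshold of 1.0, is an invented illustration, not a figure from the book or the post.)

```python
# Toy model: compare a trajectory where the content/hardware overhang
# sits untouched until human level against one where it is consumed
# incrementally on the way up. All parameter values are invented.

def simulate(overhang_exploited_early, steps=200, dt=0.1):
    """Return the capability trajectory I(t) under one assumption about
    when the 'read the internet / buy more hardware' overhang is used."""
    I = 0.1                           # starting capability (human level = 1.0)
    trajectory = [I]
    for _ in range(steps):
        recalcitrance = 1.0 + I       # easy gains get used up as I grows
        if overhang_exploited_early:
            # Overhang consumed incrementally on the way up: it lowers
            # recalcitrance a little at every capability level.
            recalcitrance *= 0.7
        elif I >= 1.0:
            # Overhang left untouched until human level, then consumed
            # all at once: a sudden drop in recalcitrance.
            recalcitrance *= 0.2
        optimization_power = 0.5 + I  # smarter systems search better
        I += dt * optimization_power / recalcitrance
        trajectory.append(I)
    return trajectory

early = simulate(overhang_exploited_early=True)
late = simulate(overhang_exploited_early=False)
print(f"early-exploitation endpoint: {early[-1]:.2f}")
print(f"late-exploitation endpoint:  {late[-1]:.2f}")
```

The "late" run kinks sharply at human level, while the "early" run is smoother because the overhang was already spent getting there, which is the point above.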

Comment author: Larks 01 December 2014 01:43:54AM 0 points

So basically you're arguing there shouldn't be a resource overhang, because those resources should have already been applied while the AI was at a sub-human level?

I suppose one argument would be that there is a discrete jump in your ability to use those resources. Perhaps sub-human intelligences just can't read at all. Maybe the correct algorithm is so conceptually separate from the "let's throw lots of machine learning and hardware at it" approach that it doesn't work at all until it is suddenly done. However, this argument simply pushes the explanatory buck back a step - now we need to explain this discontinuity, and can't rely on the resource overhang.
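
(A one-function sketch of this possibility, assuming a hypothetical step: the 0.9 threshold is invented. The discontinuity in the output is exactly the discontinuity assumed in the input, which is why the buck gets pushed back.)

```python
# Sketch of the 'discrete jump' counter-argument. The 0.9 threshold is
# a pure assumption; note that the discontinuity in the output is just
# the discontinuity we put in by hand, which still needs explaining.

def usable_fraction(capability, threshold=0.9):
    """Fraction of the resource overhang the system can exploit: none
    below some algorithmic threshold, all of it above."""
    return 0.0 if capability < threshold else 1.0

for c in (0.5, 0.89, 0.90, 1.2):
    print(f"capability {c:.2f} -> usable overhang {usable_fraction(c):.0%}")
```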

Another argument would be that your human-level intelligence makes available to you many more resources than before, because it can earn money or steal them for you. However, this only seems applicable to a "9 men in a basement" type project, rather than a government-funded Manhattan project.