TimS comments on Tools versus agents - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You're fundamentally assuming opaque AI and ascribing intentions to it; this strikes me as generalizing from fictional evidence. So let's talk about currently operational, strongly superhuman AIs. Take, for example, Bayesian spam filtering, which has the strongly superhuman ability to sort e-mails into the categories "spam" and "not spam". While the learned parameters for every token are opaque to a human observer, the algorithm itself is transparent: we know why it works, how it works, and what needs tweaking.
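To make the transparency point concrete, here is a minimal naive-Bayes spam scorer (a sketch, not any particular filter's implementation; the function names and the Laplace smoothing choice are my own). Every quantity in it has a plain interpretation: per-token counts, smoothed class-conditional probabilities, and a log-odds sum.

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Count token occurrences per class from whitespace-tokenized messages."""
    spam_counts = Counter(t for d in spam_docs for t in d.split())
    ham_counts = Counter(t for d in ham_docs for t in d.split())
    return spam_counts, ham_counts

def spam_score(msg, spam_counts, ham_counts):
    """Log-odds that msg is spam under a naive Bayes model
    with add-one (Laplace) smoothing; > 0 leans spam."""
    s_total = sum(spam_counts.values())
    h_total = sum(ham_counts.values())
    score = 0.0
    for tok in msg.split():
        p_spam = (spam_counts[tok] + 1) / (s_total + 2)
        p_ham = (ham_counts[tok] + 1) / (h_total + 2)
        score += math.log(p_spam / p_ham)
    return score
```

Each term in the sum says exactly why a message was flagged ("the token 'pills' is 3x more common in spam"), which is the kind of inspectability the comment is pointing at.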
This is what Holden talks about when he says:
In fact, the operational problem in AI R&D is that you cannot outsource understanding. Take, e.g., neural networks trained with evolutionary algorithms: you can accomplish a number of different tasks with these, but once training finishes, there is no way to reverse-engineer how the resulting algorithm actually works, which makes it impossible for humans to recognize conceptual shortcuts and thereby improve performance.
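A toy version of that setup, assuming a simple (1+λ) evolution strategy and a hand-rolled 2-2-1 network (both illustrative choices of mine, not a reference to any specific system): evolution only ever sees a fitness number, so the weight vector it returns is just nine floats with no stated rationale behind them.

```python
import math
import random

def net(w, x1, x2):
    """Tiny 2-2-1 network with tanh hidden units; w is a flat list of 9 weights."""
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def loss(w):
    """Squared error on the XOR task."""
    return sum((net(w, a, b) - y) ** 2 for a, b, y in XOR)

def evolve(gens=500, lam=20, sigma=0.3, seed=0):
    """(1+lambda) evolution strategy: mutate the current best weight
    vector with Gaussian noise, keep whichever candidate scores best."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(9)]
    for _ in range(gens):
        kids = [[g + rng.gauss(0, sigma) for g in best] for _ in range(lam)]
        best = min(kids + [best], key=loss)
    return best
```

The evolved `best` may fit XOR well, but nothing in the process leaves behind an explanation of *how* it does so; inspecting the nine numbers tells you nothing a human could reuse or shortcut, which is the comment's point.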
Off-topic question: why do you believe the ability to sort email into spam and non-spam is superhuman? The computerized filter is much, much faster, but I suspect that if you compared 10M sorts from me against 10M from the filter, I'd do better. Yes, that assumes away tiredness, inattention, and the like, but I think those are more an issue of relative speed than anything else. Eventually the hardware running the spam filter will break down, but not on a timescale relevant to the spam-filtering task.
Exactly for those reasons. From the utilitarian perspective that is relevant here, we care about those things (speed, tirelessness, attention) a great deal. (Also, try telling apart "不労所得を得るにはまずこれ" ["to earn passive income, start with this"] and "スラッシュドット・" ["Slashdot"].)