shminux comments on Tiling Agents for Self-Modifying AI (OPFAI #2) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Hm. I'm not sure if Scott Aaronson has any weird views on AI in particular, but if he's basically mainstream-oriented we could potentially ask him to briefly skim the Tiling Agents paper and say if it's roughly the sort of paper that it's reasonable for an organization like MIRI to be working on if they want to get some work started on FAI. At the very least if he disagreed I'd expect he'd do so in a way I'd have better luck engaging conversationally, or if not then I'd have two votes for 'please explore this issue' rather than one.
I feel again like you're trying to interpret the paper according to a different purpose from what it has. Like, I suspect that if you described what you thought a promising AGI research agenda was supposed to deliver on what sort of timescale, I'd say, "This paper isn't supposed to do that."
This part is clearer and I think I may have a better idea of where you're coming from, i.e., you really do think the entire field of AI hasn't come any closer to AGI, in which case it's much less surprising that you don't think the Tiling Agents paper is the very first paper ever to come closer to AGI. But this sounds like a conversation that someone else could have with you, because it's not MIRI-specific or FAI-specific. I also feel somewhat at a loss for where to proceed if I can't say "But just look at the ideas behind Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, that's obviously important conceptual progress because..." In other words, you see AI doing a bunch of things, we already mostly agree on what these sorts of surface real-world capabilities are, but after checking with some friends you've concluded that this doesn't mean we're less confused about AGI than we were in 1955. I don't see how I can realistically address that except by persuading your authorities; I don't see what kind of conversation we could have about that directly without being able to talk about specific AI things.
Meanwhile, if you specify "I'm not convinced that MIRI's paper has a good chance of being relevant to FAI, but only for the same reasons I'm not convinced any other AI work done in the last 60 years is relevant to FAI" then this will make it clear to everyone where you're coming from on this issue.
He wrote this about a year ago:
And later:
Without further context I see nothing wrong here. Superintelligences are Turing machines, check. You might need a 10^20 slowdown before that becomes relevant, check. It's possible that the argument proves too much by showing that a well-trained high-speed immortal dog can simulate Mathematica and therefore a dog is 'intellectually expressive' enough to understand integral calculus, but I don't know if that's what Scott means and principle of charity says I shouldn't assume that without confirmation.
EDIT: Parent was edited, my reply was to the first part, not the second. The second part sounds like something to talk with Scott about. I really think the "You're just as likely to get results in the opposite direction" argument is on the priors overstated for most forms of research. Does Scott think that work we do today is just as likely to decrease our understanding of P/NP as increase it? We may be a long way off from proving an answer but that's not a reason to adopt such a strange prior.
As it happens, I've been chatting with Scott about this issue recently, due to some comments he made in his recent quantum Turing machine paper:
I thought his second objection ("how could we know what to do about it?") was independent of his first objection ("AI seems farther away than the singularitarians tend to think"), but when I asked him about it, he said his second objection just followed from the first. So given his view that AI is probably centuries away, it seems really hard to know what could possibly help w.r.t. FAI. And if I thought AI was several centuries away, I'd probably have mostly the same view.
I asked Scott: "Do you think you'd hold roughly the same view if you had roughly the probability distribution over year of AI creation as I gave in When Will AI Be Created? Or is this part of your view contingent on AI almost certainly being several centuries away?"
He replied: "No, if my distribution assigned any significant weight to AI in (say) a few decades, then my views about the most pressing tasks today would almost certainly be different." But I haven't followed up to get more specifics about how his views would change.
And yes, Scott said he was fine with quoting this conversation in public.
I think I'd be happy with a summary of persistent disagreement where Jonah or Scott said, "I don't think MIRI's efforts are valuable because we think that AI in general has made no progress on AGI for the last 60 years / I don't think MIRI's efforts are priorities because we don't think we'll get AGI for another 2-3 centuries, but aside from that MIRI isn't doing anything wrong in particular, and it would be an admittedly different story if I thought that AI in general was making progress on AGI / AGI was due in the next 50 years".
I think that your paraphrasing
is pretty close to my position.
I would qualify it by saying:
I'd replace "no progress" with "not enough progress for there to be a known research program with a reasonable chance of success."
I have high confidence that some of the recent advances in narrow AI will contribute (whether directly or indirectly) to the eventual creation of AGI (contingent on this event occurring), just not necessarily in a foreseeable way.
If I discover that there's been significantly more progress on AGI than I had thought, then I'll have to reevaluate my position entirely. I could imagine updating in the direction of MIRI's FAI work being very high value, or I could imagine continuing to believe that MIRI's FAI research isn't a priority, for reasons different from my current ones.
Agreed-on summaries of persistent disagreement aren't ideal, but they're more conversational progress than usually happens, so... thanks!
I'm doing some work for MIRI looking at the historical track record of predictions of the future and actions taken based on them, and whether such attempts have systematically done as much harm as good.
To this end, among other things, I've been reading Nate Silver's The Signal and the Noise. In Chapter 5, he discusses how attempts to improve earthquake predictions have consistently yielded worse predictive models than the Gutenberg-Richter law. This has slight relevance.
Such examples notwithstanding, my current prior is on MIRI's FAI research having positive expected value. I don't think that the expected value of the research is zero or negative – only that it's not competitive with the best of the other interventions on the table.
My own interpretation of Scott's words here is that it's unclear whether your research is actually helping in the "get Friendly AI before some idiot creates a powerful Unfriendly one" challenge. Fundamental progress in AI in general could just as easily benefit the fool trying to build an AGI without too much concern for Friendliness as it could benefit you. Thus, whether fundamental research helps avoid the UFAI catastrophe is unclear.
I'm not sure that interpretation works, given that he also wrote:
Since Scott was addressing steps taken to act on the conclusion that friendliness was supremely important, presumably he did not have in mind general AGI research.