gwern comments on Risks from AI and Charitable Giving - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (126)
All your counter-arguments are enthymematic; as far as I can tell, you are actually arguing against a proposition which looks more like
I would find your enthymemes far more convincing if you explained why things like Goedel machines are either fallacious or irrelevant.
Your argument is basically an argument from fiction; it's funny that you chose the example of the Roman Empire, when Reddit recently spawned a novel arguing that a Marine Corps unit (surely less dangerous than your 100) could do just that. I will note in passing that black powder's formulation is so simple and famous that even I, who prefer archery, know it: saltpeter, charcoal, and sulfur. I know for certain that the latter two were available in the Roman Empire, and suspect the former would not be hard to get. EDIT: and this same day, a Mafia-related paper I was reading for entertainment mentioned that Sicily - one of the oldest Roman possessions - was one of the largest global exporters of sulfur in the 18th/19th centuries. So that ingredient is covered, in spades!
A civilization which exists and is there for the taking.
Chimp brains have not improved at all, even to the point of building computers. There is an obvious disanalogy here...
All of which are available to a 'simple algorithm'. Artificial life was first explored by von Neumann himself!
Are you serious? Are you seriously claiming this? Dead-simple chess and Go algorithms routinely turn out fascinating moves. Genetic algorithms are renowned for producing results which are bizarre and inhuman and creative. Have you never read about the famous circuit which has disconnected parts but won't function without them?
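To make the point concrete, here is a minimal sketch (all names and parameters invented for this comment) of the kind of dead-simple genetic algorithm in question: selection plus random mutation, nothing more, yet it reliably discovers a target no individual line of code spells out.

```python
# Minimal genetic algorithm: evolve a bit string toward a target using
# only elitist selection and random bit-flip mutation.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    # Count positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=300):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        # Keep the top half unchanged, refill with mutated copies.
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
```

Nobody "designs" the winning genome; it emerges from variation and selection, which is exactly why evolved solutions so often look bizarre and inhuman.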
What is this bullshit 'computers can't exhibit creativity' doing here? Searle, why did you steal XiXiDu's account and post this?
'I may be completely wrong, but hey, I can still ask rhetorically whether I'm not actually right!'
This implies P2.
Why can't I predict the next move of my chess algorithm? Why is there no algorithm, simpler and faster than the original AI algorithm, that predicts what the AI will do?
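A toy illustration of this (all code invented for this comment): even for a tiny deterministic game-tree search, the only general way to "predict" its chosen move is to run the very same search.

```python
# Minimax over a small fixed random game tree: 3 moves, 3 replies each,
# 3 leaf payoffs for the first player.
import random

rng = random.Random(42)
TREE = [[[rng.randint(-9, 9) for _ in range(3)]
         for _ in range(3)]
        for _ in range(3)]

def minimax(node, maximizing):
    # Leaves are int payoffs; interior nodes are lists of children.
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def best_move(tree):
    # The first player picks the child with the best minimax value.
    scores = [minimax(child, maximizing=False) for child in tree]
    return scores.index(max(scores))

move = best_move(TREE)
```

The cheapest known "predictor" of `move` is `best_move` itself; shortcuts exist only for special cases, not in general.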
This is just naive. Source code can be available and either the maliciousness not obvious (see the Underhanded C Contest) or not prove what you think it proves (see Reflections on Trusting Trust, just for starters). Assuming you are even inspecting all the existing code rather than a stub left behind to look like an AI.
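An invented Python analogue of an Underhanded C Contest entry, to show what "available but not obviously malicious" looks like: the source is short and reads as routine, yet hides a backdoor that casual inspection is likely to miss.

```python
def authorized(token, valid_tokens):
    # Reads like an ordinary membership check -- but str.startswith("")
    # is always True, so an *empty* token is accepted against any entry.
    return any(t.startswith(token) for t in valid_tokens)
```

A reviewer skimming this sees a plausible credential check; only someone who remembers the empty-prefix behavior of `startswith` spots the hole.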
No. Not all the premises are necessary, so a conjunction is inappropriate and establishes a lower bound, at best.
I'm going to stop here. This might have been a useful exercise if you were trying to establish solely necessary premises, in the same vein as Chalmers's paper or a Drake equation-style examination of cryonics, but you're not doing that.
You are arguing past each other. XiXiDu is saying that a programmer can create software that can be inspected reliably. We are very close to having provably-correct kernels and compilers, which would make it practical to build reliably sandboxed software, such that we can look inside the sandbox and see that the software data structures are what they ought to be.
It is separately true that not all software can be reliably understood by static inspection, which is all that the underhanded C contest is demonstrating. I would stipulate that the same is true at run-time. But that's not the case here. Presumably developers of a large complicated AI will design it to be easy to debug -- I don't think they have much chance of a working program otherwise.
No, you are ignoring Xi's context. The claim is not about what a programmer on the team might do, it is about what the AI might write. Notice that the section starts 'The goals of an AI will be under scrutiny at any time...'
Yes. I thought Xi's claim was that if you have an AI and put it to work writing software, the programmers supervising the AI can look at the internal "motivations", "goals", and "planning" data structures and see what the AI is really doing. Obfuscation is beside the point.
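A toy sketch of the transparency this claim assumes (class and field names are entirely hypothetical): the agent's "goal" and "plan" live in plain data structures that a supervisor can read from outside.

```python
class ToyAgent:
    def __init__(self, goal):
        self.goal = goal   # declared objective, visible to the supervisor
        self.plan = []     # working plan, also visible

    def deliberate(self):
        # Trivial stand-in for planning: expand the goal into labelled steps.
        self.plan = [f"step {i}: work toward {self.goal!r}" for i in range(3)]

def supervise(agent):
    # The supervisor inspects the agent's internal state directly.
    agent.deliberate()
    return {"goal": agent.goal, "plan": list(agent.plan)}

report = supervise(ToyAgent("sort the backlog"))
```

Whether a real AGI's internals would stay this legible is, of course, the whole point of contention.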
I agree with you and XiXiDu that such observation should be possible in principle, but I also sort of agree with the detractors. You say,
Oh, I'm sure they'd try. But have you ever seen a large software project? There's usually mountains and mountains of code that runs in parallel on multiple nodes all over the place. Pieces of it are written with good intentions in mind; other pieces are written in a caffeine-fueled fog two days before the deadline, and peppered with years-old comments to the effect of, "TODO: fix this when I have more time". When the code breaks in some significant way, it's usually easier to rewrite it from scratch than to debug the fault.
And that's just enterprise software, which is orders of magnitude less complex than an AGI would be. So yes, it should be possible to write transparent and easily debuggable code in theory, but in practice, I predict that people would write code the usual way, instead.
I disagree with the gist of your comment, but I upvoted it because this quote made me LOL.
That said, I don't think that XiXiDu is claiming that computers can't exhibit creativity, period. Rather, he's saying that the kind of computers that SIAI is envisioning can't exhibit creativity, because they are implicitly (and inadvertently) designed not to.