ShardPhoenix comments on Open Thread September, Part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm working on a top-level post about AI (you know what they say, write what you don't know), and I'm wondering about the following question:
Can we think of computer technologies which were only developed at a time when the processing power they needed was insignificant?
That is, many technologies are really slow when first developed, until a few cycles of Moore's Law make them able to run faster than humans can input new requests. But is there anything really good that was only thought of at a time when processor speed was well above that threshold, or anything where the final engineering hurdle was something far removed from computing power?
An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published.
For example, see http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html, where a bug in a popular published version of the algorithm remained undetected for decades.
This kind of thing is particularly worrying in the context of AI, which may well need to be exactly right the first time!
That an incorrect algorithm persisted for decades is rather different from the claim that no correct algorithm was published. This bug only applies to low-level languages that treat machine words as numbers and pray that addition never overflows.
According to one of the comments on the link I posted:
"in section 6.2.1 of his 'Sorting and Searching,' Knuth points out while the first binary search was published in 1946, the first published binary search without bugs did not appear until 1962" (Programming Pearls 2nd edition, "Writing Correct Programs", section 4.1, p. 34).
Besides, it's not like higher-level languages are immune to subtle bugs, though in general they're less susceptible to them.
edit: Also, if you're working on something as crucial as FAI, can you trust the implementation of any existing higher-level language to be completely bug-free? It seems to me you'd have to write and formally verify your own language implementation, unless you could somehow come up with a design robust enough that even serious bugs in the underlying implementation wouldn't break it.
That's terrifying in the context of get-it-right-the-first-time AI.
I hope there will be some discussion of why people think it's possible to get around that sort of unknown unknown, or at best, an unknown barely niggling at the edge of consciousness.