orthonormal comments on Open Thread September, Part 3 - Less Wrong

2 Post author: LucasSloan 28 September 2010 05:21AM


Comment author: orthonormal 01 October 2010 12:27:33AM 5 points [-]

I'm working on a top-level post about AI (you know what they say, write what you don't know), and I'm wondering about the following question:

Can we think of computer technologies which were only developed at a time when the processing power they needed was insignificant?

That is, many technologies are really slow when first developed, until a few cycles of Moore's Law make them able to run faster than humans can input new requests. But is there anything really good that was only thought of at a time when processor speed was well above that threshold, or anything where the final engineering hurdle was something far removed from computing power?

Comment author: Vladimir_M 01 October 2010 10:10:29PM *  6 points [-]

To clarify the question a bit, I would consider dividing software technologies into three categories:

  1. Technologies developed while the necessary computing resources were still unavailable or too expensive, which flourished later when the resources became cheap enough. For example, Alan Turing famously devised a chess program which he could only run using paper and pencil.

  2. Technologies that appeared very soon after the necessary computing resources became available and cheap enough, suggesting that the basic idea was fairly straightforward after all, and it was only necessary to give smart people some palpable incentive to think about it. Examples such as the first browsers and spreadsheets would be in this category.

  3. Technologies for which the necessary computing resources had been cheaply available for a long time before someone finally came up with them, suggesting an extraordinary intellectual breakthrough. I cannot think of any such examples, and it doesn't seem like anyone else in this thread can either.

This reinforces my cynical view of software technologies in general, namely that their entire progress in the last few decades has been embarrassingly limited considering the amount of intellectual power poured into them.

Here's an interesting related thought experiment that reinforces my cynicism further. Suppose that some miraculous breakthrough in 1970 enabled the production of computers equally cheap, powerful, compact, and easily networked as we have today. What do we have today in terms of software technology that the inhabitants of this hypothetical world wouldn't have by 1980?

Comment author: CarlShulman 28 June 2011 11:17:53PM *  1 point [-]

Chess had steady algorithmic improvements on the same order as the gains from hardware: Deep Fritz and Rybka both got to better performance per FLOP than Deep Blue, etc. More generally, I think that looking at quantitative metrics (as opposed to whole new capabilities) like game performance, face recognition, image processing, etc, will often give you independent hardware and software components to growth.

Comment author: LucasSloan 02 October 2010 08:20:02PM 0 points [-]

"What do we have today in terms of software technology that the inhabitants of this hypothetical world wouldn't have by 1980?"

Well, I'd guess our video game library is larger...

Comment author: ShardPhoenix 01 October 2010 10:15:23AM *  3 points [-]

An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published.

e.g. see: http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html where a bug in a popular published version of the search remained undetected for decades.

This kind of thing is particularly worrying in the context of AI, which may well need to be exactly right the first time!

Comment author: Douglas_Knight 01 October 2010 04:31:26PM 2 points [-]

"An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published."

That an incorrect algorithm persisted for decades is rather different from the claim that no correct algorithm was published. This bug only applies to low-level languages that treat computer words as numbers and pray that there is no overflow.
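The overflow being discussed is the midpoint bug from the linked post: `mid = (low + high) / 2` misbehaves once `low + high` exceeds the word size. A minimal sketch, simulating 32-bit wraparound in Python (the `to_int32` helper is mine, for illustration; Python's own integers are arbitrary-precision and don't overflow):

```python
def to_int32(x):
    """Wrap x to a signed 32-bit integer, as C's int arithmetic would."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def buggy_mid(low, high):
    # What `mid = (low + high) / 2` computes under 32-bit overflow.
    return to_int32(low + high) // 2

def safe_mid(low, high):
    # The standard fix: the intermediate value never leaves [low, high].
    return low + (high - low) // 2

low, high = 1_500_000_000, 2_000_000_000
print(buggy_mid(low, high))  # negative: the sum wrapped around
print(safe_mid(low, high))   # 1750000000, the true midpoint
```

For arrays small enough that the sum of two indices fits in a machine word, the two formulas agree, which is why the bug went unnoticed for so long.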

Comment author: ShardPhoenix 01 October 2010 11:35:32PM *  0 points [-]

According to one of the comments on the link I posted:

"in section 6.2.1 of his 'Sorting and Searching,' Knuth points out while the first binary search was published in 1946, the first published binary search without bugs did not appear until 1962" (Programming Pearls 2nd edition, "Writing Correct Programs", section 4.1, p. 34).

Besides, it's not like higher-level languages are immune to subtle bugs, though in general they're less susceptible to them.

edit: Also, if you're working on something as crucial as FAI, can you trust the implementation of any existing higher-level language to be completely bug-free? It seems to me you'd have to write and formally verify your own language implementation, unless you could somehow come up with a design robust enough that even serious bugs in the underlying implementation wouldn't break it.

Comment author: NancyLebovitz 01 October 2010 02:33:26PM 0 points [-]

That's terrifying in the context of get-it-right-the-first-time AI.

I hope there will be some discussion of why people think it's possible to get around that sort of unknown unknown, or at best barely-niggling-on-the-edge-of-consciousness unknown.

Comment author: magfrump 03 October 2010 08:13:40AM 2 points [-]

What about BitTorrent, or P2P file transfers in general? Anonymous peer-to-peer seems not to have emerged until 2001, and peer-to-peer in general not until November 1998. That's a bit too far back for me to have any idea what computers were like, but peer-to-peer file transfer is an amazing software development that could have been implemented on any computers capable of transferring files--which existed at least as early as 1977.

Comment author: cousin_it 01 October 2010 03:15:38PM *  2 points [-]

Would the first spreadsheet (VisiCalc) or the first browser (Mosaic) fit your bill? As far as I know, they didn't face difficult hardware bottlenecks when they appeared.

Comment author: Douglas_Knight 01 October 2010 04:15:12PM *  0 points [-]

VisiCalc is a great example, but Mosaic was hardly the first browser. Nelson and Engelbart certainly had hypertext browsers before. I'm not entirely sure that they had graphics, but I think so. Have you seen Engelbart's 1968 demo? (ETA: I'm not sure that Engelbart's demo counts, but even if it doesn't, he clearly cared about both hypertext and graphics, so he probably did it in the following decade or two)

Comment author: orthonormal 01 October 2010 04:33:04PM 1 point [-]

Speaking of Engelbart, how about the mouse as an example? Did it take a nontrivial fraction of the available computing power when first demoed, or not?

Comment author: Douglas_Knight 01 October 2010 04:54:01PM *  1 point [-]

I'd guess that the computing power devoted to the mouse in a graphical environment is always small compared to that devoted to the screen, and thus it should be a good example. (if one used a mouse to manipulate an array of characters, as sounds like a good idea for a spreadsheet, the mouse might be relatively expensive, but that's not what Engelbart did in '68)

The mouse and the browser (and magfrump's similar example of AIM) are probably examples of a phenomenon that generalizes your original question, where the bottleneck to deployment was something other than computer power.

Comment author: magfrump 01 October 2010 09:23:36AM 2 points [-]

How significant of a technology are you thinking of?

For example, I would guess that most video game emulators came about when computers were much faster than the games they were emulating--if fast computers weren't cheaper than the emulated consoles, emulators wouldn't be very popular. Further, current computers easily have more power than current video game consoles, so any emulator of the latest generation of consoles was written when computers had far more power than necessary.

So: Does a new emulator count? It's a specific technology that is developed in a fast environment. Does an old emulator count? Emulators in general aren't new technology at all. Does an instant messenger count? Predecessors existed in times when text content was a big deal, but I would be mildly surprised to hear that the original AIM (or whatever the first instant messenger program was) was created at a time when text-over-the-internet was a big stress on computers.

Comment author: Document 01 October 2010 07:04:22PM 1 point [-]

Relatedly, I wish I could remember what I read recently about comparing the performance of a 2000s algorithm for a particular problem (factoring?) on 1980s hardware to a 1980s algorithm on 2000s hardware. It might've been on Accelerating Future.

Comment author: Douglas_Knight 01 October 2010 10:05:54PM 1 point [-]

There are lots of examples of improved algorithms, such as your example of factoring, the similarly timed example of chess algorithms, and the much earlier merge sort and simplex algorithms. But in none of these cases did the algorithm completely solve the problem; there are always harder instances that we care about. This is particularly clear with factoring, which is adversarial. (you might count human chess as a solved problem, though)