RichardKennaway comments on Open Thread, December 1-15, 2012 - Less Wrong Discussion
I get the same feeling. It seems unusually hard to come up with an idea of what things will be like ten or so years from now that doesn't sound either like head-in-the-sand denial of technological change or just crazy.
I wonder how you could figure out just how atypical things are now. Different from most of history, sure: for most of it, people lived in a world where you expected the parameters of life to be the same for your grandparents' and your grandchildren's generations, and we definitely don't have that now. But the first world hasn't had that for the last 150 years. Telegraphs, steam engines and mass manufacturing were new things that caused massive societal change. Computers, nuclear power, space rockets, and the discoveries that space and time are stretchy and that living cells are just chemical machines were the kind of thing more likely to make onlookers go "wait, that's not supposed to happen!" than "oh, clever".
People during the space age definitely thought they were living in the future, and contemporary stuff is still a bit tinged by how their vast projections failed to materialize on schedule. Did more people in 1965 imagine they were living in the future than people in 1975? What about people doing computer science in 1985, compared to 2005?
The space program enthusiasts mostly did end up very disappointed in their 50s, as did the people who were trying to get personal computing going with unified Lisp or Smalltalk environments that were supposed to empower users to actually program the system as a routine matter.
Following the pattern, you'd expect to get a bunch of let-down aging singularitarians in the 2030s, when proper machine intelligence is still caught up in various implementation dead ends and can't get funding, while young people are convinced that spime-interfaced DNA-resequencing implants are going to be the future thing that will change absolutely everything, you just wait, and the furry subculture is a lot more disturbing than it used to be.
So I don't know which it is. There seems to be more stuff from the future in people's everyday lives now, but stuff from the future has been around for over a century, so it's not instantly obvious that things should be particularly different right now.
It may seem to have been a golden age of promise now lost, but I was there, and that isn't how it seems to me.
As examples of computer science in 1985, the linked blog post cites the Lisp machine and ALICE. The Lisp machine was built. It was sold. There are no Lisp machines now, except maybe in museums or languishing as mementos. ALICE (not notable enough to get a Wikipedia article) never went beyond a hardware demo. (I knew Mike Reeve and John Darlington back then, and knew about ALICE, although I wasn't involved with it. One of my current colleagues was, and still has an old ALICE circuit board in his office. I was involved with another alternative architecture, of which, at this remove, the less said the better.)
What killed them? Moore's Law, and people observed as much even at the time. There was no point in designing special-purpose hardware for better performance, because general-purpose hardware would double in speed before long, and would outperform your design before you could ever get it into production. Turning up the clock made everything faster, while specialised hardware only made a few things faster.
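To put rough numbers on that (a back-of-envelope sketch of mine, assuming the classic 18-month doubling period, which varied in practice): if commodity CPUs double in speed every 18 months, a custom design that starts out k times faster is overtaken after 18 x log2(k) months.

    # Back-of-envelope: how long before commodity CPUs catch a custom design?
    # Assumes an 18-month Moore's Law doubling period (a rough, period-typical figure).
    import math

    DOUBLING_MONTHS = 18

    def months_to_catch_up(speedup):
        """Months until general-purpose hardware matches a speedup-fold custom design."""
        return DOUBLING_MONTHS * math.log2(speedup)

    for k in (2, 5, 10):
        print("%dx speedup is matched in about %.0f months" % (k, months_to_catch_up(k)))

Even a tenfold advantage buys only about five years, which is not much headroom when the exotic hardware itself takes years to design, build and bring to market.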
Processors stopped getting faster in 2004 (when Intel bottled out of making 4 GHz CPUs). The result? Special-purpose hardware, driven not primarily by academic research but by engineers trying to make stuff that did more within that limit: GPUs for games and server farms for the web. Another damp squib of the 1980s, the Transputer, can be seen as ancestral to those developments, but I suspect that if the Transputer had never been invented, the development of GPUs would have been unaffected.
When it appears, as the blog post says, "that all you must do to turn a field upside-down is to dig out a few decades-old papers and implement the contents", well, maybe a geek encountering the past is like a physicist encountering a new subject. OTOH, he is actually trying to do something, so props to him, and I hope he succeeds at what could not be done back then.