novalis comments on Diseased disciplines: the strange case of the inverted chart - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
One of the problems is that Software Engineering is very broad.
Some parts of it (building a btree library, a neural network, finding the shortest path in a graph, ...) are very "mathy", and Dijkstra's conception applies to them fully. Those parts are mostly done mathematically, with proofs both of the algorithm's correctness and of its complexity (worst-case, average-case, CPU and memory). Other parts, like building a UI, are much harder (not totally impossible, but significantly harder) to address in a "mathy" way, and much easier to address in a "taylorist" way.
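To make the "mathy" end of the spectrum concrete, here is a minimal sketch of Dijkstra's shortest-path algorithm (one of the examples above), for which both correctness and the O((V + E) log V) running time with a binary heap are classically proved. The graph representation and names here are illustrative, not from any particular library:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a weighted digraph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Runs in O((V + E) log V) with a binary heap.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The provability is the point: every claim in the docstring can be established formally, which is exactly what is infeasible for something like a UI.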
The other issue is how the software will be used. If you're making embedded software for a plane, a failure can mean hundreds of deaths, so you can afford to spend more time doing it in a more rigorous (mathematical) way. When you're doing something less critical, but with a final customer who changes their mind every two days and asks for a new feature by yesterday, you end up being much less rigorous, because the goal is no longer "to be perfect" but "to be done fast and still be good enough".
As in manufacturing: if you're making a critical part of a plane, you won't necessarily use the same process as if you're making cheap $10 watches. And yes, it's quite sad to have to make cheap $10 watches; personally, I hate being forced to write "quick and dirty" code, but in the world as it is now, the one who pays decides, not the one who codes...
I happen to know something about this case, and I don't think it's quite as you describe it.
Most of the research in this area describes itself as "algorithm engineering." While papers do typically prove correctness, they don't provide strong theoretical time and space bounds. Instead, they simply report performance numbers (often compared against a more standard algorithm, or against previous results). And the numbers are based on real-world graphs (e.g. the highway network for North America). In one paper I read, there was actually a chart of various tweaks to algorithm parameters and how many seconds or milliseconds each one took (on a particular test case).
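The empirical style described above (sweeping a tuning parameter and reporting wall-clock time per setting, rather than proving asymptotic bounds) can be sketched roughly as follows. The graph, the parameter name, and the workload here are all placeholders I made up; the real papers measure actual route-planning queries on real road networks:

```python
import random
import time

def query(graph, pruning_threshold):
    """Stand-in for a tuned graph query; pruning_threshold is a hypothetical
    parameter. The workload (counting edges under the threshold) is a
    trivial placeholder, not a real route-planning algorithm."""
    return sum(1 for edges in graph.values()
               for _, w in edges if w < pruning_threshold)

# Random graph as a stand-in for a real-world network.
random.seed(0)
graph = {u: [(random.randrange(1000), random.random()) for _ in range(10)]
         for u in range(1000)}

# "Algorithm engineering" style report: one timing per parameter setting.
for threshold in (0.25, 0.5, 0.75, 1.0):
    t0 = time.perf_counter()
    for _ in range(100):
        query(graph, threshold)
    elapsed_ms = (time.perf_counter() - t0) / 100 * 1e3
    print(f"threshold={threshold}: {elapsed_ms:.3f} ms/query")
```

The output is exactly the kind of tweaks-versus-milliseconds chart mentioned above: measured numbers on a concrete input, not theoretical bounds.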
This is probably the best of both worlds, but of course, it is only applicable where algorithms are the hard part. In most cases, the hard part might be called "customer engineering" -- figuring out what your customer really needs, as opposed to what they claim to need, and how to deliver enough of it at a price you and they can afford.