CronoDAS comments on Diseased disciplines: the strange case of the inverted chart - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't think this is quite true. For instance, a few years ago, I traced a bug in my application down to an issue in how the Java Virtual Machine performs JIT compilation, which caused subtle differences in a numerical algorithm between when the application started up and when it had warmed up enough that certain functions were JIT-compiled. Almost certainly, the correct fix would have been to correct the JVM so that the results were exactly the same in all cases.
But, of course, the JVM was nowhere near as "accessible" as the library containing the bug -- almost everyone relies on a prebuilt version of the JVM, and it is rather difficult to build. Also, it's written in a different and less friendly language: C++. Of course, this assumes that we are using a free/open source JVM (as it happens, we are); the situation would be even worse if we had to rely on a proprietary VM. And it assumes that all of our users would have been willing to use a custom JVM until a fixed version of the mainline JVM were released.
Another possibility would have been to add a compile-time option to the library containing that algorithm, so that that particular function would either always be JIT-compiled or never be. That's pretty straightforward -- as it happens, a different division of my company employs some of that library's authors. But the authors didn't consider it a worthwhile thing to do. So our options were to maintain a fork of the library forever, or to fix the bug somewhere else. Again, of course, this relied on the library being open; with a typical proprietary library, there would have been no recourse.
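For what it's worth, stock HotSpot JVMs do expose per-method JIT control at launch time, without forking anything; whether these flags existed or were suitable in the commenter's situation is unknown, and the class/method names below are hypothetical:

```shell
# Never JIT-compile one method -- it always runs interpreted:
java -XX:CompileCommand=exclude,com.example.NumLib::kernel -jar app.jar

# Coarser alternative: force compilation of methods at first invocation,
# so there is no interpreted "cold" phase at all:
java -Xcomp -jar app.jar
```

The exclude approach is per-method and cheap; -Xcomp trades startup time for uniform behavior across the whole run.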
Needless to say, the top layer, the application, was the easiest thing to change, and so that's what changed.
Neither lower-level fix would have negatively impacted other library users (turning JIT off for this function might have, but forcing it always-on wouldn't). So I do think there is, in some sense, a difference in accessibility between layers which is not just caused by the interdependence problem. We genuinely do treat lower layers as foundational, because doing so makes it easier to develop, distribute, and collaborate on software. So I'm not sure that a construction analogy is entirely inappropriate here.
Wow. An actual exception to "The compiler is never wrong!"
In this case, it's not clear that the compiler was really wrong. The results of a floating-point calculation differed by a tiny amount, and it's possible that either result was acceptable (I don't know how strict Java is about its floating-point rules). The problem was that I was using the result as a hash key.
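The hazard generalizes: any last-bit difference in a value used as a hash key silently breaks lookups. A minimal sketch -- the JIT-vs-interpreter discrepancy is replaced here by an ordinary rounding difference, and the class and values are illustrative, not the commenter's:

```java
import java.util.HashMap;
import java.util.Map;

public class FloatKeyHazard {
    public static void main(String[] args) {
        // Two mathematically equal quantities whose doubles differ in the
        // last bit -- a stand-in for a warm-up-dependent result.
        double warm = 0.1 + 0.2;  // 0.30000000000000004
        double cold = 0.3;        // 0.3

        Map<Double, String> cache = new HashMap<>();
        cache.put(warm, "precomputed result");

        // The lookup misses even though the keys are "the same" number.
        System.out.println(warm == cold);            // false
        System.out.println(cache.containsKey(cold)); // false
    }
}
```

Double.hashCode is derived from the exact bit pattern, so a one-ulp difference yields a different bucket and a silent cache miss rather than a loud error.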
But later, I was able to make the JVM reliably dump core (in different memory locations each time). Unfortunately, it was only under extremely heavy load, and I was never able to build a reduced test case.
Compilers do get things wrong. You may be interested in John Regehr's blog; he's essentially throwing cleverly-chosen "random" input at C compilers ("fuzz-testing"). The results are similar to those for other programs that have never been fuzzed, i.e. embarrassing.
And yet, in practice, when something is wrong with your code, it's always your own fault.
Well, your prior should be pretty high that it's your fault, unless you also wrote the compiler :)
If you can do experiments to prove that there's a compiler bug, you learn something. If you jump straight to the compiler bug explanation instead of looking for the bug in your own code, you are resisting education, and the probability that all you are doing is delaying the lesson is the probability that the compiler is working correctly. This should be >>50% of the time or you need a better compiler!
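The arithmetic in that last claim, written out explicitly (notation mine):

```latex
% Let p = P(the compiler is at fault for the observed failure).
% Guessing "compiler bug" without experimenting wastes the lesson
% exactly when the compiler is in fact correct:
P(\text{lesson merely delayed}) = 1 - p
% The comment's point: for any compiler worth using, 1 - p \gg 1/2,
% i.e. p \ll 1/2, so the guess is usually pure delay.
```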
The difference here is not so much in where you guess the bug is, as in whether you do the experiment.
A very effective experiment is to take your program and chop out everything irrelevant until you have a short piece of code which demonstrates the bug. At this point, if it is a compiler bug, you have dense information to hand the compiler author; if it isn't a compiler bug, you're in a much better position to understand what's wrong with your code.
However, one is often reluctant to apply this technique until one suspects a compiler bug, because it seems like a lot of work. And it is -- but often less work than continuing to examine the bug with less radical tools, by the time the notion of a compiler bug has crossed your mind.