SoullessAutomaton comments on Open Thread: March 2010 - Less Wrong
Using C is, at times, a necessary evil when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.
Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn't suggest for learning purposes, either.
Well, the problem isn't really multiple inheritance itself, it's the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.
Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn't really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we've all seen trick questions about "okay, which method will this call?"). Something closer to a simple type predicate, like the interfaces in Google's Go language or like Haskell's type classes, is much less painful here. Or of course duck typing, if static type-checking isn't your thing.
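A minimal Python sketch of the point above (the `Quacker`/`Duck`/`Robot` names are mine, purely illustrative): dispatch keyed on what a type can do, in the spirit of Go interfaces, with no inheritance hierarchy in sight.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacker(Protocol):
    """A structural interface: anything with a quack() method qualifies."""
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Robot:  # unrelated to Duck; no shared ancestor needed
    def quack(self) -> str:
        return "beep"

def speak(x: Quacker) -> str:
    # Which code runs depends only on the argument's own quack(),
    # not on its position in any inheritance tree.
    return x.quack()

print(speak(Duck()), speak(Robot()))  # quack beep
```

With `runtime_checkable`, `isinstance(obj, Quacker)` acts as the "simple type predicate" described above; drop the annotations entirely and you have plain duck typing.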
Compositional code reuse in objects--what I meant by "implementation inheritance"--also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.
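A sketch of that mixin style in Python (names are my own invention): the mixin contributes behavior to its host class without asserting any is-a relationship between `Version` and anything else.

```python
class ComparableMixin:
    """Derives comparison operators from a single _key() the host provides.
    Pure implementation reuse -- it says nothing about subtyping."""
    def _key(self):
        raise NotImplementedError
    def __lt__(self, other):
        return self._key() < other._key()
    def __le__(self, other):
        return self._key() <= other._key()

class Version(ComparableMixin):
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def _key(self):
        return (self.major, self.minor)

print(Version(1, 2) < Version(1, 10))  # True
```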
The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.
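The classic illustration of that last sentence is the mutable rectangle/square pair; a quick Python sketch (my example, not the commenter's) of inheritance that is *not* behavioral subtyping:

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_width(self, w):
        self.w = w  # implicit contract: height is untouched

class Square(Rectangle):
    def set_width(self, w):
        # Preserves the square invariant, but strengthens the method's
        # effect -- callers relying on Rectangle's contract now break.
        self.w = self.h = w

def widen(r: Rectangle):
    h = r.h
    r.set_width(r.w + 1)
    assert r.h == h, "LSP violated: height changed under a width update"

widen(Rectangle(2, 3))       # fine
try:
    widen(Square(2, 2))      # compiles/runs, but violates the contract
except AssertionError as e:
    print("broken:", e)
```

The language happily accepts `Square(Rectangle)`; only a contract (as in Design by Contract) can express what substitution actually requires.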
Note that "multiple inheritance" makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it's generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of "parent" types.
Consider the following types:
The generic tree and list types are both abstract containers; say they both implement a map operation, using a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there's no shared implementation or obvious subtyping relationship.
The text strings can't implement the above interface (because they're not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren't subtypes of the list, though, because it's mutable.
The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.
Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
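For concreteness, here is how the example above falls out in Python when the three mechanisms are kept separate -- a structural interface for map, composition for the strings' reuse of the list, and inheritance only for the one genuine subtype. All names (`Mappable`, `MutList`, `Str`, `BoundedStr`) are mine, not the commenter's:

```python
from typing import Callable, Generic, List, Protocol, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class Mappable(Protocol[A]):
    """Shared interface only: transform every element, keep the shape."""
    def map(self, f: Callable[[A], B]) -> "Mappable[B]": ...

class MutList(Generic[A]):
    def __init__(self, items: List[A]):
        self.items = list(items)
    def map(self, f):
        return MutList([f(x) for x in self.items])

class Tree(Generic[A]):
    def __init__(self, value: A, children=()):
        self.value, self.children = value, list(children)
    def map(self, f):
        return Tree(f(self.value), [c.map(f) for c in self.children])

class Str:
    """Reuses MutList's implementation by composition -- deliberately
    NOT a subclass, since a string is no behavioral subtype of a
    mutable list."""
    def __init__(self, chars):
        self._data = MutList(list(chars))
    def upper(self):
        return Str(self._data.map(str.upper).items)
    def text(self):
        return "".join(self._data.items)

class BoundedStr(Str):
    """The one true is-a: any function taking a Str of arbitrary
    length can take a length-limited one."""
    def __init__(self, chars, limit=16):
        if len(list(chars)) > limit:
            raise ValueError("too long")
        super().__init__(chars)
```

Both `MutList` and `Tree` satisfy `Mappable` structurally with no common ancestor; collapse all three relationships into one inheritance hierarchy and the contortions begin.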
Of course, but I'm more considering 'languages to learn that make you a better programmer'.
Depends on just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.
I don't agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma 'multiple inheritance is bad' and don't allow generics enforce bad habits while at the same time insisting that they are the True Way.
I think I agree on this note, with certain restrictions on what counts as 'civilized'. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps Python, too.
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn't do it in Java or .NET (except Eiffel.NET).