Comment author: Jonii 20 March 2010 01:18:20PM 6 points [-]

To me, the connotation of Wilde's quote was that it's a bad thing to be aware that everything has a price, and the truly valuable things, e.g. "the smile of a baby" (the cuteness of a bunny?) cannot be priced.

That could be implied by the larger context, but the quote, as it stands, only expresses the idea that prices and values are separate things. It could be that there was some meaningful conversion chart, or it could be that there wasn't. If we take it that there isn't any chart for some things, that still doesn't imply that the price is infinite; it just means that talking about price doesn't make any sense. An analogy would be measuring happiness in kilograms. The lack of a conversion chart doesn't imply that happiness weighs infinitely many kilograms.

The disconnect between values and prices could be described as something like "It has a high price because many people value it", not the other way around. Values are why we do things; losing sight of those, staring only at price tags without understanding why there are prices in the first place, is what Wilde seems to mean by cynicism.

In response to comment by Jonii on The Price of Life
Comment author: SoullessAutomaton 20 March 2010 10:37:35PM 0 points [-]

My interpretation was to read "value" as roughly meaning "subjective utility", which indeed does not, in general, have a meaningful exchange rate with money.

Comment author: byrnema 05 March 2010 04:12:01AM *  10 points [-]

Um. I'm having one of those I-can't-believe-I've-been-this-stupid-over-the-last-ten-years moments.

I went back and reread what you wrote and the part I missed before was this:

The wave described is light.

So it isn't that light "happens to follow" this wave equation. That wave equation IS light -- that is, that specific interaction between the electric and magnetic fields is light.

Honestly, I'd never thought of it that way before. I can go back to that chapter in electromagnetism and see if I understand things differently now.

I look at the light bulb on my desk and I wouldn't even call it 'light' anymore. It is electromagnetic interaction.

I photographically recall the poster over an exhibit at a science museum, "Light Is Electromagnetic Radiation". I thought that meant that light was radiation (obviously, it radiates) that was associated in some way with electromagnetic theory, and I remember thinking it was a decidedly unpleasant verbal construction.

I'm thankful, and sorry...

Comment author: SoullessAutomaton 05 March 2010 04:43:18AM 5 points [-]

You know, this really calls for a cartoon-y cliche "light bulb turning on" appearing over byrnema's head.

It's interesting the little connections that are so hard to make but seem simple in retrospect. I give it a day or so before you start having trouble remembering what it was like to not see that idea, and a week or so until it seems like the most obvious, natural concept in the world (which you'll be unable to explain clearly to anyone who doesn't get it, of course).

Comment author: gwern 04 March 2010 02:01:15AM 1 point [-]

I'm going through SICP now. I'm not getting as much out of it as I expected: much of it I already know, some is uninteresting since I already expect lazy evaluation thanks to Haskell, and some is just tedious (I got sick pretty quickly of the authors' hard-on for number theory).

Comment author: SoullessAutomaton 05 March 2010 04:35:42AM 1 point [-]

SICP is nice if you've never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, at most do a handful of the exercises that don't immediately make you yawn just by looking at them.

Scheme isn't much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days.

Haskell does tend to ruin one for other languages, though lately I've been getting slightly frustrated with some of Haskell's own limitations...

Comment author: Wei_Dai 03 March 2010 02:30:35AM 1 point [-]

I think it's not a case of blurring the line, but instead there's probably a substantive disagreement between us about whether one of my points applies generally to rational agents or just to humans. Would you or SoullessAutomaton please explain why you don't think it applies generally?

Comment author: SoullessAutomaton 05 March 2010 04:02:55AM 3 points [-]

Sorry for the late reply; I don't have much time for LW these days, sadly.

Based on some of your comments, perhaps I'm operating under a different definition of group vs. individual rationality? If uncoordinated individuals making locally optimal choices would lead to a suboptimal global outcome, and this is generally known to the group, then they must act to rationally solve the coordination problem, not merely fall back to non-coordination. A bunch of people unanimously playing D in the prisoner's dilemma are clearly not, in any coherent sense, rationally maximizing individual outcomes. Thus I don't really see such a scenario as presenting a group vs. individual conflict, but rather a practical problem of coordinated action. Certainly, solving such problems applies to any rational agent, not just humans.

The part about giving undue weight to unlikely ideas--which seems to comprise about half the post--by mis-calibrating confidence levels to motivate behavior seems to be strictly human-oriented. Lacking the presence of human cognitive biases, the decision to examine low-confidence ideas is just another coordination issue with no special features; in fact it's an unusually tractable one, as a passable solution exists (random choice, as per CannibalSmith's comment, which was also my immediate thought) even with the presumption that coordination is not only expensive but essentially impossible!

Overall, any largely symmetric, fault-tolerant coordination problem that can be trivially resolved by a quasi-Kantian maxim of "always take the action that would work out best if everyone took that action" is a "problem" only insofar as humans are unreliable and will probably screw up; thus any proposed solution is necessarily non-general.

The situation is much stickier in other cases; for instance, if coordination costs are comparable to the gains from coordination, or if it's not clear that every individual has a reasonable expectation of preferring the group-optimal outcome, or if the optimal actions are asymmetric in ways not locally obvious, or if the optimal group action isn't amenable to a partition/parallelize/recombine algorithm. But none of those are the case in your example! Perhaps that sort of thing is what Eliezer et al. are working on, but (due to aforementioned time constraints) I've not kept up with LW, so you'll have to forgive me if this is all old hat.

At any rate, tl;dr version: wedrifid's "Anything an irrational agent can do due to an epistemic flaw a rational agent can do because it is the best thing for it to do." and the associated comment thread pretty much covers what I had in mind when I left the earlier comment. Hope that clarifies matters.

Comment author: wedrifid 03 March 2010 06:01:31AM 1 point [-]

If-then-else as function composition, where "true" is a function returning its first argument, and "false" is a function returning its second? These are decidedly odd.

Of course, not so odd for anyone who uses Excel...
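Those Church booleans translate almost verbatim into any language with first-class functions. A minimal sketch in Python (the uppercase names are illustrative, not standard):

```python
# Church booleans as plain functions: "true" selects its first
# argument, "false" selects its second, and if-then-else is just
# applying the boolean to the two branches.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
IF = lambda cond: lambda then: lambda alt: cond(then)(alt)

print(IF(TRUE)("yes")("no"))   # prints "yes"
print(IF(FALSE)("yes")("no"))  # prints "no"
```

Note that no conditional syntax appears anywhere; selection falls out of function application alone.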

Comment author: SoullessAutomaton 03 March 2010 06:09:48AM 2 points [-]

Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)

And no looking it up, that's cheating! Took me the better part of a day to figure it out, it's a real mind-twister.
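For anyone who wants to play along, the numerals themselves (and the easy operations, which give nothing away about the subtraction puzzle) carry over directly to Python lambdas; this is just a sketch of the encoding above:

```python
# Church numerals: the number n is a function that applies f to z
# exactly n times.
ZERO = lambda f: lambda z: z
SUCC = lambda n: lambda f: lambda z: f(n(f)(z))
ADD = lambda m: lambda n: lambda f: lambda z: m(f)(n(f)(z))

# Decode to an ordinary int by counting applications.
def to_int(n):
    return n(lambda x: x + 1)(0)

two = SUCC(SUCC(ZERO))
three = SUCC(two)
print(to_int(ADD(two)(three)))  # prints 5
```

Successor and addition are one-liners; the predecessor (and hence subtraction) is the part that takes the better part of a day.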

Comment author: Douglas_Knight 03 March 2010 05:14:10AM *  2 points [-]

I think the link you want is to the history of the Church-Turing thesis.

Comment author: SoullessAutomaton 03 March 2010 06:05:26AM 1 point [-]

The history in the paper linked from this blog post may also be enlightening!

Comment author: Douglas_Knight 03 March 2010 05:17:58AM *  0 points [-]

Actually, the history is straightforward, if you accept Gödel as the final arbiter of mathematical taste. Which his contemporaries did.

ETA: well, it's straightforward if you both accept Gödel as the arbiter and believe his claims made after the fact. He claimed that Turing's paper convinced him, but he also promoted it as the correct foundation. A lot of the history was probably not recorded, since all these people were together in Princeton.

EDIT2: so maybe that is what you said originally.

Comment author: SoullessAutomaton 03 March 2010 05:55:35AM 3 points [-]

It's also worth noting that Curry's combinatory logic predated Church's λ-calculus by about a decade, and also constitutes a model of universal computation.

It's really all the same thing in the end anyhow; general recursion (e.g., Curry's Y combinator) is on some level equivalent to Gödel's incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.

Comment author: Eliezer_Yudkowsky 03 March 2010 04:44:09AM 4 points [-]

Um... I think it's a worthwhile point, at this juncture, to observe that Turing machines are humanly comprehensible and lambda calculus is not.

EDIT: It's interesting how many replies seem to understand lambda calculus better than they understand ordinary mortals. Take anyone who's not a mathematician or a computer programmer. Try to explain Turing machines, using examples and diagrams. Then try to explain lambda calculus, using examples and diagrams. You will very rapidly discover what I mean.

Comment author: SoullessAutomaton 03 March 2010 05:35:37AM *  3 points [-]

Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.

Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it'd be like programming in Brainfuck. It was von Neumann's insights leading to the stored-program architecture that made computing remotely sensible.

There's plenty of ridiculously opaque models of computation (Post's tag machine, Conway's Life, exponential Diophantine equations...) but I can't begin to imagine one that would be more comprehensible than untyped lambda calculus.

Comment author: wedrifid 03 March 2010 03:09:08AM *  0 points [-]

C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.

I'm sure I could manage 1k before I considered the point settled and moved on to a language that isn't a decades-old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, with complexity-based problems obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals, then C++ will give you that over a broader area of nuts and bolts.

Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.

This is the one point I disagree with, and I do so both on the assertion 'almost uniformly' and on the concept itself. As far as experts in object-oriented programming go, Bertrand Meyer is considered an expert, and his book 'Object-Oriented Software Construction' is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are a problem of implementation and poor language design, not inherent to the mechanism. In fact, (similar, inheritance-based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction.

Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.

Comment author: SoullessAutomaton 03 March 2010 05:17:52AM *  0 points [-]

I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.

Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.

Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn't suggest for learning purposes, either.

Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.

Well, the problem isn't really multiple inheritance itself, it's the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.

Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn't really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we've all seen trick questions about "okay, which method will this call?"). Something closer to a simple type predicate, like the interfaces in Google's Go language or like Haskell's type classes, is much less painful here. Or of course duck typing, if static type-checking isn't your thing.
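As one concrete illustration of dispatch on argument type with no inheritance hierarchy involved, Python's functools.singledispatch registers each overload independently (a rough sketch; the function and strings are made up for the example):

```python
from functools import singledispatch

# Each overload is registered against a type, and the runtime type
# of the argument picks the code to run. No class hierarchy, and no
# "which method will this call?" trick questions.
@singledispatch
def describe(x):
    return "something"

@describe.register(int)
def _(x):
    return f"the integer {x}"

@describe.register(str)
def _(x):
    return f"the string {x!r}"

print(describe(42))    # prints "the integer 42"
print(describe("hi"))  # prints "the string 'hi'"
print(describe(3.5))   # prints "something"
```

The fallback case plays the role of a default method; unregistered types never need to declare membership in anything.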

Compositional code reuse in objects--what I meant by "implementation inheritance"--also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.
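Python can approximate Ruby's modules with mixin base classes; it's less clean than Ruby's `include`, but it shows the idea of importing behavior into a class without implying any meaningful type relationship (the class names here are illustrative):

```python
# A mixin bundles reusable behavior; classes pull it in alongside
# their real purpose, with no is-a claim intended.
class ReprMixin:
    def describe(self):
        fields = ", ".join(f"{k}={v!r}" for k, v in vars(self).items())
        return f"{type(self).__name__}({fields})"

class Point(ReprMixin):
    def __init__(self, x, y):
        self.x, self.y = x, y

print(Point(1, 2).describe())  # prints "Point(x=1, y=2)"
```

Nothing about `Point` is substitutable for a `ReprMixin`; the mixin exists purely to donate an implementation.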

The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.

Note that "multiple inheritance" makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it's generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of "parent" types.

Consider the following types:

  • Tree structures containing values of some type A.
  • Lists containing values of some type A.
  • Text strings, stored as immutable lists of characters.
  • Text strings as above, but with a maximum length of 255.

The generic tree and list types are both abstract containers; say they both implement a map operation, using a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there's no shared implementation or obvious subtyping relationship.

The text strings can't implement the above interface (because they're not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren't subtypes of the list, though, because the list is mutable and the strings are not.

The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.

Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
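To make the separation concrete, here is a rough Python sketch (all names hypothetical) that keeps the concerns apart: a structural protocol for the shared map interface, and delegation rather than subtyping for the string's code reuse:

```python
from typing import Protocol

class Mappable(Protocol):
    """Shared interface only: apply f to every element, preserving
    structure. No shared implementation, no subtyping implied."""
    def map(self, f): ...

class MyList:
    def __init__(self, items):
        self.items = list(items)
    def map(self, f):
        return MyList(f(x) for x in self.items)

class Tree:
    def __init__(self, value, children=()):
        self.value, self.children = value, tuple(children)
    def map(self, f):
        return Tree(f(self.value), (c.map(f) for c in self.children))

class Text:
    # Reuses MyList's implementation by delegation (composition),
    # deliberately making no claim to *be* a list.
    def __init__(self, s):
        self._chars = MyList(s)
    def map_chars(self, f):
        return Text("".join(self._chars.map(f).items))
    def __str__(self):
        return "".join(self._chars.items)

print(str(Text("abc").map_chars(str.upper)))        # prints "ABC"
print(Tree(1, [Tree(2)]).map(lambda x: x * 10).value)  # prints 10
```

The one genuine subtype in the example, the length-limited string, is exactly the piece most class hierarchies can't express without violating the LSP somewhere else.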

Comment author: wedrifid 03 March 2010 01:24:53AM 1 point [-]

Agree on where C is useful and got the same impression about the applicability to XiXiDu's (where on earth does that name come from?!?) goals.

I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn't meet your 'minimalist' ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I've converted to primarily using a language that relies on duck-typing.

Comment author: SoullessAutomaton 03 March 2010 02:14:28AM *  6 points [-]

I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.

"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind." -- Alan Kay

C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.

C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:

  • It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
  • Templates are a clunky, disappointing imitation of real metaprogramming.
  • Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
  • It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
  • Combining error handling via exceptions with manual memory management is frankly absurd.
  • The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.

I could elaborate further, but it's too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical "real" OO language, but I'd probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).

ETA: Well, that came out awkwardly verbose. Apologies.
