I often find it difficult to think about things without concrete, realistic examples to latch on to. Here are five examples of this.
1) Your Cheerful Price
Imagine that you need a simple portfolio website for your photography business. Your friend Alice is a web developer. You can ask her what her normal price is and offer to pay her that price to build you the website. But that might be awkward. Maybe she isn't looking for work right now, but says yes anyway out of some sort of social obligation. You don't want that to happen.
What could you do to avoid this? Find her cheerful price, perhaps. Maybe her normal price is $100/hr and she's not really feeling up for that, but if you paid her $200/hr she'd be excited about the work.
Is this useful? I can't really tell. In this particular example it seems like it'd just make more sense to have a back-and-forth conversation about what her normal price is, how she's feeling, how you're feeling, etc., and try to figure out if there's a price that you each would feel good about. Cheerful/excited/"hell yeah" certainly establishes an upper bound for one side, but I can't really tell how useful that is.
2) If you’re not feeling “hell yeah!” then say no
This seems like one of those things that sounds wise and smart in theory, but probably isn't actually good advice in practice.
For example, I have a SaaS app that hasn't really gone anywhere and that I've put on the back burner. Someone just came to me with a revenue-share offer. I'd be giving up a larger share than I think is fair. And the total amount I'd make per month is maybe $100-200, so I'm not sure whether it'd be worth the back-and-forth plus any customizations they'd want from me. I don't feel "hell yeah" about it, but it's still probably worth it.
In reality, I'm sure that there are some situations where it's great advice, some situations where it's terrible advice, and some situations where it could go either way. I think the question is whether it's usually useful. It doesn't have to be useful in every single possible situation. That would be setting the bar too high. If it identifies a common failure mode and helps push you away from that failure mode and closer to the point on the spectrum where you should be, then I call that good advice.
If I were to try to figure out whether "hell yeah or no" does this, the way I'd go about it would be to come up with a wide variety of examples, and then ask myself how well it performs in these different examples. It would have been helpful if the original article got the ball rolling for me on that.
3) Why I Still ‘Lisp’ (and You Should Too)
I want to zoom in on the discussion of dynamic typing.
I have never had a static type checker (regardless of how sophisticated it is) help me prevent anything more than an obvious error (which should be caught in testing anyway).
This made me breathe a sigh of relief. I've always felt the same way, but wondered whether it was due to some sort of incompetence on my part as a programmer.
Still, something tells me that it's not true. That such type checking does in fact help you catch some non-obvious errors that would be much harder to catch without the type checking. Too many smart people believe this, so I think I have to give it a decent amount of credence.
Also, I recall an example or two of this from a conversation with a friend a few weeks ago. Most of the discussions of type checking I've seen don't really get into these examples though. But they should! I'd like to see such articles give five examples of:
Here is a situation where I spent a lot of time dealing with an issue, and type checking would have significantly mitigated it.
Hell, don't stop at five, give me 50 if you can!
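To sketch the flavor of example I have in mind, here's a hypothetical one in Java (all the names — `Lookup`, `findUser`, `greeting` — are made up for illustration, not taken from any real codebase): a lookup that used to blow up at runtime on a missing id gets its "not found" case pushed into the type.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: findUser returns Optional<User>, so the
// compiler forces every caller to handle the "not found" case.
// In a dynamic language, a caller that forgot this keeps running
// fine until the one bad id shows up in production.
public class Lookup {
    record User(int id, String name) {}

    static final Map<Integer, User> USERS = Map.of(
        1, new User(1, "Alice"),
        2, new User(2, "Bob"));

    static Optional<User> findUser(int id) {
        return Optional.ofNullable(USERS.get(id));
    }

    static String greeting(int id) {
        // Writing findUser(id).name() here is a compile error, not a
        // latent null crash: the type makes the fallback mandatory.
        return findUser(id).map(u -> "Hello, " + u.name() + "!")
                           .orElse("Hello, guest!");
    }

    public static void main(String[] args) {
        System.out.println(greeting(1));
        System.out.println(greeting(3));
    }
}
```

The non-obvious part isn't the first version of the code — it's that when the signature changes later, the compiler lists every call site you forgot to update.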
Examples are the best. Recently I've been learning Haskell. Last night I learned about polymorphism in the context of Haskell. I think that seeing it from this different angle rather than the traditional OOP angle really helped to solidify the concept for me. And I think that this is usually the case.
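For what it's worth, the flavor of polymorphism Haskell leads with is parametric rather than subtype polymorphism, and even Java can approximate it with generics. A rough sketch (names are mine, purely illustrative):

```java
import java.util.List;

// Parametric polymorphism in Java terms: firstOr knows nothing about
// T, so -- like Haskell's `reverse :: [a] -> [a]` -- all it can do is
// shuffle values around, never inspect them. Contrast with the usual
// OOP angle, where behavior varies by the runtime class.
public class Poly {
    static <T> T firstOr(List<T> xs, T fallback) {
        return xs.isEmpty() ? fallback : xs.get(0);
    }

    public static void main(String[] args) {
        System.out.println(firstOr(List.of("a", "b"), "none")); // works for strings...
        System.out.println(firstOr(List.<Integer>of(), 7));     // ...and integers alike
    }
}
```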
It makes me think back to Eliezer's old posts about Thingspace. In particular, extensional vs intensional descriptions.
What's a chair? I won't define it for you, but this is a chair. And this. And this. And this.
4) Short Fat Engineers Are Undervalued

(Before I read this article I thought it was going to talk about the halo effect and how ugly people in general are undervalued. Oh well.)
The idea that short, fat engineers are undervalued sounds plausible to me. No, I'm not using strong enough language: I think it's very likely. Not that likely though. I think it's also plausible that it's wrong, and that deep expertise is where it's at.
Again, for me to really explore this further, I would want to kind of iterate over all the different situations engineers find themselves in, ask myself how helpful being short-fat is vs tall-skinny in each situation, and then multiply by the importance of each situation.
Something like that. Taken literally it would require thousands of pages of analysis, clearly beyond the scope of a blog post. But this particular blog post didn't provide any examples at all. Big, fat zero!
Let me provide some examples of the types of examples I have in mind:
- At work yesterday I had a task that really stands out as a short-fat type of task. I needed to make a small UI change, which required going down a small rabbit hole of how our Rails business logic and asset pipeline work, writing some tests in Ruby (a language I'm not fluent in), connecting to a VPN (which involves a shell script and some linux-fu), deploying the change, ssh-ing into our beta server to find some logs, and just generally making sure everything works as expected. No one step was particularly intense or challenging, but they're all dependencies. If I were a tall-skinny Rails wizard I'd have an easy time with the first couple of parts, but if I also didn't know what a VPN is or my way around the command line, I could easily be bottlenecked by the last couple.
- For an early work task, I had to add watermarks to GIFs. Turns out GIFs are a little weird and have idiosyncrasies. Being tall-skinny would have been good here, in the sense of a) knowledgeable about GIFs and b) generic programming ability. The task was pretty self-contained and well-defined. "Here's an input. Transform it into an output. Then you're done." It didn't really require too much breadth.
5) Embedded Interactive Predictions on LessWrong
I really love this tool so I feel bad about picking on this post, but I think it could have really used more examples of "here is where you'd really benefit from using this tool".
This post made more of an attempt to provide examples than examples 1-4, though. It led off with the "Will there be more than 50 prediction questions embedded in LessWrong posts and comments this month?" poll, which I thought was great. And it did have the "Some examples of how to use this" section. But I still felt like I needed more. A lot more.
I think this is an important point. Sometimes you need a lot of examples. Other times one or two will get the job done. It depends on the situation.
Hat tips
I'm sure there are lots of other posts worthy of hat tips, but these are the ones that come to mind:
- I really like how alkjash used anecdotes in Pain is not the unit of Effort, and parables in Is Success the Enemy of Freedom. Parables are an interesting alternative to examples.
- Not quite the same thing, but the Specificity Sequence is closely related.
- A lot of johnswentworth's posts are structured around examples, e.g. Exercise: Taboo "Should" and Anatomy of a Gear. In particular, they use examples from different domains. On the other hand, in this post I focused way too much on programming.
- Same with Scott Alexander. In particular, the following line from Meditations on Moloch comes to mind: "And okay, this example is kind of contrived. So let’s run through – let’s say ten – real world examples of similar multipolar traps to really hammer in how important this is."
It's not easy
Coming up with examples always seems to prove way more difficult than it should. Even just halfway-decent examples. Good examples are way harder. Maybe it's just me, but I don't think so.
So I don't want this post to come across as "if you don't have enough (good) examples in your post, you're a failure". It's not easy to do, and I don't think it should (necessarily) get in the way of an exploratory or conversation-starting type of post.
Maybe it's similar to grammar in text messages. The other person has to strain a bit if you write a (longer) text with bad grammar and abbreviations. There are times when this might be appropriate. Like:
Hey man im really sry but just storm ad I ct anymore. BBl.
But if you have the time, cleaning it up will go a long way towards helping the other person understand what it is you're trying to say.
Similar here. I use renaming less frequently, but I consider it extremely important to have a tool that allows me to automatically rename X to Y without also renaming unrelated things that happen to be called X.
With static typing, a good IDE can understand that the method "doSomething" of class "Foo" is different from the method "doSomething" of class "Bar". So I can automatically rename one (at the place it is declared, and at all places it is called) without accidentally renaming the other. I can automatically change the order of parameters in one without changing the other.
It is like having two people called John, but you point your cursor at one of them, and the IDE understands you mean that one... instead of simply doing textual "Search/Replace" on your source code.
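The two-Johns situation can be sketched in a few lines of Java (the class and method names here are just placeholders): both classes declare a `doSomething`, but the type system sees them as unrelated, so an IDE rename of one provably leaves the other alone — something a textual search-and-replace cannot promise.

```java
// Two unrelated classes happen to share a method name. A type-aware
// rename of Foo.doSomething touches only Foo's declaration and call
// sites; Bar.doSomething is untouched, because the IDE resolves each
// call to a specific class rather than matching text.
public class Rename {
    static class Foo { String doSomething() { return "foo"; } }
    static class Bar { String doSomething() { return "bar"; } }

    public static void main(String[] args) {
        System.out.println(new Foo().doSomething());
        System.out.println(new Bar().doSomething());
    }
}
```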
In other words, every static type declaration is a unit test you didn't have to write... that is, assuming we are talking about automated testing here; are we?
I agree that when you are writing new code, you rarely make type mistakes, and you likely catch them immediately when you run the program. But if you use a library written by someone else, and it does something wrong, it can be hard to find out what exactly went wrong. Suppose the first method calls the second method with a string argument, the second method does something and then passes the argument to the third method, et cetera, and the tenth method throws an error because the argument is not an integer. Uhm, what now? Was it supposed to be a string or an integer when it was passed to the third or to the fifth method? No one knows. In theory, the methods should be documented and have unit tests, but in practice... I suspect that complaining about having to declare types correlates positively with complaining about having to write documentation. "My code is self-documenting" is how the illusion of transparency feels from inside for a software developer.
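A compressed sketch of that scenario in Java (all method names are invented for illustration — a real chain would be ten methods deep across library boundaries): with declared parameter types, passing the wrong kind of value is rejected at the exact boundary where the contract is broken, not ten calls later.

```java
// Each method declares what it accepts and returns, so the contract
// is visible at every hop. Feeding a String where an int is expected
// fails to compile at that line -- the error cannot travel down the
// chain and surface as a confusing failure in method #10.
public class Pipeline {
    static String normalize(String s) { return s.trim(); }

    static int parseCount(String s) { return Integer.parseInt(s); }

    static int doubled(int n) { return n * 2; }

    public static void main(String[] args) {
        // doubled(normalize(" 21 "))  <- compile error: String is not int.
        System.out.println(doubled(parseCount(normalize(" 21 "))));
    }
}
```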
I mostly use Java, and yes, it is verbose as hell, much of it needlessly. (There are attempts to make it somewhat less painful: Lombok, var.) But static typing allows the IDE to give me all kinds of support, while in dynamically typed languages it would be like "well, this variable could contain anything, who knows", or perhaps "this function obviously expects to receive an associative array as an argument, and if you spend 15 minutes debugging, you might find out which keys that array is supposed to have". In Java, you immediately see that the value is supposed to be e.g. of type "Foo", which contains integer values called "m" and "n", and a string value called "format", whatever that may mean. Now you can e.g. click on their getters and setters, and find all places in the program where the value is assigned or read (as opposed to all places in the program where something else called "m", "n", or "format" is used). That is a good start.