Thanks for the suggestion, Patrick. I've now adapted Tsvi's formal argument to the reframed "5-equals-10 problem" and added it into the last section of my writeup.
Because the length of Scott's Moloch post greatly exceeds my working memory (to the extent that I had trouble remembering what the point was by the end), I made these notes. I hope this is the right place to share them.
Notes on Moloch (ancient god of child sacrifice)
http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Intro - no real content.
Moloch as coordination failure: everyone makes a sacrifice to optimize for a zero-sum competition, and everyone ends up with the same relative status but worse absolute status.
Existing systems are created by incentive structures, not agents; e.g. Las Vegas is the product of a known bias in human reward circuitry, not of optimization for human values.
But sometimes we move uphill anyway. Possible explanations: excess resources, physical limitations, economic activity still being (partly) coupled to human values, and coordination. Each of these protections is fragile, though:
Technology/ingenuity creates new opportunities to fall into such traps. Technology overcomes physical limitations, consumes excess resources. Automation further decouples economic activity from human values. Technology can improve coordination, but can also exacerbate existing conflicts by giving all sides more power.
AGI opens up whole new worlds of traps: Yudkowsky's paperclipper, Hanson's subsistence-level ems, Bostrom's Disneyland with no children.
6 & 7. Gnon - basically the god of the conservative scarcity mindset. Nick Land advocates compliance; Nyan wants to capture Gnon and build a walled garden. Scott warns that Moloch is far more terrifying than Gnon and will kill both of them anyway.
8 & 9. So we have to kill this Moloch guy, by lifting a better God to Heaven (Elua).
Cool.
For those who don't want to wait until April 7th, Udacity is scheduled to launch their own Intro to Data Science on Feb 5th (this Wednesday). I expect it to be easier/shallower than the Hopkins/Coursera sequence, but it wins on actionability.
For a countable language L and theory T (say, with no finite models), I believe the standard interpretation of "space of all models" is "space of all models with the natural numbers as the underlying set". The latter is a set with cardinality continuum (it clearly can't be larger, but it also can't be smaller, as any non-identity permutation of the naturals gives a non-identity isomorphism between different models).
Moreover, this space of models has a natural topology, with basic open sets {M: M models phi} for L-sentences phi, so it makes sense to talk about (Borel) probability measures on this space and the measures of such sets. (To actually get a Polish topology, one should take the basic opens to be the sets defined by formulas with parameters from the naturals; the sentence-defined sets above remain clopen in that finer topology, so they still receive well-defined measures, and the space is then Borel isomorphic to the real numbers.)
Note that by Löwenheim-Skolem, any model of T admits a countable elementary substructure, so to the extent that we only care about models up to some reasonable equivalence, countable models (hence ones isomorphic to models on the naturals) are enough to capture the relevant behavior. (In particular, as pengvado points out, the Christiano et al. paper only really cares about the complete theories realized by models, so models on the naturals suffice.)
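Spelled out in symbols (just restating the above; nothing new here):

```latex
% Basic open sets of the space of models on the naturals, one per L-sentence phi.
% Each is in fact clopen, since its complement is again of the same form:
\[
  [\varphi] = \{\, M \text{ with universe } \mathbb{N} \;:\; M \models \varphi \,\},
  \qquad
  [\varphi]^{c} = [\neg\varphi].
\]
% So any Borel probability measure mu assigns every sentence a value in [0,1],
% additively across mutually exclusive sentences:
\[
  \mu\bigl([\varphi \lor \psi]\bigr) = \mu\bigl([\varphi]\bigr) + \mu\bigl([\psi]\bigr)
  \quad \text{whenever } \varphi \land \psi \text{ is unsatisfiable.}
\]
```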
For the sake of the absent gods, do not start with an IDE or anything else that hides the compilation step.
Could you say more about why this is important for beginning programmers?
While you're certainly technically correct, it's an easy/common mistake to focus on the "save all you can" part while overlooking "gain all you can" opportunities. The EA movement is notable for proactively trying to counter this mistake, and apparently so was John Wesley.
For the same reasons you outline above, I'm okay with fighting this particular hypothetical.
If I must dignify the hypothesis with a strategy: my "buy" and "sell" prices for such a bet correspond to the inner and outer measures of the target, respectively.
In other words, "what is the measure of an unmeasurable set?" That question is wrong.
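For reference, the standard definitions behind those buy/sell prices (nothing here is specific to this thread): for a measure mu on a space X and an arbitrary target A contained in X,

```latex
\[
  \mu_{*}(A) = \sup\{\, \mu(B) : B \subseteq A,\ B \text{ measurable} \,\},
  \qquad
  \mu^{*}(A) = \inf\{\, \mu(B) : B \supseteq A,\ B \text{ measurable} \,\}.
\]
% If A is measurable the two coincide; for a genuinely unmeasurable target they
% can differ, which is exactly why no single "fair price" for the bet exists.
```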
Several popular comments say something to the effect of "I was too arrogant to just get with the program and cooperate with the other humans".
The biggest of my own arrogant mistakes was not taking CS/programming very seriously while in college, because I was dead set on becoming a mathematician and writing code was "boring" compared to math. Further arrogance: I wasn't fazed by the disparity between the number of graduating Ph.D.s and the number of academic jobs.
I found out in grad school that my level of talent in mathematics, while somewhat rare, was certainly not so exceedingly rare that real world considerations would not apply to me.
I've since changed my attitude, and I'm working on fixing this mistake.
Thanks for pointing that out. My feeling is still "well yes, that's technically true, but it still seems unnatural, and explosion is still the odd axiom out".
Coq, for example, allows empty case expressions (for empty types), and I expect that other languages that double as proof assistants support them as well... for the very purpose of satisfying explosion. General-purpose languages like Haskell (and, I just checked, OCaml too) can seemingly get by without explosion/empty cases, with few if any practical problems.
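For concreteness, here's a minimal sketch of what explosion-as-an-empty-case looks like in Haskell. It needs the opt-in EmptyCase extension (plus a Void type defined inline, mirroring Data.Void), which fits the point above: the language supports it, but nothing forces it on you.

```haskell
{-# LANGUAGE EmptyCase #-}

-- A type with no constructors: the Curry-Howard reading of "false".
-- (base ships an equivalent type as Data.Void.Void.)
data Void

-- Ex falso quodlibet, i.e. explosion: from an impossible value, conclude anything.
-- The empty case expression is total precisely because Void has no inhabitants.
absurd :: Void -> a
absurd v = case v of {}

-- Example use: a value of type Either Void b can only ever be a Right, and the
-- empty case lets us discharge the Left branch without a wildcard or an error.
fromRightOnly :: Either Void b -> b
fromRightOnly (Left v)  = absurd v
fromRightOnly (Right b) = b
```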