Thanks for the suggestion, Patrick. I've now adapted Tsvi's formal argument to the reframed "5-equals-10 problem" and added it into the last section of my writeup.
Because the length of Scott's Moloch post greatly exceeds my working memory (to the extent that I had trouble remembering what the point was by the end), I made these notes. I hope this is the right place to share them.
Notes on Moloch (ancient god of child sacrifice)
http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Intro - no real content.
Moloch as coordination failure: everyone makes a sacrifice to optimize for a zero-sum competition, ends up with the same relative status, but worse absolute status.
Cool.
For those who don't want to wait until April 7th, Udacity is scheduled to launch their own Intro to Data Science on Feb 5th (this Wednesday). I expect it to be easier/shallower than the Hopkins/Coursera sequence, but it wins on actionability.
For a countable language L and theory T (say, with no finite models), I believe the standard interpretation of "space of all models" is "space of all models with the natural numbers as the underlying set". The latter is a set with cardinality continuum (it clearly can't be larger, but it also can't be smaller, as any non-identity permutation of the naturals gives a non-identity isomorphism between different models).
Moreover this space of models has a natural topology, with basic open sets {M: M models phi} for L-sentences phi, so it mak...
For the sake of the absent gods do not start with an IDE or anything that hides the compilation step.
Could you say more about why this is important for beginning programmers?
While you're certainly technically correct, it's an easy/common mistake for people to focus on the "save all you can" part, overlooking "gain all you can" opportunities. The EA movement is notable for proactively trying to counter this mistake, and apparently so is John Wesley.
For the same reasons you outline above, I'm okay with fighting this hypothetical target.
If I must dignify the hypothesis with a strategy: my "buy" and "sell" prices for such a bet correspond to the inner and outer measures of the target, respectively.
In other words, "what is the measure of an unmeasurable set?". The question is wrong.
Several popular comments say something to the effect of "I was too arrogant to just get with the program and cooperate with the other humans".
The biggest of my own arrogant mistakes was not taking CS/programming very seriously while in college because I was dead set on becoming a mathematician, and writing code was "boring" compared to math. Further arrogance: I wasn't fazed by the disparity between the number of graduating Ph. D.'s and the number of academic jobs.
I found out in grad school that my level of talent in mathematics, while...
GEB will take you from superficial knowledge to full grok.
A word of caution: there is a risk when reading popular science/math books like GEB of coming away feeling like one understands something at a higher level than one actually does, particularly if one hasn't already studied the subject formally.
If one has formally studied incompleteness before, it's easy to wave away standard primitive recursive derivations (e.g. the proof predicate) as tedious and trivial and beside the main point, but having this attitude the first time around could be dangerous...
Martin Gardner's Mathematical Games column from Scientific American, Volume 242, Number 6, June 1980 (paywalled).
EDIT: escape characters
Robin Hanson has advocated this point of view.
I find the argument quite unconvincing; Hanson seems to be making the mistake of conflating "life worth living" with "not committing suicide" that is well addressed in MTGandP's reply (and grandchildren).
The 52-karma top comment of the Virtual Employment thread has been deleted. I gather that it said something about copywriting, with online skill tests for prospective applicants.
Can anyone provide a bit more information about this apparently quite valuable comment?
Awesome, thanks!
Thanks for the nice comment. I listed the PD post first, as it is probably the most readable of the three, written more like an article than like notes.
Thanks for looking! I'll try to get my hands on a physical copy, as the Google Books version has highly distracting page omissions.
Unless I'm going insane, Luke_A_Somers' comment below is incorrect. If you defect and your opponent times out, you still get 1 point and they get 0, which is marginally better than both getting 1 point in the case of mutual defection.
That was the purpose of my (sleep 9). I figured anyone who was going to eval me twice against anything other than CooperateBot is going to figure out that I'm a jerk and defect against me, so I'm only getting one point from them anyway. Might as well lower their score by 1.
My assumption might not have been correct though. In the ...
Yes, the "here's why Quinn is wrong about CooperateBot being a troll submission" comments were valuable, so I don't regret provoking them. Presumably if my comment had said "players" instead of "trolls" from the outset, it would have been ignored for being inoffensive and content-free.
But provoking a few good comments was a happy accident from my perspective. I will avoid casual use of the word "troll" on this site, unless I have a specific purpose in mind.
Yes, and I agree with this. I'm familiar with Straw Vulcanism and accept that guessing incorrectly is my mistake, not others'.
It seems anger and frustration were read into my comment, when in fact I was merely surprised, so I've edited out the offending t-word.
Ken Binmore & Hyun Song Shin. Algorithmic knowledge and game theory. (Chapter 9 of Knowledge, Belief, and Strategic Interaction by Cristina Bicchieri.)
EDIT: Actually, I'd be pretty happy to see any paper containing both the phrases "common knowledge" and "Löb's theorem". This particular paper is probably not the only one.
Thank you.
Can you elaborate on this?
You are right that I used the inflammatory t-word because CooperateBot submitters are probably not trying to win. I certainly expected to see DefectBots (or CliqueBots from optimists), and agree that the competition should have been seeded with both CooperateBots and DefectBots.
But I don't understand this at all:
But I find this a poor excuse. (It's why I always give the maximum possible number whenever I play "guess 2/3rds of the average.")
To me, this looks like t-wording the people who play 0.
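The "play 0" reasoning the other side appeals to comes from iterating best responses; a minimal sketch (the starting guess and round count are my own illustrative choices):

```python
# If everyone best-responds to the previous round's average, guesses in
# "guess 2/3 of the average" decay geometrically toward 0, the unique
# Nash equilibrium -- which is why some players jump straight to 0.
def best_response(avg):
    return (2 / 3) * avg

guess = 100.0          # start from the maximum allowed guess
for _ in range(50):
    guess = best_response(guess)

print(guess)           # about 1.6e-07: effectively 0 after 50 rounds
```

Of course, playing the maximum instead is a deliberate refusal to take even the first step of that ladder.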
Are we thinking of the ...
I was a bit irritated that no fewer than three players submitted CooperateBot, as I cooperated with it in hopes of tricking bots like submission B.
EDIT: More offense was taken at my use of the word "trolls" than was intended.
Of course, when you ignore the payoff matrix, the argument is no longer really about the Prisoner's Dilemma (and so I don't know that it's fair to accuse the barber paradox of having a lot of extra cruft not present in the PD formulation).
But I agree that this is a cute presentation of diagonal arguments in culturally relevant terms. A fun read.
I'm glad you bring this up. I have two comments:
(1) The absoluteness argument shows that intermediate uses of Choice don't matter. They happen (definably) in L, and the conclusion reflects back up to V.
(2) As you suspect, you don't really need Choice for compactness of the Hilbert cube. Given a sequence in it, I can define a convergent subsequence by repeatedly pigeonholing infinitely many points into finitely many holes. I believe sequential compactness is what is needed for the Kakutani theorem.
Variants of open cover compactness are derivable from sequen...
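The pigeonholing in (2) can be written out so that every step is definable (this spelling-out is mine, not part of the original exchange):

```latex
Given $(x^{(n)})_{n\in\mathbb{N}}$ in $[0,1]^{\mathbb{N}}$, work one coordinate
at a time: split $[0,1]$ into $[0,\tfrac12]$ and $[\tfrac12,1]$; at least one
half contains $x^{(n)}_1$ for infinitely many $n$ (take the left half whenever
both qualify, so no choice is invoked), and recurse within that half to pin
down nested subsequences along which $x^{(n)}_1$ converges. Doing this for each
coordinate $k$ in turn and passing to the diagonal subsequence (the $m$-th
index surviving the $m$-th stage) yields a subsequence converging
coordinatewise, i.e.\ in the product topology.
```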
I gave this paper a pretty thorough read-through, and decided that the best way to really see what was going on would be to write it all down, expanding the most interesting arguments and collapsing others. This has been my main strategy for learning math for a while now, but I just decided recently that I ought to start a blog for this kind of thing. Although I wrote this post mainly for myself, it might be helpful for anyone else who's interested in reading the paper.
Towards the end, I note that the result can be made "constructive", in the sen...
The computability problem is due to the unbounded ability of P to reason about theorems in the base theory (so that would be a necessary point of relaxation). A computable total function can't even assign values > 1/2 to all the theorems of PA and values < 1/2 to all the sentences refuted by PA.
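The "can't even" claim reduces to a standard recursive-inseparability fact; a sketch (assuming the function's outputs are computable rationals, so the comparison with 1/2 is decidable):

```latex
The sets $A=\{\varphi : \mathrm{PA}\vdash\varphi\}$ and
$B=\{\varphi : \mathrm{PA}\vdash\lnot\varphi\}$ are recursively inseparable.
If $f$ were a computable total function with $f(\varphi)>\tfrac12$ for all
$\varphi\in A$ and $f(\varphi)<\tfrac12$ for all $\varphi\in B$, then
$S=\{\varphi : f(\varphi)>\tfrac12\}$ would be a computable set with
$A\subseteq S$ and $B\cap S=\varnothing$, a contradiction.
```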
The set A is convex because the convex combination (t times one plus (1-t) times the other) of two coherent probability distributions remains a coherent probability distribution. This in turn is because the convex combination of two probability measures over a space of models (cf. definition 1) remains a probability measure over that space.
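In symbols, using the measure-over-models correspondence from definition 1 (my notation, not the paper's):

```latex
If $P_i(\varphi)=\mu_i(\{M : M\models\varphi\})$ for probability measures
$\mu_1,\mu_2$ on the space of models, then for $t\in[0,1]$,
\[
\bigl(tP_1+(1-t)P_2\bigr)(\varphi)
  = \bigl(t\mu_1+(1-t)\mu_2\bigr)(\{M : M\models\varphi\}),
\]
and $t\mu_1+(1-t)\mu_2$ is again a probability measure: nonnegativity and
countable additivity hold termwise, and the total mass is $t+(1-t)=1$.
```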
I think, but am not sure, that your issue is looking at arbitrary points of [0,1]^{L'}, rather than the ones which correspond to probability measures.
Not only is PA not finitely axiomatizable, but any consistent extension of PA isn't either (I think this is true; the same proof that works for PA should go through here, but I haven't checked the details).
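One standard route to the stronger statement (my sketch of the usual argument; the thread leaves it unchecked):

```latex
PA is essentially reflexive: every consistent extension $T\supseteq\mathrm{PA}$
in the language of arithmetic proves $\mathrm{Con}(T_0)$ for each finite
subtheory $T_0\subseteq T$. If such a $T$ were finitely axiomatized by a single
sentence $\sigma$, then $T\vdash\mathrm{Con}(\sigma)$, i.e.\ $T$ would prove
its own consistency, contradicting G\"odel's second incompleteness theorem.
```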
Well if we had this, we would know immediately that Q + Con(PA) is not an extension of PA (which is what we originally wanted), because it certainly is finitely axiomatizable. I know there are several proofs that PA is not finitely axiomatizable, but I have not read any of them, so can't comment on the strengthened statement, though it sounds true.
Ah, so my question was more along the line: does finite axiomatizability of a stronger (consistent) theory imply finite axiomatizability of the weaker theory? (This would of course imply Q + Con(PA) is not stronger than PA, Q being the usual symbol for Robinson arithmetic.)
On the model theoretic side, I think I can make something work, but it depends on distorting the specific definition of Con(PA) in a way that I'm not really happy about. In any case, I agree that your example is trivial to state and trivial to believe correct, but maybe it's less trivial...
Your post confused me for a moment, because Robinson + Con(PA) is of course not weaker than PA. It proves Con(PA), and PA doesn't.
I see now that your point is that Robinson arithmetic is sufficiently weak compared to PA that PA should not be weaker than Robinson + Con(PA). Is there an obvious proof of this?
(For example, if Robinson + Con(PA) proved all theorems of PA, would this contradict the fact that PA is not finitely axiomatizable?)
Since December, I've been pursuing a "remedial computer science education", for the sake of both well-roundedness and employability. My background is in the purest of pure math (Ph. Dropout from a well-ranked program), so I feel I can move fairly quickly here, though the territory is new.
My biggest milestone to date has been solving the first 100 Project Euler problems in Python (no omissions!). I had had a bit of Python experience before, and I picked 100 as the smallest number that sounded impressive (to me).
Second biggest milestone: following ...
Registered.
From Kahneman's Thinking, Fast and Slow (p 325):
The probability of a rare event is most likely to be overestimated when the alternative is not fully specified... [Researcher Craig Fox] asked [participants] to estimate the probability that each of the eight participating teams would win the playoff; the victory of each team in turn was the focal event.
... The result: the probability judgments generated successively for the eight teams added up to 240%!
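The incoherence is easy to see numerically; a toy reconstruction (the per-team 30% figure is my own illustrative assumption, just 240%/8):

```python
# Eight focal estimates, elicited one team at a time, each look
# reasonable in isolation but cannot all be right -- exactly one team
# wins the playoff, so coherent estimates must sum to 100%.
focal_estimates = [0.30] * 8          # hypothetical per-team judgments
total = sum(focal_estimates)
print(f"{total:.0%}")                 # 240%, versus the coherent 100%
```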
Do you (r_claypool) have reason to suspect that Christianity is much more likely to be true than other, ...
But orthonormal, your example displays hindsight bias rather than confirmation bias!
I interpret billswift's comment to mean:
GlaDOS, you should not just seek confirmation of the legitimacy of the text; you should also seek refutation.
(Or possibly it was meant the other way around?)
In any case, I agree that billswift's comment is off-base, because GLaDOS' comment does not actually show confirmation bias.
I am currently reading Kahneman's book, and about 100 pages in I realized I was going to cache a lot more of the information if I started mapping out some of the dependencies between ideas in a directed graph. Example: I've got an edge from {Substitution} to {Affect heuristic}, labeled with the reminder "How do I feel about it? vs. What do I think about it?". My goal is not to write down everything I want to remember, rather to (1) provide just enough to jog my memory when I consult this graph in the future, and (2) force me to think critically about what I'm reading when deciding whether or not to add more nodes and edges.
Grognor, I don't think it's fair to insinuate that you may have learned a wrong lesson here. If it's wrong (I actually doubt that it is), then it's up to you to try to resist learning it.
As regards walking readers into a trap to teach them lessons, one of my all-time favorite LW posts does exactly this, but is very forthcoming about it. By contrast, I think thomblake overestimates the absurdity of the examples here: I thought they seemed plausible, and that "Frodo Baggins" was just poor reasoning. The comments show I'm not alone here. This level ...
I really, really dislike April Fool's jokes like this. Somebody will stumble onto this post at a later date, read it quickly, and come away misinformed.
I'll grant that the obviously horrible "Frodo Baggins" example should leave a bad taste in rationalists' mouths, but a glance at the comments shows that several readers initially took the post seriously, even on April 1st.
The post's own title describes the bias as fictional. It is tagged "aprilfools". All of the citations are, even at a glance, either made up or about different biases. The post is peppered with weasel words in place of real references, like "As it turns out". The comments mention it's an April Fool's gag, and point to related real articles. The examples are completely absurd - and while the absurdity heuristic isn't perfect, "even though appearances can be misleading, they're usually not."
And the effect probably does really exist!
There are several lessons in there...
I agree completely. If you didn't read the references or notice the date, the article seems completely legitimate. It makes a couple weird claims (fictional drugs?), but if you didn't know the literature they wouldn't necessarily seem any stranger than the actual things people do (like anchoring their estimate of a car's value to their social security number). Remember that the absurdity heuristic is not a very good mode of reasoning!
So this means that while people who know Less Wrong can have a little inside joke, people who are new to rationalism and behavioral sciences could easily be fooled.
I suspect it has to do with some LW users taking FAI seriously and dropping everything to join the cause, as suggested in this comment by cousin_it. In the following discussion, RichardKennaway specifically links to "Taking ideas seriously".
Oh! Well I feel stupid indeed. I thought that all the text after the sidenote was a quotation from Luke (which I would find at the link in said sidenote), rather than a continuation of Mike Darwin's statement. I don't know why I didn't even consider the latter.
Additionally, the link in the OP is wrong. I followed it in hopes that Luke would provide a citation where I could see these estimates.
Well, models can have the same reals by fiat. If I cut off an existing model below an inaccessible, I certainly haven't changed the reals. Alternately I could restrict to the constructible closure of the reals L(R), which satisfies ZF but generally fails Choice (you don't expect to have a well-ordering of the reals in this model).
I think, though, that Stuart_Armstrong's statement
Often, different models of set theory will have the same model of the reals inside them
is mistaken, or at least misguided. Models of set theory and their corresponding sets of ...
The predicate "is a real number" is absolute for transitive models of ZFC in the sense that if M and N are such models with M contained in N, then for every element x of M, the two models agree on whether x is a real number. But it can certainly happen that N has more real numbers than M; they just have to lie completely outside of M.
Example 1: If M is countable with respect to N, then obviously M doesn't contain all of N's reals.
Example 2 (perhaps more relevant to what you asked): Under mild large cardinal assumptions (existence of a measurable ...
Cached wisdom?
Anyway, I'd be more interested in hearing the regrets of those people who lived true to themselves, didn't work too hard, let themselves be happier, etc. Do they wish they'd worked harder and "made something of themselves"? Been better at cooperating with the rest of society?
Signed up. Upon reflection, I believe the deadline is what let me get away with doing this right now at the expense of putting off studying for yet another hour. But it's hard to say, because I decided pretty quickly that I was going to do it, and I only came up with that explanation after the fact.
Actually my revised opinion, as expressed in my reply to Tyrell_McAllister, is that the authors' analysis is correct given the highly unlikely set-up. In a more realistic scenario, I accept the equivalences A~B and C~D, but not B~C.
I claim that the answers to E, F, and G should indeed be the same, but H is not equivalent to them. This should be intuitive. Their line of argument does not claim H is equivalent to E/F/G - do the math out and you'll see.
I really don't know what you have in mind here. Do you also claim that cases A, B, C are equivalent to each other but not to D?
After further reflection, I want to say that the problem is wrong (and several other commenters have said something similar): the premise that your money buys you no expected utility post mortem is generally incompatible with your survival having finite positive utility.
Your calculation is of course correct insofar as it stays within the scope of the problem. But note that it goes through exactly the same for my cases F and G. There you'll end up paying iff X ≤ L, and thus you'll pay the same amount to remove just 1 bullet from a full 100-shooter as to remove all 100 of them.
I also reject the claim that C and B are equivalent (unless the utility of survival is 0, +infinity, or -infinity). If I accepted their line of argument, then I would also have to answer the following set of questions with a single answer.
Question E: Given that you're playing Russian Roulette with a full 100-shooter, how much would you pay to remove all 100 of the bullets?
Question F: Given that you're playing Russian Roulette with a full 1-shooter, how much would you pay to remove the bullet?
Question G: With 99% certainty, you will be executed. With 1% cer...
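The E/F equivalence (and the "pay the same for 1 bullet as for 100" point above) can be checked in a toy utility model; everything here, including the `max_payment` helper and the numbers, is my own illustrative assumption, not the post's:

```python
def max_payment(p_before, p_after, wealth, life_value):
    """Largest X you'd pay to move your death probability from
    p_before to p_after, in a toy model where u(dead) = 0 and
    u(alive with money m) = life_value + m."""
    total = life_value + wealth
    # Indifference point: (1 - p_after) * (total - X) = (1 - p_before) * total
    return total * (1 - (1 - p_before) / (1 - p_after))

# Removing all 100 bullets from a full 100-shooter (death prob 1 -> 0)
# and removing just 1 bullet (death prob 1 -> 0.99) are priced the same:
remove_all = max_payment(1.0, 0.00, wealth=100, life_value=1000)
remove_one = max_payment(1.0, 0.99, wealth=100, life_value=1000)
print(remove_all, remove_one)   # 1100.0 1100.0 -- everything you have, either way
```

With a starting death probability of 1, the refusal branch has zero expected utility, so any X up to your entire total is worth paying regardless of how much the probability drops.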
Thanks for pointing that out. My feeling is still "well yes, that's technically true, but it still seems unnatural, and explosion is still the odd axiom out".
Coq, for example, allows empty case expressions (for empty types), and I expect that other languages which double as proof assistants would support them as well... for the very purpose of satisfying explosion. General purpose languages like Haskell (and I just checked OCaml too) can seemingly overlook explosion/empty cases with few if any practical problems.
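For a concrete picture of what "satisfying explosion" looks like in such a language, here is a Lean 4 sketch (my own illustration; the comment itself mentions Coq, Haskell, and OCaml):

```lean
-- Ex falso in Lean 4: False has no constructors, so its eliminator
-- needs zero cases, and anything follows from a proof of False.
example (h : False) : 5 = 10 := h.elim

-- The same idea for programs: Empty is the datatype with no
-- constructors, and `nomatch` is the empty case expression.
def fromEmpty {α : Type} (v : Empty) : α := nomatch v
```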