Comment author: Clarity 20 October 2015 09:55:52AM 6 points [-]

Think you have a finely calibrated and important information diet? Imagine if you had the world's strongest intelligence agency tailor the news for you. Well, you don't have to imagine, because the President's Daily Briefs have just been declassified. If you're interested, you can collaborate with researchers to get a better handle on them. Enjoy.

Comment author: satt 20 October 2015 09:31:45PM 5 points [-]

For convenience, here's a link to the individual briefs as separate PDF files, for anyone else who doesn't want to download all 34MB at once. (I thought the Flickr page might have a few convenient, face-on snapshots of pages from the briefs, but the CIA reckoned it was more important to take 5 photos of a woman wheeling a trolley of briefs through the CIA lobby. #thanksguys)

I suspect daily presidential briefings from the CIA are finely (as in carefully & deliberately) calibrated but not that well calibrated (as in being accurate, representative and not tendentious). The CIA doubtless has incentives to misrepresent some things to the president — and indeed a president probably has some incentives to allow/encourage being misled about certain things!

Comment author: philh 14 October 2015 10:08:03AM 2 points [-]

My main reaction to that graphical model is that it would be surprising if the intersection point were currently exactly on the cusp in the demand curve, unless something was keeping it there. To the extent that the model works, I'd expect our current situation to have a shorter vertical bit on the demand curve (there are in fact people going hungry), so that the intersection is somewhere in the slopey bit, at a lower price than in your first picture. Then UBI could bring us to the second picture, where the price has risen, but food is still more widely available than under the status quo. (This is one of the directions I was looking at.)

With competition, it seems to me that retailers currently have margins that competition could eat into, but doesn't. If one of the factors keeping margins above epsilon is the amount of money people are willing to spend, then an increase in that would presumably also increase margins.

Comment author: satt 16 October 2015 12:35:18AM 0 points [-]

I guess I ruled out the possibility that the status-quo intersection was on the slopey bit because then everyone would be going hungry (from the assumptions that everyone was spending $200/month on food and that everyone shared the same subsistence level). However, I don't have an argument for why the status quo would be on the cusp rather than below it; I just had a hunch which I should (with hindsight) probably have ignored.

Comment author: Clarity 15 October 2015 12:48:43PM *  0 points [-]

Ya know the funny thing is, I instinctively came here to upvote your reply. I suspect I would have done that even if your reply was of poor quality. Perhaps that could be construed as a form of retributive upvoting, in gratitude for the courtesy of replying. In that case, I would intuit that it is not good practice, since it would equally skew the karma system (unless everyone is doing it, I suppose). Though, karma isn't ahhh...can't remember the economic term...replaceable by another unit of karma. There is a marginal value to karma and very different signals for negative/positive karma.

Comment author: satt 15 October 2015 06:26:41PM 0 points [-]

karma isn't ahhh...can't remember the economic term...replaceable by another unit of karma

Fungible?

Comment author: philh 13 October 2015 09:58:35PM -1 points [-]

Also not an economist.

The simple model would be: everyone needs a certain minimum amount of food. If everyone is getting $300 a month and spending $200 a month on food, and the price of that food suddenly jumps to $300 a month, people will start to spend $300 a month on food. So we'd expect the price of food to rise until retailers have extracted everything they can from customers.
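In symbols (a minimal sketch of that model, in my own notation): with monthly income m, food price p, and subsistence quantity q_s, each household spends

spend = min(m, p × q_s)

on food each month, so on this model retailers can keep raising p until p × q_s = m and the whole income is absorbed.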

I'm not sure that prices rise because of inflation, so much as inflation being the name we give to the phenomenon of rising prices. I'd be moderately surprised if economists could accurately (and precisely) predict the effects of UBI on inflation.

Comment author: satt 13 October 2015 11:55:20PM *  4 points [-]

Also also not an economist, although I took economics classes once.

I had a go at translating the simple model into one of those supply 'n' demand scribbles. For parsimony I assumed a straight line for the supply curve. For the demand curve I assumed no one bought more than the subsistence level of food, and that if the price was too high to reach that level, everyone simply bought as much food as they could with a constant budget.
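Spelled out (a minimal sketch in my own notation, with B the constant food budget and q_s the subsistence quantity), that demand curve is

D(p) = q_s,    if p ≤ B / q_s
D(p) = B / p,  if p > B / q_s

i.e. vertical at the subsistence quantity while food is affordable, then sloping down along the budget constraint once it isn't.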

That makes the status quo:

[image: supply & demand diagram for the status quo]

and after a universal jump in income to relax everyone's budget constraint, the non-vertical part of the demand curve rises:

[image: supply & demand diagram after the universal income jump]

At both times the intersection of S and D determines the equilibrium price. The intersection stays in the same place, so, in this incredibly simplified model, the equilibrium price is unaffected by everyone getting more money.

Being so primitive, this graphical model does not remotely prove that the price would stay the same in real life. But trying to figure out why the graphical model disagreed with the verbal model let me put my finger on where the two differ, and I think it's a hole in the verbal model.

The verbal model observes that if people have $300/month, all of the retailers could jack the price of food up to $300/month, and everyone would be compelled to pay that. But that assumes coordination/cooperation/collusion between retailers rather than competition. If every food retailer raised their price to $300/month, any one of those retailers could swoop in and steal the others' custom by cutting their own price to $299/month. And then another retailer could cut their price to $298/month, and so on. By the obvious inductive argument, the equilibrium price would wind up at the same $200/month it was before.
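To spell out the undercutting step (a minimal sketch, assuming the original $200/month roughly reflects each retailer's per-customer cost c, and that customers always buy from the cheapest retailer): if all k retailers charge a common price p > c, a defector charging p − ε earns

(p − ε − c) × N  rather than  (p − c) × N / k

by capturing all N customers instead of a 1/k share, a strict improvement for small enough ε whenever k ≥ 2. So no common price above c is stable, and competition pushes the price back toward c ≈ $200/month.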

Comment author: moridinamael 05 October 2015 03:23:46PM *  5 points [-]

I am of the opinion that if you do grad school and you don't attach yourself to a powerful and wise mentor in the form of your academic adviser, you're doing it wrong. Mentorship is a highly underrated phenomenon among rationalists.

I mean, if you're ~22, you really don't know what the hell you're doing. That's why you're going to grad school, basically. To get some further direction in how to cultivate your professional career.

If you happen to have access to an adviser who won a Nobel or whose adviser won a Nobel, they would make a good choice. The implicit skills involved in doing great work are sometimes passed down this way. The adviser won't even necessarily know which of their habits are the good ones. I'm thinking specifically of a professor I knew, whose own adviser was a Nobel laureate, and who would take his students out for coffee almost every day. They would casually talk shop while getting coffee. This professor's students were generally well above average in their research accomplishments.

Comment author: satt 07 October 2015 04:15:05AM 5 points [-]

I mostly agree, but would add two caveats.

Relying too much on getting one very specific adviser is risky. Most advisers are middle-aged (or outright old), especially those with Nobel Prizes, and they do sometimes die or move away with little notice. If that happens, universities can be very bad about finding replacements (let alone comparably brilliant replacements) for any students cast adrift.

Also, an adviser's personality & schedule are as important as their research skills: a Nobel Prize winner who's usually away giving speeches, and is a raging, neglectful arsehole when they are around, is likely to be more of a hindrance than a help in getting a PhD. Put like that, what I just wrote is obvious, but I can imagine it being the kind of thing potential applicants would overlook.

Comment author: satt 07 October 2015 01:47:43AM 2 points [-]

Me previously on the topic of getting a PhD.

Comment author: James_Miller 21 September 2015 10:58:45PM 1 point [-]
Comment author: satt 22 September 2015 04:23:16AM 4 points [-]

I made a grab for some low-hanging knowledge on the counterfactual question by looking at the first couple of pages of a Google Scholar search for articles I could access which offered background on the topic. (I don't have the time or the interest to do anything like a real literature review, but I expect even a cursory Google Scholar search to be more reliable than a lone NewsBusters article.) Ignoring the books and paywalled Foreign Affairs articles I can't read, I got

[links to five sources: Mazarr's essay, Niksch's report, Walt's article, Mack's essay, and the 1999 Perry et al. review]

I haven't perused these from start to finish, and even if I had I couldn't discuss them comprehensively in a blog comment. So I have to give a radically compressed (hence necessarily selective) digest of the bits I saw which shed light on the counterfactual question.

First, Mazarr's essay. It summarizes itself, but even the summary won't fit here, so I skip to its p. 104, where Mazarr referred to NK's "alleged one or two nuclear weapons" (fitting NBC's report that NK had a nuclear weapon), and quote a longer block from the same page:

Down one road lies an ultimatum—a demand for perfect confidence and complete disarmament; its way-stations are confrontation, an end to IAEA inspections and other forms of international control [...] sanctions, and possibly war. The other road holds a more accommodating approach, lessened tensions, expanded international monitoring [...] and the hope of eventual disarmament; its price is a greater near- to medium-term risk that the proliferant might be able to hide a rudimentary nuclear program.

Mazarr adds that, in practice, the US "always resorts" to the softer approach "in cases of hard-core proliferation", having "accepted ambiguous proliferation in India and Israel for many years", and likewise didn't pursue an all-out approach against India & Pakistan. Further along, on p. 110, in the section on sanctions:

Even had a tougher approach been more appealing, there was little chance it would have worked. North Korea had a long history of rejecting international opinion when phrased as a demand and accompanied by sanctions or the threat of them. Nor could economic sanctions have been effective without the participation of China, South Korea, Russia, and Japan, each of which expressed some degree of unease with a confrontational approach to the North, and reluctance to take any steps that might spark a rapid collapse of the North Korean system.

The section on sanctions was generally pessimistic, though Mazarr granted that "the de facto sanction of existing trade restrictions" could help shape "a proliferant's motives" (p. 111), and that NK seemed to have an interest "in avoiding condemnation and sanctions as voted by the Security Council" (p. 112). Mazarr was even more doubtful that military action would "have offered a definitive answer to the North Korean nuclear challenge" because it could have "led directly to a Korean war" and "military strikes [...] probably would not work" anyway (p. 113).

Mazarr's essay was most optimistic about the kind of approach represented by Clinton's '94 agreement: "a broad-based policy of incentives built around the offer of a package deal" (p. 114). Even a rejected package deal "would have its uses" because it "would force North Korea to make a clear choice, deprive it of excuses, and seize the political high ground, firming up a political consensus (including China) for UN sanctions" (p. 117).

Niksch's report doesn't seem useful for the counterfactual question at issue, because the report is mainly about the (second) Bush administration's goals & actions. My skimming revealed a description of the US's obligations under the '94 Agreed Framework, but no substantial, explicit evaluation of alternatives to the Framework.

Walt's article is a general assessment of Clinton's foreign policy. From its paragraph about the 1994 NK deal, on pages 72-73:

Hard-liners have criticized Clinton for rewarding North Korea's defiance of the nonproliferation regime, but they have yet to offer an alternate policy that would have achieved as much with as little. A preemptive air strike might well not eliminate North Korea's nuclear capability. Moreover, both South Korea and Japan opposed the use of force. [...] the situation called for flexibility, persistence and creativity; the administration displayed them all. Without the 1994 Agreed Framework, North Korea would almost certainly have obtained enough fissile material for a sizable number of nuclear bombs. [...] Given the limited array of options and the potential for disaster, Clinton's handling of North Korea is an impressive diplomatic achievement.

Mack's essay reminds me of Mazarr's in its scepticism about sanctions (e.g. p. 32: "What all this suggests is that imposing sanctions will be far more problematic than their more naive proponents in the West realize"), and Mack was at least as negative as Mazarr about military action, writing on p. 33 that "[t]he idea of resolving the nuclear issue by 'taking out' the Yongbyon nuclear facilities suffers from three fatal defects". Those three, briefly: (1) "it is by definition impossible to hit unknown targets" potentially kept secret by a "paranoid" regime; (2) "'surgical strikes' against Yongbyon might not only fail to destroy all of the North's nuclear program, they would also unleash a very unsurgical war against the South"; and (3) "it would be politically impossible to pursue the military option until the less risky alternatives of persuasion and sanctions had [...] failed. But sanctions would likely take years to have the desired effect". Ultimately, Mack was not sure anything would work. From p. 35:

Given the very real possibility that neither persuasion nor bribery, economic coercion, military action, or even unilateral reassurance will divert Pyongyang from its nuclear path, the international community needs to start thinking about what this may mean for regional—and global—security.

The 1999 Perry et al. review reads to me as broadly positive about the Agreed Framework, asserting on p. 2 that it

succeeded in verifiably freezing North Korean plutonium production at Yongbyon — it stopped plutonium production at that facility so that North Korea currently has at most a small amount of fissile material it may have secreted away from operations prior to 1994; without the Agreed Framework, North Korea could have produced enough additional plutonium by now for a significant number of nuclear weapons.

The review team behind the report recommended on p. 6, as one of its six recommendations, that the Agreed Framework "be preserved and implemented":

With the Agreed Framework, the DPRK's ability to produce plutonium at Yongbyon is verifiably frozen. Without the Agreed Framework, however, it is estimated that the North could reprocess enough plutonium to produce a significant number of nuclear weapons per year. The Agreed Framework's limitations, such as the fact that it does not verifiably freeze all nuclear weapons-related activities [...] are best addressed by supplementing rather than replacing the Agreed Framework.


Insofar as these sources are accurate and I've understood and digested them properly, it's not only possible but likely that Clinton did about as well on this count as a different president could've. If so, then (even if NK didn't already have a nuclear weapon in '94) I'd think it unfair to assert that "Clinton let North Korea get nuclear weapons" as if there were an alternative decision Clinton could've taken to delay North Korea's first nuclear test for 13+ years.

(2/2)

Comment author: James_Miller 21 September 2015 10:58:45PM 1 point [-]
Comment author: satt 22 September 2015 04:23:01AM 3 points [-]

The linked article does an OK job of documenting that contemporary news reports were too optimistic about how much Clinton's 1994 deal would constrain North Korea's bomb seeking. However, I don't think that's an adequate basis for "Clinton let North Korea get nuclear weapons" — not least because the article itself echoes, in apparent agreement, NBC's contemporary claim that NK already had a nuclear bomb.

Even setting aside that claim, I wouldn't be confident in inferring that "Clinton let North Korea get nuclear weapons" merely because Clinton made a deal and 12 years later (and 6 years after Clinton left office) NK set off a nuke. Given my original state of ignorance (I didn't know anything about this 1994 deal before this thread), I can't rule out the possibilities that (1) Clinton actually made smart moves which were later vitiated by Bush or a lower-ranked politician, or that (2) Clinton made the best of a bad hand, there being no reasonable counterfactual where a US president in 1994 could've ensured, without triggering some patently worse consequence, that NK's first nuclear explosion happened substantially after 2006.

(1/2)

Comment author: gwern 17 September 2015 06:25:16PM 2 points [-]

And I'd probably make the noise term multiplicative and non-negative, instead of additive, to prevent the sampler from landing on a negative sales figure, which is presumably nonsensical in this context.

I know JAGS lets you put interval limits onto terms, which lets you specify that some variable must be non-negative (it looks something like dist(x,y)[0,∞]), so maybe Stan has something similar.

Comment author: satt 19 September 2015 12:33:20PM *  2 points [-]

It does: Stan supports truncated sampling statements.
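For concreteness, here's a minimal self-contained sketch of the truncation syntax (my illustration; the variable names are arbitrary):

data {
  int<lower=1> N;
  real<lower=0> y[N];
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  for (n in 1:N) {
    // the T[0, ] suffix truncates the normal below at zero,
    // much like the JAGS interval trick mentioned above
    y[n] ~ normal(mu, sigma) T[0, ];
  }
}

However...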

I see now I could've described the model better. In Stan I don't think you can literally write the observed data as the sum of the signal and the noise; I think the data always has to be incorporated into the model as something sampled from a probability distribution, so you'd actually translate the simplest additive model into Stan-speak as something like

data {
  int<lower=1> N;
  int<lower=1> Ncities;
  int<lower=1> Nwidgets;
  int<lower=1> city[N];
  int<lower=1> widget[N];
  real<lower=0> sales[N];
}
parameters {
  real<lower=0> alpha;
  real beta[Ncities];
  real gamma[Nwidgets];
  real<lower=0> sigma;
}
model {
  // put code here to define explicit prior distributions for parameters
  for (n in 1:N) {
    // the tilde means the left side's sampled from the right side
    sales[n] ~ normal(alpha + beta[city[n]] + gamma[widget[n]], sigma);
  }
}

which could give you a headache because a normal distribution puts nonzero probability density on negative sales values, so the sampler might occasionally try to give sales[n] a negative value. When this happens, Stan notices that's inconsistent with sales[n]'s zero lower bound, and generates a warning message. (The quality of the sampling probably gets hurt too, I'd guess.)

And I don't know a way to tell Stan, "ah, the normal error has to be non-negative", since the error isn't explicitly broken out into a separate term on which one can set bounds; the error's folded into the procedure of sampling from a normal distribution.

The way to avoid this that clicks most with me is to bake the non-negativity into the model's heart by sampling sales[n] from a distribution with non-negative support:

for (n in 1:N) {
  // note: beta & gamma would now need <lower=0> declarations, so that log's argument stays positive
  sales[n] ~ lognormal(log(alpha * beta[city[n]] * gamma[widget[n]]), sigma);
}

Of course, bearing in mind the last time I indulged my lognormal fetish, this is likely to have trouble too, for the different reason that a lognormal excludes the possibility of exactly zero sales, and you'd want to either zero-inflate the model or add a fixed nonzero offset to sales before putting the data into Stan. But a lognormal does eliminate the problem of sampling negative values for sales[n], and aligns nicely with multiplicative city & widget effects.
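A minimal sketch of the offset workaround (the 0.5 is an arbitrary illustrative constant, and this assumes beta & gamma carry the <lower=0> declarations mentioned above):

model {
  for (n in 1:N) {
    // hypothetical fixed offset keeps rows with exactly zero sales
    // inside the lognormal's support
    (sales[n] + 0.5) ~ lognormal(log(alpha * beta[city[n]] * gamma[widget[n]]), sigma);
  }
}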

Comment author: philh 15 September 2015 09:30:01AM 1 point [-]

Sure, I'm aware that this is the sort of thing I need to think about. It's just that right now, even if I do specify exactly how I think the generating process works, I still need to work out how to do the estimation. I somewhat suspect that's outside of my weight class (I wouldn't trust myself to be able to invent linear regression, for example). Even if it's not, if someone else has already done the work, I'd prefer not to duplicate it.

Comment author: satt 16 September 2015 01:17:42AM *  3 points [-]

Even if you know only the generating process and not an estimation procedure, you might be able to get away with just feeding a parametrization of the generating process into an MCMC sampler, and seeing whether the sampler converges on sensible posterior distributions for the parameters.

I like Stan for this; you write a file telling Stan the data's structure, the parameters of the generating process, and how the generating process produced the data, and Stan turns it into an MCMC sampling program you can run.

If the model isn't fully identified you can get problems like the sampler bouncing around the parameter space indefinitely without ever converging on a decent posterior. This could be a problem here; to illustrate, suppose I write out my version of skeptical_lurker's formulation of the model in the obvious naive way —

sales(city, widget) = α × β(city) × γ(widget) + noise(city, widget)

— where brackets capture city & widget-type indices, I have a β for every city and a γ for every widget type, and I assume there are no odd correlations between the different parameters.

This version of the model won't have a single optimal solution! If the sampler finds a promising set of parameter values, it can always produce another equally good set by halving all of the β values and doubling all of the γ values; or by halving α and the γ values while quadrupling the β values; or by...you get the idea. A sampler might end up pulling a Flying Dutchman, swooping back and forth along a hyper-hyperbola in parameter space.
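Schematically (my notation): for any constant k > 0,

α × β(city) × γ(widget) = (α / k) × (k × β(city)) × γ(widget)

so the likelihood is exactly flat along these rescalings and can't pin down the parameters' overall scale.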

I think this sort of under-identification isn't necessarily a killer in Stan if your parameter priors are unimodal and not too diffuse, because the priors end up as a lodestar for the sampler, but I'm not an expert. To be safe, I could avoid the issue by picking a specific city and a specific widget type as references, with the other cities' β and the other widget types' γ effectively defined as proportional to those:

if city == 1 and widget == 1: sales(city, widget) = α + noise(city, widget)

else, if city == 1: sales(city, widget) = α × γ(widget) + noise(city, widget)

else, if widget == 1: sales(city, widget) = α × β(city) + noise(city, widget)

else: sales(city, widget) = α × β(city) × γ(widget) + noise(city, widget)

Then run the sampler and back out estimates of the overall city-level sales fractions from the parameter estimates (1 / (1+sum(β)), β(2) / (1+sum(β)), β(3) / (1+sum(β)), etc.).
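In Stan, that reference-category trick might look something like this (a rough sketch, folding in the multiplicative noise suggested just below; the data block is as in the Stan model quoted earlier on this page):

parameters {
  real<lower=0> alpha;
  real<lower=0> beta[Ncities - 1];   // city 1 is the reference; its beta is implicitly 1
  real<lower=0> gamma[Nwidgets - 1]; // widget 1 is the reference; its gamma is implicitly 1
  real<lower=0> sigma;
}
model {
  for (n in 1:N) {
    real mu;
    mu <- alpha;
    if (city[n] > 1)
      mu <- mu * beta[city[n] - 1];
    if (widget[n] > 1)
      mu <- mu * gamma[widget[n] - 1];
    sales[n] ~ lognormal(log(mu), sigma);
  }
}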

And I'd probably make the noise term multiplicative and non-negative, instead of additive, to prevent the sampler from landing on a negative sales figure, which is presumably nonsensical in this context.

Apologies if I'm rambling at you about something you already know about, or if I've focused so much on one specific version of the toy example that this is basically useless. Hopefully this is of some interest...
