
Comment author: economy 25 January 2015 02:22:22AM *  0 points [-]

I like this post because I find it very familiar, though familiar in the sense that it is reminiscent of the sorts of confusions and mix-ups that can result from half an economics education. Please allow me to push back on some points, to invite your further thoughts and considerations.

Just going in order, you treat classical economists as if they adhered to a revealed preference theory. That theory is Paul Samuelson's, introduced at the tail end of neoclassical economics. It cannot explain anything the classical economists did. And in general, those who are happy not to work are not unemployed, they are nonemployed. Only people who want a job but cannot get one are unemployed, so the revealed preference explanation cannot explain unemployment in any case.

The classical economists did indeed blame a lot of unemployment on union activity and price floors. They were not wrong to do so, either. Both quite predictably create unemployment.

Irrationality is not a charge that should be thrown around lightly in economics. There is nothing about the traditional model of rationality that says people cannot tie their status to their nominal wealth. After all, it is really more a question of whether others do so, isn't it? Nor is status necessary to invoke: workers are ignorant about the state of aggregate demand and their value to the company, so they use their coworkers and past experiences as references.

Some of the frictions you cite, such as changes in technology, are not frictions. Less productive workers are simply paid less, not unemployed.

Regarding all the difficulties of firing workers and changing jobs and so on, why doesn't a series of markets emerge to address this very problem? If one can turn unemployed resources into employed resources, then one can potentially profit.

Joseph Stiglitz on information and the invisible hand.

Janet Yellen on efficiency wages and unemployment.

Alchian has a good paper or three on related subjects, am thinking it's called "Information Costs," but Google brings up the wrong paper.

Macroeconomics is not based on the observation that people spend what they earn. That observation is pretty old, probably older than economics itself, if you take Adam Smith as the starting point. And neither Keynesians nor monetarists really exist today. We understand now that Keynes and Old Keynesianism are basically wrong, and New Keynesianism incorporates those criticisms into a new research program.

Monetary theories of recession and unemployment go back to at least the days of Thomas Malthus, Jean-Baptiste Say, and John Stuart Mill. Then you have Jevons's sunspots, the Austrian theory, and the rudimentary development of real business cycle theory as well, all before Keynes.

The circular flow model does not predict that a drop in spending creates a recession. The model just isn't meant to do that. Markets have equilibrating properties that pull firms toward discovering what consumers demand and making full use of resources, and it should be pretty obvious from experience that that mechanism must break down somewhere. Otherwise the next time you lose a penny the economy would collapse. The model just does not describe any theory of unemployment, not even really the underconsumptionist theory. See the permanent income hypothesis for one example--Milton Friedman's major break from Keynes.

You don't mention real business cycle or new classical theory. They are academically dominant, or were until recently.

Since, as you seem to know, spending is income, reducing everyone's income by 50% changes nothing. Everything goes down by 50%, and the economy is as it was before.
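A toy sketch of that neutrality claim (the goods, prices, and numbers are my own, purely illustrative): scaling every nominal quantity by 50% leaves every real quantity, and hence every decision, unchanged.

```python
# Hypothetical nominal prices and income (illustrative numbers only).
prices = {"bread": 2.0, "rent": 800.0}
income = 1600.0

# The real bundle: how much of each good the income can buy.
real_bundle = {good: income / p for good, p in prices.items()}

# Now cut every nominal quantity -- prices and income alike -- by 50%.
scaled_prices = {g: 0.5 * p for g, p in prices.items()}
scaled_income = 0.5 * income
scaled_bundle = {g: scaled_income / p for g, p in scaled_prices.items()}

# Purchasing power is identical: the economy is as it was before.
assert scaled_bundle == real_bundle
```

The point is that only relative prices matter here; a uniform rescaling cancels out of every ratio.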

Of course people won't work when the pay is too low. Or, if you think otherwise, will you please clean my bathroom every day for a dollar? Or point me to someone unemployed who will? And employers will not hire when the cost is too high--though I don't doubt your bathroom-cleaning proficiency, I certainly wouldn't pay you $100 for the service.

You don't discuss the natural rate of unemployment at all.

No one is a Keynesian or a monetarist anymore, so most academic economists disagree with you about the business cycle. And I am not sure whether even real business cycle theorists will seriously defend the argument that the oil shocks of the 70s caused a recession.

Yes, the media is pretty bad about this.

Comment author: Stuart_Armstrong 27 January 2015 11:32:47AM 0 points [-]

Thanks for your comments! It seems you are more knowledgeable in this field, so I'll follow your views on most of these things (though I had read both papers you linked to).

The one thing I will disagree with is on terminology discussions. Nonemployment vs unemployment is not particularly useful, as it's hard to observe the difference from the outside, and they overlap and can change within the same person (sometimes from day to day). And I know that some economists tie themselves in knots to avoid ever describing someone as "irrational", but that's entirely terminology - everyone agrees on what the behaviour is, and often on what motivates it, and the whole discussion is whether it merits the label rational or not. I personally prefer to use "irrational", as it's a useful descriptive term which would otherwise be almost empty of content. But that's just a preference.

Comment author: So8res 22 January 2015 08:37:29PM 1 point [-]

Yep, I think you're right.

My preferred method of cashing out "do(o not in Press)" etc. is to say that observations are a tuple (p, r) where p is either Press or not Press and r is the rest of the observation (unrelated to the button). Now the causal intervention is done only on p, so (desugaring everything) we define

U(a1, (p, r), a2) :=
{ UN(a1, (p, r), a2) + E[US(a1, (P, R), A2(a1, (P, R))) | do(P := Press)] if p = not Press
, US(a1, (p, r), a2) + E[UN(a1, (P, R), A2(a1, (P, R))) | do(P := not Press)] else }

Then whether or not the agent pays the blackmail comes down to whether or not E[UN(YES, (P, R), A2(YES, (P, R))) | do(P := not Press)] is -1 or +1. This counterfactual is computed by taking the world-model indexed on the action being "YES" as it was before making the observation, overwriting P to "not Press" via causal intervention, then propagating the update and computing the expectation of UN (with respect to R). In this case, A2(-, -) is constant, so the question comes down to the value of UN(YES, (P, R), -) under the causal counterfactual assumption that P := not Press.

There are two ways this could go down. First, let's say that R is all physical observations, including whether or not the humans seem to be pressing the button. In this case, the agent treats P as if it is a separate platonic variable controlled by the Causal Counterfactual God rather than actual physics, and it is perfectly capable of reasoning about the case where the humans pressed the button but P was not Press anyway. In this case, if the deal is "Give me one util from UN, and then I'll give 2 utils to UN if the humans don't press the button", then the agent is perfectly capable of rejecting the deal, because under the counterfactual do(P:= not Press), the agent still believes that the humans physically pressed the button (and therefore does not expect its 2 utils).

But if the deal is "Give me one util from UN, and I'll give 2 utils to UN if P = not Press" then the agent will pay up, because under the counterfactual do(P := not Press) it clearly expects to get paid. (Or, rather, it actually cares about the counterfactual world where the humans press the button and P = not Press anyway, so you can blackmail the agent by promising great things in that world.)
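The two deals can be sketched in code (this is my own toy model, not code from the post; the strings and payoffs are illustrative). The key is that do(P := not Press) overwrites only the platonic variable P, while the physical observation r ("the humans pressed") is left alone.

```python
# E[UN | do(P := not Press)] for an agent that has physically seen the
# button pressed, under each of the two deal phrasings.
def un_after_counterfactual(deal, accepted):
    if not accepted:
        return 0
    p = "not Press"          # set by the causal intervention
    r = "humans pressed"     # physical fact, untouched by the intervention
    cost = -1                # one utilon paid to the blackmailer up front
    if deal == "pay if humans don't press":
        payout = 2 if r == "humans didn't press" else 0
    else:  # deal == "pay if P = not Press"
        payout = 2 if p == "not Press" else 0
    return cost + payout

# Deal on the physical pressing: expected UN is -1 for YES, 0 for NO,
# so the agent rejects.
assert un_after_counterfactual("pay if humans don't press", True) == -1
# Deal on the platonic P: expected UN is +1 for YES, so the agent pays up.
assert un_after_counterfactual("pay if P = not Press", True) == 1
```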

Yep, it's blackmailable. Nice catch.

(The intuitive fix is to try to prevent P from being the causal ancestor of anything in the graph; e.g., have the agent act as if it doesn't believe that the blackmailer can really observe / base their action on P. That sounds really difficult to set up and horribly hacky, though.)

Comment author: Stuart_Armstrong 23 January 2015 11:31:51AM 0 points [-]

The intuitive fix is to try to prevent P from being the causal ancestor of anything in the graph; e.g., have the agent act as if it doesn't believe that the blackmailer can really observe / base their action on P. That sounds really difficult to set up and horribly hacky, though.

It is relevant that the decision to blackmail (probably need a better word) is determined by the fact that P=not Press, and because of the particular structure of the algorithm. This flags up the blackmail as something unusual, but I'm not sure how to safely exploit that fact... The rule "don't take deals that only exist because of property Q of your algorithm" is too rigid, but maybe a probabilistic version of that?

Comment author: So8res 13 January 2015 06:08:13PM 2 points [-]

I think that the version of utility indifference discussed in this comment is immune to Benja's blackmail. It acts according to a mixture of u and v before the switch event, though.

I suggest "u" as the unit symbol for utilons.

Comment author: Stuart_Armstrong 22 January 2015 05:09:57PM 0 points [-]

I think that version is still vulnerable to blackmail...

Comment author: So8res 25 October 2014 02:59:01AM *  3 points [-]

Thanks, and nice work!

Thus the utility of (a1, o) for o in Press should be equivalent to the utility of the same (a1, o) under the counterfactual assumption that o is not in Press, and vice versa

Yeah, this is pretty key. You need it to optimize for both cases as if the probability of the button being pressed is fixed and independent of whether the programmers actually press the button. We can achieve this via a causal intervention on whether or not the button is pressed, and then clean up your U a bit by redefining it as follows:

U(a1, o, a2) :=
{ UN(a1, o, a2) + E[US|do(O in Press)] if o not in Press
; US(a1, o, a2) + E[UN|do(O not in Press)] else }

(Choosing how to compare UN values to US values makes the choice of priors redundant. If you want the priors to be 2:1 in favor of US then you could also have just doubled US in the first place instead; the degree of freedom in the prior is the same as the degree of freedom in the relative scaling. See also Loudness Priors, a technical report from the last workshop.)
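The prior/scaling redundancy can be checked with a toy example (my own illustration; the action names and utility values are arbitrary): a 2:1 prior in favor of US ranks actions exactly as a uniform prior with US doubled, since the two scores are proportional.

```python
def score_with_prior(us, un, prior_press):
    # Mixture weighted by the prior probability of the button being pressed.
    return prior_press * us + (1 - prior_press) * un

def score_with_scaling(us, un, scale_us):
    # Uniform prior, but US rescaled instead.
    return 0.5 * (scale_us * us + un)

# Hypothetical (US, UN) payoffs per action.
actions = {"a": (3.0, 1.0), "b": (1.0, 4.0)}

# A 2:1 prior in favor of US picks the same action as doubling US:
best_prior = max(actions, key=lambda k: score_with_prior(*actions[k], prior_press=2/3))
best_scaling = max(actions, key=lambda k: score_with_scaling(*actions[k], scale_us=2.0))
assert best_prior == best_scaling
```

Algebraically, (2·US + UN)/3 and (2·US + UN)/2 differ only by a positive constant factor, so the argmax is always the same.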

This method does seem to fulfill all the desiderata in the paper, although we're not too confident in it yet (it took us a little while to notice the "managing the news" problem in the first version, and it seems pretty likely that this too will have undesirable properties lurking somewhere). I'm fairly pleased with this solution, though, and a little miffed -- we found something similar to this a little while back (our research outstrips our writing speed, unfortunately) and now you've gone and ruined the surprise! :-)

(In seriousness, though, nice work. Next question is, can we pick any holes in it?)

Comment author: Stuart_Armstrong 22 January 2015 05:09:23PM *  0 points [-]

What do we mean by "o in Press"? That whoever wrote the list defining "Press" made a different decision that day?

Maybe we can have the definition of "Press" as being a specific list (rather than being the reality described by that list), and the counterfactual being that a certain stochastic process erases a particular part of that list? Not sure about that... Need some sort of abstract version of this (might be doable).

This still seems vulnerable to Benja's blackmail. Let me explain:

Assume that a2 is irrelevant, that o=Pr, and that a1 is merely yes or no to the following deal: "Give me 1 utilon from UN, and if o is not Pr, then I give 2 utilons to UN". This is the only way of changing UN; otherwise UN=0.

Then the agent is immune to blackmail. It will compute U(YES,-,-) = U(YES,Pr,-) = US(YES,Pr,-)+E(UN|Pr not in Press, YES, Pr) = US(YES,Pr,-) - 1. On the other hand, U(NO,-,-) = US(NO,Pr,-).

So if US doesn't care about YES vs NO, it will reject the deal. Yay!

But now imagine that the deal is phrased differently: "Give me 1 utilon from UN, and if o is not in Press, then I give 2 utilons to UN"

Here E(UN|Pr not in Press, YES, Pr) becomes -1+2=+1, so the agent will accept the deal.

This may depend on how it cashes out statements like "if o is not in Press".
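The arithmetic above can be checked directly (my own toy sketch; US is taken to be constant in the action, and UN is 0 absent the deal):

```python
US = 0.0  # US indifferent between YES and NO

def u(action, deal_pays_on_platonic_o):
    if action == "NO":
        return US
    # YES: pay 1 utilon from UN up front...
    e_un = -1.0
    # ...and under the counterfactual "o not in Press", only the phrasing
    # keyed to the platonic o pays out; the physical phrasing does not.
    if deal_pays_on_platonic_o:
        e_un += 2.0
    return US + e_un

# Physical phrasing: U(YES) = US - 1 < US = U(NO), so the agent rejects.
assert u("YES", False) < u("NO", False)
# Platonic phrasing: U(YES) = US + 1 > U(NO), so the agent accepts.
assert u("YES", True) > u("NO", True)
```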

Comment author: Gunnar_Zarncke 15 January 2015 10:32:43PM 0 points [-]

If the reward channel has only one bit per day, I don't think any agent can infer much about the authors. Their days, maybe. Some fundamental components of their preferences, possibly. But nothing like what a human could infer from all the bits of background he possesses. There are convergence rate results for classifiers that require just too many samples to extract enough information - especially in the face of real-life feature vectors.
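A back-of-the-envelope version of the channel-capacity point (my own numbers, purely illustrative): a noiseless 1-bit/day channel conveys at most 1 bit of information per day, so distinguishing among n candidate hypotheses about the authors takes at least log2(n) days.

```python
import math

def max_bits_learned(days, bits_per_day=1):
    # An upper bound: the channel cannot convey more than its capacity.
    return days * bits_per_day

def min_days_to_distinguish(n_hypotheses, bits_per_day=1):
    # Distinguishing n hypotheses needs at least log2(n) bits.
    return math.ceil(math.log2(n_hypotheses) / bits_per_day)

# Even a modest preference model, say 2**64 candidate hypotheses,
# needs at least 64 days of noiseless 1-bit feedback:
assert min_days_to_distinguish(2**64) == 64
```

With noise or redundancy in the feedback, the real bound is worse, which is the direction of the convergence-rate results mentioned above.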

Comment author: Stuart_Armstrong 16 January 2015 12:07:21PM 1 point [-]

I'd assume there would be a reward for every story, that this would be on an ordinal scale with several options, and that it included feedback/corrections about grammar and phrasing.

In response to 2014 Survey Results
Comment author: Stuart_Armstrong 15 January 2015 04:45:35PM 2 points [-]

Thanks Alex and Ozy!

Comment author: Stuart_Armstrong 15 January 2015 03:17:10PM 1 point [-]

If you don't know about Bitcoin, you should start by reading about its history, read Satoshi's whitepaper, etc.

This is only relevant if you're arguing that there is some systematic investment error being made by people unfamiliar with bitcoin, that knowledgeable investors can exploit. If not, extra knowledge has no added value.

In your analysis, it doesn't seem there is any strong division between the behaviour of the knowledgeable and the not, so there seems no reason to go and get that knowledge.

Comment author: Dahlen 14 January 2015 09:55:16PM 2 points [-]

Yeah, me too.

Perhaps change the "observes", "manipulates" to "observing", "manipulating"? It doesn't have the same connotation of "this actually happened".

Also, had it been a real occurrence, it might have been the first thing to make me care just a little about MIRI's mission.

Comment author: Stuart_Armstrong 14 January 2015 10:08:18PM 2 points [-]

Well, Facebook and Google algorithms are real occurrences - they're just not "simple algorithms in a box".

Comment author: Luke_A_Somers 14 January 2015 05:52:23PM 16 points [-]

Oh, dang. Well, I mean, phew? Both? See, I thought this was going to be a news story.

Comment author: Stuart_Armstrong 14 January 2015 08:28:17PM 0 points [-]


Comment author: RedErin 12 January 2015 04:39:43PM 2 points [-]

So if an AI were created that had consciousness and sentience, like in the new Chappie movie. Would they advocate killing it?

Comment author: Stuart_Armstrong 14 January 2015 02:53:55PM 3 points [-]

If the AI were superintelligent and would otherwise kill everyone else on Earth - yes. Otherwise, no. The difficult question is when the uncertainties are high and difficult to quantify.
