Open Thread June 2010, Part 3
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
The thrilling conclusion of what is likely to be an inaccurately named trilogy of June Open Threads.
Comments (606)
Amanda Knox update: Someone claims he knows the real killer, and is being taken seriously enough to give Knox and Sollecito a chance of being released. Of course, he's probably lying, since Guede most likely is the killer, and it's not who this new guy claims. But what can you do against the irrational?
I found this on a Slashdot discussion as a result of -- forgive me -- practicing the dark arts. (Pretty depressing that I got upmodded twice on net.)
Should be easy to test his claims...
I sometimes wonder: is the Italian judicial system really that lousy, or is there some sort of linguistic or cultural barrier there?
Slashdot threads have a bad enough signal to noise ratio as is. Please don't do that sort of thing.
You were arguing against your real opinion as a fifth columnist? May I ask why?
(Well done, by the way, in a technical sense. Just the right amount of character assassination: "Sollecito and Knox were known to be practitioners of dangerous sex acts.")
Just don't kill the younglings, Anakin!
I thought it would get modded down and then provoke someone as well-informed as komponisto to thoroughly refute it, and make people realize how stupid those arguments were.
Damn ... now that's starting to sound like a fake justification!
Eh, I guess I just like trolling too :-/
Internet, Silas. Silas, Internet. ;)
I think you will find an ample number of inspiringly bad arguments out there, without adding to their number. I believe this is called cutting off one's nose to spite one's face.
FYI, this was discussed previously here
The causal-set line of physics research has been (very lightly) touched on here before. (I believe it was Mitchell Porter who had linked to one or two things related to it, though I may be misremembering.) But recently I came across something that goes a bit farther: rather than embedding a causal set in a spacetime or otherwise handing it the spacetime structure, it basically just goes "here's a directed acyclic graph... we're going to add on a teensy weensy few extra assumptions... and out of it construct the Minkowski metric and relativistic transformations."
I'm slowly making my way through this paper (partly slowed by the fact that I'm not all that familiar with order theory), but the reason I mention the paper (A Derivation of Special Relativity from Causal Sets) is because I can't help but wonder if it might give us a hook to go in the other direction. That is, if this line of research might let us bring the mathematical machinery of much of physics to help us analyze stuff like Bayes nets and decision theory and give us a (potentially) really powerful mathematical tool.
Maybe I'm completely wrong and nothing interesting will come of trying to "reverse" the causal set line of research, (but causal set stuff is neat anyways, so at least I get some fun from reading and thinking about it) but does seem potentially worth looking into.
Besides, if this does end up being a useful tool, it would be perhaps one of the biggest and subtlest punchlines the universe pulled on us: since causal-sets are an approach to quantum gravity, if it ended up helping with the rationality/AI/etc stuff...
That would mean that Penrose was right about quantum gravity being a key to mind... BUT IN A WAY ENTIRELY DIFFERENT THAN HE INTENDED! bwahahahaha. :)
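For anyone who wants to poke at the basic objects here, a toy sketch (my own illustration, not from the paper): Poisson-style sprinkle events into a patch of 1+1-dimensional Minkowski space and read off the causal order as a directed acyclic graph. The region, event count, and seed are arbitrary choices.

```python
import random

def sprinkle(n, seed=0):
    """Sprinkle n events uniformly at random into a rectangle of
    1+1-dimensional Minkowski space, as (t, x) pairs."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random() * 2 - 1) for _ in range(n)]

def precedes(a, b):
    """Causal order: a precedes b iff b lies inside a's future
    light cone (units with c = 1)."""
    dt = b[0] - a[0]
    dx = abs(b[1] - a[1])
    return dt > 0 and dt > dx

events = sprinkle(100)
# The relation is a DAG "for free": the light-cone order is
# irreflexive, antisymmetric, and transitive.
relation = {(i, j)
            for i, a in enumerate(events)
            for j, b in enumerate(events)
            if precedes(a, b)}

print(len(relation), "causal pairs among", len(events), "events")
```

The causal-set program runs this construction in reverse: start from the discrete order and ask what geometry it forces on you.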
Is anyone else concerned about the possibility of nuclear terrorist attacks? No, I don't mean what you usually hear on the news about dirty bombs or Iran/North Korea. I mean actual terrorists with an actual nuclear bomb. There are a surprising number of nuclear weapons on the bottom of the ocean. Has it occurred to anyone that someone with enough funding and determination could actually retrieve one of them? Maybe they already have?
And here is a public list of known nuclear accidents.
I am not. To even suggest that this is a possibility anywhere near the level of a sovereign actor giving terrorists nukes is to dramatically overestimate terrorist groups' technical competence, and also to ascribe basic instrumental rationality to them (a mistake; see my Terrorism is not about Terror).
Even if a terrorist group could marshal the interest, assemble in one place the millions necessary, actually hire a world-class submersible, and, in the scant days they could afford, find the wreckage of a bomb, it would probably be useless. US nukes are designed to fail safe, so if the wiring has corroded or the explosives are misaligned, it simply won't detonate. And that's ignoring issues with radioactive decay. (Was the bomb a tritium-boosted H-bomb? Well, given tritium's extremely short half-life, I'm afraid that bomb is now useless.)
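To put a rough number on the tritium point (half-life about 12.3 years; the 1958 loss date is just an illustrative example):

```python
# Tritium half-life: roughly 12.3 years (standard value).
HALF_LIFE_Y = 12.3

def fraction_remaining(years):
    """Fraction of an initial tritium inventory left after `years`."""
    return 0.5 ** (years / HALF_LIFE_Y)

# A weapon lost at sea in 1958 would, by 2010, retain only about 5%
# of its original tritium -- far too little for boosting.
print(round(fraction_remaining(2010 - 1958), 3))  # ~0.053
```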
Maybe, although remember there are a lot more players interested in obtaining nuclear weapons than just a few terrorists. And the best crimes are the ones no one knew were committed. Unsuccessful criminals are overrepresented compared to the ones that got away; I suspect the same is true for terrorists. Blowing up a building isn't going to achieve your goals, but blowing up a city might. After all, it ended a war once, and just the threat stopped another from ever happening. Also, even if the bomb itself is useless, it is probably worth quite a bit of money, more than the millions it would take to retrieve it (maybe thousands as technology improves? There are some in shallower water. In 1958 the government was prepared to retrieve a lost bomb, but never located it.) I don't honestly know a lot about nuclear weapons, but the materials in it, maybe even the design itself, would be worth something to somebody. Maybe said organization has the resources to salvage it; after all, they already had enough money to get it in the first place.
Even if no bombs go off, I wouldn't be surprised if the government eventually gets around to searching for them and finds they're not there. And there are other nuclear threats too. Although I can't find anywhere to confirm it, it was floating around the internet that up to 80 "suitcase nukes" are missing. This quote from Wikipedia particularly disturbed me:
I will leave it at that for now; I'm not one of those paranoid people who go around ranting about nuclear proliferation or whatever. If there really is a problem, there's not much we can do (except maybe try to get to those lost bombs first, or take anti-terrorism more seriously).
I prefer spending my precious mental CPUs on worrying about the US government going really bad.
Admittedly, a terrorist nuke (especially if exploded in the US) would be likely to cause the US government to take a lot more control.
I don't take Lunev seriously. Defectors are notoriously unreliable sources of information (as I think Iraq should have proven. Again.).
The problem with nuclear terrorism is that atomic bombs come with return addresses - the US has always collected isotopic samples (eg. with aerial collecting missions in international airspace) precisely to make sure this is the case. (Ironically, invading Afghanistan and Iraq may've helped deter nuclear terrorism: 'If the US invaded both these countries over just a few thousand dead, then it's plausible they will nuke us even if we cry to the heavens that we just carelessly lost that bomb.')
Notice that many of the incidents mentioned at your link don't involve nuclear bombs at all: many involve leaks at research facilities and power stations. Here's a chronological list of radiation incidents that caused injury from the start of the 20th century onwards. The vast majority don't involve nuclear bombs.
Historically, unless you were in Hiroshima or Nagasaki, you would have been less likely to die from a nuclear bombing than you would have been to die from a radiation leak, picking up a lost radioactive source without recognizing it (or living with someone who's brought one into your home), being poisoned with radiation by a coworker, or medical overexposure. (Note also that the list is surely incomplete.) It is possible that this trend will reverse in the future, but it's not obvious that it will.
More generally, gwern sounds about right to me on the subject of terrorists putting together their own nuke. (Or hauling one up from the bottom of the ocean.)
Coincidentally I just the other day learned of the banana equivalent dose as a way of placing the risk of radiation leaks in context.
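For anyone else who hadn't seen it: the idea is to express small doses in units of the roughly 0.1 microsievert you absorb by eating one banana. A quick sketch; the dose figures below are commonly cited ballpark values, not authoritative dosimetry:

```python
BED_USV = 0.1  # one banana ~= 0.1 microsievert (rough, commonly cited)

# Ballpark doses in microsieverts, for scale only.
doses_usv = {
    "dental X-ray": 5,
    "cross-country flight": 40,
    "chest X-ray": 100,
    "annual natural background (approx.)": 3000,
}

for name, dose in doses_usv.items():
    print(f"{name}: roughly {dose / BED_USV:,.0f} bananas")
```

A chest X-ray comes out around a thousand bananas, which is exactly the kind of deflationary context the unit is meant to provide.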
A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:
First, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don't even see that happening.
Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.
You can probably see where this is going. The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself because this would require it to understand in a deep, level-spanning way how it itself works. Of course, it could just add complexity and hope that it works, but that's just evolution, not intelligence explosion.
So: do you know any counterarguments or articles that address either of these points?
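One narrow counterpoint to the second argument: information theory doesn't forbid a finite system from containing a complete description of itself, as any quine demonstrates. A minimal Python example (this only shows that full self-representation is possible, not that self-improvement is easy):

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Its output is exactly its own source code, so the program carries a complete representation of itself despite being finite.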
Re: "can a mind understand itself?"
That is no big deal: copy the mind a few billion times, and then it will probably collectively manage to grok its construction plans well enough.
Another argument against the difficulties of self-modeling point: It's possible to become more capable by having better theories rather than by having a complete model, and the former is probably more common.
It could notice inefficiencies in its own functioning, check to see if the inefficiencies are serving any purpose, and clean them up without having a complete model of itself.
Suppose a self-improving AI is too cautious to go mucking about in its own programming, and too ethical to muck about in the programming of duplicates of itself. It still isn't trapped at its current level, even aside from the reasonable approach of improving its hardware, though that may be a more subtle problem than generally assumed.
What if it just works on having a better understanding of math, logic, and probability?
Creeping rationality: I just heard a bit on NPR about a proposed plan to distribute the returns from newly found mineral wealth in Afghanistan to the general population. This wasn't terribly surprising. What delighted and amazed me was the follow-up that it was hoped that such a plan would lead to a more responsive government, but all that was known was that such plans have worked in democratic societies, and it wasn't known whether causality could be reversed to use such a plan to make a society more democratic.
Such plans work in societies with rule of law, and fail miserably in societies that are clan-based and tribal. A quarter of Afghanistan's GDP may go to bribes and shakedowns. A more honest description from NPR would be that, historically, mineral wealth controlled by deeply corrupt governments like Afghanistan's is primarily used for graft and nepotism, benefiting a few elites in government and industry while funding the oppression of everyone else.
In other words, Afghanistan is more like Nigeria than Norway.
Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would an FAI?
Q: What Is I.B.M.’s Watson?
http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=all
A: what is Skynet?
And now it's time for the Daily Double!
In the video, I didn't understand whether that series of wrong answers was staged or actually happened.
Very impressive though. Class.
Looks like LW briefly switched over to its backup server today, one with a database a week out of date. That, or a few of us suffered a collective hallucination. Or, for that matter, just me. ;)
Just in case you were wondering too.
I was wondering indeed. That was surreal.
Any LessWrongers understand basic economics? This could be another great topic set for all of us. Let's kick things off with a simple question:
I'm renting an apartment for X dollars a month. My parents have a spare apartment that they rent out to someone else for Y dollars a month. If I moved into that apartment instead, would that help or hurt the country's economy as a whole? Consider the cases X>Y, X<Y, X=Y.
ETA: It's fascinating how tricky this question turned out to be. Maybe someone knowledgeable in economics could offer a simpler question that does have a definite answer?
Good question, not because it's hard to answer, but because of how pervasive the wrong answer is, and the implications for policy for economists getting it wrong.
If your parents prefer you being in their apartment to the forgone income, they benefit; otherwise they don't.
If you prefer being in their apartment to the alternative rental opportunities, you benefit; otherwise, you don't.
If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.
ANYTHING beyond that -- anything whatsoever -- is Goodhart-laden economist bullsh**. Things like GDP and employment and CPI were picked long ago as good correlates of general economic health. Today, they are taken to define economic health, irrespective of how well people's wants are being satisfied, which is supposed to be what we mean by a "good economy".
Today, economists equate growing GDP -- irrespective of measuring artifacts that make it deviate from what we want it to measure -- with a good economy. If the economy isn't doing well enough, well, we need more "aggregate demand" -- you see, people aren't buying enough things, which must be bad.
Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure. No, instead, we have come to define success by the number of money-based market exchanges, rather than whether people are getting the combination of work, consumption, and leisure (all broadly defined) that they want.
This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.
Now, it's true there are prisoner's dilemma-type situations where people have to cooperate and endure some pain to be better off in the aggregate. But the corresponding benefit that economists expect from this collective sacrifice is ... um ... more pointless work that doesn't satisfy real demand ... but hey, it keeps up "aggregate demand", so it must be what a sluggish economy needs.
Are you starting to see how skewed the standard paradigm is? If people found a more efficient, mutualist way to care for their children rather than make cash payments to day care, this would be regarded as a GDP contraction -- despite most people being made better off and efficiency improving. If people work longer hours than they'd like, to produce stuff no one wants, well, that shows up as more GDP, and it's therefore "good".
How the **** did we get into this mindset?
Sorry, [/another rant].
Nice to see this kind of thinking from a capitalistish.
I'll accept that compliment, backhanded though it might be :-) (I canceled out the downmod you got for that comment -- no offense taken.)
I would appreciate, though, if you could (as best you can) tell me what it was I said that led you to believe I'm capitalistish (in the sense that you meant), or that I would otherwise disagree with my above GDP rant. No need to dig up links, just tell me whatever you remember or can quickly find.
I'm not doing this to make you feel foolish for having said what you did (like I've been known to try with you ...), but because I want to know what it is that gives off these impressions of my views, and whether I should be using different terms to describe them.
As I've said before, I have a love-hate relationship with libertarianism. I believe largely what I did ten years ago about the proper role of government, but much of what self-described libertarians advocate is sharply contrary to what I considered to be my libertarian view.
Both of these are contradicted by the fact that no economist, in discussion of the recent economic troubles, has suggested that letting the economy adjust to a lower level of output/work would be an acceptable solution.
Yes, they recognize that leisure is good in the abstract, but when it comes to proposals for "what to do" about the downturn, the implicit, unquestioned assumption is that we must must must get GDP to keep going up, no matter how many make-work projects or useless degrees that involves.
I most certainly am defending it -- by showing the errors in the classification of what counts as a benefit. If the argument is that stimulus will get GDP numbers back up, then yes, I didn't provide counterarguments. But my point was that the effect of the stimulus is to worsen that which we really mean by a "good economy".
The stimulus is getting people to blow resources doing (mostly) useless things. Whether or not it's effective at getting these numbers where they need to be, the numbers aren't measuring what we really want to know about. Success would mean the useless make-work jobs eventually lead to jobs satisfying real demand, yet no metric that they focus on captures this.
This is because it isn't. A "lower level of output/work" means that people, on average, are going to be poorer. And the way our economy is set up (in the United States at least), reducing output/work by 1% doesn't mean that each person works 1% less, produces 1% less, and consumes 1% less, it means that 1 in 100 people lose their job, can't find another one, and become poor, while the rest keep going on as they have been. So, when output/work falls, you don't get more leisure, you get more poverty.
And I disagree that most stimulus spending ends up being directed to "worthless" projects. Maybe they're not the best value for money, but even completely worthless make-work projects are still effective at wealth redistribution. Furthermore, if people are willing to lend the government money for really, really low interest rates (as demonstrated by prices of U.S Treasury securities) then isn't that a signal that it's an unusually good time for the U.S. government to borrow and spend - that the economy wants more of what the government produces and less of what private industry produces?
This I think reflects a status-quo bias. When the per capita GDP was lower in 2000, or 1990, the economy managed to employ a higher percentage of people. While you're right that current institutions, inertia, and laws prevent shorter workweeks, that is an argument for removing these barriers, not an argument for trying to game the GDP numbers in the (false) hope that this will somehow translate into sustainable employment because of the historical correlation.
Okay, but that still looks like a case of lost purposes and fake utility functions. If you're spending money to redistribute, then spend the money to redistribute! Don't spend it on a project that hogs up real resources just to get a small side effect of transferring money to people you want to help. ("What's your real objection" and all.) If it's important that they feel they earn the paycheck, then require that they take job training.
And the reason I call the projects worthless is this (and it doesn't require an ideological commitment to being against government projects): people couldn't justify asking the government to provide these things before the recession. But if the recession is a contraction of productive capacity, then the projects we commit to should also contract -- it should look like an even worse deal.
The fact that the government can issue debt cheaper doesn't change this fact. The reduced productive capacity is a real (i.e. non-nominal) phenomenon. The greater ease with which government can procure resources does not mean our aggregate ability to produce them has increased; it just means the government can more easily increase its share of the shrinking pie. That still implies that our "choice set" is being reduced, and the newer, larger wastefulness of these projects will have to show up somewhere.
If the fundamental determinant of reduced unemployment is whether the economy has entered into (as Arnold Kling says) sustainable patterns of specialization and trade, then temporary stimulus projects can't accelerate this, because they're by definition not sustainable: after they're over, we'll just have to readjust again.
I must emphasize, as I did in this blog post, that this does not mean we should give suffering families the finger because "it would be inefficient and all" -- the fact that they (under a stimulus project) are working, feeling productive, and getting a paycheck is very significant, and definitely counts as a benefit. It's just that you should help them in a way that doesn't inhibit the economy's search for efficient use of factors of production, nor (significantly) favor these families over the ones that are going to be screwed again when the projects have to stop, and the hunt for re-coordination starts anew.
Downvote explanation requested. This looks like a reasoned reply to MichaelBishop's criticism, and I'm interested in knowing how it errs and how Michael's comment doesn't, and how this is so obvious.
[Didn't downvote.] This is silly. The 'leisure' of unemployment is concentrated on a few, and comes with elevated rates of low status, depression, suicide, divorce, degradation of employability, etc.
That's a misinterpretation of what I was suggesting as the alternative. Lower output + more leisure doesn't mean the "leisure" is concentrated entirely in a few workers, making them full-time leisurists who starve. Rather, it means that anyone who wants to work for money would work fewer hours and have a lower level of consumption, not zero consumption.
Furthermore, the lower consumption is only consumption of goods purchased with money; with significant restructuring, labor with predictable demand (like babysitting) can be handled by cooperatives that avoid the need to pay for it out of cash reserves.
I don't deny that make-work programs allow workers to show off and practice their skills, retaining employability. I criticize economists who miss this benefit. But if you're going to spend money to get this benefit, you should spend it in a way that directly targets the achievement of this benefit to the workers, rather than on make-work projects that only achieve this benefit as a side effect, and which waste capital goods and distort markets in the process.
Unfortunately, in the United States, you really would end up with much more of the former and less of the latter. Europe would be better off, though, thanks to different labor laws; would you suggest that the United States adopt something like France's maximum 35 hour workweek, or Germany's subsidies to part-time workers?
Currently, hours worked per week is positively correlated with hourly wages; one person working 80 hours a week usually makes more money than two people who both work 40 hours a week. Also, specifically wanting to do part-time work is a bad signal to employers. It signals that you're not committed to your job, that you're probably lazy, and that you're weird. So, absent government intervention, you probably won't see people voluntarily reducing their working hours.
What isn't reflected in the GDP is huge.
There's the underground economy-- I've seen claims about the size of it, but how would you check them?
There's everything people do for each other without it going through the official economy.
And there's what people do for themselves-- every time you turn over in bed, you are presumably increasing value. If you needed paid help, it would be adding to the GDP.
I don't understand where you acquired this view of economists. I am an economist, and I assure you economists don't subscribe to the "measured GDP is everything" view you attribute to them.
This is not an accurate portrayal of what Keynesians believe. The Keynesian theory of depressions and recessions is that excessive pessimism leads people to avoid investing or starting businesses, which lowers economic activity further, which promotes more pessimism, and so on.
The goal of stimulus is effectively to trick people into thinking the economy is better than it is, which then becomes a self-fulfilling prophecy; low-quality spending by government drives high-quality spending by the private sector.
If you wish to be sceptical of this story (I'm fairly dubious about it myself), then fine, but Keynesians aren't arguing what you think they're arguing.
James_K:
Aside from the standard arguments about the shortcomings of GDP, my principal objection to the way economists use it is the fact that only the nominal GDP figures are a well-defined variable. To make sensible comparisons between the GDP figures for different times and places, you must convert them to "real" figures using price indexes. These indexes, however, are impossible to define meaningfully. They are produced in practice using complicated, but ultimately arbitrary number games (and often additionally slanted due to political and bureaucratic incentives operating in the institutions whose job is to come up with them).
In fact, when economists talk about "nominal" vs. "real" figures, it's a travesty of language. The "nominal" figures are the only ones that measure an actual aspect of reality (even if one that's not particularly interesting per se), while the "real" figures are fictional quantities with only a tenuous connection to reality.
It's not so much a matter of being overconfident as it is not listing the disclaimers at every opportunity. The Laspeyres Price Index (the usual type of price index) has well understood limitations (specifically that it overestimates consumer price growth as it doesn't deal with technological improvement and substitution effects very well), but since we don't have anything better, we use it anyway.
"Real" is a term of art in economics. It's used to reflect inflation-adjusted figures because all nominal GDP tells you is how much money is floating around, which isn't all that useful. real GDP may be less certain, but it's more useful.
Bear in mind that everything economists use is an estimate of a sort, even nominal GDP. Believe it or not, they don't actually ask every business in the country how much they produced and / or received in income (which is why the income and expenditure methods of calculating GDP give slightly different numbers although they should give exactly the same result in theory). The reason this may not be readily apparent is that most non-technical audiences start to black out the moment you talk about calculating a price index (hell, it makes me drowsy) and technical audiences already understand the limitations.
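To make the substitution-effect point concrete, here's a toy comparison with my own illustrative numbers: a Laspeyres index weights the new prices by the old basket, while a Paasche index weights them by the new one.

```python
def laspeyres(p0, p1, q0):
    """Laspeyres index: new prices weighted by the base-period basket q0."""
    return sum(p1[g] * q0[g] for g in q0) / sum(p0[g] * q0[g] for g in q0)

def paasche(p0, p1, q1):
    """Paasche index: new prices weighted by the current-period basket q1."""
    return sum(p1[g] * q1[g] for g in q1) / sum(p0[g] * q1[g] for g in q1)

# Apples double in price, so consumers substitute toward bananas.
p0 = {"apples": 1.0, "bananas": 1.0}   # base-period prices
p1 = {"apples": 2.0, "bananas": 1.0}   # current prices
q0 = {"apples": 10, "bananas": 10}     # base-period basket
q1 = {"apples": 4,  "bananas": 16}     # post-substitution basket

print(laspeyres(p0, p1, q0))  # 1.5 -- ignores substitution, reads high
print(paasche(p0, p1, q1))    # 1.2 -- uses the new basket, reads low
```

The gap between the two (1.5 vs. 1.2 here) is the substitution bias mentioned above; statistical agencies try to split the difference with more elaborate formulas, such as the Fisher index (the geometric mean of the two) or chained indices.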
James_K:
You're talking about the "real" figures being "less certain," as if there were some objective fact of the matter that these numbers are trying to approximate. But in reality, there is no such thing, since there exists no objective property of the real world that would make one way to calculate the necessary price index correct, and others incorrect.
The most you can say is that some price indexes would be clearly absurd (e.g. one based solely on the price of paperclips), while others look fairly reasonable (primarily those based on a large, plausible-looking basket of goods). However, even if we limit ourselves to those that look reasonable, there is still an infinite number of different procedures that can be used to calculate a price index, all of which will yield different results, and there is no objective way whatsoever to determine which one is "more correct" than others. If all the reasonable-looking procedures led to the same results, that would indeed make these results meaningful, but this is not the case in reality.
Or to put it differently, an "objective" price index is a logical impossibility, for at least two reasons. First, there is no objective way to determine the relevant basket of goods, and different choices yield wildly different numbers. Second, the set of goods and services available in different times and places is always different, and perfect equivalents are normally not available, so different baskets must be used. Therefore, comparisons of "real" variables invariably involve arbitrary and unwarranted assumptions about the relative values of different things to different people. Again, of course, different arbitrary choices of methodology yield different numbers here.
(By the way, I find it funny how neoclassical economists, who hold it as a fundamental axiom that value is subjective, unquestioningly use price indexes without stopping to think that the basic assumption behind the very notion of a price index is that value is objective and measurable after all.)
Very true. A good general measure in human economic systems should NOT merely look at the ease of availability of finished paperclips. It should also include, in the "basket", such things as extrudable metal, equipment for detecting and extracting metal, metallic wire extrusion machines, equipment for maintaining wire extrusion machines, bend radius blocks, and so forth.
Thank you for pointing this out; you are a relatively good human.
That is a very poor inference on their part.
The basket used is based on a representation of what people are currently consuming. This means we don't have to second-guess people's preferences. Unique goods like houses pose a problem, but there's not really anything we can do about that, so the normal process is to take an average of existing houses.
Which is a well understood problem. Every economist knows this, but what would you have us do? It is necessary to inflation-adjust certain statistics, and if the choice is between doing it badly and not doing it at all, then we'll do it badly. Just because we don't preface every sentence with this fact doesn't mean we're not aware of it.
Just to avoid confusion among readers, I want to distance myself from part of Vladimir_M's position. While I agree with many of the points he's made, I don't go so far as to say that CPI is a fundamentally flawed concept, and I agree with you that we have to pick some measure and go with it; and that the use of it does not require its caveats to be restated each time.
However, I do think that, for the specific purpose it is used for, it is horribly flawed in noticeable, fixable ways, and that economists don't make these changes because of lost-purpose syndrome -- they get so focused on this or that variable that they're disconnected from the fundamental it's supposed to represent. They're doing the economic equivalent of suggesting to generals that their living soldiers be burned to ashes so that the media will stop broadcasting images of dead soldiers' bodies being brought home.
I wouldn't be in a good position to determine if it's lost purpose syndrome since I'm an insider, but I would suggest that path dependence has a lot to do with it.
Price indices are produced by governments, who are notoriously averse to change. And what's worse, the broad methodology is dictated by international standards, so if an economist or some other intelligent person comes up with a better price index, they have to convince the body of economists and statisticians that they have a good idea, and then convince the majority of OECD countries (at a minimum) that their method is worth the considerable effort of changing every country's methodology.
That's a high hurdle to clear.
On my blog I suggested using insulin prices as a good proxy for inflation. That should be pretty easy for economists to find, even historical data. One economist could find the historical data for one country and use it as a competing measure. No collective action problem to solve there! Just a research paper to present.
(I can't find the data via Google searches, but economists should be able to get access to the appropriate databases.)
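Assuming the historical series exists, the proxy computation itself is trivial; a sketch with made-up insulin prices:

```python
def annualized_inflation(price_start, price_end, years):
    """Compound annual growth rate of a single-good price series."""
    return (price_end / price_start) ** (1.0 / years) - 1.0

# Hypothetical prices per vial at the start and end of a 20-year span:
rate = annualized_inflation(10.0, 30.0, 20)
print(f"{rate:.2%} per year")  # about 5.65% per year
```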
Here's a crude metric I use for gauging the relative goodness of societies as places to live: Immigration vs. emigration.
It's obviously fuzzy-- you can't get exact numbers on illegal migration, and the barriers (physical, legal, and cultural) to relocation matter, but have to be estimated. So does the possibility that one country may be better than another, but a third may be enough better than either of them to get the immigrants.
For example, the evidence suggests that the EU and the US are about equally good places to live.
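In code, the crude metric amounts to a net migration rate; a minimal sketch with invented flow numbers:

```python
def net_migration_rate(immigrants, emigrants, population):
    """Net inflow per resident: positive suggests people are voting with
    their feet to get in, negative suggests they're voting to get out."""
    return (immigrants - emigrants) / population

# Hypothetical annual flows for two societies of different sizes:
print(net_migration_rate(1_000_000, 200_000, 300_000_000))  # positive
print(net_migration_rate(50_000, 400_000, 40_000_000))      # negative
```

Normalizing by population is what lets societies of very different sizes be compared at all; the fuzziness about illegal migration and relocation barriers then enters as error bars on the inputs.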
I don't think that's a good metric. Societies that aren't open to mass immigration can have negligible numbers of immigrants regardless of the quality of life their members enjoy. Japan is the prime example.
Moreover, in the very worst places, emigration can be negligible because people are too poor to pay for a ticket to move anywhere, or are prohibited from leaving.
But "given perfect knowledge of all market prices and individual preferences at every time and place, as well as unlimited computing power", you could predict how people would choose if they were not faced with legal and moving-cost barriers - e.g. imagine a philanthropist willing to pay the moving costs. So your objection to this metric seems to be a surmountable one, in principle, assuming perfect knowledge etc. The main remaining barrier to migration may be sentimental attachment - but given perfect knowledge etc. one could predict how the choices would change without that remaining barrier.
Applying this metric to Europa versus Earth, presumably Europans would choose to stay on Europa and humans would choose to stay on Earth even with legal, moving-cost, and sentimental barriers removed, indeed both would pay a great deal to avoid being moved.
In contrast to Europans versus humans, humans-of-one-epoch are not very different from humans-of-another-epoch.
A fair point, though I think societies like that are pretty rare. Any other notable examples?
Off the top of my head, I know that Finland had negligible levels of immigration until a few years ago. Several Eastern European post-Communist countries are pretty decent places to live these days (I have in mind primarily the Czech Republic), but still have no mass immigration. As far as I know, the same holds for South Korea.
Regarding emigration, the prime examples were the communist countries, which strictly prohibited emigration for the most part (though, rather than looking at the numbers of emigrants, we could look at the efforts and risks many people were ready to undertake to escape, which often included dodging snipers and crawling through minefields).
If some price indexes are "clearly absurd", then they apparently have some value to us - for if they were valueless, then why call any particular one "absurd"? If they yield different results, then so be it - let us simply be open about how the different indexes are defined and what result they yield. The absence of a canonical standard will of course not be useful to people primarily interested in such things as pissing contests between nations, but the results should be useful nonetheless.
We commonly talk about tradeoffs, e.g., "if I do this then I will benefit in one way but lose in another". We can do the same thing with price indexes. "In this respect things have improved but in this other respect things have gotten worse."
It's pretty easy to get this sort of view just reading books. In my (limited) experience, there are a fair percentage of divergent types that are not like this - and they tend to be the better economists.
You may like Morgenstern's book On the Accuracy of Economic Observations. How I rue the day I saw this in a used bookstore in NY and didn't have the cash to buy it...
EDIT: fixed title name
I'm going through Morgenstern's book right now, and it's really good. It's the first economic text I've ever seen that tries to address, in a systematic and no-nonsense way, the crucial question of whether various sorts of numbers routinely used by economists (and especially macroeconomists) make any sense at all. That this book hasn't become a first-rank classic, and is instead out of print and languishing in near-total obscurity, is an extremely damning fact about the intellectual standards of the economic profession.
I've also looked at some other texts by Morgenstern I found online. I knew about his work in game theory, but I had no idea that he was such an insightful contrarian on the issues of economic statistics and aggregates. He even wrote a scathing critique of the concept of GNP/GDP (a more readable draft is here). Unfortunately, while this article sets forth numerous valid objections to the use of these numbers, it doesn't discuss the problems with price indexes that I pointed out in this thread.
realitygrill:
Could you please list some examples? Aside from Austrians and a few other fringe contrarians, I almost always see economists talking about the "real" figures derived using various price indexes as if they were physicists talking about some objectively measurable property of the universe that has an existence independent of them and their theories.
Thanks for the pointer! Just a minor correction: apparently, the title of the book is On the Accuracy of Economic Observations. It's out of print, but a PDF scan is available (warning -- 31MB file) in an online collection hosted by Stanford University.
I just skimmed a few pages, and the book definitely looks promising. Thanks again for the recommendation!
No, that's precisely what I assumed they're arguing, and I believe my points were completely responsive. I will address the position you describe in the context of the criticism in my rant.
Now, unpack the meaning of all of those terms, back to the fundamentals we really care about, and what is all that actually saying? Well, first of all, have you played rationalist taboo with this and tried to phrase everything without economics jargon, so as to fully break down exactly what all the above means at the layperson level? To me, economists seem to talk as if they have not done so.
I would like for you to tell me whether you have done so in the past, and write up the phrasing you get before reading further. You've already tabooed a lot, but I think you need to go further, and remove the terms: recession, depression, stimulus, excessive, pessimism, invest, and economic activity. (What's left? Terms like prefer, satisfaction, wants, market exchange, resources, working, changing actions.)
Now, here's what I get: (bracketed phrases indicate a substitution of standard economic jargon)
"People [believe that future market interactions with others will be less capable of satisfying their wants], which leads them to [allocate resources so as to anticipate lower gains from such activity]. As people do this, the combined effect of their actions is to make this suspicion true, [increasing the relative benefit of non-market exchanges or unmeasured market exchanges].
"The government should therefore [purchase things on the market] in order to produce a [false signal of the relative merit of selling certain goods], and facilitate production of [goods people don't want at current prices or that they previously couldn't justify asking their government to provide]. This, then, becomes a self-fulfilling prophecy: once people [sell unwanted goods due to this government action], it actually becomes beneficial for others to sell goods people do want on the market, [preventing a different kind of adjustment to conditions from happening]."
Phrased in these terms, does it even make sense? Does it even claim to do something people might want?
That was a very useful exercise, since it helped me identify the key point of disagreement between you and Keynesianism. If I'm right, you're coming at this from a goods-market perspective, i.e. "I, a typical consumer, am not interested in any of these goods at these prices, so I'm not going to buy so much", whereas the Keynesians are blaming this kind of attitude: "I, a typical consumer, am fearful of the future. While I want to buy stuff, I'd better start saving for the future instead in case I lose my job", and it's the saving that triggers the recession (money flows out of the economy into savings, this fools people into thinking they are poorer, and the death spiral begins).
A couple of other contextual points: 1) The fiscal stimulus that Keynes recommended was based on governments running deficits, not necessarily spending more. Cutting taxes works just as well.
2) Keynes was trying to reduce the magnitude of boom-bust swings, not increase trend economic growth rates. As such, he prescribed the opposite behaviour in boom times: have the government run surpluses to tamp down consumer exuberance. This is less widely known, since politicians only ever talk about Keynes during recessions, when it gives them intellectual cover to spend lots of money.
3) The Keynesian consensus is not universal. Arnold Kling's "recalculation" story is much closer to your picture, and you'll notice he doesn't advocate stimulus, but rather waiting to see how people adjust to the new economic circumstances.
4) GDP is the preoccupation of macroeconomists. Microeconomists (like me) care much more about allocative efficiency, which is to say to what extent are things in the hands of the people who value them most? So there's a whole branch of the profession to which your initial GDP-centrism comment does not apply.
It's points 3 and 4 in particular that lead me to object to your claim that economists are obsessed with GDP. To my way of thinking, it's politicians that are obsessed with GDP, because they believe their chances of re-election are tied to economic growth and unemployment figures. So they spend a lot of time asking economists how to increase GDP, and therefore economists more often than not discuss GDP when they appear in public.
It's still not clear to me that you've done what I asked (taboo your model's predicates down to fundamentals laypeople care about), or that you have the understanding that would result from having done what I asked.
What's the difference between the "goods market" perspective and the "blaming this kind of attitude"/Keynesian perspective? Why is one wrong or less helpful, and what problems would result from using it?
Why is it bad for people to believe they are poorer when they are in fact poorer?
Why is it bad for more money to go into savings? Why does "the economy" entirely hinge on money not doing this?
Until you can answer (or avoid assuming away) those problems, it's not clear to me that your understanding is fully grounded in what we actually care about when we talk about a "good economy", and so you're making the same oversights I mentioned before.
No, I'm not making those oversights, because I am a) not a Keynesian and b) not a macroeconomist. My offering defences of this position should not be construed as fundamental agreement with that position.
This is quickly turning into a debate about the merits of Keynesianism, which is not a debate I am interested in, because stabilisation policy is not my field and I don't find it very interesting; I got enough of it at university. I'm going to touch on a few points here, but I'm not going to engage fully with your argument; you really need to talk to a Keynesian macroeconomist if you want to discuss most of this stuff. For one thing, my ability to taboo certain words is limited by the fact that I don't have a very solid grip on the theory and I don't spend much of my time thinking about high-level aggregates like GDP.
Now here's the best I can do on your bullet-point questions; sorry if it doesn't help much, but it's all I've got: 1) The difference is that Keynesians believe savings reduce the money supply by taking money out of circulation; this makes people think they are poorer, which makes them act like they're poorer, which makes other people poorer.
2) Because it starts with an illusion of poverty. The first cause of recessions in a Keynesian model is "animal spirits", or in layman's terms, irrational fear of financial collapse. Viewed from this perspective, stimulus is a hack that undoes the irrationality that caused the problem in the first place (and because it's caused by irrationality they can feel confident it is a problem).
3) This is actually one of my biggest problems with Keynesian theory. If it strikes you as counter-intuitive or silly, I'm not going to dissuade you.
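The savings story in point 1 is usually formalized as the Keynesian cross; here's a toy version with illustrative parameters (a sketch of the model's internal logic, not an endorsement of it):

```python
def equilibrium_income(autonomous_spending, mpc):
    """Y = A / (1 - c): income at which planned spending equals output,
    where c is the marginal propensity to consume."""
    return autonomous_spending / (1.0 - mpc)

A = 100.0  # investment plus government spending, in arbitrary units

# When fear makes people save more (a lower MPC), equilibrium income
# falls even though no real resources changed -- the "illusion of
# poverty" story in miniature.
print(equilibrium_income(A, 0.9))  # confident consumers
print(equilibrium_income(A, 0.8))  # fearful savers: income halves
```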
One final point: The reason I replied to your initial comment in the first place was your suggestion that all economists are obsessed with maximising measured GDP over everything else.
But many economists don't deal with GDP at all. When I was learning labour market theory, we were taught that once people's wage rate gets high enough, one could expect them to work fewer hours, since the demand for leisure time increases with income. There was never a suggestion that this was anything to be concerned about; the goal is utility, not income.
In environmental economics I recall reading a paper by Robert Solow (the seminal figure in the theory of economic growth) arguing that it was important to consider changes in environmental quality along with GDP, to get a better picture of how well off people really are.
I look at what I have been taught in economics, and I simply can't square it with your view of the profession. Some kinds of economists tend to be obsessed with growth, but they tend to be economists who specialise in economic growth. The rest of us have other pursuits, and other obsessions.
Alright, I'll let anyone judge for themselves if the canonical Keynesian replies reveal a truly grounded understanding of what counts as "helping the economy".
Forget Keynesian theory for a minute: I want to know if you have the understanding I expect of whatever theory it is you do endorse. Can you taboo that theory's terminology and ground it in layperson-level fundamentals? Can you force me to care about whatever jargon you do in fact use?
Because, at risk of sounding rude, I don't think you've acquired this "Level 2" understanding, and I don't think you're atypical among economists in lacking it -- from what I've read of Mankiw, Sumner, and Krugman, they don't have it either.
(btw, you call yourself an economist but don't have a grip on Keynesian theory? Isn't that pretty much required these days?)
Sure -- I only meant that economic policy advocates who are concerned about aggregate economic variables are obsessed with GDP as one of those variables, but that should be assumed from context. Obviously, you're not going to care about GDP in your capacity as a microeconomist of company behavior.
On macro policy I doubt I have level 2 understanding. I had to take papers in macro at university, and I was able to get reasonable grades on them, but level 0 or 1 understanding is sufficient to do that.
My guess is that if you asked a Keynesian why they care, they would say that boom-bust cycles create uncertainty and fear in people because they don't know if they're going to lose their job (and they want their job, or they'd have already quit), and that by taming the boom-bust cycle people will have a more certain and therefore more pleasant life.
Equally if you asked a development economist, they would point to the misery in third world countries and for wealthy countries point out that productivity growth means being able to do more with less, and whether you want to have more, or want to do less, that's a win. Unemployed people are by definition people who want a job but don't have one, so concern about unemployment is easy to work out.
And as for me, well the reason I care about allocative efficiency is that allocative efficiency is the attempt to match reality to people's preferences as well as is possible under current constraints. How do we use our resources and knowledge to create the things people want and how do we get them to the people who want them the most?
The market does a pretty good job of this most of the time, but it does fail sometimes. And when it fails there are things government can do to improve matters, but the government can fail too, so you have to balance out the imperfections of the market and the imperfections of government and try to work out which set of imperfections is more problematic. If I succeed, or if people like me succeed then people will have more of what they want, be that flat screen TVs, or cars or clean air or time with their families. Not everything falls within economics' purview of course, love and truth and beauty are things I can't help with. But for everything else, my goal is to help the market to match infinite wants with finite resources, and imperfect information.
Perhaps it should have been, but I failed to assume this. And microeconomics is a lot wider than company behaviour, it covers pretty much everything but GDP and unemployment.
I've heard that the trick works less well each time it's used (perhaps within a limited time period). Is this plausible?
There could be indirect consequences of the decision in question, resulting from counter-intuitive effects on the existing economic process and on the lives of other people not directly involved in the decision. The relevant question is about the estimate of those indirect consequences. However imprecise economic indicators are, you can't just replace them with a presumption of a total lack of consequences, and only consider the obvious.
Here's another question to chew on:
Suppose you're in a country that grows and consumes lots of cabbages, and all the cabbages consumed are home-grown. Suppose that one year people suddenly, for no apparent reason, decide that they like cabbages a lot more than they used to, and the price doubles. But at least to begin with, rates of production remain the same throughout the economy. Does this help or harm the economy, or have no effect?
In one sense it 'obviously' has no effect, because the same quantities of all goods and services are produced 'before' and 'afterwards'. So whether we're evaluating them according to the 'earlier' or the 'later' utility function, the total value of what we're producing hasn't changed. (Presumably the prices of non-cabbages would decline to some extent, so it's at least consistent that GDP wouldn't change, though I still can't see anything resembling a mathematical proof that it wouldn't.)
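A quick numeric check of the puzzle (quantities and prices invented, and non-cabbage prices held fixed for simplicity):

```python
# Unchanged quantities, valued at the prices of each period.
quantities = {"cabbage": 100, "other": 50}
prices_before = {"cabbage": 1.0, "other": 4.0}
prices_after = {"cabbage": 2.0, "other": 4.0}  # cabbage price doubles

def total_value(q, p):
    """Value of the fixed bundle of goods at a given set of prices."""
    return sum(q[good] * p[good] for good in q)

print(total_value(quantities, prices_before))  # 300.0
print(total_value(quantities, prices_after))   # 400.0: same goods, bigger "GDP"
```

Whether measured GDP changes thus depends entirely on which period's prices you value the bundle at, which is the index-number problem from the price-index subthread in miniature.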
An interesting question. Here are some initial thoughts:
In terms of broad economic aggregates, it won't make any difference. If you rent the room off your parents for a market rate, GDP is exactly unaffected; people are paying the same money to different people. If you rent it for less than market rate, GDP is lower, but this reflects deficiencies in measured GDP, since GDP uses market prices as a proxy for the value of a transaction (this is fine for the most part, but doing your child a favour is an exception conventional methodology can't deal with). So from a macroeconomic perspective I'd say it's a wash either way.
Microeconomically, there could be some efficiencies in you renting from your parents. If they trust you more than a random stranger (and let's hope they do) they will spend less time monitoring your behaviour (property inspections and the like) than they would a random stranger, but the value of your familial relationship should constrain you from taking advantage of that lax monitoring in the way a stranger would. This means that your parents save time (which makes their life easier) and no one should be worse off (I assume the current tenant of their room would find adequate accommodation elsewhere).
However, one note of caution. If you were to get into a dispute of some sort with your parents over the tenancy, this could damage your relationship with your parents. If you value this relationship (and I assume you do), this is a potential downside that doesn't exist under the status quo. Also, some people might see renting from your parents as little different to living with your parents which (depending on your age) may cost you status in your day-to-day life (even if you pay a market rate). If you value status, you should be aware of this drawback.
So in summary, the most efficient outcome depends on three variables: 1) How much time and effort do your parents spend monitoring their tenant at the moment? 2) How likely is it that your relationship with them could be strained as a result of you living there? 3) How many friends / acquaintances / colleagues do you have that would think less of you for renting from your parents (and how much do you care)?
I hope that helps.
I think that a majority of economists agree that in many downturns, it helps the economy if people, on the margin, spend a little more. This justifies Keynesian stimulus. Therefore, the economy would be helped if your choice increases the total amount of money changing hands, presumably if you rent the apartment for $X when X>Y. My impression is that in good economic times, marginal spending is not considered to improve economic welfare.
Imagine that the "economy" is sluggish, and that a widget maker currently profits $1 on each widget sale. Now, consider these two scenarios:
a) I buy 100 widgets that I don't want, in order "to help the economy".
b) I give the widget-maker $100. Then, I lie and say, "OMG!!! I just heard that demand for widgets is SURGING, you've GOT to make more than usual!" (Assume they trust me.)
In both cases, the widget-maker is $100 richer, the real resources in the economy are unchanged, and the widget-maker has gotten a false signal that more widgets should be produced. Yet one of those "helps the economy", while the other doesn't? How does that make sense?
If you believe that either one of those "helps the economy", your whole view of "the economy" took a wrong turn somewhere.
What exact metric do you have in mind?
I've noticed a surprising conclusion about moral value of the outcomes (1) existential disaster that terminates civilization, leaving no rational singleton behind ("Doom"), (2) Unfriendly AI ("UFAI") and (3) FAI. It now seems that although the most important factor in optimizing the value of the world (according to your personal formal preference) is increasing probability of FAI (no surprise here), all else equal UFAI is much more preferable than Doom. That is, if you have an option of trading Doom for UFAI, while forsaking only negligible probability of FAI, you should take it.
The main argument (known as Rolf Nelson's AI deterrence) can be modeled by counterfactual mugging: an UFAI will give up a (small) portion of the control over its world to FAI's preference (pay the $100), if there is a (correspondingly small) probability that FAI could've been created, had the circumstances played out differently (which corresponds to the coin landing differently in counterfactual mugging), in exchange for the FAI (counterfactually) giving up a portion of control to the UFAI (reward from Omega).
As a result, having an UFAI in the world is better than having no AI (at any point in the future), because this UFAI can work as a counterfactual trading partner to a FAI that could've existed under other circumstances, which would make the FAI stronger (improve the value of the possible worlds). Of course, the negative effect of decreasing the probability of FAI is much stronger than the positive effect of increasing the probability of UFAI to the same extent, which means that if the choice is purely between UFAI and FAI, the balance is conclusively in FAI's favor. That there are FAIs in the possible worlds also shows that the Doom outcome is not completely devoid of moral value.
More arguments and a related discussion here.
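The claimed ordering Doom < UFAI < FAI can be put into a toy expected-value calculation; the utilities and probabilities below are entirely made up, and only the ordering of the results matters:

```python
# Made-up utilities: FAI dwarfs everything; UFAI beats Doom slightly,
# because a UFAI can cede a sliver of control to a counterfactual FAI
# trading partner, while Doom leaves nothing to trade with.
U_FAI, U_UFAI, U_DOOM = 1.0, 0.01, 0.0

def expected_value(p_fai, p_ufai, p_doom):
    return p_fai * U_FAI + p_ufai * U_UFAI + p_doom * U_DOOM

# Trading most of the Doom probability for UFAI, at a tiny cost to FAI:
before = expected_value(0.100, 0.300, 0.600)
after = expected_value(0.099, 0.901, 0.000)
print(before, after)  # the trade comes out ahead
```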
It can mostly be ignored, but uFAI affects physically-nearby aliens who might have developed a half-Friendly AI otherwise. (But if they could have, then they have counterfactual leverage in trading with your uFAI.) No reason to suspect that those aliens had a much better shot than we did at creating FAI, though. Creating uFAI might also benefit the aliens for other reasons... that I won't go into, so instead I will just say that it is easy to miss important factors when thinking about these things. Anyway, if the nanobots are swarming the Earth, then launching uFAI does indeed seem very reasonable for many reasons.
Sometimes I try to catch up on Recent Comments, but it seems as though the only way to do it is one page at a time. To make matters slightly worse, the link for the Next page going pastwards is at the bottom of the page, but the page loads at the top, so I have to scroll down for each page.
Is there any more efficient way to do it?
Hmm... I don't know about Recent Comments; I just go to the posts I'm following. Hit Ctrl+F and type (or copy/paste) "load more comments", then go through and hit each one. Then erase it and type the current date or yesterday's date in the format "date month" (e.g. "18 June"), and it will highlight all of those comments. (If you use YouTube a lot, you might already use this method on the "see all comments" page, except there you have to type "hour" or "minute" instead of an exact time, which is actually more convenient.) When you're done checking all of the new comments, you can erase that and put in "continue this thread" (is that right? I forget what it is exactly).
Hope that helps.
The only measure I know of that might make it more efficient to catch up on recent comments is for you to go to your preferences page, and where it says "Display 50 comments by default," change the "50" to some larger number. I have been using "200" on a very slow (33.6 K bits/sec) connection.
Are there periods in your life when you read or at least skim every comment made on Less Wrong? The reason I ask is that I am a computer programmer, and every now and then I imagine ways of making the software behind Less Wrong easier to use. To do that effectively, I need to know things about how people use Less Wrong.
Here's my wishlist:
As much trn functionality as it seems to be worth coding -- in particular:
- the ability to default to only seeing unread comments (or at least a Recent Comments page for posts as well as for the whole site) while reading the comments on a post, while still having easy access to old comments;
- the ability to default to not seeing chosen threads and sub-threads;
- tree navigation.
If you want to find out how people generally use the site, I think a top level post asking about it is the only way to get the questions noticed. If you post it, I'll upvote it.
Use the RSS feed that appears on the recent comments page. I use reader.google.com to read my RSS feeds. This will allow you to scroll back in bulk using just the scrollbar then read at leisure. It also shows comments as 'read' or 'unread' based on where you are up to.
Fascinating talk (Highly LW-relevant)
http://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception.html
Replicator constructed in Conway's Life
One of Eliezer's posts talks about realizing that conventional science is content with an intolerably slow pace. Here we have an example of less time leading to a better solution.
Apparently it doesn't replicate itself any more than a glider does; the old copy is destroyed as it creates a new copy.
Reading the conwaylife.com thread gives a better sense of this thingie's importance than the comparison with a glider. ;)
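For anyone who wants to poke at patterns like this, the rule the replicator runs under fits in a few lines; a minimal sketch of one Life generation on a sparse set of live cells:

```python
from collections import Counter

def step(live):
    """One Conway's Life generation: a cell is alive next tick iff it
    has exactly 3 live neighbours, or has 2 and is currently alive."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}  # the classic period-2 oscillator
assert step(step(blinker)) == blinker
```

The glider and the new replicator are both just initial sets of live cells fed through this same update rule, which is what makes results like the linked one so striking.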
Aaron Swartz: That Sounds Smart
I have an idea that I would like to float. It's a rough metaphor that I'm applying from my mathematical background.
Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps. You might think that this is a silly distinction, but there are a few reasons why it may not be.
First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each very accurately describe a small area but become more and more distorted the farther we go out. Each ancient city state might have accurate maps of the surrounding farms for tax purposes but wildly guess what lies beyond a mountain range or desert. A map might also accurately describe the territory at one level of distance but simplify much smaller scales. The yellow pixel in a map of the US is actually an entire town, with roads and buildings and rivers and topography, not perfectly flat fertile farmland.
Or take another example. Suppose you have a virtual reality machine, one with a portable helmet with a screen and speakers, in a large warehouse, so that you can walk around this giant floor as if you were walking around this virtual world. Now, suppose two people are inserted into this virtual world, but at different places, so that when they meet in the virtual world, their bodies are actually a hundred yards apart in the warehouse, and if their bodies bump into each other in the warehouse, they think they are a hundred yards apart in the virtual world.
Thus, when we as rationalists are evaluating our maps and those of others, an argument by contradiction does not always work. That two maps disagree does not invalidate the maps. Instead, it should cause us to see where our maps are reliable and where they are not, where they overlap with each other or agree and are interchangeable, and where only one will do. Even more controversially, we should examine maps that are demonstrably wrong in some places to see whether and where they are good maps. Moreover, it might be more useful to add an entirely new map to our atlas instead of trying to improve the resolution on one we already have, or moving around the lines ever so slightly as we bring it asymptotically closer to truth.
My lesson for the rationality dojo would thus be: be comfortable that your atlas is not consistent. Learn how to use each map well and how they fit together. Recognize when others have good maps and figure out how to incorporate those maps into your atlas, even if they might seem inconsistent with what you already have.
As you may have noticed, this idea comes from differential geometry, where a collection ("atlas") of overlapping charts (local homeomorphisms to R^n, the "maps") is the structure used to define manifolds.
I tend to agree that we frequently would do better to make do with an atlas of charts rather than seeking the One True Map. But I'm not sure I like the differential geometry metaphor. It is not the location on the globe that makes one chart more fruitful to use than another; it is the question of scale, or, as a computer nerd might express it, how zoomed in you are. And I would prefer to speak of different models rather than different maps.
For example, at one level of zoom, we see the universe as non-deterministic due to QM. Zoom out a bit and you have billiard-ball atoms in a Newtonian billiard room. Zoom out a bit more and find non-deterministic fluctuations. Out a bit more and you have deterministic chemical thermodynamics (unless you are dealing with a Brusselator or some such).
But I would go farther than this. I would also claim that we shouldn't imagine that these maps (as you zoom in) necessarily become better and better maps of the One True Territory. We should remain open to the idea that "It's maps (or models, or turtles) all the way down".
What's an example of people doing this?
I think one place to look for this phenomenon is when in a debate, you seize upon someone's hidden assumptions. When this happens, it usually feels like a triumph, that you have successfully uncovered an error in their thinking that invalidates a lot of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is both embarrassing and means you have to redo a lot of your thinking.
But hidden assumptions aren't bad. You have to make some assumptions to think through a problem anyway. You can only reason from somewhere to somewhere else. It's a transitive operation. There has to be a starting point. Moreover, assumptions make thinking and computation easier. They decrease the complexity of the problem, which means you can figure out at least part of the problem. Assuming pi is 3.14 is good if you want an estimate of the volume of the Earth. But that is useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.
When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others' assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.
This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side's assumptions to see how they fit.
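The pi example above is cheap to quantify; a quick sketch (the only outside fact used is Earth's mean radius, about 6371 km):

```python
import math

r = 6371  # km, Earth's mean radius
v_rough = (4 / 3) * 3.14 * r**3      # volume using the cheap assumption
v_better = (4 / 3) * math.pi * r**3  # volume using full float precision

# The assumption costs about 0.05% -- fine for an estimate,
# useless if the goal is an exact theorem about spheres.
print(abs(v_rough - v_better) / v_better)
```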
Mostly agree. It's really irritating and unproductive (and for me, all too frequent) when someone thinks they've got you nailed because they found a hidden assumption in your argument, but that assumption turns out to be completely uncontroversial, or irrelevant, or something your opponent relies on anyway.
Yes, people need to watch for the hidden assumptions they make, but they shouldn't point out the assumptions others make unless they can say why the assumption is unreasonable and how its weakening would hurt the argument it supports. "You're assuming X!" is not, by itself, a relevant counterargument.
These days I keep bumping into great new ideas[tm], some of them well proven, or at least workable and useful -- only to remember that I already used the same idea years ago with great success and then dumped it for no good reason whatsoever. Simple example: in language-learning write-ups I repeatedly find the idea of an SRS, a program that schedules spaced repetitions at sensible intervals and consistently helps in memorizing not only language items but all other kinds of facts. Programs and data collections are now freely available -- but I programmed my own version about 14 years ago as a nice entry-level programming exercise, used it quite extensively and successfully for about two years in school, then suddenly stopped. That made me wonder which other great ideas I have already used and discarded, why former me would do such a thing, and, to make it a public question: which great things have LWers tried and discarded for no particular reason?
Another obvious example from my own stack would be the use of checklists to pack for holidays. Worked great for years and still does.
Can anyone recommend a good book or long article on bargaining power? Note that I am NOT looking for biographies, how-to books, or self-help books that teach you how to negotiate. Biographies tend to be outliers, and how-to books tend to focus on the handful of easily changeable independent variables that can help you increase your bargaining power at the margins.
I am instead looking for an analysis of how people's varying situations cause them to have more or less bargaining power, and possibly a discussion of what effects this might have on psychology, society, or economics.
By "bargaining power" I mean the ability to steer transactions toward one's preferred outcome within a zone of win-win agreements. For example, if we are trapped on a desert island and I have a computer with satellite internet access and you have a hand-crank generator and we have nothing else on the island except that and our bathing suits and we are both scrupulously honest and non-violent, we will come to some kind of agreement about how to share our resources...but it is an open question whether you will pay me something of value, I will pay you something, or neither. Whoever has more bargaining power, by definition, will come out ahead in this transaction.
I'm currently reading Thomas Schelling's Strategy of Conflict and it sounds like what you're looking for here. From this Google Books Link to the table of contents you can sample some chapters.
Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would a FAI?
Calling them "dark arts" is itself a tactic for framing that only affects the less-rational parts of our judgement.
A purely rational agent will (the word "should" isn't necessary here) of course use rhetoric, outright lies, and other manipulations to get irrational agents to behave in ways that further its goals.
The question gets difficult when there are no rational agents involved. Humans, for instance, even those who want to be rational most of the time, are very bad at judging when they're wrong. For these irrational agents, it is good general advice not to lie or mislead anyone, at least if you have any significant uncertainty on the relative correctness of your positions on the given topic.
Put another way, persistent disagreement indicates mutual contempt for each others' rationality. If the disagreement is resolvable, you don't need the dark arts. If you're considering the dark arts, it's purely out of contempt.
Dark arts, huh? Sometime ago I put forward the following scenario:
Bob wants to kill a kitten. The FAI wants to save the kitten because it's a good thing according to our CEV. So the FAI threatens Bob with 50 years of torture unless Bob lets the kitten go. The FAI has two distinct reasons why threatening Bob is okay: a) Bob will comply and there will be no need to torture him, b) the FAI is lying anyway. Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?
(Yes, this is yet another riff on consequentialism, deontologism and lying. Should FAIs follow deontological rules? For that matter, should humans?)
Is that actually the FAI's only or best technique?
Off the top of my non-amplified brain:
Reward Fred for not torturing kittens.
Give Fred simulated kittens to torture and deny Fred access to real kittens.
Give Fred something harmless to do which he likes better than torturing kittens.
ETA Convince Fred that torturing kittens is wrong.
Expected utility reasoning with a particular utility function says the FAI is right. If we disagree, our preferences might be described by some other utility function.
Yes.
Yes. (When we say 'rational agent' or 'rational AI' we are usually referring to "instrumental rationality". To a rational agent, words are simply symbols to use to manipulate the environment. Speaking the truth, and even believing the truth, are only loosely related concepts.)
Almost certainly, but this may depend somewhat on who exactly it is 'friendly' to and what that person's preferences happen to be.
That agrees with my intuitions. I had been developing a series of ideas around the notion that exploiting biases is sometimes necessary, and then I found:
Eliezer on Informers and Persuaders
It would seem that in trying to defend others against heuristic exploitation it may be more expedient to exploit heuristics yourself.
I'm not sure where Eliezer got the "just exactly as elegant as the previous Persuader, no more, no less" part from. That seems completely arbitrary, as though the universe somehow decrees that optimal informing strategies must be 'fair'.
Gawande on the need to develop competent systems for delivering medical care
(Closing parenthesis.)
I recently read a fascinating paper that argued based on what we know about cognitive bias that our capacity for higher reason actually evolved as a means to persuade others of what we already believe, rather than as a means to reach accurate conclusions. In other words, rationalization came first and reason second.
Unfortunately I can't remember the title or the authors. Does anyone remember this paper? I'd like to refer to it in this talk. Thanks!
That would probably be "Why do humans reason" by Mercier and Sperber, which I covered in this post.
Ladies and gentlemen, the human brain: acetaminophen reduces the pain of social rejection.
An idea I had: an experiment in calibration. Collect, say, 10 (preferably more) occasions on which a weather forecaster said "70% chance of rain/snow/whatever," and note whether or not these conditions actually occurred. Then find out if the actual fraction is close to 0.7.
I wonder whether they actually do care about being well calibrated? Probably not, I suppose their computers just spit out a number and they report it. But it would be interesting to find out.
I will report my findings here, if you are interested, and if I stay interested.
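For the tallying itself, a minimal sketch (the data here is made up for illustration; real inputs would be pairs of a stated forecast probability and whether it actually rained):

```python
# Group forecasts by stated probability and compare each group's
# stated chance of rain with the observed frequency of rain.
from collections import defaultdict

def calibration(forecasts):
    """forecasts: list of (stated_probability, it_rained) pairs."""
    groups = defaultdict(list)
    for p, rained in forecasts:
        groups[p].append(rained)
    return {p: sum(obs) / len(obs) for p, obs in sorted(groups.items())}

# Hypothetical data: ten "70% chance of rain" forecasts, seven of which rained.
data = [(0.7, True)] * 7 + [(0.7, False)] * 3
print(calibration(data))  # {0.7: 0.7} -- well calibrated on this tiny sample
```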
Note that this sort of thing has been done a bit before. See for example this analysis.
Edit: The linked analysis has a lot of problems. See discussion below.
Cool, but hold on a minute though. I quote:
Isn't something wrong here? If you say "60% chance of rain," and it doesn't rain, you are not necessarily a bad forecaster. Not unless it actually rained on less (or more!) than 60% of those occasions. It should rain on ~60% of occasions on which you say "60% chance of rain."
Am I just confused about this fellow's methodology?
If I'm reading this correctly, they are doing exactly what you want but collapsing it into just two categories, "more likely to rain than not" and "less likely to rain than not." But I'm confused by the fact that 50 percent gets put into the expecting-rain category.
To me it almost seems as though a scenario like this is happening:
In other words, isn't the author misrepresenting the forecasters in throwing away their POPs, which could be interpreted as subjective beliefs about likelihoods?
I was also sort of confused by:
Is changing the forecast as new information comes in a bad thing?? Or is it merely that they are changing the forecast too much?
Nota bene: I am also very tired and may just be being thickheaded - I rate that possibility at about 50%, and you're welcome to check my calibration. =)
I think the criticism is that if they need to change their predictions so much between time 1 and time 2, then it is irresponsible to make any prediction at time 1. This is a hard case to make for the temperature swings, since I think 8 degrees is only about one standard deviation for a prediction of a day's temperature in a city knowing only the day of the year, but it's an easy case to make for the precipitation swings: if, on average, you are off by 40 percentage points of objective probability (not 40% relative error; 40 points of rain probability), then a prediction of, say, 30% will on average convey virtually no information. It could easily mean 0%, it could easily mean 70%, and without too much implausibility it could even mean 90%. So why bother saying 30% at all when you could, more honestly, admit your ignorance about whether it will rain next week?
In the meteorologists' defense, their medium-range predictions become useful when tested against broader time periods. Specifically, a 60% chance of rain on Thursday means you can be pretty sure that it will rain on Wednesday, Thursday, or Friday -- perhaps with 90% confidence. The reason for this is that predictions of rain generally come from tracking low-pressure pockets of air as they sweep across the continent; these pockets might speed up or slow down, or alter their course by a few degrees, but they rarely disappear or turn around altogether.
This is a much more reasonable testing method when one's predictions are based on an alleged causal process. For example, suppose I claim that I can predict how many cards Bob will draw in a game of blackjack by taking into consideration all of the variables in the game. A totally naive predictor might be "Bob will hit no matter what." That predictor might be right about 60% of the time. A slightly better predictor might be "Bob will hit if his cards show a total of 13 or less." That predictor might be right about 70% of the time. If I, as a skilled blackjack kibitzer, can really add predictive value to these simple predictors, then I should be able to beat their hit-miss ratio, maybe getting Bob's decision right 75% of the time. If I knew Bob quite well and could read his tells, maybe I would go up to 90%.
Anyway, 66% is pretty good for a blind guess that can't be varied from episode to episode. So the test with the die that you're using in your analogy is a fair test, but the bar is set too high. If you can get 66% on a hit-miss test with a one-sentence rule, you're doing pretty well.
Point taken about forecast updating - information changing that drastically may be merely worthless noise.
However, on the coin toss/blackjack thing...
In your blackjack example, the answer you give is binary - Bob will either say "hit me" or "[whatever the opposite is, I've never played]." The meteorologists are giving answers in terms of probabilities: "there is a 70% chance that it will rain."
If you did that in the Blackjack example; i.e., you said "I rate it as 65% likely that Bob will take another card," and then he DIDN'T take another card, that would not mean you were bad at predicting - we would have to watch you for longer.
My complaint is that the author interpreted forecasters' probabilities as certainties, rounding them up to 1 or down to 0. This was unfair as it ignored their self-stated levels of confidence.
Sorry, I didn't communicate clearly.
Correct. However, suppose we repeat this experiment 100 times, each time reducing my probability estimate to a binary prediction of hit-stay. Suppose that Bob hits 60 times, 50 of which were on occasions when I assigned greater than 50% probability to Bob hitting, and Bob stays 40 times, 13 of which were on occasions when I assigned less than 50% probability to Bob hitting. Thus, my overall accuracy, when reduced to a hit-stay prediction, is 63%. This is worse than my claimed certainty level of 65%, but better than the naive predictor "Bob always hits," which only got 60% of the episodes right. Thus, the pass-fail test is one way of distinguishing my predictive abilities from the predictive abilities of a broad generalization.
To see this, suppose instead that I always predict, with 65% certainty, that Bob will hit or that Bob will stay. I might rate the chance of Bob hitting at 65%, or I might rate it at 35%. In this experiment, Bob hits 75 times, 50 of which were on occasions when I assigned a 65% probability that Bob would hit. Bob stays 25 times, 18 of which were on occasions when I assigned a 65% probability that Bob would stay. I correctly predicted Bob's action 68% of the time, which is better than my stated certainty of 65%. However, my accuracy is worse than the accuracy of the naive predictor "Bob always hits," which would have scored 75%. Thus, my predictions are not very good, by one relatively objective benchmark, despite the fact that they are, in a narrow Bayesian sense, fairly well-calibrated.
Again, sorry for the confusion. I gave an incomplete example before.
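The arithmetic in both experiments above can be verified in a few lines (all numbers taken from the comment):

```python
# First experiment: Bob hits 60 times (50 correctly predicted),
# stays 40 times (13 correctly predicted).
correct_1 = 50 + 13          # correct hit calls + correct stay calls
print(correct_1 / 100)       # 0.63 -- worse than the 65% stated certainty
print(60 / 100)              # 0.60 -- naive "Bob always hits" baseline

# Second experiment: Bob hits 75 times (50 correctly predicted),
# stays 25 times (18 correctly predicted).
correct_2 = 50 + 18
print(correct_2 / 100)       # 0.68 -- beats the 65% stated certainty...
print(75 / 100)              # 0.75 -- ...but loses to "Bob always hits"
```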
So if I understand correctly, the issue is not that the meteorologists are poorly calibrated (maybe they are, maybe they aren't), but rather that their predictions are less useful than a simple rule like "it never rains" for actually predicting whether it will rain or not.
I think I am beginning to see the light here. Basically, in this scenario you are too ignorant of the phenomenon itself, even though you are very good at quantifying your epistemic state with respect to the phenomenon? If this is more or less right, is there terminology that might help me get a better handle on this?
Bingo! That's exactly what I was trying to say. Thanks for listening. :-)
My jargon mostly comes from political science. We'd say the meteorologists are using an overly complicated model, or seizing on spurious correlations, or that they have a low pseudo-R-squared. I'm not sure any of those are helpful. Personally, I think your words -- the meteorologists are too ignorant for us to applaud their calibration -- are more elegant.
The only other thing I would add is that the reason why it doesn't make sense to applaud the meteorologists' guess-level calibration is because they have such poor model-level calibration. In other words, while their confidence about any given guess seems accurate, their implicit confidence about the accuracy of their model as a whole is too high. If your (complex) model does not beat a naive predictor, social science (and, frankly, Occam's Razor) says you ought to abandon it in favor of a simpler model. By sticking to their complex models in the face of weak predictive power, the meteorologists suggest that either (1) they don't know or care about Occam's Razor, or (2) they actually think their model has strong predictive power.
Related thought: maybe see if they will give you their data? That would save you some time, and I'm now very interested in whether a more careful analysis will substantially disagree with their results.
Oh. I see. Yes, they aren't taking into account the accuracy estimations at all. Your criticism seems correct. Your complaints about the other aspects seem accurate also.
Huh. This is disturbing; most of the Freakonomics blog entries I've read have good analysis of data. It looks like this one really screwed the pooch. I have to wonder if others they've done have similar problems that I haven't noticed.
Yeah, I am a fan of Freakonomics generally too. I will write to them, I think. Will let you know how it goes. I want to confirm I am right about the probability stuff though, I still have a niggling doubt that I've just misunderstood something. But I think they are definitely wrong about the forecast updating.
Okay, this is like a sore tooth. Somebody's wrong, and I don't know if it's me. A queasy feeling.
Listen to this though:
Uhhh.... it's remarkable that a forecast changed significantly in SEVEN DAYS? What?!
The weather is the canonical example of mathematical chaos in an (in principle) deterministic system. Of course the forecasts will change, because Tuesday's weather sets the initial conditions for Wednesday, and chaotic systems are ultra-sensitive to initial conditions! The forecasters would be idiots if they didn't update their forecasts as much as possible.
The "close second," moreover, should be first! That change occurred over a two-day period versus seven days! ARGGHHH.
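The ultra-sensitivity is easy to demonstrate with the logistic map, a standard toy chaotic system (this is an illustration, not a weather model; the starting values are arbitrary):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started
# a hair's breadth apart, eventually become completely uncorrelated.
def trajectory(x, steps):
    xs = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.123456789, 60)
b = trajectory(0.123456790, 60)  # initial condition differs by only 1e-9

gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # vastly larger than the 1e-9 starting difference
```

The separation roughly doubles each step, so a billionth of a unit of uncertainty saturates the whole interval within a few dozen iterations; week-ahead weather forecasts fight exactly this effect.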
IBM's Watson AI trumps humans in "Jeopardy!"
http://news.ycombinator.com/item?id=1436625
Thanks a lot for the link. I remember Eliezer arguing with Robin whether AI will advance explosively by using few big insights, or incrementally by amassing encoded knowledge and many small insights. Watson seems to constitute evidence in favor of Robin's position as it has no single key insight:
Interview with Lloyd's of London space underwriter.
http://www.lloyds.com/News_Centre/Features_from_Lloyds/News_and_features_2009/Market_news/60_seconds_with_David_Wade.htm
Does anyone happen to know the status of Eliezer's rationality book?
The first draft is in progress.
Second draft, technically. The first draft was a rough outline of the contents.
I wasn't counting that as a "draft".
Message from Warren Buffett to other rich Americans
http://money.cnn.com/2010/06/15/news/newsmakers/Warren_Buffett_Pledge_Letter.fortune/index.htm?postversion=2010061608
I find super-rich people's level of rationality particularly interesting because, unless they are heirs or entertainers, it takes quite a bit of instrumental rationality to 'get there'. Nevertheless it seems many of them do not make the same deductions as Buffett, which seem pretty clear:
In this sense they are sort of 'natural experiments' of cognitive biases at work.
Wow. That is some seriously clear thinking. Too bad Mr. Buffett isn't here to get the upvote himself, so I upvoted you instead. ;-)
I think in Buffett's case this is not an accident; I venture to claim that his wealth is the result of fortune combined with an unusual dose of rationality (even if he calls it 'genes'). My strongest piece of evidence is that his business partner for the past 40 years, Charlie Munger, is one of the earliest outspoken adopters of the good parts of modern psychology, such as the ideas of Cialdini and Tversky/Kahneman and decision-making under uncertainty.
http://vinvesting.com/docs/munger/human_misjudgement.html
Oh wow, I think I have a new role model. Any chance we can get these two (Buffett and Munger) to open a rationality dojo? (Who knows, they might be impressed, given that most people ask them for wealth advice instead...)
A question: Do subscribers think it would be possible to make an open-ended self-improving system with a perpetual delusion - e.g. that Jesus loves them.
Yes, in that it could be open-ended in any "direction" independent of the delusion. However, that might require contrived initial conditions or cognitive architecture. You might also find the delusion becoming neutralized for all practical purposes, e.g. the delusional proposition is held to be true in "real reality" but all actual actions and decisions pertain to some "lesser reality", which turns out to be empirical reality.
ETA: Harder question: are there thinking systems which can know that they aren't bounded in such a way?
Apologies for posting so much in the June Open Threads. For some reason I'm getting many random ideas lately that don't merit a top-level post, but still lead to interesting discussions. Here's some more.
How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.
How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.
Of course, both those arguments fall apart if the deception equipment is "unusually clever" at deceiving you. In that case both questions are probably hopeless.
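The first test can be sketched in code. Trial division stands in for whatever factoring method the computer would actually use, and the closing assert plays the role of the pen-and-paper multiplication check:

```python
import random

def trial_factor(n):
    """Factor n into primes by trial division (the 'computer' half)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Make up a number too large to factor in your head...
n = random.randrange(10**9, 10**10)
factors = trial_factor(n)

# ...then the pen-and-paper step: multiply the factors back together.
product = 1
for f in factors:
    product *= f
assert product == n  # if this holds, the hardware did real work
```

Of course the multiplication check is much easier than the factoring, which is the whole point: verifying the answer by hand is feasible even when producing it is not.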
The first one fails terribly. I've had dreams where I've thought I've proven some statement I'm thinking about and when waking up can remember most of the "proof" and it is clearly incoherent. No, subconscious, the fact that Martin van Buren was the 8th President of the United States does not tell me anything about zeros of L-functions. (I've had other proofs that were valid though so I don't want the subconscious to stop working completely).
The second one seems more viable. May I suggest using something like electromagnetic stimulation of specific areas of the brain rather than deliberately damaging sections? For that matter, the fact that drugs can alter thought processes not just perception also strongly argues against being a brain in the vat by the same sort of logic.
I like your idea way better than mine. Smoke dope to prove you're not in the Matrix!
Regarding the first point, yes, I guess dreams can hijack your reasoning in arbitrary ways. But maybe I'm atypical like that: whenever my dreams contain verse, music or math proofs, they always make perfect sense upon waking. They do sound "creatively weird", and I must take care to repeat them in my mind to avoid amnesia, but they work fine in real-world terms.
A similar method was used by the protagonist of Solaris to check whether he was hallucinating.
Ouch! I read Solaris long ago. It seems the idea stuck in my head and I forgot its origin. And it does make much more sense if you substitute "hallucinating" for "dreaming".
The trick, then, is to instill in yourself a habit of regularly checking whether you are asleep (i.e. even when you are awake). The habit of thinking "am I awake? let me check" is the hard part; without it your sleeping mind isn't likely to question itself. The literature on lucid dreaming talks a lot about such tests. In fact, combined with "write dreams down as soon as you wake up" and "consume substance X", they more or less summarize the techniques.
The odd thing is that despite reading stuff about reality tests and trying to build a habit from doing them while awake, on the rare occasions I've had a lucid dream I've just spontaneously become aware that I'm presently dreaming. I don't remember ever having a non-lucid dream where I've done a reality test.
Instead of fancy stuff like determining prime factors, one consistent dream sign I've had is utter incompetence in telling time from digital watches and clocks. This generally doesn't tip me off that I'm dreaming though, and doesn't occur often enough that I could effectively condition myself to recognize it.
There are also trance/self-hypnosis methods, like WILD, some people seem to be very successful with them.
Interesting. And personally I find experimenting with trance and self-hypnosis by themselves even more fascinating than vivid dreaming. If only I did not come with the apparent in-built feature of inoculating myself against any particular method of trance or self-hypnosis after a few successful uses.
Do you have access to the computer software of your choice in your dreams? That sounds unusually vivid to me, maybe even lucid. I'm lucky if I can find a working pen and a desk that obeys the laws of physics in my dreams.
I know I do. In the last couple of years I have gone from almost never remembering a dream to having dreams that are sometimes even more vivid than my memories of real life. I even had to check my computer one day to see whether or not what I remembered doing was 'real' or not.
Heck, I'm lucky if I can find trousers in my dreams.
Depends on how you define 'lucky' I guess. ;)
No, there's no way of knowing that you're not being tricked. If your perception changes and your perception of your brain changes, that just means that the vat is tricking the brain to perceive that.
The "brain in the vat" idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.
If you are a brain in a vat, then that should alter sensory perception. It shouldn't alter cognitive processes (say, the ability to add numbers, or to spell, or the like). You could posit a brain in a vat whose controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain, but the point is that we have data about how the external world relates to us that isn't purely sensory.
You don't seem to be familiar with this concept.
This is the entire point of the brain-in-the-vat idea. It's not that "you could posit it"; you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means the vat imposes that correlation on us through its brain life-support system.
Although I don't claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable. The only thing we can really know is that we are a thing that thinks. This is Descartes 101.
Hmm. Your comment has brought to my attention an issue I hadn't thought of before.
Are you familiar with Aumann's knowledge operators? In brief, he posits an all-encompassing set of world states that describe your state of mind as well as everything else. Events are subsets of world states, and the knowledge operator K transforms an event E into another event K(E): "I know that E". Note that the operator's output is of the same type as its input - a subset of the all-encompassing universe of discourse - and so it's natural to try iterating the operator, obtaining K(K(E)) and so on.
Which brings me to my question. Let E be the event "you are a thing that thinks", or "you exist". You have read Descartes and know how to logically deduce E. My question is, do you also know that K(E)? K(K(E))? These are stronger statements than E - smaller subsets of the universe of discourse - so they could help you learn more about the external world. The first few iterations imply that you have functioning memory and reason, at the very least. Or maybe you could take the other horn of the dilemma: admit that you know E but deny knowing that you know it. That would be pretty awesome!
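For concreteness, the standard partition model behind Aumann's operator can be sketched in a few lines (the three-state world and the partition here are invented for illustration):

```python
# Toy Aumann model: a set of world states, a partition describing what
# the agent can distinguish, and K(E) = {w : the agent's cell at w lies
# entirely inside E}.
def K(event, partition):
    """Knowledge operator: states whose whole cell is contained in the event."""
    return {w for cell in partition if cell <= event for w in cell}

# Hypothetical three-state world; the agent cannot tell state 1 from state 2.
partition = [{1, 2}, {3}]

E = {1, 2, 3}                         # the trivial event, known everywhere
print(K(E, partition))                # {1, 2, 3}
print(K(K(E, partition), partition))  # also {1, 2, 3}

F = {1, 3}
print(K(F, partition))                # {3}: F is known only in state 3
```

One caveat worth noting: in partition models like this one, K(K(E)) always equals K(E) (positive introspection is built in), so whether the iterates carry extra information hinges on relaxing the partition assumption.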
When I was younger, a group of my friends started teasing others because they didn't know the Hindu-Arabic number system. In reality, of course, they did know it, but they didn't know that they knew it -- that was the joke.
I have a sensory/gut experience of being a thinking being, or, as you put it, E.
Based on that experience, I develop the abstract belief that I exist, i.e., K(E).
By induction, if K(E) is reliable, then so is K(K(K(K(K(K(K(E))))))). In other words, there is no particular reason to doubt that my self-reflective abstract propositional knowledge is correct, short of doubting the original proposition.
So I like the distinction between E and K(E), but I'm not sure what insights further recursion is supposed to provide.
I think "unusually clever" should be "sufficiently clever" in your caveat. I have very wide error bars on what I think would be usual, but I suspect that it's almost guaranteed to defeat those tests if it's defeated the overall test you've already applied of "have only memories of experiences consistent with a believable reality".
In which case both questions are indeed hopeless.
Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it has to solve a certain math/physics problem, deposit the correct solution in a specified place and stop. If a solution can't be found, stop after a specified number of cycles. Don't talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?
ETA: absent other suggestions, I'm going to call such devices "AI bombs".
These ideas have already been investigated and documented:
Box: http://fragments.consc.net/djc/2010/04/the-singularity-a-philosophical-analysis.html
Stopping: http://alife.co.uk/essays/stopping_superintelligence/
If these precautions become necessary, the end of the world will follow shortly (which is the only possible conclusion of "AGI research", so I guess the researchers should rejoice at a job well done, and maybe "relax a bit", as the world burns).
I don't understand your argument. Are you saying this containment scheme won't work because people won't use it? If so, doesn't the same objection apply to any FAI effort?
What khafra said - also this sounds like propelling toy cars using thermonuclear explosions. How is this analogous to FAI? You want to let the FAI genie out of the bottle (although it will likely need a good sandbox for testing ground).
Yep, I caught that analogy as I was writing the original comment. Might be more like producing electricity from small, slow thermonuclear explosions, though :-)
Not small explosions. Spill one drop of this toxic stuff and it will eat away the universe, nowhere to hide! It's not called "intelligence explosion" for nothing.
That's right - I didn't offer any arguments that a containment failure would not be catastrophic. But to be fair, FAI has exactly the same requirements for an error-free hardware and software platform, otherwise it destroys the universe just as efficiently.
If my Vladimir-modelling heuristic is correct, he's saying that you're postulating a world where humanity has developed GAI but not FAI. Having your non-self-improving GAI solve stuff one math problem at a time for you is not going to save the world quickly enough to stop all the other research groups at a similar level of development from turning you and your boxed GAI into paperclips.
An AI in a simulated world isn't prohibited from improving itself.
More to the point, I didn't imagine I would save the world by writing one comment on LW :-) My idea of progress is solving small problems conclusively. Eliezer has spent a lot of effort convincing everybody here that AI containment is not just useless - it's impossible. (Hence the AI-box experiments, the arguments against oracle AIs, etc.) If we update to thinking it's possible after all, I think that would be enough progress for the day.
I don't think it's really an airtight proof--there's a lot that a sufficiently powerful intelligence could learn about its questioners and their environment from a question; and when we can't even prove there's no such thing as a Langford Basilisk, we can't establish an upper bound on the complexity of a safe answer. Essentially, researchers would be constrained by their own best judgement in the complexity of the questions and of the responses.
Of course, all that's rather unlikely, especially as it (hopefully) wouldn't be able to upgrade its hardware--but you're right, software-only self-improvement would still be possible.
Yes, I agree. It would be safest to use such "AI bombs" for solving hard problems with short and machine-checkable solutions, like proving math theorems, designing algorithms or breaking crypto. There's not much point for the AI to insert backdoors into the answer if it only cares about the verifier's response after a trillion cycles, but the really paranoid programmer may also include a term in the AI's utility function to favor shorter answers over longer ones.
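To make the "AI bomb" objective concrete, here is a toy sketch of the bounded utility function described above. Everything in it (the function name, the cycle budget, the length-penalty term) is an illustrative assumption, not a real AGI design:

```python
def ai_bomb_utility(answer, verifier, max_cycles, cycles_used, length_penalty=1e-6):
    """Toy bounded utility for an 'AI bomb': nonzero only if a
    machine-checkable verifier accepts the answer within the cycle
    budget; shorter answers score slightly higher (the 'paranoid
    programmer' tie-breaking term)."""
    if cycles_used > max_cycles:
        return 0.0  # hard stop: out of budget, utility is exhausted
    if not verifier(answer):
        return 0.0  # only verified solutions count for anything
    # Bounded above by 1; the length term only breaks ties between
    # verified answers, it can never outweigh passing verification.
    return 1.0 - length_penalty * min(len(answer), 10**5) / 10**5

# Example problem with a short machine-checkable answer: factor n.
n = 15
verifier = lambda ans: (len(ans) == 2 and ans[0] * ans[1] == n
                        and 1 < ans[0] <= ans[1] < n)
u = ai_bomb_utility((3, 5), verifier, max_cycles=10**12, cycles_used=10**9)
```

The properties doing the work are that the utility is bounded above, identically zero outside the budget, and depends only on the verifier's verdict, so the AI gains nothing by doing anything other than depositing a correct, short answer and stopping.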
How to Keep Someone with You Forever.
This is a description of "sick systems"-- jobs and relationships which destructively take over people's lives.
I'm posting it here partly because it may be of use-- systems like that are fairly common and can take a while to recognize, and partly because it leads to some general questions.
One of the marks of a sick system is that the people running it convince the victims that they (the victims) are both indispensable and incompetent-- and it can take a very long time to recognize the contradiction. It's plausible that the crises, lack of sleep, and frequent interruptions are enough to make people not think clearly about what's being done to them, but is there any more to it than that?
One of the commenters to the essay suggests that people are vulnerable to sick systems because raising babies and small children is a lot like being in a sick system. This is somewhat plausible, but I suspect that a large part of the stress is induced by modern methods of raising small children-- the parents are unlikely to have a substantial network of helpers, they aren't sharing a bed with the baby (leading to more serious sleep deprivation), and there's a belief that raising children is almost impossible to do well enough.
Also, it's interesting that people keep spontaneously inventing sick systems. It isn't as though there's a manual. I'm guessing that one of the drivers is feeling uncomfortable at seeing the victims feeling good and/or capable of independent choice, so that there are short-run rewards for the victimizers for piling the stress on.
On the other hand, there's a commenter who reports being treated better by her family after she disconnected from the craziness.
Interesting. I suspect that sick systems are actually highly competitively fit, and while people who opt out of them may be happier, those people will propagate themselves less, and therefore will be overwhelmed by Azathothian forces.
Is there any way to combat Azathoth aside from forming a singleton?
Why do you think sick systems are highly competitively fit? They seem to get a lot of work out of people, but also waste a great deal of it.
If your hypothesis is that sick systems must be competitively fit because there are a great many of them, I think stronger evidence is needed.
I'm thinking of writing a top-post on the difficulties of estimating P(B) in real-world applications of Bayes' Theorem. Would people be interested in such a post?
Funny, I've been entertaining the same idea for a few weeks.
Every time I read statements like "... and then I update the probabilities, based on this evidence ...", I think to myself: "I wish I had the time (or processing power) he thinks he has. ;)"
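For what it's worth, the difficulty is usually concentrated in the normalizing term: P(B) has to be assembled from P(B|A_i)P(A_i) over a hypothesis space you rarely have in real life. A minimal worked example with made-up numbers (the classic medical-test setup), just to show where P(B) comes from when you do have the pieces:

```python
# Illustrative numbers only: a two-hypothesis example showing
# how P(B) is built via the law of total probability.

p_disease = 0.01            # prior P(A)
p_pos_given_disease = 0.95  # likelihood P(B|A)
p_pos_given_healthy = 0.05  # false-positive rate P(B|~A)

# P(B) = P(B|A)P(A) + P(B|~A)P(~A) -- the term that becomes hard
# to estimate as soon as the hypothesis space isn't this tidy.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
```

With two hypotheses and known likelihoods this is trivial; the real-world problem the post would address is that the A_i are rarely exhaustive and the P(B|A_i) are rarely known.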
yay! music composition AI
we've had them for a while though, but who knows, we might have our first narrow-focused AI band pretty soon.
good business opportunity there... maybe this is how the SIAI will guarantee unlimited funding in the future? :)
Good music isn't about good music. It's about whether music authorities have approved of it.
Thanks for the link.
Mozart developed the Mozart sonata.
Great article. Thanks for the link!
P. Z. Myers discusses the relevance of gender as a proxy for intelligence.
Related: Argument Screens Off Authority.
I don't know the ins and outs of the Summers case, but that article has the smell of a straw man. Especially this (emphasis mine):
From what I understand (and a quick check on Wikipedia confirms this), what got Larry Summers in trouble wasn't that he said we should use gender as a proxy for intelligence, but merely that gender differences in ability could explain the observed under-representation of women in science.
The whole article is attacking a position that, as far as I know, nobody holds in the West any more: that women should be discriminated against because they are less good at science.
Well, he also seems to be attacking a second group that does exist (those that say that there are fewer women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.
Which makes a kind of instrumental sense, in that advocacy of this position aids the first group by innocently explaining away gender inequalities. (I think it's obvious that most people don't distinguish well, in political situations, between incidental aid and explicit support.) Also, if evaluating individual intelligence is costly and/or inevitably noisy, it is (selfishly) rational for evaluators to give significant weight to gender, i.e. discriminate. And given how little people understand statistics, and the extent to which judgments of status/worth are tied to intelligence and to group membership, it seems inevitable that belief in group differences will lead people to discriminate far more than would be rational.
Can't this be said of just about all straw men? Yes, setting up a straw man may be instrumentally rational, but is it the kind of thing we should be applauding?
Say we have two somewhat similar positions: (A) women should be discriminated against in science because they are less good at it, and (B) there are fewer women in science because they are less likely to have high math ability.
A straw man is pretending that people arguing B are arguing A, or pretending that there's no difference between the two - which seems to be what P.Z. Myers is doing.
You're saying that position B gives support for position A, and, yes, it does. That can be a good reason to attack people who support position B (especially if you really don't like position A), but that holds even if position B is true.
Agreed. I don't necessarily approve of this sort of rhetoric, but I think it's worth trying to figure out what causes it, and recognize any good reasons that might be involved. (I also don't mean to say that people who use this rhetoric are calculating instrumental rationalists — mostly, I think they, as I alluded to, don't recognize the possibility of saying things representative of and useful to an outgroup without being allied with it.)
Well, I think PZ Myers is lying when he implies he has never heard of such people; they do exist. Robin Hanson, for one. More representative is conchis's claim early in the comments that
Rewritten: I've heard hints along these lines in America, where girls get better grades, in both high school and college, than boys with the same SATs. This is suggested to be about conscientiously doing homework. If American colleges don't want to reward conscientiousness, they could change their grading to avoid homework.
That would make them like my understanding of Oxford, where I believe grades are based on high-stakes testing, not on homework. But I also thought admission was based only on high-stakes testing. That is, I don't even know what the quoted claim means by "grades," nor have I been able to track down people openly admitting anything like it.
Do British students get grades other than A-levels? Are there sex divergences between the grades and A-levels? A-levels and predictions? I hear that Oxbridge grades are lower variance for girls than boys. I also hear that boys do better on the math SATs than on the math A-levels, which seems like it should be a condemnation of one of the tests.
Feds under pressure to open US skies to drones
http://news.yahoo.com/s/ap/20100614/ap_on_bi_ge/us_drones_over_america
I made a couple of comments here http://lesswrong.com/lw/1kr/that_other_kind_of_status/255f at Yvain's post titled "That Other Kind of Status." I messed up in writing my first comment in that it did not read as I had intended it to. Please disregard my first comment (I'm leaving it up to keep the responses in context).
I clarified in my second comment. My second comment seems to have gotten buried in the shuffle and so I thought I would post again here.
I've been a lurker in this community for three months and I've found that it's the smartest community that I've ever come across outside of parts of the mathematical community. I recognize a lot of the posters as similar to myself in many ways and so have some sense of having "arrived home."
At the same time the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they're sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.
I don't want to get involved in a debate about this point now (although I'd be happy to elaborate and give my thoughts in detail if there's interest).
What I want to do is to draw attention to the remarks that I made in my second comment at the link. From what I've read (several hundred assorted threads), I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you're a part of these groups (*).
My drawing attention to this question is not out of malice toward any of you - as I indicated above, I feel more comfortable with Less Wrong than I do with almost any other large group that I've ever come across. I like you people and if some of you are suffering from the issue (*) I see this as understandable and am sympathetic - we're all only human.
But I am concerned that I haven't seen much evidence of serious reflection about the possibility of (*) on Less Wrong. The closest that I've seen is Yvain's post titled "Extreme Rationality: It's Not That Great". Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, (*) is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of (*).
Any thoughts? I'd also be interested in any relevant references.
[Edited in response to cupholder's comment, deleted extraneous words.]
You know what... I'm going to come right out and say it.
A lot of people need their clergy. And after a decade of denial, I'm finally willing to admit it - I am one of those people.
The vast majority of people do not give their 10% tithe to their church because some rule in some "holy" book demands it. They don't do it because they want a reward in heaven, or to avoid hell, or because their utility function assigns all such donated dollars 1.34 points of utility up to 10% of gross income.
They do it because they want their priests to kick more ass than the OTHER group's priests. OUR priests have more money, more power, and more intellect than YOUR sorry-ass excuse for a holy-man. "My priest bad, cures cancer and mends bones; your priest weak, tell your priest to go home!"
So when I give money to the SIAI (or FHI or similar causes) I don't do it because I necessarily think it's the best/most important possible use of my fungible resources. I do it because I believe Eliezer & Co are the most like-me actors out there who can influence the future. I do it because of all the people out there with the ability to alter the flow of future events, their utility function is the closest to my own, and I don't have the time/energy/talent to pursue my own interests directly. I want the future to look more like me, but I also want enough excess time/money to get hammered on the weekends while holding down an easy accounting job.
In short - I want to be able to just give a portion of my income to people I trust to be enough like me that they will further my goals simply by pursuing their own interests. Which is to say: I want to support my priests.
And my priests are Eliezer Yudkowsky and the SIAI fellows. I don't believe they leech off of me; I feel they earn every bit of respect and funding they get. But that's beside the point. The point is that even if the funds I gave were spent sub-optimally, I would STILL give them this money, simply because I want other people to see that MY priests are better taken care of than THEIR priests.
The Vatican isn't made out of gold because the pope is greedy, it's made out of gold because the peasants demand that it be so. And frankly, I demand that the Vatican be put to fucking shame when it compares itself to us.
Standard Disclaimer, but really... some enthusiasm is needed to fight Azathoth.
Voted up for honesty.
I'm not aware of anyone here who would claim that LW is one of the most important things in the world right now, but I think a lot of people here would agree that improving human reasoning is important if we can have those improvements apply to lots of different people across many different fields.
There is a definite group of people here who think that SIAI is really important. If one thinks that a near Singularity is a likely event then this attitude makes some sense. It makes a lot of sense if you assign a high probability to a Singularity in the near future and also assign a high probability to the possibility that many Singularitarians either have no idea what they are doing or are dangerously wrong. I agree with you that the SIAI is not that important. In particular, I think that a Singularity is not a likely event for the foreseeable future, although I agree with the general consensus here that a large fraction of Singularity proponents are extremely wrong at multiple levels.
Keep in mind that for any organization or goal, the people you hear the most from about it are the people who think that it is important. That's the same reason that a lot of the general public thinks that tokamak fusion reactors will be practical in the next fifty years: the physicists and engineers who think that are going to loudly push for funding. The ones who don't are going to generally just go and do something else. Thus, in any given setting it can be difficult to estimate the general communal attitude towards something, since the strongest views will be the views that are most apparent.
I don't think intelligence explosion is imminent either. But I believe it's certain to eventually happen, absent the end of civilization before that. And I believe that its outcome depends exclusively on the values of the agents driving it, hence we need to be ready, with good understanding of preference theory at hand when the time comes. To get there, we need to start somewhere. And right now, almost nobody is doing anything in that direction; there is a very poor level of awareness of the problem, and poor intellectual standards for discussing it where surface awareness is present.
Either right now, or 50, or 100 years from now, a serious effort has to be undertaken, but the later it starts, the greater the risk of being too late to guide the transition in a preferable direction. The problem itself, as a mathematical and philosophical challenge, sounds like something that could easily take at least 100 years to reach clear understanding, and that is the deadline we should worry about: starting 10 years too late means finishing too late, 100 years from now.
"But I believe it's certain to eventually happen, absent the end of civilization before that."
And I will live 1000 years, provided I don't die first.
(As opposed to gradual progress, of course. I could make a case that your analogy faces an unexpected distinction as well: what happens if you get overrun by a Friendly intelligence explosion and persons don't prove to be a valuable pattern? Death doesn't adequately describe that transition either, as value doesn't get lost.)
Vladimir, I agree with you that people should be thinking intelligence explosion, that there's a very poor level of awareness of the problem, and that the intellectual standards for discourse about this problem in the general public are poor.
I have not been convinced but am open toward the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values, that is, the dichotomy "paper clip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.
SIAI seems to have focused on the existential risk of "unfriendly intelligence explosion" and it's not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.
I believed similarly until I read Steve Omohundro's The Basic AI Drives. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.
That paper makes a convincing case that the 'generic' AI (some distribution of AI motivations weighted by our likelihood of developing them) will most prefer outcomes that rank low in our preference ordering, i.e. the free energy and atoms needed to support life as we know it or would want it will get reallocated to something else. That means that an AI given arbitrary power (e.g. because of a very hard takeoff, or easy bargaining among AIs but not humans, or other reasons) would be lethal. However, the situation seems different and more sensitive to initial conditions when we consider AIs with limited power that must trade off chances of conquest with a risk of failure and retaliation. I'm working on a write up of those issues.
Thanks Craig, I'll check it out!
Not clear to me either that unfriendly AI is the greatest risk, in the sense of having the most probability of terminating the future (though "resource shortage" as existential risk sounds highly implausible - we are talking about extinction risks, not merely potential serious issues; and "world war" doesn't seem like something particularly relevant for the coming risks, dangerous technology doesn't need war to be deployed).
But Unfriendly AI seems to be the only unavoidable risk, something we'd need to tackle in any case if we get through the rest. On other problems we can luck out, not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).
I mean "existential risk" in a broad sense.
Suppose we run out of a source of, oh, say, electricity too fast to find a substitute. Then we would be forced to revert to a preindustrial society. This would be a permanent obstruction to technological progress - we would have no chance of creating a transhuman paradise or populating the galaxy with happy sentient machines and this would be an astronomical waste.
Similarly if we ran out of any number of things (say, one of the materials that's currently needed to build computers) before finding an adequate substitute.
My understanding is that a large scale nuclear war could seriously damage infrastructure. I could imagine this preventing technological development as well.
On the other hand, it's equally true that if another existential risk hits us before we build friendly AI, all of our friendly AI directed efforts will be for naught.
Yes.
That's not how economics works. If one source of electricity becomes scarce, that means it's more expensive, so people will switch to cheaper alternatives. All the energy we use ultimately comes from either decaying isotopes (fission, geothermal) or the sun; neither of those will run out in the next thousand years.
Modern computer chips are doped silicon semiconductors. We're not going to run out of sand any time soon, either. Of course, purification is the hard part, but people have been thinking up clever ways to purify stuff since before they stopped calling it 'alchemy.'
The energy requirements for running modern civilization aren't just a scalar number--we need large amounts of highly concentrated energy, and an infrastructure for distributing it cheaply. The normal economics of substitution don't work for energy.
It's entirely possible that failure to create a superintelligence before the average EROI drops too low for sustainment would render us unable to create one for long enough to render other existential risks inevitabilities.
I would have thought that those 'cheaper alternatives' could still be more expensive than the initial cost of the original source of electricity...? In which case losing that original source of electricity could still bite pretty hard (albeit maybe not to the extent of being an existential risk).
A stably benevolent stable world government/singleton could take its time solving AI, or inching up to it with biological and cultural intelligence enhancement. From our perspective we should count that as almost a maximal win in terms of existential risks.
I don't see your point. It would take an unrealistic world dictatorship (whether it's "benevolent" seems like irrelevant hair-splitting at that point) to stop the risks (stop the technological progress in the wild!) and allow more time for development of FAI. And in the end, solving FAI still remains a necessary step, even if done by modified/improved people, even if given a safe environment to work in.
You were talking about hundred-year time scales. That's time enough for neuroscience-based lie detectors to advance a lot, whole brain emulation to take off, democratization in authoritarian countries, continued expansion of EU-like arrangements, and many other things to occur.
But from our perspective, if we can get the benevolent non-AI (but perhaps WBE) singleton, it can do the FAI work at leisure and we don't need to. So the relative marginal impact of our working on say, FAI theory or institutional arrangements for WBE, need to be weighed against one another.
It's also time enough for any of the huge number of other outcomes. It's not outright impossible, but pretty improbable, that the world will go this exact road. And don't underestimate how crazy people are.
After the change of mind about value of drifted human preference, I agree that WBE/intelligence enhancement is a viable road. Here're my arguments about the impact of these paths at this point.
WBE is still at least decades away, probably more than a hundred years if you take planning fallacy into account, and depends on the development of global technological efforts that are not easily influenced. The value of any "institutional arrangements", and the viability of arguing for them given the remoteness (hence irrelevance at present) and implausibility (to most people) of WBE, also seem doubtful at present. This in my mind makes the marginal value of any present effort related to WBE relatively small. This will go up sharply as WBE tech gets closer.
I suspect that FAI theory, once understood, will still be simple enough (if any general theory is possible), and can be developed by vanilla humans (on an unknown timescale, probably decades to hundreds of years, but at some point WBEs overtake the timescale estimates). By the time WBE becomes viable, the risk situation will already be very explosive, so if we can get a good understanding earlier, we could possibly avoid that risky period entirely. Also, having a viable technical Friendliness programme might give academic recognition to the problem (that these risks are as unavoidable as laws of physics, and not just something to talk with your friends about, like politics or football), which might spread awareness of the AI risks on an otherwise unachievable level, helping with institutional change promoting measures against wild AI and other existential risks. On the other hand, I won't underestimate human craziness on this point as well - technical recognition of the problem may still live side by side with global indifference.