Open Thread June 2010, Part 3
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
The thrilling conclusion of what is likely to be an inaccurately named trilogy of June Open Threads.
Comments (606)
Today is Autistic Pride Day, if you didn't know. Celebrate by getting your fellow high-functioning autistic friends together to march around a populated area chanting "Aspie Power!" Preferably with signs that say "Neurotypical = manipulative", "fake people aren't real", or something to that effect.
Kidding. (About everything after the first sentence, I mean.)
I've noticed a surprising conclusion about the moral value of three outcomes: (1) an existential disaster that terminates civilization, leaving no rational singleton behind ("Doom"); (2) Unfriendly AI ("UFAI"); and (3) FAI. It now seems that although the most important factor in optimizing the value of the world (according to your personal formal preference) is increasing the probability of FAI (no surprise here), all else equal UFAI is much preferable to Doom. That is, if you have the option of trading Doom for UFAI while forsaking only a negligible probability of FAI, you should take it.
The main argument (known as Rolf Nelson's AI deterrence) can be modeled by counterfactual mugging: an UFAI will give up a (small) portion of the control over its world to FAI's preference (pay the $100), if there is a (correspondingly small) probability that FAI could've been created, had the circumstances played out differently (which corresponds to the coin landing differently in counterfactual mugging), in exchange for the FAI (counterfactually) giving up a portion of control to the UFAI (reward from Omega).
As a result, having an UFAI in the world is better than having no AI (at any point in the future), because this UFAI can work as a counterfactual trading partner to a FAI that could've existed under other circumstances, which would make the FAI stronger (improve the value of the possible worlds). Of course, the negative effect of decreasing the probability of FAI is much stronger than the positive effect of increasing the probability of UFAI to the same extent, which means that if the choice is purely between UFAI and FAI, the balance is conclusively in FAI's favor. That there are FAIs in the possible worlds also shows that the Doom outcome is not completely devoid of moral value.
More arguments and a related discussion here.
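The ordering claimed above can be illustrated with a toy expected-value model. Every number below (the probabilities, the size of the ceded control fraction) is hypothetical, chosen only to show the direction of the comparisons, not to estimate actual values:

```python
# Toy model of the Doom / UFAI / FAI ordering under counterfactual trade.
# All probabilities and utilities here are hypothetical illustrations.

def world_value(p_fai, p_ufai, p_doom, trade_fraction=0.05):
    """Expected value of the world (arbitrary units) under acausal trade.

    An FAI world is worth 1.0. A UFAI world is worth nothing directly,
    but cedes `trade_fraction` of its control to the counterfactual FAI's
    preference -- and the trade only has leverage in proportion to how
    probable FAI was. Doom worlds are worth 0.
    """
    assert abs(p_fai + p_ufai + p_doom - 1.0) < 1e-9
    return p_fai * 1.0 + p_ufai * (trade_fraction * p_fai)

baseline  = world_value(p_fai=0.10, p_ufai=0.30, p_doom=0.60)
# Trading Doom probability for UFAI probability (FAI held fixed) helps:
more_ufai = world_value(p_fai=0.10, p_ufai=0.60, p_doom=0.30)
# Trading FAI probability away for UFAI probability hurts:
less_fai  = world_value(p_fai=0.05, p_ufai=0.35, p_doom=0.60)
```

Under this sketch, shifting probability from Doom to UFAI raises expected value while shifting it away from FAI lowers it, which is the pattern the comment describes.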
It can mostly be ignored, but uFAI affects physically-nearby aliens who might have developed a half-Friendly AI otherwise. (But if they could have, then they have counterfactual leverage in trading with your uFAI.) No reason to suspect that those aliens had a much better shot than we did at creating FAI, though. Creating uFAI might also benefit the aliens for other reasons... that I won't go into, so instead I will just say that it is easy to miss important factors when thinking about these things. Anyway, if the nanobots are swarming the Earth, then launching uFAI does indeed seem very reasonable for many reasons.
Fascinating! Do you still agree with what you wrote there? Are you still researching these issues, and do you plan on writing a progress report or an open problems post? Would you be willing to write a survey paper on decision theoretic issues related to acausal trade?
My best guess about what's preferable to what is still this way, but I'm significantly less certain of its truth (there are analogies that make the answer come out differently, and the level of rigor in the above comment is not much better than that of those analogies). In any case, I don't see how we can actually use these considerations. (I'm working in a direction that should ideally make questions like this clearer in the future.)
If you know how to build a uFAI (or "probably somewhat reflective on its goal system but nowhere near provably Friendly" AI), build one and put it in an encrypted glass case. Ideally you would work out the AGI theory in your head, determine how long it would take to code the AGI after adjusting for planning fallacy, then be ready to start coding if doom is predictably going to occur. If doom isn't predictable then the safety tradeoffs are larger. This can easily go wrong, obviously.
Creeping rationality: I just heard a bit on NPR about a proposed plan to distribute the returns from newly found mineral wealth in Afghanistan to the general population. This wasn't terribly surprising. What delighted and amazed me was the follow-up that it was hoped that such a plan would lead to a more responsive government, but all that was known was that such plans have worked in democratic societies, and it wasn't known whether causality could be reversed to use such a plan to make a society more democratic.
Such plans work in societies with rule of law, and fail miserably in societies that are clan-based and tribal. A quarter of Afghanistan's GDP may go to bribes and shakedowns. A more honest description from NPR would be that, historically, mineral wealth controlled by deeply corrupt governments like Afghanistan's is primarily used for graft and nepotism, benefiting a few elites in government and industry while funding the oppression of everyone else.
In other words, Afghanistan is more like Nigeria than Norway.
Sometimes I try to catch up on Recent Comments, but it seems as though the only way to do it is one page at a time. To make matters slightly worse, the link for the Next page going pastwards is at the bottom of the page, but the page loads at the top, so I have to scroll down for each page.
Is there any more efficient way to do it?
Hmm... I don't know about recent comments; I just go to the posts I'm following. Hit Ctrl+F and then type (or copy/paste) "load more comments" and go through and hit each one. Then erase it and type the current date or yesterday's date in the format "date month" (18 June), and it will highlight all of those comments. (If you use YouTube a lot, you might already use this method on the "see all comments" page, except there you have to type "hour" or "minute" instead of an exact time, which is actually more convenient.) When you're done checking all of the new comments, you can erase that and put in "continue this thread" (is that right? I forget what it is exactly).
Hope that helps.
The only measure I know of that might make it more efficient to catch up on recent comments is for you to go to your preferences page, and where it says "Display 50 comments by default," change the "50" to some larger number. I have been using "200" on a very slow (33.6 K bits/sec) connection.
Are there periods in your life when you read or at least skim every comment made on Less Wrong? The reason I ask is that I am a computer programmer, and every now and then I imagine ways of making the software behind Less Wrong easier to use. To do that effectively, I need to know things about how people use Less Wrong.
Here's my wishlist:
As much trn functionality as seems worth coding -- in particular:
-The ability to default to only seeing unread comments (or at least a Recent Comments page for individual posts as well as for the whole site) while reading the comments on a post, with easy access to old comments.
-The ability to default to not seeing chosen threads and sub-threads.
-Tree navigation.
If you want to find out how people generally use the site, I think a top level post asking about it is the only way to get the questions noticed. If you post it, I'll upvote it.
I also find this problem annoying and would like to see more recent comments on a page. I usually read through every comment on recent comments when I come to LW.
Thanks. I've got it set at 500 comments, but I don't think it actually shows 500 -- and in any case, I think that setting only applies to comment threads, not to Recent Comments.
It's akrasia, but yeah, I've been using Recent Comments to read or at least skim everything.
I don't even have clear ideas of the right questions to ask about how people use LW, but a survey would be interesting.
I never noticed that before, but you are right: all the /comments/ pages I have asked for have 100 comments on them regardless of how I try to change that. (I tried setting the number in prefs to a smaller value, logging out and in again, following a "Next" link.)
(Oddly, although it will show me a page with 100 comments on it if I click it, the URL in the "Next" link at the bottom of a /comments/ page contains the string "count=50".)
Use the RSS feed that appears on the recent comments page. I use reader.google.com to read my RSS feeds. This will allow you to scroll back in bulk using just the scrollbar then read at leisure. It also shows comments as 'read' or 'unread' based on where you are up to.
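For what it's worth, pulling a feed like that apart programmatically is straightforward with the standard library. Here's a minimal sketch; the sample feed below is generic RSS 2.0, and the real recent-comments feed's field names may differ:

```python
# Minimal sketch of parsing a recent-comments RSS feed with the standard
# library. SAMPLE_FEED is a made-up generic RSS 2.0 document.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Recent Comments</title>
  <item><title>Comment by A</title><link>http://example.com/1</link></item>
  <item><title>Comment by B</title><link>http://example.com/2</link></item>
</channel></rss>"""

def comment_titles(feed_xml):
    """Return the title of every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

titles = comment_titles(SAMPLE_FEED)
```

A real reader would fetch the feed over HTTP and track which item GUIDs have already been seen, which is essentially what the RSS-reader approach above does for you.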
Fascinating talk (Highly LW-relevant)
http://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception.html
Replicator constructed in Conway's Life
One of Eliezer's posts talks about realizing that conventional science is content with an intolerably slow pace. Here we have an example of less time leading to a better solution.
Now I'm wondering what screen resolution and how many potions of longevity would be required to evolve intelligent life while playing ADOM.
Apparently it doesn't replicate itself any more than a glider does; the old copy is destroyed as it creates a new copy.
Reading the conwaylife.com thread gives a better sense of this thingie's importance than the comparison with a glider. ;)
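For reference, the glider comparison is easy to check directly: with a minimal Life implementation, four generations return a glider to its original shape translated one cell diagonally, so it moves rather than leaving a live copy behind. A sketch (coordinate conventions are arbitrary):

```python
# Minimal Conway's Life step, used to check the glider comparison above:
# after 4 generations a glider is the same shape translated by (1, 1),
# i.e. the "old copy" is gone -- it moved rather than replicated.
from itertools import product

def step(cells):
    """One Life generation; `cells` is a set of (x, y) live coordinates."""
    neighbours = {}
    for (x, y) in cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                neighbours[key] = neighbours.get(key, 0) + 1
    # A cell is live next generation with exactly 3 neighbours,
    # or 2 neighbours if it was already live.
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# g is now the original glider shifted one cell diagonally
```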
Aaron Swartz: That Sounds Smart
I have an idea that I would like to float. It's a rough metaphor that I'm applying from my mathematical background.
Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps. You might think this is a silly distinction, but there are a few reasons why it may not be.
First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each very accurately describe a small area but become more and more distorted the farther we go out. Each ancient city state might have accurate maps of the surrounding farms for tax purposes but wildly guess what lies beyond a mountain range or desert. A map might also accurately describe the territory at one level of distance but simplify much smaller scales. The yellow pixel in a map of the US is actually an entire town, with roads and buildings and rivers and topography, not perfectly flat fertile farmland.
Or take another example. Suppose you have a virtual reality machine, one with a portable helmet with a screen and speakers, in a large warehouse, so that you can walk around this giant floor as if you were walking around this virtual world. Now, suppose two people are inserted into this virtual world, but at different places, so that when they meet in the virtual world, their bodies are actually a hundred yards apart in the warehouse, and if their bodies bump into each other in the warehouse, they think they are a hundred yards apart in the virtual world.
Thus, when we as rationalists are evaluating our maps and those of others, an argument by contradiction does not always work. That two maps disagree does not invalidate either map. Instead, it should cause us to see where our maps are reliable and where they are not, where they overlap or agree and are interchangeable, and where only one will do. Even more controversially, we should examine maps that are demonstrably wrong in some places to see whether and where they are good maps elsewhere. Moreover, it might be more useful to add an entirely new map to our atlas than to improve the resolution on one we already have, or to move its lines around ever so slightly as we bring it asymptotically closer to truth.
My lesson for the rationality dojo would thus be: -Be comfortable that your atlas is not consistent. Learn how to use each map well and how they fit together. Recognize when others have good maps and figure out how to incorporate those maps into your atlas, even if they might seem inconsistent with what you already have.
If you noticed, this idea comes from Differential Geometry, where you use a collection ("atlas") of overlapping charts/local homeomorphisms to R^n ("maps") as a suitable structure for discussing manifolds.
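For readers without the background, the structure being borrowed is, roughly:

```latex
% An atlas on a manifold M: a family of charts (U_i, \varphi_i), each a
% homeomorphism onto an open subset of R^n, compatible on overlaps.
\[
  \mathcal{A} = \{(U_i, \varphi_i)\}_{i \in I}, \qquad
  \varphi_i : U_i \to \varphi_i(U_i) \subseteq \mathbb{R}^n, \qquad
  \bigcup_{i \in I} U_i = M,
\]
\[
  \text{with transition maps } \;
  \varphi_j \circ \varphi_i^{-1} :
  \varphi_i(U_i \cap U_j) \to \varphi_j(U_i \cap U_j)
  \; \text{ smooth wherever } U_i \cap U_j \neq \emptyset.
\]
```

The compatibility condition on overlaps is what corresponds to "learning how the maps fit together": no single chart covers the whole manifold, but the transition maps let you move between them consistently.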
I tend to agree that we frequently would do better to make do with an atlas of charts rather than seeking the One True Map. But I'm not sure I like the differential geometry metaphor. It is not the location on the globe that makes the use of one chart more fruitful than another; it is the question of scale, or as a computer nerd might express it, how zoomed in you are. And I would prefer to speak of different models rather than different maps.
For example, at one level of zoom, we see the universe as non-deterministic due to QM. Zoom out a bit and you have billiard-ball atoms in a Newtonian billiard room. Zoom out a bit more and find non-deterministic fluctuations. Out a bit more and you have deterministic chemical thermodynamics (unless you are dealing with a Brusselator or some such).
But I would go farther than this. I would also claim that we shouldn't imagine that these maps (as you zoom in) necessarily become better and better maps of the One True Territory. We should remain open to the idea that "It's maps (or models, or turtles) all the way down".
What's an example of people doing this?
I think one place to look for this phenomenon is when in a debate, you seize upon someone's hidden assumptions. When this happens, it usually feels like a triumph, that you have successfully uncovered an error in their thinking that invalidates a lot of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is both embarrassing and means you have to redo a lot of your thinking.
But hidden assumptions aren't bad. You have to make some assumptions to think through a problem anyway. You can only reason from somewhere to somewhere else. It's a transitive operation. There has to be a starting point. Moreover, assumptions make thinking and computation easier. They decrease the complexity of the problem, which means you can figure out at least part of the problem. Assuming pi is 3.14 is good if you want an estimate of the volume of the Earth. But that is useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.
When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others' assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.
This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side's assumptions to see how they fit.
Mostly agree. It's really irritating and unproductive (and for me, all too frequent) when someone thinks they've got you nailed because they found a hidden assumption in your argument, but that assumption turns out to be completely uncontroversial, or irrelevant, or something your opponent relies on anyway.
Yes, people need to watch for the hidden assumptions they make, but they shouldn't point out the assumptions others make unless they can say why an assumption is unreasonable and how its weakening would hurt the argument it's being used for. "You're assuming X!" is not, by itself, a relevant counterargument.
You might be interested in How to Lie with Maps.
These days, I sometimes bump into great new ideas[tm], ones that are well proven, or at least workable and useful -- only to remember that I already used that idea some years ago with great success and then dumped it for no good reason whatsoever. Simple example: in language-learning write-ups, I repeatedly find the idea of an SRS, a program that schedules spaced repetitions at nice intervals and consistently helps in memorizing not only language items but all other kinds of facts. Programs and data collections are now freely available -- but I already wrote my own program for this about 14 years ago as a nice entry-level programming exercise, and used it quite extensively and successfully for about two years in school, till I suddenly stopped. That made me wonder which other great ideas I have already used and discarded, why former me would do such a thing, and, to make it a public question: which great things might LWers have tried and discarded for no particular reason?
Another obvious example from my own stack would be the use of checklists to pack for holidays. Worked great for years and still does.
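Incidentally, the core of such an SRS program really is tiny, which fits the entry-level-exercise description. Here is a minimal expanding-interval sketch; the multiplier and reset rule are arbitrary choices for illustration, not the algorithm of any particular SRS:

```python
# Minimal sketch of a spaced-repetition scheduler: each successful review
# multiplies the interval, a failure resets it. This is a generic
# expanding-interval scheme, not any specific program's algorithm.

def next_interval(days, passed, factor=2.5, first=1.0):
    """Return the next review interval in days."""
    if not passed:
        return first              # lapse: start the card over
    return max(first, days * factor)

# A card passed four times in a row gets reviewed at widening intervals:
schedule = []
days = 0.0
for _ in range(4):
    days = next_interval(days, passed=True)
    schedule.append(days)
# schedule -> [1.0, 2.5, 6.25, 15.625]
```

The real work in an SRS is the bookkeeping around due dates and the card database; the scheduling rule itself stays about this simple.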
That's kind of hard - if they were so great, how could we remember they were great and also not immediately reinstate them?
Can anyone recommend a good book or long article on bargaining power? Note that I am NOT looking for biographies, how-to books, or self-help books that teach you how to negotiate. Biographies tend to be outliers, and how-to books tend to focus on the handful of easily changeable independent variables that can help you increase your bargaining power at the margins.
I am instead looking for an analysis of how people's varying situations cause them to have more or less bargaining power, and possibly a discussion of what effects this might have on psychology, society, or economics.
By "bargaining power" I mean the ability to steer transactions toward one's preferred outcome within a zone of win-win agreements. For example, if we are trapped on a desert island and I have a computer with satellite internet access and you have a hand-crank generator and we have nothing else on the island except that and our bathing suits and we are both scrupulously honest and non-violent, we will come to some kind of agreement about how to share our resources...but it is an open question whether you will pay me something of value, I will pay you something, or neither. Whoever has more bargaining power, by definition, will come out ahead in this transaction.
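One standard formalization of this notion, in case it helps the search, is the generalized Nash bargaining solution, where "bargaining power" appears as an explicit weight. A toy sketch with made-up island numbers (the weights and payoffs are purely illustrative):

```python
# Generalized Nash bargaining with transferable utility: each party gets
# its disagreement payoff plus a share of the gains from trade set by the
# bargaining-power weight alpha. All numbers below are made up.

def nash_split(surplus, d1, d2, alpha=0.5):
    """Split `surplus` between two parties with disagreement payoffs d1, d2.

    alpha in (0, 1) is party 1's bargaining power; alpha = 0.5 recovers
    the symmetric Nash solution.
    """
    gains = surplus - d1 - d2
    assert gains >= 0, "no zone of win-win agreements"
    return d1 + alpha * gains, d2 + (1 - alpha) * gains

# Cooperation worth 10; walking away is worth 1 to party 1 and 3 to party 2.
even = nash_split(10, d1=1, d2=3, alpha=0.5)    # symmetric power
skewed = nash_split(10, d1=1, d2=3, alpha=0.8)  # party 1 has more power
```

In this framing, "who pays whom" on the island is determined jointly by the disagreement payoffs (what each of you can do alone with the computer or the generator) and the weight alpha, which is the part the how-to books try to move at the margins.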
I'm currently reading Thomas Schelling's Strategy of Conflict and it sounds like what you're looking for here. From this Google Books Link to the table of contents you can sample some chapters.
Amanda Knox update: Someone claims he knows the real killer, and is being taken seriously enough to give Knox and Sollecito a chance of being released. Of course, he's probably lying, since Guede most likely is the killer, and it's not who this new guy claims. But what can you do against the irrational?
I found this on a Slashdot discussion as a result of -- forgive me -- practicing the dark arts. (Pretty depressing I got upmodded twice on net.)
Should be easy to test his claims...
I sometimes wonder: is the Italian judicial system really that lousy, or is there some sort of linguistic or cultural barrier there?
Slashdot threads have a bad enough signal to noise ratio as is. Please don't do that sort of thing.
Should I stop doing this too? Or at least wait until people start challenging the term "top theologian"?
Yes, as a regular reader of Slashdot, I'd prefer if you didn't do that. I don't see what you are accomplishing from these remarks. It really does come across as simple trolling.
You know what, bro? I'm not even going to ask your opinion about this.
Notable:
That's at least humorous although I have to inquire if the other AC who replies to you about also being a Christian on Slashdot is also you.
Edit: Also, to be clear: My general response whenever these sorts of dark arts come up is very simple: If one needs to do this to get people convinced of one's position, that's a cause to worry about whether one's position is actually correct.
Um, I don't believe the position I linked if that's what you're worried about...
No, I mean you are deliberately portraying an alternate position as stupid, apparently hoping that people will treat reversed stupidity as intelligence. That's a serious dark art. So if one is going to do that sort of thing, one should worry that maybe one's position is really not correct.
Hm, good point. I guess I am fake justifying. I'll admit, I like to troll, and I'm kinda let down that no one has ever objected to the term "top theologian", saying, "wait, what exactly do you have to do to count as a top theologian? What predictions, exactly?"
I actually participate as a "friendly troll" on a private board on gamefaqs.com. "Friendly troll" in that most everyone there knows I'm a troll and just makes fun of the people who make serious replies to my topics; and I casually chat with people there about what troll topics I should make. The easiest one is, "Isn't evolution still basically just a theory at this point?"
In high school (late 90s), I would troll chatrooms and print transcripts to share with my friends the next day. One of them was a real "internet paladin" type and said, "people like you should be banned from the internet". My crowning "achievement" was to say a bunch of offensive stuff in a gameroom on a card game site, which got a moderator called in; but by that point, everyone was yelling really offensive stuff at me, and got themselves banned. I was left alone because I made (mocking) apologies just in time, and the moderator couldn't scroll up enough to see most of my earlier comments.
I've mostly toned it down and gotten away from it but I still do it here and there. Well, not here, but you get the point.
It can be fun, I will guiltily admit, but not nearly as much fun as trying to present what you actually believe in a clever enough way that somebody goes... click. (In which endeavour, by all means be sarcastic and use pathos).
You have to do some sort of calculus on what the upshot of this trolling is though... if the upshot is increased irrationality, well, there isn't much functional difference between you and your alter ego.
And all the Anonymous_Coward arguments I've seen that you listed are BETTER arguments (sad as that is) than most sincere ones in support of similar conclusions. The Good Soldier Švejk isn't actually supposed to be a good soldier. :P
You were arguing against your real opinion, as a fifth columnist? May I ask why?
(Well done, by the way, in a technical sense. Just the right amount of character assassination: "Sollecito and Knox were known to be practitioners of dangerous sex acts.")
Just don't kill the younglings, Anakin!
I thought it would get modded down and then provoke someone as well-informed as komponisto to thoroughly refute it, and make people realize how stupid those arguments were.
Damn ... now that's starting to sound like a fake justification!
Eh, I guess I just like trolling too :-/
Internet, Silas. Silas, Internet. ;)
I think you will find an ample number of inspiringly bad arguments out there without adding to their number. I believe this is called cutting off one's nose to spite one's face.
FYI, this was discussed previously here
Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would a FAI?
Calling them "dark arts" is itself a tactic for framing that only affects the less-rational parts of our judgement.
A purely rational agent will (the word "should" isn't necessary here) of course use rhetoric, outright lies, and other manipulations to get irrational agents to behave in ways that further its goals.
The question gets difficult when there are no rational agents involved. Humans, for instance, even those who want to be rational most of the time, are very bad at judging when they're wrong. For these irrational agents, it is good general advice not to lie or mislead anyone, at least if you have any significant uncertainty on the relative correctness of your positions on the given topic.
Put another way, persistent disagreement indicates mutual contempt for each others' rationality. If the disagreement is resolvable, you don't need the dark arts. If you're considering the dark arts, it's purely out of contempt.
If both parties are imperfectly rational, limited use of dark arts can speed things up. The question shouldn't be whether it's possible to present dry facts and logic with no spin, but whether it's efficient. There are certain biases that tend to prevent ideas from even being considered. Using other biases and heuristics to counteract those biases - just to get more alternative explanations to be seriously considered - won't impair or bypass the rationality of the listener.
Dark arts, huh? Sometime ago I put forward the following scenario:
Bob wants to kill a kitten. The FAI wants to save the kitten because it's a good thing according to our CEV. So the FAI threatens Bob with 50 years of torture unless Bob lets the kitten go. The FAI has two distinct reasons why threatening Bob is okay: a) Bob will comply and there will be no need to torture him, b) the FAI is lying anyway. Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?
(Yes, this is yet another riff on consequentialism, deontologism and lying. Should FAIs follow deontological rules? For that matter, should humans?)
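The "expected utility reasoning" step can be made explicit with toy numbers. Every utility below is hypothetical, chosen only to reproduce the structure of the scenario:

```python
# Toy utilities for the kitten scenario above (all numbers hypothetical):
# kitten saved = +10, kitten dies = 0, telling a lie = -1,
# actually torturing Bob = -1000.

def ev_no_threat():
    return 0  # Bob kills the kitten

def ev_empty_threat(p_comply):
    # The threat is a lie either way (reason (b) in the comment above).
    return p_comply * 10 + (1 - p_comply) * 0 - 1

def ev_honest_threat(p_comply):
    # The FAI would really follow through if Bob refused.
    return p_comply * 10 + (1 - p_comply) * (0 - 1000)

# If Bob is likely to comply, the lying threat beats both alternatives --
# which is exactly the uncomfortable conclusion being pointed at.
```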
Is that actually the FAI's only or best technique?
Off the top of my non-amplified brain:
Reward Bob for not torturing kittens.
Give Bob simulated kittens to torture and deny Bob access to real kittens.
Give Bob something harmless to do which he likes better than torturing kittens.
ETA: Convince Bob that torturing kittens is wrong.
Our CEV is (and has to be) detailed enough to answer the question "do we want that?". Saving a kitten is a good thing. Being truthful to Bob is a good thing. Not torturing Bob is a good thing. The relative weights of these good things determine the FAI's actions.
I'd say that the FAI should calculate some game-theoretic chance of torturing Bob for 50 years based on relative pain of kitten death and of having to inflict the torture. Depending on Bob's expected rationality level, we could tell him "you'll be tortured", or "you might be tortured", or the actual mechanism of determining whether he is tortured.
Actually, strike that. Any competent AI will find ways aside from possible torture to make Bob not want that. Either agree with Bob's reason for killing the kitten, or fix him so he only wants things that make sense. I'm not sure how friendly this is -- I haven't seen a good writeup, nor come to any conclusions myself, about what an FAI does with internal contradictions in a CEV (that is, when a population's extrapolated volition is not coherent).
My thoughts about this problem are kind of a mess right now, but I feel there's more than meets the eye.
Ignore the torture, "possible torture" and all that. It's all a red herring. The real issue is lying, tricking humans into utility-increasing behaviors. It's almost certain that some combination of "relative weights of good things" will make the FAI lie to humans. Maybe not the Bob+kitten scenario exactly, but something is bound to turn up. (Unless of course our CEV places a huge disutility on lies, which I'm pretty sure won't be the case.) On the other hand, we humans quickly jump to distrusting anyone who has lied in the past, even if we know it's for our own good. So now the FAI has huge incentive to conceal its lies, prevent the news from spreading among humans. I don't have enough brainpower to model this scenario further, but it troubles me.
Lying is a form of manipulation, and humans don't want/like to be manipulated. If the CEV works, then it will understand human concepts like "trust" and "lying" and hopefully avoid manipulation. The only situations in which it will intentionally manipulate people are when it is trying to do what is best for humanity. In those cases, you don't have to worry, because the CEV is smarter than you but is still trying to do the "right thing" that you would do if you knew everything it knew.
Well... that depends...
Exactly.
Expected utility reasoning with a particular utility function says the FAI is right. If we disagree, our preferences might be described by some other utility function.
Yes.
Yes. (When we say 'rational agent' or 'rational AI' we are usually referring to "instrumental rationality". To a rational agent, words are simply symbols to use to manipulate the environment. Speaking the truth, and even believing the truth, are only loosely related concepts.)
Almost certainly, but this may depend somewhat on who exactly it is 'friendly' to and what that person's preferences happen to be.
That agrees with my intuitions. I had been developing a series of ideas around the notion that exploiting biases is sometimes necessary, and then I found:
Eliezer on Informers and Persuaders
It would seem that in trying to defend others against heuristic exploitation it may be more expedient to exploit heuristics yourself.
I'm not sure where Eliezer got the "just exactly as elegant as the previous Persuader, no more, no less" part from. That seems completely arbitrary, as though the universe somehow decrees that optimal informing strategies must be "fair".
Gawande on the need to develop competent systems for delivering medical care
(Closing parenthesis.)
Thanks.
I recently read a fascinating paper that argued based on what we know about cognitive bias that our capacity for higher reason actually evolved as a means to persuade others of what we already believe, rather than as a means to reach accurate conclusions. In other words, rationalization came first and reason second.
Unfortunately I can't remember the title or the authors. Does anyone remember this paper? I'd like to refer to it in this talk. Thanks!
That would probably be "Why do humans reason" by Mercier and Sperber, which I covered in this post.
The very one. Thanks - and wow, that was swift!
Ladies and gentlemen, the human brain: acetaminophen reduces the pain of social rejection.
Looks like LW briefly switched over to its backup server today, one with a database a week out of date. That, or a few of us suffered a collective hallucination. Or, for that matter, just me. ;)
Just in case you were wondering too.
I was wondering indeed. That was surreal.
The causal-set line of physics research has been (very lightly) touched on here before. (I believe it was Mitchell Porter who had linked to one or two things related to it, though I may be misremembering.) But recently I came across something that goes a bit farther: rather than embedding a causal set in a spacetime or otherwise handing it the spacetime structure, it basically just goes "here's a directed acyclic graph... we're going to add on a teensy weensy few extra assumptions... and out of it construct the Minkowski metric and relativistic transformations".
I'm slowly making my way through this paper (partly slowed by the fact that I'm not all that familiar with order theory), but the reason I mention the paper (A Derivation of Special Relativity from Causal Sets) is because I can't help but wonder if it might give us a hook to go in the other direction. That is, if this line of research might let us bring the mathematical machinery of much of physics to help us analyze stuff like Bayes nets and decision theory and give us a (potentially) really powerful mathematical tool.
Maybe I'm completely wrong and nothing interesting will come of trying to "reverse" the causal set line of research, (but causal set stuff is neat anyways, so at least I get some fun from reading and thinking about it) but does seem potentially worth looking into.
Besides, if this does end up being a useful tool, it would be perhaps one of the biggest and subtlest punchlines the universe pulled on us: since causal-sets are an approach to quantum gravity, if it ended up helping with the rationality/AI/etc stuff...
That would mean that Penrose was right about quantum gravity being a key to mind... BUT IN A WAY ENTIRELY DIFFERENT THAN HE INTENDED! bwahahahaha. :)
An idea I had: an experiment in calibration. Collect, say, 10 (preferably more) occasions on which a weather forecaster said "70% chance of rain/snow/whatever," and note whether or not these conditions actually occurred. Then find out if the actual fraction is close to 0.7.
I wonder whether they actually do care about being well calibrated? Probably not; I suppose their computers just spit out a number and they report it. But it would be interesting to find out.
I will report my findings here, if you are interested, and if I stay interested.
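The experiment described above can be sketched in a few lines of Python. The data here is invented purely for illustration; a real run would use (stated probability, outcome) pairs collected from actual forecasts:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, it_rained) pairs by stated probability
    and report the observed frequency of rain for each stated value."""
    buckets = defaultdict(list)
    for prob, rained in forecasts:
        buckets[prob].append(rained)
    return {
        prob: sum(outcomes) / len(outcomes)  # observed rain frequency
        for prob, outcomes in sorted(buckets.items())
    }

# Hypothetical data: (forecaster's stated probability, did it actually rain?)
data = [(0.7, True), (0.7, True), (0.7, False), (0.7, True),
        (0.3, False), (0.3, True), (0.3, False), (0.3, False)]
print(calibration_table(data))
```

A well-calibrated forecaster's 0.7 bucket should come out near 0.7; with only 10 or so occasions per bucket, though, expect a fair bit of sampling noise.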
Note that this sort of thing has been done a bit before. See for example this analysis.
Edit: The linked analysis has a lot of problems. See discussion below.
Cool, but hold on a minute though. I quote:
Isn't something wrong here? If you say "60% chance of rain," and it doesn't rain, you are not necessarily a bad forecaster. Not unless it actually rained on less (or more!) than 60% of those occasions. It should rain on ~60% of occasions on which you say "60% chance of rain."
Am I just confused about this fellow's methodology?
If I'm reading this correctly, they are doing exactly what you want but only breaking it into two categories: "more likely to rain than not" and "less likely to rain than not." But I'm confused by the fact that 50 percent gets put into the expecting-rain category.
Okay, this is like a sore tooth. Somebody's wrong, and I don't know if it's me. A queasy feeling.
Listen to this though:
Uhhh.... it's remarkable that a forecast changed significantly in SEVEN DAYS? What?!
The weather is the canonical example of mathematical chaos in an (in principle) deterministic system. Of course the forecasts will change, because Tuesday's weather sets the initial conditions for Wednesday, and chaotic systems are ultra-sensitive to initial conditions! The forecasters would be idiots if they didn't update their forecasts as much as possible.
The "close second," moreover, should be first! That change occurred over a two-day period versus seven! ARGGHHH.
To me it almost seems as though a scenario like this is happening:
In other words, isn't the author misrepresenting the forecasters in throwing away their POPs, which could be interpreted as subjective beliefs about likelihoods?
I was also sort of confused by:
Is changing the forecast as new information comes in a bad thing?? Or is it merely that they are changing the forecast too much?
Nota bene: I am also very tired and may just be being thickheaded - I rate that possibility at about 50%, and you're welcome to check my calibration. =)
I think the criticism is that if they need to change their predictions so much between time 1 and time 2, then it is irresponsible to make any prediction at time 1. This is a hard case to make out for the temperature swings, since I think 8 degrees is only about one standard deviation for a prediction of a day's temperature in a city knowing only what day of the year it is. But it's an easy case to make out for the precipitation swings: if, on average, you are wrong by 40% objective probability (not even 40% relative error; 40 percentage points of rain probability, here), then a prediction of, e.g., 30% will on average convey virtually no information. That could easily mean 0% or it could easily mean 70%, and without too much implausibility it could even mean 90%. So why bother saying 30% at all when you could (more honestly) admit your ignorance about whether it will rain next week?
In the meteorologists' defense, their medium-range predictions become useful when tested against broader time periods. Specifically, a 60% chance of rain on Thursday means you can be pretty sure that it will rain on Wednesday, Thursday, or Friday -- perhaps with 90% confidence. The reason for this is that predictions of rain generally come from tracking low-pressure pockets of air as they sweep across the continent; these pockets might speed up or slow down, or alter their course by a few degrees, but they rarely disappear or turn around altogether.
This is a much more reasonable testing method when one's predictions are based on an alleged causal process. For example, suppose I claim that I can predict how many cards Bob will draw in a game of blackjack by taking into consideration all of the variables in the game. A totally naive predictor might be "Bob will hit no matter what." That predictor might be right about 60% of the time. A slightly better predictor might be "Bob will hit if his cards show a total of 13 or less." That predictor might be right about 70% of the time. If I, as a skilled blackjack kibitzer, can really add predictive value to these simple predictors, then I should be able to beat their hit-miss ratio, maybe getting Bob's decision right 75% of the time. If I knew Bob quite well and could read his tells, maybe I would go up to 90%.
Anyway, 66% is pretty good for a blind guess that can't be varied from episode to episode. So the test with the die that you're using in your analogy is a fair test, but the bar is set too high. If you can get 66% on a hit-miss test with a one-sentence rule, you're doing pretty well.
Point taken about forecast updating - information changing that drastically may be merely worthless noise.
However, on the coin toss/blackjack thing...
In your blackjack example, the answer you give is binary - Bob will either say "hit me" or "[whatever the opposite is, I've never played]." The meteorologists are giving answers in terms of probabilities: "there is a 70% chance that it will rain."
If you did that in the Blackjack example; i.e., you said "I rate it as 65% likely that Bob will take another card," and then he DIDN'T take another card, that would not mean you were bad at predicting - we would have to watch you for longer.
My complaint is that the author interpreted forecasters' probabilities as certainties, rounding them up to 1 or down to 0. This was unfair as it ignored their self-stated levels of confidence.
Sorry, I didn't communicate clearly.
Correct. However, suppose we repeat this experiment 100 times, each time reducing my probability estimate to a binary prediction of hit-stay. Suppose that Bob hits 60 times, 50 of which were on occasions when I assigned greater than 50% probability to Bob hitting, and Bob stays 40 times, 13 of which were on occasions when I assigned less than 50% probability to Bob hitting. Thus, my overall accuracy, when reduced to a hit-stay prediction, is 63%. This is worse than my claimed certainty level of 65%, but better than the naive predictor "Bob always hits," which only got 60% of the episodes right. Thus, the pass-fail test is one way of distinguishing my predictive abilities from the predictive abilities of a broad generalization.
To see this, suppose instead that I always predict, with 65% certainty, that Bob will hit or that Bob will stay. I might rate the chance of Bob hitting at 65%, or I might rate it at 35%. In this experiment, Bob hits 75 times, 50 of which were on occasions when I assigned a 65% probability that Bob would hit. Bob stays 25 times, 18 of which were on occasions when I assigned a 65% probability that Bob would stay. I correctly predicted Bob's action 68% of the time, which is better than my stated certainty of 65%. However, my accuracy is worse than the accuracy of the naive predictor "Bob always hits," which would have scored 75%. Thus, my predictions are not very good, by one relatively objective benchmark, despite the fact that they are, in a narrow Bayesian sense, fairly well-calibrated.
Again, sorry for the confusion. I gave an incomplete example before.
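For what it's worth, the scoring in these two scenarios can be sketched in Python. The data below just reproduces the hypothetical counts from the second example (75 hits, 50 of them flagged at 65%; 25 stays, 18 of them flagged at 65% stay, i.e. 35% hit):

```python
def binarize_accuracy(predictions):
    """predictions: list of (p_hit, bob_hit) pairs. Reduce each stated
    probability to a binary hit/stay call and score it against what happened."""
    correct = sum((p > 0.5) == bob_hit for p, bob_hit in predictions)
    return correct / len(predictions)

def naive_accuracy(predictions):
    """Score the naive baseline rule 'Bob always hits'."""
    return sum(bob_hit for _, bob_hit in predictions) / len(predictions)

data = ([(0.65, True)] * 50    # hits I flagged as likely (correct calls)
        + [(0.35, True)] * 25  # hits I flagged as unlikely (wrong calls)
        + [(0.35, False)] * 18 # stays I flagged as unlikely hits (correct)
        + [(0.65, False)] * 7) # stays I flagged as likely hits (wrong)

print(binarize_accuracy(data))  # 0.68 -- beats my stated certainty of 0.65
print(naive_accuracy(data))     # 0.75 -- but loses to "Bob always hits"
```

This makes the point concrete: well-calibrated guess-level confidence and beating a naive baseline are separate tests, and you can pass the first while failing the second.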
So if I understand correctly, the issue is not that the meteorologists are poorly calibrated (maybe they are, maybe they aren't), but rather that their predictions are less useful than a simple rule like "it never rains" for actually predicting whether it will rain or not.
I think I am beginning to see the light here. Basically, in this scenario you are too ignorant of the phenomenon itself, even though you are very good at quantifying your epistemic state with respect to the phenomenon? If this is more or less right, is there terminology that might help me get a better handle on this?
Bingo! That's exactly what I was trying to say. Thanks for listening. :-)
My jargon mostly comes from political science. We'd say the meteorologists are using an overly complicated model, or seizing on spurious correlations, or that they have a low pseudo-R-squared. I'm not sure any of those are helpful. Personally, I think your words -- the meteorologists are too ignorant for us to applaud their calibration -- are more elegant.
The only other thing I would add is that the reason why it doesn't make sense to applaud the meteorologists' guess-level calibration is because they have such poor model-level calibration. In other words, while their confidence about any given guess seems accurate, their implicit confidence about the accuracy of their model as a whole is too high. If your (complex) model does not beat a naive predictor, social science (and, frankly, Occam's Razor) says you ought to abandon it in favor of a simpler model. By sticking to their complex models in the face of weak predictive power, the meteorologists suggest that either (1) they don't know or care about Occam's Razor, or (2) they actually think their model has strong predictive power.
Related thought: Maybe see if they will give you their data? That would save you some time, and I'm now very interested in whether a more careful analysis will substantially disagree with their results.
Oh. I see. Yes, they aren't taking into account the accuracy estimations at all. Your criticism seems correct. Your complaints about the other aspects seem accurate also.
Huh. This is disturbing; most of the Freakonomics blog entries I've read have good analysis of data. It looks like this one really screwed the pooch. I have to wonder if others they've done have similar problems that I haven't noticed.
Yeah, I am a fan of Freakonomics generally too. I will write to them, I think. Will let you know how it goes. I want to confirm I am right about the probability stuff though, I still have a niggling doubt that I've just misunderstood something. But I think they are definitely wrong about the forecast updating.
Q: What Is I.B.M.’s Watson?
http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=all
A: what is Skynet?
Sounds a little like Shalmaneser.
And now it's time for the Daily Double!
In the video, I didn't understand whether that series of wrong answers was staged or actually happened.
Very impressive though. Class.
Episode of the show Outnumbered that might appeal to this community. The show in general is very funny, smart, and well acted, the children's roles in particular.
IBM's Watson AI trumps humans in "Jeopardy!"
http://news.ycombinator.com/item?id=1436625
Thanks a lot for the link. I remember Eliezer arguing with Robin about whether AI will advance explosively through a few big insights, or incrementally by amassing encoded knowledge and many small insights. Watson seems to constitute evidence in favor of Robin's position, as it has no single key insight:
Interview with Lloyd's of London space underwriter.
http://www.lloyds.com/News_Centre/Features_from_Lloyds/News_and_features_2009/Market_news/60_seconds_with_David_Wade.htm
Does anyone happen to know the status of Eliezer's rationality book?
The first draft is in progress.
Second draft, technically. The first draft was a rough outline of the contents.
I wasn't counting that as a "draft".
Message from Warren Buffett to other rich Americans
http://money.cnn.com/2010/06/15/news/newsmakers/Warren_Buffett_Pledge_Letter.fortune/index.htm?postversion=2010061608
I find super-rich people's level of rationality specifically interesting because, unless they are heirs or entertainers, it takes quite a bit of instrumental rationality to 'get there'. Nevertheless, it seems many of them do not make the same deductions as Buffett, which seem pretty clear:
In this sense they are sort of 'natural experiments' of cognitive biases at work.
Wow. That is some seriously clear thinking. Too bad Mr. Buffett isn't here to get the upvote himself, so I upvoted you instead. ;-)
I think in Buffett's case this is not an accident; I venture to claim that his wealth is the result of fortune combined with an unusual dose of rationality (even if he calls it 'genes'). My strongest piece of evidence is that his business partner for the past 40 years, Charlie Munger, is one of the very early outspoken adopters of the good parts of modern psychology, such as the ideas of Cialdini and Tversky/Kahneman on decision-making under uncertainty.
http://vinvesting.com/docs/munger/human_misjudgement.html
Oh wow, I think I have a new role model. Any chance we can get these two (Buffett and Munger) to open a rationality dojo? (Who knows, they might be impressed, given that most people ask them for wealth advice instead...)
A question: do subscribers think it would be possible to make an open-ended self-improving system with a perpetual delusion - e.g., that Jesus loves them?
Yes, in that it could be open-ended in any "direction" independent of the delusion. However, that might require contrived initial conditions or cognitive architecture. You might also find the delusion becoming neutralized for all practical purposes, e.g. the delusional proposition is held to be true in "real reality" but all actual actions and decisions pertain to some "lesser reality", which turns out to be empirical reality.
ETA: Harder question: are there thinking systems which can know that they aren't bounded in such a way?
Apologies for posting so much in the June Open Threads. For some reason I'm getting many random ideas lately that don't merit a top-level post, but still lead to interesting discussions. Here's some more.
How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.
How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.
Of course, both those arguments fall apart if the deception equipment is "unusually clever" at deceiving you. In that case both questions are probably hopeless.
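A minimal sketch of the first test, with a naive trial-division factorizer standing in for whatever the computer would actually run. The point is the asymmetry: factoring the number is hard for a sleeping mind to fake, but multiplying the factors back (the "pen and paper" step) is easy to verify:

```python
import random

def trial_division(n):
    """Factor n by trial division; returns the prime factors in ascending order."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# "Computer" step: factor a number too large to factor in your head...
n = random.randrange(10**8, 10**9)
factors = trial_division(n)

# ..."pen and paper" step: multiplying the factors back is easy by hand.
product = 1
for f in factors:
    product *= f
assert product == n
print(n, "=", " * ".join(map(str, factors)))
```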
A similar method was used by the protagonist of Solaris to check whether he was hallucinating.
Ouch! I read Solaris long ago. It seems the idea stuck in my head and I forgot its origin. And it does make much more sense if you substitute "hallucinating" for "dreaming".
The trick, then, is to instill in yourself a habit of checking whether you are asleep regularly (i.e., even when you are awake). A habit of thinking "am I awake? let me check" is the hard part; without that habit your sleeping mind isn't likely to question itself. Literature on lucid dreaming talks a lot about such tests. In fact, combined with "write dreams down as soon as you wake up" and "consume X substance", it more or less summarizes the techniques.
The odd thing is that despite reading stuff about reality tests and trying to build a habit from doing them while awake, on the rare occasions I've had a lucid dream I've just spontaneously become aware that I'm presently dreaming. I don't remember ever having a non-lucid dream where I've done a reality test.
Instead of fancy stuff like determining prime factors, one consistent dream sign I've had is utter incompetence in telling time from digital watches and clocks. This generally doesn't tip me off that I'm dreaming though, and doesn't occur often enough that I could effectively condition myself to recognize it.
There are also trance/self-hypnosis methods, like WILD, some people seem to be very successful with them.
Interesting. And personally I find experimenting with trance and self-hypnosis by themselves to be even more fascinating than vivid dreaming. If only I did not come with the apparent in-built feature of inoculating myself against any particular method of trance or self-hypnosis after a few successful uses.
Do you have access to the computer software of your choice in your dreams? That sounds unusually vivid to me, maybe even lucid. I'm lucky if I can find a working pen and a desk that obeys the laws of physics in my dreams.
I know I do. In the last couple of years I have gone from almost never remembering a dream to having dreams that are sometimes even more vivid than my memories of real life. I even had to check my computer one day to see whether or not what I remembered doing was 'real' or not.
Heck, I'm lucky if I can find trousers in my dreams.
Depends on how you define 'lucky' I guess. ;)
No, there's no way of knowing that you're not being tricked. If your perception changes and your perception of your brain changes, that just means that the vat is tricking the brain to perceive that.
The "brain in the vat" idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.
If you are a brain in a vat, then that should alter sensory perception. It shouldn't alter cognitive processes (say, the ability to add numbers, or to spell, or the like). You could posit a brain in a vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain, but the point is that we have data about how the external world relates to us that isn't purely sensory.
You don't seem to be familiar with this concept.
This is the entire point of the brain-in-the-vat idea. It's not that "you could posit it", you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the vat imposes that correlation on us through its brain life-support system.
Although I don't claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable. The only thing we can really know is that we are a thing that thinks. This is Descartes 101.
Hmm. Your comment has brought to my attention an issue I hadn't thought of before.
Are you familiar with Aumann's knowledge operators? In brief, he posits an all-encompassing set of world states that describe your state of mind as well as everything else. Events are subsets of world states, and the knowledge operator K transforms an event E into another event K(E): "I know that E". Note that the operator's output is of the same type as its input - a subset of the all-encompassing universe of discourse - and so it's natural to try iterating the operator, obtaining K(K(E)) and so on.
Which brings me to my question. Let E be the event "you are a thing that thinks", or "you exist". You have read Descartes and know how to logically deduce E. My question is, do you also know that K(E)? K(K(E))? These are stronger statements than E - smaller subsets of the universe of discourse - so they could help you learn more about the external world. The first few iterations imply that you have functioning memory and reason, at the very least. Or maybe you could take the other horn of the dilemma: admit that you know E but deny knowing that you know it. That would be pretty awesome!
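For anyone who wants to play with this, a toy version of the standard partition model is easy to sketch. Note that in this formulation, iterating the operator turns out to add nothing:

```python
def make_K(partition):
    """Build a knowledge operator from a partition of the state space:
    K(E) = the set of states s whose partition cell lies entirely inside E."""
    def K(event):
        return {s for cell in partition if cell <= event for s in cell}
    return K

# Toy universe of 4 world states; the agent can't tell state 1 from state 2.
partition = [{0}, {1, 2}, {3}]
K = make_K(partition)

E = {0, 1}
print(K(E))             # only state 0: in state 1 the agent can't rule out 2
print(K(K(E)) == K(E))  # prints True: K is idempotent in the partition model
```

Whether human knowledge actually satisfies that idempotence axiom (the "positive introspection" property) is exactly the debatable part; the partition model simply builds it in.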
I wasn't familiar with this description of "world states", but it sounds interesting, yes. I take it that positing "I am a thing that thinks" is the same as asserting K(E). In asserting K(K(E)), I assert that I know that I know that I am a thing that thinks. If this understanding is incorrect, my following logic doesn't apply.
I would argue that K(K(E)) is actually a necessary condition for K(E). Because if I don't know that I know proposition A, then I don't know proposition A.
Edit/Revised: I think all you have to do is realize that "K(K(A)) false" permits "K(A) false". At first I had a little proof but now it seems just redundant so I deleted it.
So I guess I disagree, I think the iterations K(K...) are actually weaker statements, which are necessary for K(A) to be achieved. Consequentially I don't see how you can learn anything beyond K(A).
K(A) is always a stronger statement than A because if you know K(A) you necessarily know A. (To get the terms clear: a "strong" statement corresponds to a smaller set of world states than a "weak" one.) It is debatable whether K(K(A)) is always equivalent to K(A) for human beings. I need to think about it more.
Formal definition of K(E) = {s \in S | P(s) \subseteq E}, where P(s) is the cell containing s in a partition of S, ensures that K(K(E)) = K(E). It's easy to see: if s \in K(E), then every t \in P(s) has P(t) = P(s) \subseteq E, so P(s) \subseteq K(E) and thus s \in K(K(E)); and similarly for s \notin K(E).
As for the informal sense, I don't see much use for K(K(E)) where E is a plain fact: if I am aware that I know E, introspecting on that awareness will provide as many K's as I like and little more. If I am not aware that I know E (a deeply buried memory?), I will become aware of it when I remember it. But if I know that I know some class of facts or rules, that is useful for planning. However, I can't come up with a useful example for K(K(K())) and higher.
Addition: Aumann's formalization has limitations: it can't represent false knowledge, memory glitches (when I know that I know something but can't remember it), meta-knowledge, or knowledge of rules of any kind (I'm not completely sure about rules).
When I was younger, a group of my friends started teasing others because they didn't know the Hindu-Arabic number system. In reality, of course, they did know it, but they didn't know that they knew it -- that was the joke.
I have a sensory/gut experience of being a thinking being, or, as you put it, E.
Based on that experience, I develop the abstract belief that I exist, i.e., K(E).
By induction, if K(E) is reliable, then so is K(K(K(K(K(K(K(E))))))). In other words, there is no particular reason to doubt that my self-reflective abstract propositional knowledge is correct, short of doubting the original proposition.
So I like the distinction between E and K(E), but I'm not sure what insights further recursion is supposed to provide.
I just saw this and realized I basically just expanded on this above.
When I've read about the brain-in-the-vat as an example before, they normally just talk about sensory aspects. People don't mention anything like altering the brain itself. So at minimum, cousin_it has picked up a hole in how this is frequently described.
Considering how much philosophy is complete nonsense, I'd think that LWers would be more careful about using the argument that something in philosophy is widely known to be unresolvable. I agree that if, when people talk about the brain-in-the-vat, they mean one where the vat is able to alter the brain itself in the process, then this is not resolvable.
Altering the brain itself? The brain itself is the only thing there is to alter. The only thing that exists in the brain in the vat example is the brain, the vat, and whatever controls the vat. The "human experiences" are just the outcome of an alteration on the brain, e.g., by hooking up electrodes. I really have no idea how else you imagine this is working.
FWIW, my original comment talked about a realistic version of brain in a vat, not the philosophical idealized model. But now that I thought about it some more, the idealized model is seeming harder and harder to implement.
The robots who take care of my vat must possess lots of equipment besides electrodes! A hammer, boxing gloves, some cannabis extract, a faster-than-light transmitter so I can't measure the round-trip signal delay... Think about this: what if I went to a doctor and asked them to do an MRI scan as I thought about stuff? Or hooked some electrodes to my head and asked a friend to stimulate my neurons, telling me which ones only afterward? Bottom line, I could be an actual human in an actual world, or a completely simulated human in a completely simulated world, but any in-between situations - like brains in vats - can be detected pretty easily.
Um, if you're a brain in a vat, then any "brain" you perceive in the real world, like on a "real world" MRI, is nothing but a fictitious sensory perception that the vat is effectively tricking you into thinking is your brain. If you're a brain in a vat, you have nothing to tell you that what you perceive as your brain is actually really your brain. It may be hard to implement the brain-in-the-vat scenario, but once implemented, it's absolutely undetectable.
I think "unusually clever" should be "sufficiently clever" in your caveat. I have very wide error bars on what I think would be usual, but I suspect that it's almost guaranteed to defeat those tests if it's defeated the overall test you've already applied of "have only memories of experiences consistent with a believable reality".
In which case both questions are indeed hopeless.
The first one fails terribly. I've had dreams where I've thought I've proven some statement I'm thinking about and when waking up can remember most of the "proof" and it is clearly incoherent. No, subconscious, the fact that Martin van Buren was the 8th President of the United States does not tell me anything about zeros of L-functions. (I've had other proofs that were valid though so I don't want the subconscious to stop working completely).
The second one seems more viable. May I suggest using something like electromagnetic stimulation of specific areas of the brain rather than deliberately damaging sections? For that matter, the fact that drugs can alter thought processes not just perception also strongly argues against being a brain in the vat by the same sort of logic.
I like your idea way better than mine. Smoke dope to prove you're not in the Matrix!
Regarding the first point, yes, I guess dreams can hijack your reasoning in arbitrary ways. But maybe I'm atypical like that: whenever my dreams contain verse, music or math proofs, they always make perfect sense upon waking. They do sound "creatively weird", and I must take care to repeat them in my mind to avoid amnesia, but they work fine on real world terms.
I'm looking for some concept which I am sure has been talked about before in stats but I'm not sure of the technical term for it.
Let's say you have a function you are trying to guess, with a certain range and domain. How would you talk about the amount of data you would need to likely identify the actual function from noisy data? My current thoughts are that the larger the cardinality of the domain, the more data you would need (in a simple relationship), and the type of noise would determine how much the size of the range affects the amount of data you would need.
First, I would specify what set my 'function' is in. Are there 2 possibilities? 10? A million? log2(x) tells me how many bits of information I need. Then I would treat the data as coming to me through a noisy channel. How noisy? I assume you already know how noisy. Now I can plug in the noise level to Shannon's theorem, and that tells me how many noisy bits I need to get my log2(x) bits.
(This all seems like very layman information theory, which makes me wonder if something makes your problem harder than this.)
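As a rough sketch of that calculation, modeling the noise as a binary symmetric channel with a known flip probability p (an assumption; Shannon's theorem only promises this rate asymptotically, with suitable coding):

```python
from math import log2

def binary_entropy(p):
    """Entropy of a coin with bias p, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def noisy_bits_needed(num_possibilities, flip_prob):
    """Back-of-envelope symbol count through a binary symmetric channel:
    log2(possibilities) bits of content divided by capacity 1 - H(p)."""
    capacity = 1 - binary_entropy(flip_prob)
    return log2(num_possibilities) / capacity

print(noisy_bits_needed(1024, 0.0))   # noiseless: exactly 10 bits
print(noisy_bits_needed(1024, 0.11))  # ~11% flips halves capacity: ~20 bits
```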
By data I meant training data (in a machine learning context), not information.
And it wasn't really the math I was after, it is quite simple, just whether it had been discussed before.
My thoughts on the math: if the cardinality of the domain is X and of the codomain is Y, then the space of functions you are exploring has size Y^X. E.g., there are 4 possible functions from 1 binary bit to another (set to 0, set to 1, invert, keep the same). I've come across this in simple category theory.
However, in order to fully specify which function it is (assuming no noise), you need a minimum of 2 pieces of training data (where training data means input-output pairs). If you have only the training pair (0,0), you don't know whether that means "keep the same" or "set to 0". Fairly obviously, you need as many unique samples of training data as the cardinality of the domain, and more when you have noise.
This is a less efficient way of getting information about functions, than just getting the Turing number or similar.
So I'm really wondering if there are guidelines for people designing machine learning systems that I am missing. For example, if you know you can only get 5000 training examples, you know that a system which tries to learn from the entire space of functions on a domain much larger than 5000 is not going to be very accurate, unless you have put a lot of information into the prior/hypothesis space.
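The counting argument here can be written out directly: the function-space size is |codomain|^|domain|, and with no noise you need one sample per domain element to pin the function down:

```python
def function_space_size(domain_size, codomain_size):
    """Number of distinct functions from a finite domain to a finite codomain."""
    return codomain_size ** domain_size

def min_noiseless_samples(domain_size):
    """With no noise, one input-output pair per domain element suffices
    (and fewer can't distinguish functions that agree on the seen inputs)."""
    return domain_size

print(function_space_size(2, 2))  # 4 functions on one bit, as in the example
print(min_noiseless_samples(2))   # and 2 unique training pairs suffice
```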
The closest thing I can think of is identifiability, although that's more about whether it's possible to identify a function given an arbitrarily large amount of data.
Hmm, not quite what I was looking for but interesting none the less.
Thanks.
Is anyone else concerned about the possibility of nuclear terrorist attacks? No, I don't mean what you usually hear on the news about dirty bombs or Iran/North Korea. I mean actual terrorists with an actual nuclear bomb. There are a surprising number of nuclear weapons on the bottom of the ocean. Has it occurred to anyone that someone with enough funding and determination could actually retrieve one of them? Maybe they already have?
And here is a public list of known nuclear accidents
Notice that many of the incidents mentioned at your link don't involve nuclear bombs at all: many involve leaks at research facilities and power stations. Here's a chronological list of radiation incidents that caused injury from the start of the 20th century onwards. The vast majority don't involve nuclear bombs.
Historically, unless you were in Hiroshima or Nagasaki, you would have been less likely to die from a nuclear bombing than you would have been to die from a radiation leak, picking up a lost radioactive source without recognizing it (or living with someone who's brought one into your home), being poisoned with radiation by a coworker, or medical overexposure. (Note also that the list is surely incomplete.) It is possible that this trend will reverse in the future, but it's not obvious that it will.
More generally, gwern sounds about right to me on the subject of terrorists putting together their own nuke. (Or hauling one up from the bottom of the ocean.)
Coincidentally I just the other day learned of the banana equivalent dose as a way of placing the risk of radiation leaks in context.
I am not. To even suggest that this is a possibility anywhere near the level of a sovereign actor giving terrorists nukes is to dramatically overestimate terrorist groups' technical competence, and also to ascribe basic instrumental rationality to them (a mistake; see my Terrorism is not about Terror).
Even if a terrorist could marshal the interest, assemble in one place the millions necessary, actually hire a world-class submersible, and, in the scant days they could afford, find the wreckage of a bomb, it would probably be useless. US nukes are designed to fail safe, so if the wiring has corroded or the explosives are misaligned, it simply won't detonate. And that's ignoring issues with radioactive decay. (Was the bomb a tritium-pumped H-bomb? Well, given tritium's extremely short half-life, I'm afraid that bomb is now useless.)
Maybe, although remember there are a lot more players interested in obtaining nuclear weapons than just a few terrorists. And the best crimes are the ones no one knew were committed. Unsuccessful criminals are overrepresented compared to the ones that got away; I suspect the same is true of terrorists. Blowing up a building isn't going to achieve your goals, but blowing up a city might. After all, it ended a war once, and just the threat stopped another from ever happening. Also, even if the bomb itself is useless, it is probably worth quite a bit of money, more than the millions it would take to retrieve it (maybe thousands, as technology improves? There are some in shallower water. In 1958 the government was prepared to retrieve a lost bomb, but never located it.) I don't honestly know a lot about nuclear weapons, but the materials in it, maybe even the design itself, would be worth something to somebody. Maybe said organization has the resources to salvage it; after all, they already had enough money to get it in the first place.
Even if no bombs go off, I wouldn't be surprised if the government eventually gets around to searching for them and finds they're not there. And there are other nuclear threats too. Although I can't find anywhere to confirm it, it was floating around the internet that up to 80 "suitcase nukes" are missing. This quote from Wikipedia particularly disturbed me:
I will leave it at that for now; I'm not one of those paranoid people who go around ranting about nuclear proliferation or whatever. If there really is a problem, there's not much we can do (except maybe try to get to those lost bombs first, or take anti-terrorism more seriously).
I prefer spending my precious mental CPUs on worrying about the US government going really bad.
Admittedly, a terrorist nuke (especially if exploded in the US) would be likely to cause the US government to take a lot more control.
I don't take Lunev seriously. Defectors are notoriously unreliable sources of information (as I think Iraq should have proven. Again.).
The problem with nuclear terrorism is that atomic bombs come with return addresses - the US has always collected isotopic samples (eg. with aerial collecting missions in international airspace) precisely to make sure this is the case. (Ironically, invading Afghanistan and Iraq may've helped deter nuclear terrorism: 'If the US invaded both these countries over just a few thousand dead, then it's plausible they will nuke us even if we cry to the heavens that we just carelessly lost that bomb.')
Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it has to solve a certain math/physics problem, deposit the correct solution in a specified place and stop. If a solution can't be found, stop after a specified number of cycles. Don't talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?
ETA: absent other suggestions, I'm going to call such devices "AI bombs".
These ideas have already been investigated and documented:
Box: http://fragments.consc.net/djc/2010/04/the-singularity-a-philosophical-analysis.html
Stopping: http://alife.co.uk/essays/stopping_superintelligence/
If these precautions become necessary, the end of the world will follow shortly (which is the only possible conclusion of "AGI research", so I guess the researchers should rejoice at the work well done, and maybe "relax a bit" as the world burns).
I don't understand your argument. Are you saying this containment scheme won't work because people won't use it? If so, doesn't the same objection apply to any FAI effort?
What khafra said - also this sounds like propelling toy cars using thermonuclear explosions. How is this analogous to FAI? You want to let the FAI genie out of the bottle (although it will likely need a good sandbox for testing ground).
Yep, I caught that analogy as I was writing the original comment. Might be more like producing electricity from small, slow thermonuclear explosions, though :-)
Not small explosions. Spill one drop of this toxic stuff and it will eat away the universe, nowhere to hide! It's not called "intelligence explosion" for nothing.
That's right - I didn't offer any arguments that a containment failure would not be catastrophic. But to be fair, FAI has exactly the same requirements for an error-free hardware and software platform, otherwise it destroys the universe just as efficiently.
Sure, prototypes of FAI will be similarly explosive.
If my Vladimir-modelling heuristic is correct, he's saying that you're postulating a world where humanity has developed GAI but not FAI. Having your non-self-improving GAI solve stuff one math problem at a time for you is not going to save the world quickly enough to stop all the other research groups at a similar level of development from turning you and your boxed GAI into paperclips.
An AI in a simulated world isn't prohibited from improving itself.
More to the point, I didn't imagine I would save the world by writing one comment on LW :-) My idea of progress is solving small problems conclusively. Eliezer has spent a lot of effort convincing everybody here that AI containment is not just useless - it's impossible. (Hence the AI-box experiments, the arguments against oracle AIs, etc.) If we update to thinking it's possible after all, I think that would be enough progress for the day.
I don't think it's really an airtight proof--there's a lot that a sufficiently powerful intelligence could learn about its questioners and their environment from a question; and when we can't even prove there's no such thing as a Langford Basilisk, we can't establish an upper bound on the complexity of a safe answer. Essentially, researchers would be constrained by their own best judgement in the complexity of the questions and of the responses.
Of course, all that's rather unlikely, especially as it (hopefully) wouldn't be able to upgrade its hardware--but you're right, software-only self-improvement would still be possible.
Yes, I agree. It would be safest to use such "AI bombs" for solving hard problems with short and machine-checkable solutions, like proving math theorems, designing algorithms or breaking crypto. There's not much point for the AI to insert backdoors into the answer if it only cares about the verifier's response after a trillion cycles, but the really paranoid programmer may also include a term in the AI's utility function to favor shorter answers over longer ones.
How to Keep Someone with You Forever.
This is a description of "sick systems"-- jobs and relationships that destructively take over people's lives.
I'm posting it here partly because it may be of use-- systems like that are fairly common and can take a while to recognize, and partly because it leads to some general questions.
One of the marks of a sick system is that the people running it convince the victims that they (the victims) are both indispensable and incompetent-- and it can take a very long time to recognize the contradiction. It's plausible that the crises, lack of sleep, and frequent interruptions are enough to make people not think clearly about what's being done to them, but is there any more to it than that?
One of the commenters to the essay suggests that people are vulnerable to sick systems because raising babies and small children is a lot like being in a sick system. This is somewhat plausible, but I suspect that a large part of the stress is induced by modern methods of raising small children-- the parents are unlikely to have a substantial network of helpers, they aren't sharing a bed with the baby (leading to more serious sleep deprivation), and there's a belief that raising children is almost impossible to do well enough.
Also, it's interesting that people keep spontaneously inventing sick systems. It isn't as though there's a manual. I'm guessing that one of the drivers is feeling uncomfortable at seeing the victims feeling good and/or capable of independent choice, so that there are short-run rewards for the victimizers for piling the stress on.
On the other hand, there's a commenter who reports being treated better by her family after she disconnected from the craziness.
Interesting. I suspect that sick systems are actually highly competitively fit, and while people who opt out of them may be happier, those people will propagate themselves less, and therefore will be overwhelmed by Azathothian forces.
Is there any way to combat Azathoth aside from forming a singleton?
Why do you think sick systems are highly competitively fit? They seem to get a lot of work out of people, but also waste a great deal of it.
If your hypothesis is that sick systems must be competitively fit because there are a great many of them, I think stronger evidence is needed.
As long as the system extracts & uses more work than its equivalent healthy system - after wastage - then it will outperform it. It doesn't matter if the system burns through employees every few years; there are plenty of other employees to burn up.
I would think sick systems have worse judgment than healthy systems-- they don't just burn up employees, management is less likely to get information about any mistakes it's making.
On the other hand, sick systems do at least persist for quite a while. I'm guessing that they coast on the conscientiousness and other virtues of the employees. It's conceivable that some fraction of the excess work isn't wasted.
I'm thinking of writing a top-level post on the difficulties of estimating P(B) in real-world applications of Bayes' Theorem. Would people be interested in such a post?
Funny, I've been entertaining the same idea for a few weeks.
Every time I read statements like "... and then I update the probabilities, based on this evidence ...", I think to myself: "I wish I had the time (or processing power) he thinks he has. ;)"
Yay! Music composition AI.
We've had them for a while though, but who knows, we might have our first narrow-focused AI band pretty soon.
Good business opportunity there... maybe this is how SIAI will guarantee unlimited funding in the future? :)
Thanks for the link.
Mozart developed the Mozart sonata.
Great article. Thanks for the link!
Good music isn't about good music. It's about which music authorities have approved of it.
What about saleable pop music?
P. Z. Myers discusses the relevance of gender as a proxy for intelligence.
Related: Argument Screens Off Authority.
I don't know the ins and outs of the Summers case, but that article has the smell of a straw man. Especially this (emphasis mine):
From what I understand (and a quick check on Wikipedia confirms this), what got Larry Summers in trouble wasn't that he said we should use gender as a proxy for intelligence, but merely that gender differences in ability could explain the observed under-representation of women in science.
The whole article is attacking a position that, as far as I know, nobody holds in the West any more: that women should be discriminated against because they are less good at science.
Well, he also seems to be attacking a second group that does exist (those who say that there are fewer women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.
Well, I think PZ Myers is a liar who has never heard of such people, but they do exist. Robin Hanson, for one. More representative is conchis's claim early in the comments that
Rewritten: I've heard hints along these lines in America, where girls get better grades, in both high school and college, than boys with the same SATs. This is suggested to be about conscientiously doing homework. If American colleges don't want to reward conscientiousness, they could change their grading to avoid homework.
That would make them like my understanding of Oxford, where I believe grades are based on high-stakes testing, not on homework. But I had also thought admissions were based only on high-stakes testing. That is, I don't even know what the quoted claim means by "grades," nor have I been able to track down people openly admitting anything like it.
Do British students get grades other than A-levels? Are there sex divergences between the grades and A-levels? A-levels and predictions? I hear that Oxbridge grades are lower variance for girls than boys. I also hear that boys do better on the math SATs than on the math A-levels, which seems like it should be a condemnation of one of the tests.
Which makes a kind of instrumental sense, in that advocacy of this position aids the first group by innocently explaining away gender inequalities. (I think it's obvious that most people don't distinguish well, in political situations, between incidental aid and explicit support.) Also, if evaluating individual intelligence is costly and/or inevitably noisy, it is (selfishly) rational for evaluators to give significant weight to gender, i.e. discriminate. And given how little people understand statistics, and the extent to which judgments of status/worth are tied to intelligence and to group membership, it seems inevitable that belief in group differences will lead people to discriminate far more than would be rational.
Can't this be said of just about all straw men ? Yes, setting up a straw man may be instrumentally rational, but is it the kind of thing we should be applauding ?
Say we have two somewhat similar positions:
A straw man is pretending that people arguing B are arguing A, or pretending that there's no difference between the two - which seems to be what P.Z. Myers is doing.
You're saying that position B gives support for position A, and, yes, it does. That can be a good reason to attack people who support position B (especially if you really don't like position A), but that holds even if position B is true.
Agreed. I don't necessarily approve of this sort of rhetoric, but I think it's worth trying to figure out what causes it, and recognize any good reasons that might be involved. (I also don't mean to say that people who use this rhetoric are calculating instrumental rationalists — mostly, I think they, as I alluded to, don't recognize the possibility of saying things representative of and useful to an outgroup without being allied with it.)
Feds under pressure to open US skies to drones
http://news.yahoo.com/s/ap/20100614/ap_on_bi_ge/us_drones_over_america
Any LessWrongers understand basic economics? This could be another great topic set for all of us. Let's kick things off with a simple question:
I'm renting an apartment for X dollars a month. My parents have a spare apartment that they rent out to someone else for Y dollars a month. If I moved into that apartment instead, would that help or hurt the country's economy as a whole? Consider the cases X>Y, X<Y, X=Y.
ETA: It's fascinating how tricky this question turned out to be. Maybe someone knowledgeable in economics could offer a simpler question that does have a definite answer?
I recommend going to an econ textbook for good questions.
Good question, not because it's hard to answer, but because of how pervasive the wrong answer is, and the implications for policy for economists getting it wrong.
If your parents prefer you being in their apartment to the forgone income, they benefit; otherwise they don't.
If you prefer being in their apartment to the alternative rental opportunities, you benefit; otherwise, you don't.
If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.
ANYTHING beyond that -- anything whatsoever -- is Goodhart-laden economist bullsh**. Things like GDP and employment and CPI were picked long ago as good correlates of general economic health. Today, they are taken to define economic health, irrespective of how well people's wants are being satisfied, which is supposed to be what we mean by a "good economy".
Today, economists equate growing GDP -- irrespective of measuring artifacts that make it deviate from what we want it to measure -- with a good economy. If the economy isn't doing well enough, well, we need more "aggregate demand" -- you see, people aren't buying enough things, which must be bad.
Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure. No, instead, we have come to define success by the number of money-based market exchanges, rather than whether people are getting the combination of work, consumption, and leisure (all broadly defined) that they want.
This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.
Now, it's true there are prisoner's dilemma-type situations where people have to cooperate and endure some pain to be better off in the aggregate. But the corresponding benefit that economists expect from this collective sacrifice is... um... more pointless work that doesn't satisfy real demand... but hey, it keeps up "aggregate demand", so it must be what a sluggish economy needs.
Are you starting to see how skewed the standard paradigm is? If people found a more efficient, mutualist way to care for their children rather than make cash payments to day care, this would be regarded as a GDP contraction -- despite most people being made better off and efficiency improving. If people work longer hours than they'd like, to produce stuff no one wants, well, that shows up as more GDP, and it's therefore "good".
How the **** did we get into this mindset?
Sorry, [/another rant].
What isn't reflected in the GDP is huge.
There's the underground economy-- I've seen claims about the size of it, but how would you check them?
There's everything people do for each other without it going through the official economy.
And there's what people do for themselves-- every time you turn over in bed, you are presumably increasing value. If you needed paid help, it would be adding to the GDP.
I don't understand where you acquired this view of economists. I am an economist, and I assure you economists don't subscribe to the "measured GDP is everything" view you attribute to them.
This is not an accurate portrayal of what Keynesians believe. The Keynesian theory of depressions and recessions is that excessive pessimism leads people to avoid investing or starting businesses, which lowers economic activity further, which promotes more pessimism, and so on.
The goal of stimulus is effectively to trick people into thinking the economy is better than it is, which then becomes a self-fulfilling prophecy; low-quality spending by government drives high-quality spending by the private sector.
If you wish to be sceptical of this story (I'm fairly dubious about it myself), then fine, but Keynesians aren't arguing what you think they're arguing.
James_K:
Aside from the standard arguments about the shortcomings of GDP, my principal objection to the way economists use it is the fact that only the nominal GDP figures are a well-defined variable. To make sensible comparisons between the GDP figures for different times and places, you must convert them to "real" figures using price indexes. These indexes, however, are impossible to define meaningfully. They are produced in practice using complicated, but ultimately arbitrary number games (and often additionally slanted due to political and bureaucratic incentives operating in the institutions whose job is to come up with them).
In fact, when economists talk about "nominal" vs. "real" figures, it's a travesty of language. The "nominal" figures are the only ones that measure an actual aspect of reality (even if one that's not particularly interesting per se), while the "real" figures are fictional quantities with only a tenuous connection to reality.
It's not so much a matter of being overconfident as it is not listing the disclaimers at every opportunity. The Laspeyres Price Index (the usual type of price index) has well understood limitations (specifically that it overestimates consumer price growth as it doesn't deal with technological improvement and substitution effects very well), but since we don't have anything better, we use it anyway.
"Real" is a term of art in economics. It's used to reflect inflation-adjusted figures, because all nominal GDP tells you is how much money is floating around, which isn't all that useful. Real GDP may be less certain, but it's more useful.
Bear in mind that everything economists use is an estimate of a sort, even nominal GDP. Believe it or not, they don't actually ask every business in the country how much they produced and / or received in income (which is why the income and expenditure methods of calculating GDP give slightly different numbers although they should give exactly the same result in theory). The reason this may not be readily apparent is that most non-technical audiences start to black out the moment you talk about calculating a price index (hell, it makes me drowsy) and technical audiences already understand the limitations.
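The substitution bias mentioned above is easy to see with toy numbers (all prices and quantities below are invented for illustration). A Laspeyres index prices today's goods at the old consumption basket, so when a price spike pushes consumers toward substitutes, it overstates the change in the cost of living relative to a Paasche index, which uses the new basket:

```python
# Toy illustration of substitution bias. A Laspeyres index holds the
# base-period basket fixed; a Paasche index uses the current basket.
# All prices and quantities below are made up.

def index(prices_then, prices_now, basket):
    """Cost of a fixed basket at the new prices, relative to the old prices."""
    cost_then = sum(p * q for p, q in zip(prices_then, basket))
    cost_now = sum(p * q for p, q in zip(prices_now, basket))
    return cost_now / cost_then

p0 = [4.0, 2.0]   # base prices: beef, chicken
p1 = [8.0, 2.0]   # beef doubles, chicken unchanged
q0 = [10, 10]     # base-period consumption
q1 = [4, 16]      # consumers substitute toward chicken

laspeyres = index(p0, p1, q0)  # 100/60, about 1.67
paasche = index(p0, p1, q1)    # 64/48, about 1.33
print(laspeyres, paasche)
```

The Laspeyres figure comes out higher because it keeps pricing the beef-heavy basket that consumers have already abandoned.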
James_K:
You're talking about the "real" figures being "less certain," as if there were some objective fact of the matter that these numbers are trying to approximate. But in reality, there is no such thing, since there exists no objective property of the real world that would make one way to calculate the necessary price index correct, and others incorrect.
The most you can say is that some price indexes would be clearly absurd (e.g. one based solely on the price of paperclips), while others look fairly reasonable (primarily those based on a large, plausible-looking basket of goods). However, even if we limit ourselves to those that look reasonable, there is still an infinite number of different procedures that can be used to calculate a price index, all of which will yield different results, and there is no objective way whatsoever to determine which one is "more correct" than others. If all the reasonable-looking procedures led to the same results, that would indeed make these results meaningful, but this is not the case in reality.
Or to put it differently, an "objective" price index is a logical impossibility, for at least two reasons. First, there is no objective way to determine the relevant basket of goods, and different choices yield wildly different numbers. Second, the set of goods and services available in different times and places is always different, and perfect equivalents are normally not available, so different baskets must be used. Therefore, comparisons of "real" variables invariably involve arbitrary and unwarranted assumptions about the relative values of different things to different people. Again, of course, different arbitrary choices of methodology yield different numbers here.
(By the way, I find it funny how neoclassical economists, who hold it as a fundamental axiom that value is subjective, unquestioningly use price indexes without stopping to think that the basic assumption behind the very notion of a price index is that value is objective and measurable after all.)
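As a concrete (and entirely invented) illustration of the basket-dependence point: deflating the same 10% nominal growth by two baskets that both look "reasonable" yields real-growth figures of roughly 0.7% and 4.8%.

```python
# Toy illustration: the choice of basket weights changes the deflator,
# and hence the "real" growth figure, even with identical raw data.
# All numbers are invented.

def price_index(relatives, weights):
    """Weighted average of price relatives (new price / old price)."""
    return sum(w * r for w, r in zip(weights, relatives))

relatives = [1.12, 0.98]   # rent up 12%, electronics down 2%

basket_a = [0.8, 0.2]      # weights a renter might consider reasonable
basket_b = [0.5, 0.5]      # equal weights, also defensible

nominal_growth = 1.10      # nominal GDP up 10%

for basket in (basket_a, basket_b):
    deflator = price_index(relatives, basket)
    real_growth = nominal_growth / deflator - 1
    print(f"deflator {deflator:.3f} -> real growth {real_growth:.2%}")
```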
Very true. A good general measure in human economic systems should NOT merely look at the ease of availability of finished paperclips. It should also include, in the "basket", such things as extrudable metal, equipment for detecting and extracting metal, metallic wire extrusion machines, equipment for maintaining wire extrusion machines, bend radius blocks, and so forth.
Thank you for pointing this out; you are a relatively good human.
That is a very poor inference on their part.
The basket used is based on a representation of what people are currently consuming. This means we don't have to second-guess people's preferences. Unique goods like houses pose a problem, but there's not really anything we can do about that, so the normal process is to take an average of existing houses.
Which is a well understood problem. Every economist knows this, but what would you have us do? It is necessary to inflation-adjust certain statistics, and if the choice is between doing it badly and not doing it at all, then we'll do it badly. Just because we don't preface every sentence with this fact doesn't mean we're not aware of it.
Just to avoid confusion among readers, I want to distance myself from part of Vladimir_M's position. While I agree with many of the points he's made, I don't go so far as to say that CPI is a fundamentally flawed concept, and I agree with you that we have to pick some measure and go with it; and that the use of it does not require its caveats to be restated each time.
However, I do think that, for the specific purpose that it is used, it is horribly flawed in noticeable, fixable ways, and that economists don't make these changes because of lost purpose syndrome -- they get so focused on this or that variable that they're disconnected from the fundamental it's supposed to represent. They're doing the economic equivalent of suggesting to generals that their living soldiers be burned to ashes so that the media will stop broadcasting images of dead soldier bodies being brought home.
I wouldn't be in a good position to determine if it's lost purpose syndrome since I'm an insider, but I would suggest that path dependence has a lot to do with it.
Price indices are produced by governments, who are notoriously averse to change. And what's worse, the broad methodology is dictated by international standards, so if an economist or some other intelligent person comes up with a better price index, they have to convince the body of economists and statisticians that they have a good idea, and then convince the majority of OECD countries (at a minimum) that their method is worth the considerable effort of changing every country's methodology.
That's a high hurdle to cross.
On my blog I suggested using insulin prices as a good proxy for inflation. That should be pretty easy for economists to find, even the historical data. One economist could find the historical data for one country and use it as a competing measure. No collective action problem to solve there! Just a research paper to present.
(Though I can't find it on google searches, but economists should be able to get access to the appropriate databases.)
Would error bars be a bad thing?
Economists could calculate error bars that would say how closely the calculated aggregate figures approximate their exact values according to definitions. This is normally not done, and as Morgenstern noted in the book discussed elsewhere in the thread, the results would be quite embarrassing, since they'd show that economists regularly talk about changes in the second, third, or even fourth significant digit of numbers whose error bars are well into double-digit percentages.
However, when it comes to the more essential point I've been making, error bars wouldn't make any sense, since the problem is that there is no true value out there in the first place, just different arbitrary conventions that yield different results, neither of which is more "true" than the others.
There's an old joke: "How can you tell macroeconomists have a sense of humour? They use decimal points." I'll admit spurious precision is a problem with quite a bit of economic reporting. Remember that these statistics are produced by governments, not academics, and politicians can have trouble grokking error bars.
Actually, that's not really the case. There is an ideal, it's just you can't do it. If you knew everyone's preferences and information and endowments of income, you could work out how people's consumption would change as real incomes and relative prices changed so you could figure out what the right basket of goods is to use for the index at every point in time (the right bundle is whatever bundle consumers would actually pick in a given situation).
But in practice you can't get the information you'd need to do this, and that information would be constantly changing anyway. In practice what statistical agencies do is develop a basket of goods based on current consumption and review it every decade or so. This means the index overestimates inflation (the estimates I've seen put it at about 1 percentage point per year) because when prices rise, people change their consumption patterns and we can't predict how until it's already happened.
This is a flawed procedure, but it's not arbitrary; it's an honest effort to approximate the ideal price index as well as we can, given the resources at our disposal.
Here's a crude metric I use for gauging the relative goodness of societies as places to live: Immigration vs. emigration.
It's obviously fuzzy-- you can't get exact numbers on illegal migration, and the barriers (physical, legal, and cultural) to relocation matter, but have to be estimated. So does the possibility that one country may be better than another, but a third may be enough better than either of them to get the immigrants.
For example, the evidence suggests that the EU and the US are about equally good places to live.
I don't think that's a good metric. Societies that aren't open to mass immigration can have negligible numbers of immigrants regardless of the quality of life their members enjoy. Japan is the prime example.
Moreover, in the very worst places, emigration can be negligible because people are too poor to pay for the ticket to move anywhere, or are prohibited from leaving.
But "given perfect knowledge of all market prices and individual preferences at every time and place, as well as unlimited computing power", you could predict how people would choose if they were not faced with legal and moving-cost barriers - e.g. imagine a philanthropist willing to pay the moving costs. So your objection to this metric seems to be a surmountable one, in principle, assuming perfect knowledge etc. The main remaining barrier to migration may be sentimental attachment - but given perfect knowledge etc. one could predict how the choices would change without that remaining barrier.
Applying this metric to Europa versus Earth, presumably Europans would choose to stay on Europa and humans would choose to stay on Earth even with legal, moving-cost, and sentimental barriers removed, indeed both would pay a great deal to avoid being moved.
In contrast to Europans versus humans, humans-of-one-epoch are not very different from humans-of-another-epoch.
Excellent point -- although I would pay a good deal to move to Europa, given a few days worth of air and heat.
A fair point, though I think societies like that are pretty rare. Any other notable examples?
Off the top of my head, I know that Finland had negligible levels of immigration until a few years ago. Several Eastern European post-Communist countries are pretty decent places to live these days (I have in mind primarily the Czech Republic), but still have no mass immigration. As far as I know, the same holds for South Korea.
Regarding emigration, the prime examples were the communist countries, which strictly prohibited emigration for the most part (though, rather than looking at the numbers of emigrants, we could look at the efforts and risks many people were ready to undertake to escape, which often included dodging snipers and crawling through minefields).
If some price indexes are "clearly absurd", then they apparently have some value to us - for if they were valueless, then why call any particular one "absurd"? If they yield different results, then so be it - let us simply be open about how the different indexes are defined and what result they yield. The absence of a canonical standard will of course not be useful to people primarily interested in such things as pissing contests between nations, but the results should be useful nonetheless.
We commonly talk about tradeoffs, e.g., "if I do this then I will benefit in one way but lose in another". We can do the same thing with price indexes. "In this respect things have improved but in this other respect things have gotten worse."
Constant:
Sure, but such an approach would deny the validity of all these "real" economic variables that are based on a scalar price index. In particular, it would definitely mean discarding the entire concept of "real GDP" as incoherent. This would mean conceding the criticisms I've been expounding in this thread, and admitting the fundamental unsoundness of much of what passes for science in the field of macroeconomics.
Moreover, disentangling the complete truth about what various price indexes reveal and what they hide is an enormously complex topic that requires lengthy, controversial, and subjective judgments. This is inevitable because, after all, value is subjective.
Take for example two identically built houses located in two places that greatly differ in various aspects of the natural environment, society, culture, technological development, economic infrastructure, and political system. (It can also be the same place in two different time periods.) It makes no sense to treat them as equivalent objects of identical value; you'd have a hard time finding even a single individual who would be indifferent between the two. Now, if you want to discuss what exactly has been neglected by treating them as identical (or reducing their differences to a single universally applicable scalar factor) for the purposes of constructing a price index, you can easily end up writing an enormous treatise that touches on every aspect in which these places differ.