If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
I recently remarked that the phrase "that doesn't seem obvious to me" is good at getting people to reassess their stated beliefs without antagonising them into a defensive position, and as such it was on my list of "magic phrases". More recently I've been using "can you give a specific example?" for the same purpose.
What expressions or turns of phrase do you find particularly useful in encouraging others, or yourself, to think to a higher standard?
This is not quite what you want, but if you are a grad student giving a talk and a senior person prefaces her question to you with "I am confused about...", you are likely talking nonsense and she is too polite to tell you straight up.
Which reminds me of my born-again Christian mother - evangelicals bend over backwards to avoid dissing each other, so if you call someone "interesting" in a certain tone of voice it means "dangerous lunatic" and people take due warning. (May vary, this is in Perth, Australia.)
Depersonalizing the argument is something I've had great success with. Steelmanning someone's argument directly is insulting, but steelmanning it by stating that it is similar to the position of high-status person X, who is opposed by the viewpoint of high-status person Y, allows you to discuss otherwise inflammatory ideas dispassionately.
That one seems much more effective after one has absorbed certain memes. In contrast, the ones given by sixes_and_sevens seem to work in a more general setting.
So, everyone agrees that commuting is terrible for the happiness of the commuter. One thing I've struggled to find much evidence about is how much the method of commute matters. If I get to commute to work in a chauffeur-driven limo, is that better than driving myself? If I live a 10-minute drive (45-minute walk) from work, am I better off walking? How does public transport compare to driving?
I suspect the majority of these studies are done in US cities, so they mostly cover people who drive to work (with maybe a minority who use transit). I've come across a couple of articles which suggest cycling > driving here, and conflicting views on whether driving > public transit here, but they're just individual studies. I was wondering if there's much more known about this, and figured that if there is, someone here probably knows it. If no one does, I might get round to a more thorough perusal of the literature myself, now that I've publicly announced that the subject interests me.
For me an important aspect is the feeling of control. 15 minutes of walking is more pleasant than 10 minutes of waiting for a bus and 5 minutes of travelling by bus.
Every now and then, I decide that I don't have the patience to wait 10 minutes for a bus that would take me to where I'm going in 10 minutes. So I walk, which takes me an hour.
Not in general, but I recognize your example. Walking is pleasant and active and allows me to think sustained thoughts, so it makes time 'pass' quickly. Riding the subway, by contrast, is passive and stressful and makes me think many scattered thoughts in a short time, so it makes time 'pass' slowly, making the ride seem longer. Also, if you walk somewhere in 15 minutes, the trip really takes about 15 minutes; but if you ride the subway for 15 minutes, it probably takes more like half an hour from when you leave home to when you get to your goal.
I've just noticed that the Future of Humanity Institute stopped receiving direct funding from the Oxford Martin School in 2012, while "new donors continue to support its work." http://www.oxfordmartin.ox.ac.uk/institutes/future_humanity
Does that mean it's receiving no funding at all from Oxford University anymore? I'm surprised that there was no mention of that in November here: http://lesswrong.com/lw/faa/room_for_more_funding_at_the_future_of_humanity/. Is the FHI significantly worse off funding wise than it was in previous years?
Posting here rather than the 'What are you working on' thread.
3 weeks ago I got two magnets implanted in my fingers. For those who haven't heard of this before, what happens is that moving electromagnetic fields (read: everything AC) cause the magnets in your fingertips to vibrate. Over time, as nerves in the area heal, your brain learns to interpret these vibrations as varying field strengths. Essentially, you gain a sixth sense of being able to detect magnetic fields and, by extension, electricity. It's a $350 superpower.
The guy who put them in my finger told me it will take about six months before I get full sensitivity. So, what I'm doing at the moment is research into this and quantifying my sensitivity as it develops over time. The methodology I'm using is wrapping a loop of copper wire around my fingers and hooking it up to a headphone jack, which I will then plug into my computer and send randomized voltage levels through. By writing a program so I can do this blind, I should be able to get a fairly accurate picture of where my sensitivity cutoff level is.
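The test script itself would be something like this (a rough sketch only; Python with numpy and sounddevice is just one way to do it, and the tone frequency and amplitude levels below are placeholders rather than my actual settings):

```python
# Minimal sketch of a blind sensitivity test: drive the wire coil from the
# headphone jack at randomized amplitudes (including silent catch trials)
# and record whether each trial was felt.
import random
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100   # audio samples per second
TONE_HZ = 100         # AC frequency driving the coil (placeholder)
DURATION = 2.0        # seconds per trial
LEVELS = [0.0, 0.05, 0.1, 0.2, 0.4, 0.8]  # relative amplitudes; 0.0 = catch trial

def play_tone(amplitude):
    """Send a sine wave at the given amplitude out the headphone jack."""
    t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    sd.play(amplitude * np.sin(2 * np.pi * TONE_HZ * t), SAMPLE_RATE)
    sd.wait()

def run_session(trials_per_level=10):
    trials = LEVELS * trials_per_level
    random.shuffle(trials)               # subject never knows the level: blind
    results = []
    for amplitude in trials:
        input("Press Enter to start the next trial...")
        play_tone(amplitude)
        felt = input("Did you feel it? (y/n): ").strip().lower() == "y"
        results.append((amplitude, felt))
    # The cutoff is roughly where the detection rate rises above the
    # false-positive rate on the 0.0 catch trials.
    for level in LEVELS:
        hits = [felt for amp, felt in results if amp == level]
        print(f"amplitude {level:.2f}: detected {sum(hits)}/{len(hits)}")

if __name__ == "__main__":
    run_session()
```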
One thing I'm stuck on is how to calculate the field strength acting on my magnets. Getting the B field for a solen...
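For reference, the textbook cases are the ideal solenoid and the centre of a flat N-turn coil (assuming one of those models roughly fits a coil wound around a finger):

$$B_{\text{solenoid}} = \mu_0 n I, \qquad B_{\text{loop centre}} = \frac{\mu_0 N I}{2r},$$

where $n$ is turns per unit length, $N$ the number of turns, $I$ the current, and $r$ the coil radius.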
"superpower" is overstating it. Picking up paperclips is neat and being able to feel metal detectors as you walk through them or tell if things are ferrous is also fun but it's more of just a "power" than a superpower. It also has the downside of you needing to be careful around hard-drives and other strong magnets. On net I'm happy I got them but it's not amazing.
It's all fun and games until you need to get MRI and your fingers burst into flames.
Then it's just fun.
There's something that happens to me with alarming frequency, something I almost never see referenced (or don't remember seeing), and thus I don't know its proper name. I'm talking about the effect where I'm reading a text (any kind of text: textbook, blog, forum post) and suddenly I discover that two minutes have passed and I've advanced six lines in the text, but I just have no idea what I read. It's like a time black hole, and now I have to re-read it.
Sometimes it also happens in a less alarming way, but it's still bad: for instance, when I'm reading something that is deliberately teaching me an important piece of knowledge (as in, I already know that whatever is in this text IS important), I happen to go through it without questioning anything, just "accepting" it, and a few moments later, when I'm further ahead, it suddenly comes down on me: "Wait... what, did he just say 2 pages ago that thermal radiation does NOT need matter to propagate?" and I again have to go back and check that I was not crazy.
While I don't know the name of this effect, I have asked some acquaintances of mine about it; some agreed that they have it, others didn't. I would very much like to eliminate this flaw. Does anybody know what I could do to train myself not to do it, or at least the correct name so I can research it further?
Hey komponisto (and others interested in music) -- if you haven't already seen Vi Hart's latest offering, Twelve Tones, you might want to take a look. Even though it's 30 minutes long.
(I don't expect komponisto, or others at his level, will learn anything from it. But it's a lot of fun.)
A Big +1 to whoever modified the code to put pink borders around comments that are new since the last time I logged in and looked at an article. Thanks!
I noticed a strategy that many people seem to use; for lack of a better name, I will call it "updating the applause lights". This is how it works:
You have something that you like and it is part of your identity. Let's say that you are a Green. You are proud that Greens are everything good, noble, and true; unlike those stupid evil Blues.
Gradually you discover that the sky is blue. First you deny it, but at some moment you can't resist the overwhelming evidence. But at that moment of history, there are many Green beliefs, and the belief that the sky is green is only one of them, although historically the central one. So you downplay it and say: "All Green beliefs are true, but some of them are meant metaphorically, not literally, such as the belief that the sky is green. This means that we are right, and the Blues are wrong; just as we always said."
Someone asks: "But didn't Greens say the sky is green? Because that seems false to me." And you say: "No, that's a strawman! You obviously don't understand Greens; you are full of prejudice. You should be ashamed of yourself." That someone gives an example of a Green that literally believed the sky i...
My strategy is to avoid conversations of this form entirely by default. Most Greens do not need to be shown that the belief system they claim to have is flawed, and neither do most Blues. Pay attention to what people do, not what they say. Are they good people? Are they important enough that bad epistemology on their part directly has large negative effects on the world? If the answers to these questions are "yes" and "no" respectively, then who cares what belief system they claim to have?
Yes, like moving the goalposts, this is an annoying and dishonest rhetorical move.
Yes, even within the Green movement, some people may be confused and misunderstand our beliefs, and our beliefs have evolved over time, but trust me that being Green is not about believing that the sky is literally green.
Suppose some Green says:
Yes, intellectual precursors to the current Green movement stated that the sky was literally green. And they were less wrong, on the whole, than people who believed that the sky was blue. But the modern intellectual Green rejects that wave of Green-ish thought, and in part identifies the mistake as that wave of Greens being blue-ish in a way. In short, the Green movement of a previous generation made a mistake that the current wave of Greens rejects. Current Greens think we are less wrong than the previous wave of Greens.
Problematic, or reasonable non-mindkiller statement (attacking one's potential allies edition)?
How much of that intuition is driven by the belief that Bluism is correct? If we change the labels to Purple (some Blue) and Orange (no Blue), does the intuition change?
What if it really was like that?
If you read any amount of history, you will discover that people of various times and places have matter-of-factly believed things that today we find incredible (in the original sense of “not credible”). I have found, however, that one of the most interesting questions one can ask is “What if it really was like that?”
... What I’m encouraging is a variant of the exercise I’ve previously called “killing the Buddha”. Sometimes the consequences of supposing that our ancestors reported their experience of the world faithfully, and that their customs were rational adaptations to that experience, lead us to conclusions we find preposterous or uncomfortable. I think that the more uncomfortable we get, the more important it becomes to ask ourselves “What if it really was like that?”
The true meaning of moral panics
...In my experience, moral panics are almost never about what they claim to be about. I am just (barely) old enough to remember the tail end of the period (around 1965) when conservative panic about drugs and rock music was actually rooted in a not very-thinly-veiled fear of the corrupting influence of non-whites on pure American children. In ret
Sure. Also see the recent follow-ups to the Stanford marshmallow experiment. It sure looks like some of what was once considered to be innate lack of self-restraint may rather be acquired by living in an environment where others are unreliable, promises are broken, etc.
{EDITED to clarify, as kinda suggested by wedrifid, some highly relevant context.}
This comment by JoshuaZ was, when I saw it, voted down to -3, despite the fact that it
A number of JoshuaZ's other recent comments there have received similar treatment. It seems a reasonable conclusion (though maybe there are other explanations?) that multiple LW accounts have, within a short period of time, been downvoting perfectly decent comments by JoshuaZ. As per other discussions in that thread [EDITED to add: see next paragraph for more specifics], this seems to have been provoked by his making some "pro-feminist" remarks in the discussions of that topic brought up by recent events in HPMOR.
{EDITED to add...} Highly relevant context: Elsewhere in the thread JoshuaZ reports that, apparently in response to his comments in that discussion, he has had a large number of comments on other topics downvoted in rapid succession. This, to my mind, greatly raises the probability that what's goi...
There is now fanfic about Eliezer in the Optimalverse. I'm not entirely sure what to make of it.
In response to this post: http://www.overcomingbias.com/2013/02/which-biases-matter-most-lets-prioritise-the-worst.html
Robert Wiblin got the following data (treated by a dear friend of mine):
89 Confirmation bias
54 Bandwagon effect
50 Fundamental attribution error
44 Status quo bias
39 Availability heuristic
38 Neglect of probability
37 Bias blind spot
36 Planning fallacy
36 Ingroup bias
35 Hyperbolic discounting
29 Hindsight bias
29 Halo effect
28 Zero-risk bias
28 Illusion of control
28 Clustering illusion
26 Omission bias
25 Outcome bias
25 Neglect of prior base rates effect
25 Just-world phenomenon
25 Anchoring
24 System justification
24 Dunning-Kruger effect
23 Projection bias
23 Mere exposure effect
23 Loss aversion
22 Overconfidence effect
19 Optimism bias
19 Actor-observer bias
18 Self-serving bias
17 Texas sharpshooter fallacy
17 Recency effect
17 Outgroup homogeneity bias
17 Gambler's fallacy
17 Extreme aversion
16 Irrational escalation
15 Illusory correlation
15 Congruence bias
14 Self-fulfilling prophecy
13 Wobegon effect
13 Selective perception
13 Impact bias
13 Choice-supportive bias
13 Attentional bias
12 Observer-expectancy effect
12 False consensus effect
12 Endowment effect
11 Rosy retrospection
11 Information bias
11...
How do you correct your mistakes?
For example, I recently found out I did something wrong at a conference. In my bio, under areas of expertise I should have written what I can teach about, and under areas of interest what I want to be taught about. This seems to maximize value for me. How do I keep that mistake from happening in the future? I don't know when the next conference will happen. Do I write it in Anki and memorize it as a failure mode?
More generally, when you recognize a failure mode in yourself how do you constrain your future self so that it doesn't repeat this failure mode? How do you proceduralize and install the solution?
I've been thinking about tacit knowledge recently.
A very concrete example of tacit knowledge that I rub up against on a regular basis is a basic understanding of file types. In the past I have needed to explain to educated and ostensibly computer-literate professionals under the age of 40 that a jpeg is an image, and a PDF is a document, and they're different kinds of entities that aren't freely interchangeable. It's difficult for me to imagine how someone could not know this. I don't recall ever having to learn it. It seems intuitively obvious. (Uh-oh!)
So I wonder if there aren't some massive gains to be had from understanding tacit knowledge more than I do. Some applications:
What do you think or know about tacit knowledge, LessWrong? Tell me. It might not be obvious.
a jpeg is an image, and a PDF is a document
Sir, you are wrong on the internet. A JPEG is a bitmap (formally, pixmap) image. A PDF is a vector image.
The PDF has additional structure which can support such functionality as copying text, hyperlinks, etc, but the primary function of a PDF is to represent a specific image (particularly, the same image whether displayed on screen or on paper).
Certainly a PDF is more "document"-ish than a JPEG, but there are also "document" qualities a PDF is notably lacking, such as being able to edit it and have the text reflow appropriately (which comes from having a structure of higher-level objects like "this text is in two columns with margins like so" and "this is a figure with caption" and an algorithm to do the layout). To say that there is a sharp line and that PDF belongs on the "document" side is, in my opinion, a poor use of words.
(Yes, this isn't the question you asked.)
That isn't the standard use of "tacit knowledge." At least it doesn't match the definition. Tacit knowledge is supposed to be about things that are hard to communicate. The standard examples are physical activities.
Maybe knowing when to pay attention to file extensions is tacit knowledge, but the list of what they mean is easy to write down, even if it is a very long list. Knowing that it is valuable to know about them is probably the key thing these people were missing, or perhaps they failed to accurately assess the detail and correctness of their beliefs about file types.
How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:
incidentally name-drop my local rationalist meetup group (i.e. "I am going to a rationalists' meetup on Sunday")
link to lesswrong articles whenever relevant (rarely)
be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)
when asked, motivate rationality by indicating a whole bunch of cognitiv
There's a scam I've heard of:
Mallet, a notorious swindler, picks 10 stocks and generates all 2^10 = 1024 possible combinations of "stock will go up" vs. "stock will go down" predictions. He then gives one prediction sheet to each of 1024 different investors. One of the investors receives a perfect, 10 out of 10 prediction sheet and is (Mallet hopes) convinced Mallet is a stock-picking genius.
Since it's related to the Texas sharpshooter fallacy, I'm tempted to call this the Texas stock-picking scam, but I was wondering if anyone knew a "proper" name for it, and/or any analysis of the scam.
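The arithmetic is easy to check with a few lines of code (plain Python, just to make the mechanics concrete):

```python
# Sketch of Mallet's scheme: every possible up/down prediction sheet for 10
# stocks goes to a different investor, so exactly one sheet must score 10/10.
import itertools
import random

N_STOCKS = 10
sheets = list(itertools.product(["up", "down"], repeat=N_STOCKS))
print(len(sheets))  # 2**10 = 1024 distinct prediction sheets

# Whatever the market actually does, one investor's sheet matches it perfectly.
actual = [random.choice(["up", "down"]) for _ in range(N_STOCKS)]
perfect = [s for s in sheets if list(s) == actual]
print(len(perfect))  # always 1
```

In the usual telling the trick is iterated: after each round the swindler keeps mailing only the recipients whose sheets are still perfect, which is why it tends to come up in discussions of survivorship bias.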
Miles Brundage recently pointed me to these quotes from Ed Fredkin, recorded in McCorduck (1979).
On speed of thought:
...Say there are two artificial intelligences... When these machines want to talk to each other, my guess is they'll get right next to each other so they can have very wide-band communication. You might recognize them as Sam and George, and you'll walk up and knock on Sam and say, "Hi, Sam. What are you talking about?" What Sam will undoubtedly answer is, "Things in general," because there'll be no way for him to tell you.
Related to "magic phrases", what expressions or turns of phrase work for you, but don't work well for a typical real-world audience?
I tend to use "it's not magic" as shorthand for "it's not some inscrutable black-boxed phenomenon that defies analysis and reasoning". Moreover, I seem to have internalised this as a reaction whenever I hear someone describing something as if it were such a phenomenon. Using the phrase generally doesn't go down well, though.
On why playing hard to get is a bad idea, and why a lot of women do it.
This was something I was meaning to post about in some of the gender discussions, but I wasn't sure that a significant proportion of men were still put off by women who were direct about wanting sex with them. Apparently, though, it's still pretty common.
Oh neato. The class notes for a recent class by Minsky link to Intelligence Explosion: Evidence and Import under "Suggested excellent reading."
It would be good to have a single central place for all CFAR workshop reviews, good and bad. Here are two:
I've been talking to some friends who have some rather odd spiritual (in the sense of disorganised religion) beliefs. Odd because it's a combination of modern philosophy LW would be familiar with (acausal communication between worlds of Tegmark's Level IV multiverse), ancient religion, and general weirdness. I have trouble putting my finger on exactly what is wrong with this reasoning, although I'm fairly sure there is a flaw, in the same way I'm quite sure I'm not a Boltzmann brain but can find it hard to articulate why. So, if anyone is interested, here is th...
Any LWers in Seattle fancy a coffee?
I'm at UW until the end of the month, so would prefer cafes within walking distance of the university.
Is there a way of making precise, and proving, something like this?
For any noisy dynamical system describable with differential equations, observed through a noisy digitised channel, there exists a program which produces an output stream indistinguishable from that of the system.
It would be good to add some notion of input too.
There are several issues with making this precise and avoiding certain problems, but I suspect all of this is already solved so it's probably not worth me going into detail here. In the unlikely event this isn't already a solved problem, I could have a go at precisely stating and proving this.
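That said, one very rough sketch of the shape such a statement might take (all the notation here is just one guess at a formalization, not a worked-out claim): model the system as a stochastic differential equation

$$dX_t = f(X_t)\,dt + \sigma(X_t)\,dW_t,$$

observed at times $t_n = n\Delta$ through a quantizer $Q$ with observation noise,

$$Y_n = Q\!\left(X_{t_n} + \varepsilon_n\right), \qquad \varepsilon_n \ \text{i.i.d.}$$

The conjecture would then be that there exists a probabilistic Turing machine whose output sequence $\hat{Y}_1, \hat{Y}_2, \dots$ is equal in distribution to (or at least computationally indistinguishable from) $Y_1, Y_2, \dots$; input could be handled by letting both the drift $f$ and the machine read a common control sequence $u_n$.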
I was reading http://slatestarcodex.com/ and I found myself surprised again by Yvain persuasively steelmanning an argument that he doesn't himself believe in, at http://slatestarcodex.com/2013/06/22/social-psychology-is-a-flamethrower/
It's particularly ironic because in that very post, he mentions:
I can’t find the link for this, but negatively phrased information can sometimes reinforce the positive version of that information.
Which seems to be what I am falling for. He outright says:
...I think some of the arguments below will be completely correct, oth
you should probably update towards "being convincing to me is not sufficient evidence of truth." Everything got easier once I stopped believing I was competent to judge claims about X by people who investigate X professionally. I find it better to investigate their epistemic hygiene rather than their claims. If their epistemic hygiene seems good (can be domain specific) I update towards their conclusions on X.
A married couple has asked me to donate sperm and to impregnate the wife. They would then raise the child as their own, with no help from me. Would it be ethical or unethical for me to give them sperm? In particular, am I doing a service or a disservice to the child I would create?
[pollid:534]
Assuming you don't have any particular reason to expect that this couple will be abusive, it's more ethical the better your genes are. If you have high IQ or other desirable heritable traits, great. (It seems plausible to anticipate that high IQ will become even more closely correlated with success in the future than it is now.) If you have mutations that might cause horrible genetic disorders, less great.
The child is wanted, so if they don't actually neglect it, it'll grow up fine.
Note that if you donate sperm without going through the appropriate regulatory hoops as a sperm donor (which vary per country), you may be liable for child support.
I am surprised no one else has brought up the LW party line: consequentialism.
What is the alternative?
What is the consequence of your decision?
Probably the alternative is that someone else donates sperm. Either way, they raise a child that is not the husband's. If creating such a life is terrible (which I don't believe), is it worse that it is your child than someone else's? Consequentialism rejects the idea that you are complicit in one circumstance and not the other.
There are other options, like trying to convince them not to have children, or to get a donation from the husband's relatives, but they are unlikely to work.
If the choice is between your sperm or another's, then, as Qiaochu says, the main difference to the child is genes of the donor. Also, your decision might affect your relationship with the couple.
What can possibly be unethical about it? You are the only one who is vulnerable, since you might be legally on the hook for child support.
It creates a child who will not be raised by their biological father.
What's the specific problem this would cause?
This is a poll for people who have ever made an attempt at obtaining a career in programming or system administration or something like that. I'm interested in your response if you've made any attempt of this sort, whether you've succeeded, changed your mind, etc.
ETA: Oops, I forgot an "I just want to see the results" option. If you vote randomly to see them, I'd appreciate it if you do not vote anonymously, and leave a comment reply.
At what age have you learned to touch-type? [pollid:530]
How did you come to learn to touch-type? [pollid:531]
How d...
I need help finding a particular thread on LW, it was a discussion of either utility or ethics, and it utilized the symbols Q and Q* extensively, as well as talking about Lost Purposes. My inability to locate it is causing me brain hurt.
I'm looking for good, free, online resources on SQL and/or VBA programming, to aid in the hunt for my next job. Does anyone have any useful links? As well, any suggestions on a good program to use for SQL?
Steven Landsburg at TBQ has posted a seemingly elementary probability puzzle that has us all scratching our heads! I'll be ignominiously giving Eliezer's explanation of Bayes' Theorem another read, and in the meantime I invite all you Bayes-warriors to come and leave your comments.
Meta question. Is it better to correct typos and minor, verifiable factual errors (e.g. a date being a year off) in a post in the post's comment thread or a PM to the author?
Does anyone have any information/links/recommendations regarding how to reduce computer-related eye strain? Specifically any info on computer glasses? I was looking at Gunnar but I can't find enough reliable evidence to justify buying them and I would be surprised if there are no better options.
Fwiw, I went to an optician today who deemed my vision good; however, I spend large amounts of time in front of my screens and my eyes are tired a large fraction of the time.
So I recently released a major update to my commercial game, and announced that I would be letting people have it for donations at half the price for the remainder of July 2013. I suspect I did not make that last part prominent enough in the post to the forum where most of my audience originates, since, of the purchases made since then, only one took the half-price option--the rest all paid the normal price. The post included three links: the game's page, my audio games (general) page, and the front page of my website, I believe in that order. (That is also in a...
I'm trying to find a getting-a-programming-job LW article I remember reading recently for a friend. I thought it was posted within the last few months, but searching through Discussion I didn't find it.
The post detailed one LWer's quest to find a programming job, including how he'd been very thorough preparing for interviews and applying to many positions over a matter of months, finally getting a few offers, playing them off each other, and eventually, I believe, accepting a position at Google.
Anyone know the article I'm talking about?
I find myself a non-altruist in the sense that while I care about the well-being and happiness of many proximate people, I don't care about the good of people unqualifiedly. What am I getting wrong? If asked to justify unqualified altruism, what would you say?
Some folks at the Effective Altruists Facebook group suggested that it might be useful to have a map of EAs. If you would like to be listed in such a map, please fill this form. The data collected will be used to auto-populate this Google Map. (The map is currently unlisted: it can be seen only by those with access to the corresponding URL.)
Anyone care to speculate as to when/at what point bitcoin price is likely to stop dropping?
One of my best friends is a very high suicide risk. Has anybody dealt with this kind of situation; specifically trying to convince the friend to try psychiatry? I'll be happy to talk details, but I'm not sure the Open Thread is the best medium.
Just this: If your friend starts saying that their problems are solved and everything is going to be okay, become more careful. A sudden improvement in mood followed by a return to the original level is more dangerous than the original situation, because at that moment the person has a new belief: "every improvement is only temporary", which makes them less likely to act reasonably.
I've been there, and this is one of those situations that requires professional help, not random advice from the Internet. If you're in the U.S., call the National Suicide Prevention Lifeline at 1-800-273-8255 and explain the situation. They can assist you further. If you're in some other country, just Google for your local equivalent.
I was browsing through the West L.A. meetup discussion article and found it really fascinating. It will be about humans generating random number strings and the many applications where this would be useful. It's too bad I can't attend. Off the top of my head, I feel like I can only come up with one digit randomly by looking at my watch; I'm not sure how I would get more than that. Does anyone have a decent way to generate random numbers on the spot without a computer?
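For what it's worth, one standard approach is rejection sampling from coin flips (or any binary source you trust, like odd/even seconds on the watch): take the bits four at a time to get a number from 0 to 15, and redo the group whenever it comes out 10 or above, which leaves a uniform decimal digit. A toy check of that claim, in Python purely for illustration:

```python
# Rejection sampling: turn fair coin flips into uniform decimal digits.
# Four flips encode a number 0..15; discard 10..15 and flip again.
import random

def coin():
    """Stand-in for a real coin flip (or watch-seconds parity, etc.)."""
    return random.randint(0, 1)

def random_digit():
    while True:
        n = 8 * coin() + 4 * coin() + 2 * coin() + coin()
        if n < 10:          # accept 0..9, reject 10..15 and retry
            return n

print([random_digit() for _ in range(20)])
```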
Seeking Educational Advice...
I imagine some LW users have these questions, or can answer them. Sorry if this isn't the right place (but please point me to the right place!).
I’m thinking of returning to university to study evolution/biology, the mind, IT, science-type stuff.
Are there any legitimate ways (I mean actually achievable: you have first-hand experience, or can point to concrete resources) to attend an adequate university for no or low cost?
How can I measure my aptitude for various fields (for cheap/free)? (I did an undergrad degree in educatio...
I personally regard this entire subject as a memetic hazard, and will rot13 accordingly.
Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:
... gung gurer vf bayl bar crefba va gur havirefr, lbh, naq rirelbar lbh frr nebhaq lbh vf ernyyl whfg lbh.
Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.
V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhn...
Article discussing how the cost of copper has gone up over time as we've used more and more of the easily accessible, high percentage ores. This is another example of a resource which may contribute to Great Filter considerations (along with fossil fuels). As pointed out in the article, unlike oil, copper doesn't have many good replacements for a lot of what it is used for.
That said, I suspect that this is not a major aspect of the Filter. If the cost goes up, the main impact would be on consumer goods which would become more expensive. That's unpleasant but not a Filter event. It also isn't relevant from the standpoint of resources necessary to bootstrap us back up to the current tech level in event of a major disaster since there will be all sorts of nearly pure copper that could be scavenged from the remains of civilization.
This may however be a strong argument for either finding new copper replacements (possibly novel alloys), or for the development of asteroid mining which will help out with a lot of different metals.
Thoughts? Does this analysis seem accurate?