Reflective oracles and superrationality
This grew out of an exchange with Jessica Taylor during MIRI's recent visit to the FHI. I'm still getting a feel for the fixed-point approach; let me know of any errors.
People at MIRI have recently proved that agents can use reflective oracles to reason about other agents (including other oracle-equipped agents) and about themselves, and consistently reach Nash equilibria. But can we do better than that?
To recap, a reflective oracle is a machine O such that:
- P(A()=1)>p implies O(A,p)=1
- P(A()=0)>1-p implies O(A,p)=0
This works even if A() includes a call to the oracle within its own code. Since all the algorithms used here clearly terminate, we also get the converse implications (e.g. O(A,p)=0 implies P(A()=0) ≥ 1-p). And given any δ, we can, with order log(1/δ) questions, establish the probability of A() to within δ. Thus we will write O(A()==a)=p to mean that O(A()==a,(n-1)δ/2)=1 and O(A()==a,(n+1)δ/2)=0 for some integer n with (n-1)δ/2 < p < (n+1)δ/2.
Note also that O can be used to generate probabilistic output (to within δ), so outputting specific mixed strategies is possible.
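To illustrate the log(1/δ) claim, here is a minimal sketch in Python. The mock oracle is a stand-in of my own (a true reflective oracle is not computable), and estimate_probability is a hypothetical helper, not anything from the MIRI results:

```python
import math

def make_mock_oracle(true_prob):
    # Stand-in for a reflective oracle queried about a terminating machine A:
    # O(p) = 1 when P(A()=1) > p, and 0 when P(A()=1) < p.
    def O(p):
        return 1 if true_prob > p else 0
    return O

def estimate_probability(O, delta):
    # Binary-search for the threshold where the oracle's answer flips.
    # Each query halves the interval, so ~log2(1/delta) queries suffice.
    lo, hi = 0.0, 1.0
    for _ in range(math.ceil(math.log2(1.0 / delta))):
        mid = (lo + hi) / 2
        if O(mid):      # P(A()=1) > mid, so look in the upper half
            lo = mid
        else:           # P(A()=1) <= mid, so look in the lower half
            hi = mid
    return (lo + hi) / 2

O = make_mock_oracle(true_prob=0.625)
print(estimate_probability(O, delta=0.01))  # ~0.625, within 0.01
```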
Yudkowsky's brain is the pinnacle of evolution
Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?
The answer:
Imagine two ant philosophers talking to each other. “Imagine,” they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”
Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.
How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.
Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.
The world was on its way to doom until September 11, 1979, a date that will later be made a national holiday, replacing Christmas as the biggest holiday of the year. This was, of course, the day when the most important being that has ever existed or ever will exist was born.
Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research done on AI risk at the scale Yudkowsky achieved in the 2000s. The same can be said of the field of ethics: ethics was an open problem in philosophy for thousands of years, and Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone before him. Yudkowsky is what turned our world away from certain extinction and towards utopia.
We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky receives the Nobel Prize in Literature on the strength of his Hugo Award recognition, a special council will be organized to study his intellect, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.
Unless Yudkowsky's brain FOOMs first, MIRI will eventually build an FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually conclude that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. Indeed, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry for the tiling to begin as soon as possible, and there will be mass suicides as people rush to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he, with his vast intellect, will understand and accept that it is truly the best thing to do.
The morality of disclosing salary requirements
Many firms require job applicants to tell them either how much money they're making at their current jobs, or how much they want to make at the job they're interviewing for. This is becoming more common, as more companies use web application forms that refuse to accept an application until the "current salary" or "salary requirements" box is filled in with a number.
The Arguments
I've spoken with HR people about this, and they always say that they're just trying to save time by avoiding interviewing people who want more money than they can afford.
Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance
LessWrong is where I learned about Bitcoin, several years ago, and my greatest regret is that I did not investigate it sooner, and that people here did not yell louder that it was important and worth a look. In that spirit, I will do the yelling now.
First of all, several caveats:
* You should not go blindly buying anything that you do not understand. If you don't know about Bitcoin, you should start by reading about its history, reading Satoshi's whitepaper, etc. I will assume that those who continue reading this have a decent idea of what Bitcoin is.
* Under absolutely no circumstances should you invest money into Bitcoin that you cannot afford to lose. "Risk money" only! That means that if you were to lose 100% of your money, it would not particularly damage your life. Do not spend money that you will need within the next several years, or ever. You might in fact want to mentally write off the entire thing as a 100% loss from the start, if that helps.
* Even more strongly, under absolutely no circumstances whatsoever should you borrow money in order to buy Bitcoins, whether by using margin, credit card loans, your student loan, etc. This is much like taking out a loan, going to a casino, and betting it all on black at the roulette wheel. You would either get very lucky or potentially ruin your life. It's not worth it; this is reality, and there are no laws of the universe preventing you from losing.
* This post is not "investment advice".
* I own Bitcoins, which makes me biased. You should update to reflect that I am going to present a pro-Bitcoin case.
So why is this potentially a time to buy Bitcoins? One can think of markets as a pendulum, with the price swinging from one extreme to another over time: a very high price corresponding to over-enthusiasm, and a very low price corresponding to despair. In Benjamin Graham's allegory, often retold by Warren Buffett, Mr. Market is like a manic depressive. One day he walks into your office exuberant and offers to buy your stocks at a high price. Another day he is depressed and will sell them for a fraction of that.
The root cause of this phenomenon is confirmation bias. When things are going well and the fundamentals of a stock or commodity are strong, the price is driven up, and this creates a positive feedback loop: the price increase confirms investors' belief that things are going well. The process repeats and builds upon itself during a bull market, until it reaches a point of euphoria in which bad news is completely ignored or disbelieved.
The same process happens in reverse during a price decline, or bear market. Investors take the falling price as feedback that things are bad, and good news is ignored and disbelieved. Both processes run away for a while, until they reach enough of an extreme that the "smart money" (the best-informed and most intelligent agents in the system) realizes the process has gone too far and switches sides.
Bitcoin at this point is certainly somewhere in the despair side of the pendulum. I don't want to imply in any way that it is not possible for it to go lower. Picking a bottom is probably the most difficult thing to do in markets, especially before it happens, and everyone who has claimed that Bitcoin was at a bottom for the past year has been repeatedly proven wrong. (In fact, I feel a tremendous amount of fear in sticking my neck out to create this post, well aware that I could look like a complete idiot weeks or months or years from now and utterly destroy my reputation, yet I will continue anyway).
First of all, let's look at the fundamentals of Bitcoin. On one hand, things are going well.
Use of Bitcoin (network effect):
One measurement of Bitcoin's value is the strength of its network effect. By Metcalfe's law, the value of a network is proportional to the square of the number of nodes in the network.
http://en.wikipedia.org/wiki/Metcalfe%27s_law
Over the long term, Bitcoin's price has generally followed this law (though with wild swings to both the upside and downside as the pendulum swings).
In terms of network effect, Bitcoin is doing well.
Bitcoin transactions are hitting all-time highs (28-day average of the number of transactions).
The number of Bitcoin addresses is hitting all-time highs.
Merchant adoption continues to hit new highs:
BitPay/Coinbase continue to report 10% monthly growth in the number of merchants that accept Bitcoin.
Prominent companies that began accepting Bitcoin in the past year include: Dell, Overstock, Paypal, Microsoft, etc.
On the other hand, due to the sustained price decline, many Bitcoin businesses that started up in the past two years with venture capital funding have shut down. This is more an effect of the price decline than a cause, however. The past month especially has seen a number of bearish news stories: BitPay laying off employees, the exchanges Vault of Satoshi and CEX.io deciding to shut down, the exchange Bitstamp being hacked and down for 3 days (though ultimately back up without losing customer funds), etc.
The cost to mine a Bitcoin is commonly seen as one indicator of price. Note that the cost to mine a Bitcoin does not directly determine the *value* or usefulness of a Bitcoin. I do not believe in the labor theory of value: http://en.wikipedia.org/wiki/Labor_theory_of_value
However, there is a stabilizing effect in commodities, in which over time, the price of an item will often converge towards the cost to produce it due to market forces.
If a Bitcoin is priced at a value much greater than the cost (in mining equipment and electricity) to create it, people will invest in mining equipment. This increases the 'difficulty' of mining and drives down the amount of Bitcoin you can create with a given piece of mining equipment. (The number of Bitcoins created per unit of time is fixed, so the more mining equipment exists, the less Bitcoin each miner gets.)
If Bitcoin is being priced at a value below the cost to create it, people will stop investing in mining equipment. This may be a signal that the price is getting too low, and could rise.
Historically, the one period of time where Bitcoin was priced significantly below the cost to produce it was in late 2011. It was noted on LessWrong. The price has not currently fallen to quite the same extent as it did back then (which may indicate that it has further to fall), however the current price relative to the mining cost indicates we are very much in the bearish side of the pendulum.
It is difficult to calculate an exact cost to mine a Bitcoin, because this depends on the exact hardware used, your cost of electricity, and a prediction of the future difficulty adjustments that will occur. However, we can make estimates with websites such as http://www.vnbitcoin.org/bitcoincalculator.php
According to this site, no currently available Bitcoin miner will ever give you back as much money as it costs, factoring in hardware and electricity. More efficient upcoming miners, which have not yet even been released, are estimated to pay for themselves in about a year, and only if difficulty grows extremely slowly.
There are two important breakpoints when discussing Bitcoin mining profitability. The first is the point at which your return is enough that it pays for both the electricity and the hardware. The second is the point at which you make more than your electricity costs, but cannot recover the hardware cost.
For example, let's say Alice pays $1000 for Bitcoin mining equipment. Every day, this equipment returns $10 worth of Bitcoin but costs $5 of electricity to run. Her gain for the day is $5, and at this rate it would take 200 days for the equipment to pay for itself. Once she has decided to purchase the equipment, the money spent on the miner is a sunk cost; the money spent on electricity is not. She faces a fresh decision every day of whether to run her mining equipment, and the optimal decision is to keep running the miner as long as it returns more than the electricity cost.
Over time, the payout she will receive from this hardware will decline, as the difficulty of mining Bitcoin increases. Eventually, her payout will decline below the electricity cost, and she should shut the miner down. At this point, if her total gain from running the equipment was higher than the hardware cost, it was a good investment. If it did not recoup its cost, then it was worse than simply spending the money buying Bitcoin on an exchange in the first place.
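Here is a minimal sketch of Alice's arithmetic in Python, using the hypothetical numbers from the example above; the 1%-per-day payout decline is my own assumption, purely for illustration:

```python
hardware_cost = 1000.0   # one-time purchase; sunk once bought
daily_power = 5.0        # electricity cost per day of running the miner

def should_keep_mining(daily_revenue):
    # The hardware is a sunk cost, so the only live decision each day is
    # whether today's revenue beats today's electricity bill.
    return daily_revenue > daily_power

# Payout shrinks as network difficulty rises; assume 1% per day here.
daily_revenue, total_gain, day = 10.0, 0.0, 0
while should_keep_mining(daily_revenue):
    total_gain += daily_revenue - daily_power
    daily_revenue *= 0.99
    day += 1

print(day, total_gain)
# If total_gain > hardware_cost, the miner was a good investment;
# otherwise Alice would have done better buying Bitcoin on an exchange.
```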
This process feeds back into the market price of Bitcoin. Imagine that Bitcoin investors have two choices: buy Bitcoins (the commodity already produced by others), or buy miners and produce Bitcoins for themselves. If the Bitcoin price falls far enough that mining equipment will not recover its costs over time, investment money that would have gone into miners goes into Bitcoin instead, helping to support the price. As you can see from mining cost calculators, we have already passed this point (in fact, we passed it months ago).
The second breakpoint is when the Bitcoin price falls so low that it falls below the electricity cost of running mining equipment. We have passed this point for many of the less efficient ways to mine. For example, Cointerra recently shut down its cloud mining pool because it was losing money. We have not yet passed this point for more recent and efficient miners, but we are getting fairly close to it. Crossing this point has occurred once in Bitcoin's history, in late 2011 when the price bottomed out near $2, before giving birth to the massive bull run of 2012-2013 in which the price rose by a factor of 500.
Market Sentiment:
I was not active in Bitcoin back in 2011, so I cannot compare the present time to the sentiment at the November 2011 bottom. However, sentiment currently is the worst that I have seen by a significant margin. Again, this does not mean that things could not get much, much worse before they get better! After all, sentiment has been growing worse for months now as the price declines, and everyone who predicted that it was as bad as it could get and the price could not possibly go below $X has been wrong. We are in a feedback loop which is strongly pumping bearishness into all market participants, and that feedback loop can continue and has continued for quite a while.
A look at market indicators tells us that Bitcoin is very, very oversold, almost historically oversold. Again, this does not mean that it could not get worse before it gets better.
As I write this, the price of Bitcoin is $230. For perspective, this is down over 80% from the all time high of $1163 in November 2013. It is still higher than the roughly $100 level it spent most of mid 2013 at.
* The average price at which each Bitcoin last moved (the market's aggregate cost basis) is $314.
https://www.reddit.com/r/BitcoinMarkets/comments/2ez90b/and_the_average_bitcoin_cost_basis_is/
The current price is a multiple of .73 of this price. This is very low historically, but not the lowest it has ever been. The lowest was about .39, in late 2011.
* Short interest (the number of Bitcoins that have been borrowed and sold, and must be bought back later) hit all-time highs this week, according to data from the exchange Bitfinex, with more than 25,000 Bitcoins sold short:
http://www.bfxdata.com/swaphistory/totals.php
* Weekly RSI (relative strength index), an indicator which tells if a stock or commodity is 'overbought' or 'oversold' relative to its history, just hit its lowest value ever.
Many indicators are telling us that Bitcoin is at or near historical levels in terms of the depth of this bear market. In percentage terms, the price decline is surpassed only by the November 2011 low. In terms of length, the current decline is more than twice as long as the previous longest bear market.
To summarize: At the present time, the market is pricing in a significant probability that Bitcoin is dying.
But there are some indicators (such as # of transactions) which say it is not dying. Maybe it continues down into oblivion, and the remaining fundamentals which looked bullish turn downwards and never recover. Remember that this is reality, and anything can happen, and nothing will save you.
Given all of this, we now have a choice. People have often compared Bitcoin to making a bet in which you have a 50% chance of losing everything, and a 50% chance of making multiples (far more than 2x) of what you started with.
There are times when the payout on that bet is much lower, when everyone is euphoric and has been convinced by the positive feedback loop that they will win. And there are times when the payout on that bet is much higher, when everyone else is extremely fearful and is convinced it will not pay off.
This is a time to be good rationalists, and investigate a possible opportunity, comparing the present situation to historical examples, and making an informed decision. Either Bitcoin has begun the process of dying, and this decline will continue in stages until it hits zero (or some incredibly low value that is essentially the same for our purposes), or it will live. Based on the new all-time high in the number of transactions, and the growing number of ways to spend Bitcoin, I think there is at least a reasonable chance it will live. Enough of a chance that it is worth taking some money that you can 100% afford to lose, and making a bet. A rational gamble that there is a decent probability that it will survive, at a time when a large number of others are betting that it will fail.
And then once you do that, try your hardest to mentally write it off as a complete loss, like you had blown the money on a vacation or a consumer good, and now it is gone, and then wait a long time.
Selfish preferences and self-modification
One question I've had recently is "Are agents acting on selfish preferences doomed to having conflicts with other versions of themselves?" A major motivation of TDT and UDT was the ability to just do the right thing without having to be tied up with precommitments made by your past self - and to trust that your future self would just do the right thing, without you having to tie them up with precommitments. Is this an impossible dream in anthropic problems?
In my recent post, I talked about preferences where "if you are one of two copies and I give the other copy a candy bar, your selfish desires for eating candy are unfulfilled." If you would buy a candy bar for a dollar but not buy your copy a candy bar, this is exactly a case of strategy ranking depending on indexical information.
This dependence on indexical information is not equivalent to UDT, and is thus incompatible with peace and harmony.
To be thorough, consider an experiment where I am forked into two copies, A and B. Both have a button in front of them, and 10 candies in their account. If A presses the button, it deducts 1 candy from A. But if B presses the button, it removes 1 candy from B and gives 5 candies to A.
Before the experiment begins, I want my descendants to press the button 10 times (assuming candies come in units such that my utility is linear). In fact, after the copies wake up but before they know which is which, they want to press the button!
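To make the ex-ante arithmetic explicit, here is a minimal sketch; the 50/50 chance of being either copy, and the assumption that both copies follow the same policy, are my reading of the setup:

```python
# Payoffs per round, in candies, if both copies follow a "press" policy.
# A's press costs A one candy; B's press costs B one candy and pays A five.
a_total = -1 + 5   # A: own press (-1) plus B's press (+5)
b_total = -1       # B: own press (-1); A's press pays B nothing

# Before learning which copy you are, you are A or B with probability 1/2:
ev_press_policy = 0.5 * a_total + 0.5 * b_total
ev_no_press = 0.0

print(ev_press_policy)  # 1.5 > 0: ex ante, you want the button pressed
```

Note that A's presses are pure waste in isolation; the joint policy is only attractive because, before learning who is who, you cannot condition on being A or B.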
The model of selfish preferences that is not UDT-compatible looks like this: once A and B know who is who, A wants B to press the button but B doesn't want to do it. And so earlier, I should try and make precommitments to force B to press the button.
But suppose that we simply decided to use a different model. A model of peace and harmony and, like, free love, where I just maximize the average (or total, if we specify an arbitrary zero point) amount of utility that myselves have. And so B just presses the button.
(It's like non-UDT selfish copies can make all Pareto improvements, but not all average improvements)
Is the peace-and-love model still a selfish preference? It sure seems different from the every-copy-for-themself algorithm. But on the other hand, I'm doing it for myself, in a sense.
And at least this way I don't have to waste time with precommitment. In fact, self-modifying to this form of preferences is such an effective action that conflicting preferences are self-destructive. If I have selfish preferences now but want my copies to cooperate in the future, I'll try to become an agent who values copies of myself - so long as they date from after the time of my self-modification.
If you recall, I made an argument in favor of averaging the utility of future causal descendants when calculating expected utility, based on this being the fixed point of selfish preferences under modification when confronted with Jan's tropical paradise. But if selfish preferences are unstable under self-modification in a more intrinsic way, this rather goes out the window.
Right now I think of selfish values as a somewhat anything-goes space occupied by non-self-modified agents like me and you. But it feels uncertain. On the mutant third hand, what sort of arguments would convince me that the peace-and-love model actually captures my selfish preferences?
How much does consumption affect production?
A ewe for a ewe
In a discussion with Benquo over his recent suffering-per-calorie estimates I learned that there have been a few different proponents of incorporating short term elasticities into such estimates. But do empirical short term elasticities really improve our estimates of consumption's long term effect on production? For example, if I decide to reduce my lifetime consumption of chicken by one, should I expect the long term production of chicken to drop by ~1, ~0, or something in between?
I believe we should have a relatively strong prior that long term production has a roughly 1:1 relationship with consumption, including for small individual decisions. Below are a couple arguments I find compelling, and a major exception that is not a short term elasticity.
Black box economies in general
If I go to a large alien civilization of uncertain economic structure and surprise them by buying(?) one widget, how should I expect that to affect their long-term production of widgets? It seems I should expect it to increase by one, because now they have one fewer than they used to. If it was originally decided that that widget should be produced, why wouldn't they decide to replace it when lost?
Neoclassical capitalism in the long term
In a simplified market, I expect there to be a lowest price at which chickens can be reliably produced at scale ("the Cost"). If producers expect the market price to be less than the Cost in the future, they will shut down production to avoid losses. If they expect it to be more than the Cost in the future, they might expand operations to make more profit. In the long term (when we can ignore temporary shocks to the system and producers have time to make adjustments), I expect the equilibrium price of chicken to approach the Cost of chicken (because other prices lead to conditions that push the price back toward the Cost). In other words, my prior is that the "price elasticity of supply" in the arbitrarily long term becomes arbitrarily high (imagine a virtually horizontal supply curve).
How many chickens will be produced at that long term price? However many are worth the Cost to consumers. If 50% of chicken consumers permanently become vegetarians, I expect that eventually the chicken industry will end up producing about 50% fewer chickens at a price similar to today's.
Similarly if consumption is reduced by just one chicken. My prior is that producers have an unbiased estimate of consumption, and that doesn't change when I eat one less chicken (so my best guess about their long term estimate of consumption drops by one when I forgo one chicken).
Time breaks the elastic limit
Compare my prior that every chicken forgone causes (in the long term) one less chicken to be produced, to the estimates that it only causes 6% or 76% of a chicken to not be produced (as Peter Hurford points out in the second case, the enormous range in these estimates alone is enough to raise flags).
Those numbers sound plausible in the short term when there's a backup in the chicken pipeline and a drop in price because producers were caught off guard by the drop in consumption. But if the vegetarians hold their new diets, won't the producers eventually react to the changed market? When they do I bet the equilibrium price will be somewhere close to the original Cost, and the quantity produced will be about 50% less (not 3% less or even 38% less). I think the thing these elasticity estimates are forgetting is that the producers aren't satisfied (in the long term) with the lower price that results from a chicken glut caused by vegetarianism. If they were, they'd be producing more chickens now.
Said another way, it all comes down to the difference between producers' reaction in the short term vs. the long term. In the short term, when someone decides not to eat a chicken, it goes to the next highest bidder (so price drops and production doesn't change much). But in the long term, producers produce all the chickens that will be demanded at the Cost (they want to produce as many as they can at that price, but if they produce any more, the chickens will be sold at a loss). When one person permanently becomes vegetarian, we should expect that long term size of the industry decreases accordingly.
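Here is a toy sketch of that short-run/long-run distinction in Python. The functional form of the demand curve and all the numbers are invented for illustration; only the mechanism matters:

```python
COST = 5.0   # long-run cost to produce one chicken ("the Cost")

def market_price(quantity, demand_scale):
    # Toy inverse demand curve: the price buyers will pay for a given quantity.
    return demand_scale / quantity

# Start in long-run equilibrium: quantity chosen so that price == COST.
demand_scale = 500.0
quantity = demand_scale / COST                # 100 chickens selling at $5

# Shock: half of all consumers permanently become vegetarians.
demand_scale *= 0.5

# Short run: this period's chickens are already produced, so the price
# absorbs the shock and quantity barely moves.
print(market_price(quantity, demand_scale))   # $2.50 - below the Cost

# Long run: producers cut output until the price climbs back to the Cost.
quantity = demand_scale / COST
print(quantity)                               # 50 - production fell ~1:1
```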
When the long term Cost changes with industry size
To be clear, if we could actually measure consumption's effect on long term production in specific cases, it would rarely be exactly 1:1, though my prior is that it will average out to that over time. The exception is if consumption consistently affects the long term price in a particular direction. For example, here are some reasons that I might expect the Cost of chicken to grow or shrink as the size of the chicken industry increases:
- Finite inputs such as limited agricultural land (Cost grows with size)
- The production process also creates another product like eggs (Cost grows with size if marginal production is used for both)
- Gains to scale such as factory farming (Cost shrinks with size)
- R&D or innovation (Cost shrinks with size)
- Favorable government policies (Cost shrinks with size)
If we have sufficiently certain estimates of any of these effects, we can certainly try to model them, although it would be a very different exercise than using empirical estimates of short-term elasticities. As it is, I have no idea which of the above effects win out (i.e., whether the "consumption elasticity of the Cost" is positive or negative in the long term).
I think we would make our estimates simpler and more accurate by sticking with the prior that eating one less chicken causes about one less chicken to be produced in the long term.
The Hostile Arguer
“Your instinct is to talk your way out of the situation, but that is an instinct born of prior interactions with reasonable people of good faith, and inapplicable to this interaction…” – Ken White
One of the Less Wrong Study Hall denizens has been having a bit of an issue recently. He became an atheist some time ago. His family was in denial about it for a while, but in recent days they have 1. stopped with the denial bit, and 2. been less than understanding about it. In the course of discussing the issue during break, this line jumped out at me:
“I can defend my views fine enough, just not to my parents.”
And I thought: Well, of course you can’t, because they’re not interested in your views. At all.
I never had to deal with the Religion Argument with my parents, but I did spend my fair share of time failing to argumentatively defend myself. I think I have some useful things to say to those younger and less the-hell-out-of-the-house than me.
A clever arguer is someone who has already decided on their conclusion and is making the best case they possibly can for it. A clever arguer is not necessarily interested in what you currently believe; they are arguing for proposition A and against proposition B. But there is a specific sort of clever arguer, one that I have difficulty defining explicitly but can characterize fairly easily. I call it, as of today, the Hostile Arguer.
It looks something like this:
When your theist parents ask you, “What? Why would you believe that?! We should talk about this,” they do not actually want to know why you believe anything, despite the form of the question. There is no genuine curiosity there. They are instead looking for ammunition. Which, if they are cleverer arguers than you, you are likely to provide. Unless you are epistemically perfect, you believe things that you cannot, on demand, come up with an explicit defense for. Even important things.
In accepting that the onus is solely on you to defend your position – which is what you are implicitly doing, in engaging the question – you are putting yourself at a disadvantage. That is the real point of the question: to bait you into an argument that your interlocutor knows you will lose, whereupon they will expect you to acknowledge defeat and toe the line they define.
Someone in the chat compared this to politics, which makes sense, but I don’t think it’s the best comparison. Politicians usually meet each other as equals. So do debate teams. This is more like a cop asking a suspect where they were on the night of X, or an employer asking a job candidate how much they made at their last job. Answering can hurt you, but can never help you. The question is inherently a trap.
The central characteristic of a hostile arguer is the insincere question. “Why do you believe there is/isn’t a God?” may be genuine curiosity from an impartial friend, or righteous fury from a zealous authority, even though the words themselves are the same. What separates them is the response to answers. The curious friend updates their model of you with your answers; the Hostile Arguer instead updates their battle plan.[1]
So, what do you do about it?
Advice often fails to generalize, so take this with a grain of salt. It seems to me that argument in this sense has at least some of the characteristics of the Prisoner’s Dilemma. Cooperation represents the pursuit of mutual understanding; defection represents the pursuit of victory in debate. Once you are aware that they are defecting, cooperating in return is highly non-optimal. On the other hand, mutual defection – a flamewar online, perhaps, or a big fight in real life in which neither party learns much of anything except how to be pissed off – kind of sucks, too. Especially if you have reason to care, on a personal level, about your opponent. If they’re family, you probably do.
It seems to me that getting out of the game is the way to go, if you can do it.
Never try to defend a proposition against a hostile arguer.[2] They do not care. Your best arguments will fall on deaf ears. Your worst will be picked apart by people who are much better at this than you. Your insecurities will be exploited. If they have direct power over you, it will be abused.
This is especially true for parents, where obstinate disagreement can be viewed as disrespect, and where their power over you is close to absolute. I’m sort of of the opinion that all parents should be considered epistemically hostile until one moves out, as a practical application of the SNAFU Principle. If you find yourself wanting to acknowledge defeat in order to avoid imminent punishment, this is what is going on.
If you have some disagreement important enough for this advice to be relevant, you probably genuinely care about what you believe, and you probably genuinely want to be understood. On some level, you want the other party to “see things your way.” So my second piece of advice is this: Accept that they won’t, and especially accept that it will not happen as a result of anything you say in an argument. If you must explain yourself, write a blog or something and point them to it a few years later. If it’s a religious argument, maybe write the Atheist Sequences. Or the Theist Sequences, if that’s your bent. But don’t let them make you defend yourself on the spot.
The previous point, incidentally, was my personal failure through most of my teenage years (although my difficulties stemmed from school, not religion). I really want to be understood, and I really approach discussion as a search for mutual understanding rather than an attempt at persuasion, by default. I expect most here do the same, which is one reason I feel so at home here. The failure mode I’m warning against is adopting this approach with people who will not respect it and will, in fact, punish your use of it.[3]
It takes two to have an argument, so don’t be the second party, ever, and they will eventually get tired of talking to a wall. You are not morally obliged to justify yourself to people who have pre-judged your justifications. You are not morally obliged to convince the unconvinceable. Silence is always an option. “No comment” also works well, if repeated enough times.
There is the possibility that the other party is able and willing to punish you for refusing to engage. Aside from promoting them from “treat as Hostile Arguer” to “treat as hostile, period”, I’m not sure what to do about this. Someone in the Hall suggested supplying random, irrelevant justifications, as requiring minimal cognitive load while still subverting the argument. I’m not certain how well that will work. It sounds plausible, but I suspect that if someone is running the algorithm “punish all responses that are not ‘yes, I agree and I am sorry and I will do or believe as you say’”, then you’re probably screwed (and should get out sooner rather than later if at all possible).
None of the above advice implies that you are right and they are wrong. You may still be incorrect on whatever factual matter the argument is about. The point I’m trying to make is that, in arguments of this form, the argument is not really about correctness. So if you care about correctness, don’t have it.
Above all, remember this: Tapping out is not just for Less Wrong.
(thanks to all LWSH people who offered suggestions on this post)
After reading the comments and thinking some more about this, I think I need to revise my position a bit. I’m really talking about three different characteristics here:
- People who have already made up their mind.
- People who are personally invested in making you believe as they do.
- People who have power over you.
For all three together, I think my advice still holds. MrMind puts it very concisely in the comments. In the absence of 3, though, JoshuaZ notes some good reasons one might argue anyway; to which I think one ought to add everything mentioned under the Fifth Virtue of Argument.
But one thing that ought not to be added to it is the hope of convincing the other party – either of your position, or of the proposition that you are not stupid or insane for holding it. These are cases where you are personally invested in what they believe, and all I can really say is “don’t do that; it will hurt.” Even if you are correct, you will fail for the reasons given above and more besides. It’s very much a case of Just Lose Hope Already.
[1] I'm using religious authorities harshing on atheists as the example here because that was the immediate cause of this post, but atheists take caution: If you're asking someone “why do you believe in God?" with the primary intent of cutting their answer down, you're guilty of this, too. ↩
[2] Someone commenting on a draft of this post asked how to tell when you're dealing with a Hostile Arguer. This is the sort of micro-social question that I'm not very good at and probably shouldn't opine on. Suggestions requested in the comments. ↩
[3] It occurs to me that the Gay Talk might have a lot in common with this as well. For those who've been on the wrong side of that: Did that also feel like a mismatched battle, with you trying to be understood, and them trying to break you down? ↩
When the uncertainty about the model is higher than the uncertainty in the model
Most models attempting to estimate or predict some element of the world come with their own estimates of uncertainty. It could be the Standard Model of physics predicting the mass of the Z boson as 91.1874 ± 0.0021 GeV, or the rather wider uncertainty ranges of economic predictions.
In many cases, though, the uncertainties in or about the model dwarf the estimated uncertainty in the model itself - especially for low probability events. This is a problem, because people working with models often try to use the in-model uncertainty and adjust it to get an estimate of the true uncertainty. They often realise the model is unreliable, but don't have a better one, and they have a measure of uncertainty already, so surely doubling and tripling this should do the trick? Surely...
The following three cases are going to be my go-to examples for showing what a mistake this can be; they cover three situations: extreme error, being in the domain of a hard science, and extreme negative impact.
Wealth from Self-Replicating Robots
I have high confidence that economically-valuable self-replicating robots are possible with existing technology: initially, something similar in size and complexity to a RepRap, but able to assemble a copy of itself from parts ordered online with zero human interaction. This is important because more robots could provide the economic growth needed to solve many urgent problems. I've held this idea for long enough that I'm worried about being a crank, so any feedback is appreciated.
I care because to fulfill my naive and unrealistic dreams (not dying, owning a spaceship) I need the world to be a LOT richer. Specifically, naively assuming linear returns to medical research funding, a funding increase of ~10x (to ~$5 trillion/year, or ~30% of current USA GDP) is needed to achieve actuarial escape velocity (average lifespans currently increase by about 1 year each decade, so a 10x increase is needed for science to keep up with aging). The simplest way to get there is to have 10x as many machines per person.
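Spelling out the back-of-envelope behind that ~10x (the ~$0.5 trillion/year current-funding figure is my inference from the post's own numbers, not a sourced estimate):

```python
current_gain = 0.1   # years of lifespan gained per calendar year (1 per decade)
needed_gain = 1.0    # escape velocity: gain at least 1 year of lifespan per year

multiplier = needed_gain / current_gain      # 10x more progress needed
current_funding = 0.5                        # implied current funding, $T/year
print(multiplier * current_funding)          # ~5 $T/year, ~30% of US GDP

# Everything here leans on the stated (and admittedly naive) assumption
# that research output scales linearly with funding.
```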
My vision is that someone does for hardware what open-source has done for software: make useful tools free. A key advantage of software is that making a build or copying a program takes only one step. In software, you click "compile" and (hopefully) it's done and ready to test in seconds. In hardware, it takes a bunch of steps to build a prototype (order parts, screw fiddly bits together, solder, etc.). A week is an insanely short lead time for building a new prototype of something mechanical. 1-2 months is typical in many industries. This means that mechanical things have high marginal cost, because people have to build and debug them, and typically transport them for thousands of miles from factory to consumer.
Relevant previous research projects include trivial self-replication from pre-fabricated components and an overly-ambitious NASA-funded plan from the 1980s to develop the Moon using self-replicating robots. Current research funding tends to go toward bio-inspired systems, re-configurable systems using prefabricated cubes (conventionally-manufactured), or chemistry deceptively called "nanotech", all of which seem to miss the opportunity to use existing autonomous assembly technology with online ordering of parts to make things cheaper by getting rid of setup cost and building cost.
I envision a library/repository of useful robots for specific tasks (cleaning, manufacturing, etc.), in a standard format for download (parts list, 3D models, assembly instructions, etc.). Parts could be ordered online. A standard fabricator robot with the capability to identify and manipulate parts, and fasten them using screws, would verify that the correct parts were received, put everything together, and run performance checks. For comparison, the RepRap takes >9 hours of careful human labor to build.
An initial self-replicating implementation would be a single fastener robot. It would spread by undercutting the price of competing robot arm systems. Existing systems sell for ~2x the cost of components, due to overhead for engineering, assembly, and shipping. This appears true for robots at a range of price points, including $200 robot arms using hobby servos and $40,000+ robot arms using optical encoders and direct-drive brushless motors.
A successful system that undercut the price of conventionally-assembled hobby robots would provide a platform for hobbyists to create additional robots that could be autonomously built (e.g. a Roomba for 1/5 the price, due to not needing to pay the 5x markup for overhead and distribution). Once a beachhead is established in the form of a successful self-replicating assembly robot, market pressures would drive full automation of more products/industries, increasing output for everyone.
This is a very hard programming challenge, but the tools exist to identify, manipulate and assemble parts. Specifically, ROS is an open-source software library whose packages can be put together to solve tasks such as mapping a building or folding laundry. It's hard because it would require a lot of steps and a new combination of existing tools.
This is also a hard systems/mechanical challenge: delivering enough data and control bandwidth for observability and controllability, and providing lightweight and rigid hardware, so that the task for the software is possible rather than impossible. Low-cost components have less performance: a webcam has limited resolution, and hobby servos have limited accuracy. The key problem - autonomously picking up a screw and screwing it into a hole - was solved years ago for assembly-line robots. Doing the same task with low-cost components appears possible in principle. A comparable problem that has been solved is autonomous construction using quadcopters.
Personally, I would like to build a robot arm that could assemble more robot arms. It would require, at minimum, a robot arm using hobby servos, a few webcams, custom grippers (for grasping screws, servos, and laser-cut sheet parts), custom fixtures (blocks with a cutout to hold two parts in place while the robot arm inserts a screw; ideally multiple robot arms would be used to minimize unique tooling but fixtures would be easier initially), and a lot of challenging code using ROS and Gazebo. Just the mechanical stuff, which I have the education for, would be a challenging months-long side project, and the software stuff could take years of study (the equivalent of a CS degree) before I'd have the required background to reasonably attempt it.
I'm not sure what to do with this idea. Getting a CS degree on top of a mechanical engineering degree (so I could know enough to build this) seems like a good career choice for interesting work and high pay (even if/when this doesn't work). Previous ideas like this I've had that are mostly outside my field have been unfeasible for reasons only someone familiar with the field would know. It's challenging to stay motivated to work on this, because the payoff is so distant, but it's also challenging not to work on this, because there's enough of a chance that this would work that I'm excited about it. I'm posting this here in the hopes someone with experience with industrial automation will be inspired to build this, and to get well-reasoned feedback.
Too good to be true
A friend recently posted a link on his Facebook page to an informational graphic about the alleged link between the MMR vaccine and autism. It said, if I recall correctly, that out of 60 studies on the matter, not one had indicated a link.
Presumably, with 95% confidence.
This bothered me. What are the odds, supposing there is no link between X and Y, of conducting 60 studies of the matter, and of all 60 concluding, with 95% confidence, that there is no link between X and Y?
Answer: .95 ^ 60 = .046. (Use the first term of the binomial distribution.)
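A quick sanity check of that arithmetic, using only the standard library:

```python
import math

# If each study has a 5% false-positive rate, the probability that all 60
# independent studies come back negative is 0.95^60.
print(round(0.95 ** 60, 3))                              # 0.046

# The same number as the first term of the binomial distribution:
# k = 0 "positives" out of n = 60 trials with p = 0.05.
print(round(math.comb(60, 0) * 0.05**0 * 0.95**60, 3))   # 0.046
```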
So if it were in fact true that 60 out of 60 studies failed to find a link between vaccines and autism at 95% confidence, this would prove, with 95% confidence, that studies in the literature are biased against finding a link between vaccines and autism.