"Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me")."
Those aren't really mutually exclusive. "Talking like that just to confuse his listeners is just how he talks". It could be an attribution not of any specific malice, but generalized snootiness.
This may seem pedantic, but given that this post is on the importance of precision:
"Some likely died."
Should be
"Likely, some died".
Also, I think you should more clearly distinguish between the two means, such as saying "sample average" rather than "your average". Or use x bar and mu.
The whole concept of confidence is rather problematic, because it's on the one hand one of the most common statistical measures presented to the public, but on the other hand it's one of the most difficult concepts to understand.
What ma...
By how many orders of magnitude? Would you play Russian Roulette for $10/day? It seemed to me that implicit in your argument was that even if someone disagrees with you about the expected value, an order of magnitude or so wouldn't invalidate it. There's a rather narrow set of circumstances where your argument doesn't apply to your own situation. Simply asserting that you will sign up soon is far from sufficient. And note that many conditions necessitate further conditions; for instance, if you claim that your current utility/dollar ratio is ten times what...
"Also there are important risks that we are in simulation, but that it is created not by our possible ancestors"
Do you mean "descendants"?
What about after the program, if you don't get a job, or don't get a job in the data science field?
1% of a bad bet is still a bad bet.
They should have some statistics, even if they're not completely conclusive.
As I understand it, the costs are:
$1400 for lodging (commuting would cost even more)
$2500 deposit (not clear on the refund policy)
10% of next year's income (with deposit going towards this)
I wouldn't characterize that as "very little". It's enough to warrant asking a lot of questions.
How would you characterize the help you got getting a job? Getting an interview? Knowing what to say in an interview? Having verifiable skills?
Are your finances so dire that if someone offered you $1/day in exchange for playing Russian Roulette, you would accept? If not, aren't you being just as irrational as you are accusing those who fail to accept your argument of being?
You might want to consider what the objective is, and whether you should have different resources for different objectives. Someone who's in a deeply religious community who would be ostracized if people found out they're an atheist would need different resources than someone in a more secular environment who simply wants to find other atheists to socialize with.
I think I should also mention your posting a URL but not making it clickable. You should put anchors in your site. For instance, there should at the very least be anchors at "New atheists&quo...
"Just a "Survival Guide for Atheists" "
Are you referring to the one by Hemant Mehta?
"not-particularly-deep-thinking theist."
Typo?
I suppose this might be a better place to ask than trying to resurrect a previous thread:
What kind of statistics can Signal offer on prior cohorts? E.g. percentage with jobs, percentage with jobs in data science field, percentage with incomes over $100k, median income of graduates, mean income of graduates, mean income of employed graduates, etc.? And how do the different cohorts compare? (Those are just examples; I don't necessarily expect to get those exact answers, but it would be good to have some data and have it be presented in a manner that is at leas...
"We're planning another one in Berkeley from May 2nd – July 24th."
Is that June 24th?
Isn't that fraud? That is, if you work for a company that matches donations, and I ask to give you money for you to give to MIRI, aren't I asking you to defraud your company?
It does mean that not-scams should find ways to signal that they aren't scams, and the fact that something does not signal not-scam is itself strong evidence of scam.
Isn't the whole concept of matching donations a bit irrational to begin with? If a company thinks that MIRI is a good cause, they should give money to MIRI. If they think that potential employees will be motivated by them giving money to MIRI, wouldn't a naive application of economics predict that employees would value a salary increase of a particular amount at a utility that is equal or greater than the utility of that particular amount being donated to MIRI? An employee can convert a $1000 salary increase to a $1000 MIRI donation, but not the reverse. Either the company is being irrational, or it is expecting its employees to be irrational.
Shouldn't we first determine whether the amount of effort needed to figure out the costs of the tests is less than the expected value of ((cost of doing tests - expected gain)|(cost of doing tests > expected gain))?
And if this is presented as some sort of "competition" to see whether LW is less susceptible than the general populace, then if anyone has fallen for it, that can further discourage them from reporting it. A lot of this is exploiting the banking system's lack of transparency as to just how "final" a transaction is; for instance, if you deposit a check, your account may be credited even if the check hasn't actually cleared. So scammers take advantage of the fact that most people aren't familiar with all the intricacies of banking, and think that when their account has been credited, it's safe to send money back.
It is somewhat confusing, but remember that surjectivity is defined with respect to a particular codomain; a function is surjective if its range is equal to its codomain, and thus whether it's surjective depends on what its codomain is considered to be; every function maps its domain onto its range. "f maps X onto Y" means that f is surjective with respect to Y. So, for instance, the exponential function maps the real numbers onto the positive real numbers. It's surjective with respect to positive real numbers*. Saying "the exponential...
So, we have
Your post seems to be a rather verbose way of showing something that can be shown in three lines. I guess you're trying to illustrate some larger framework, but it's rather unclear what it is or how it adds anything to the analysis, and you haven't given the reader much reason to look into it further.
The reason that someone might think an Ace would be a good...
I think that the first step is to unpack "annihilate". How does one "annihilate" a universe? You seem to be equivocating between destroying a universe, and putting it in a state inhospitable to consciousness.
It also seems to me that once we bring the anthropic principle in, that leads to Boltzmann brains.
Upvote for content, but I think that there's a typo in your second sentence
"Variable schedules maximize what is known as resistance to extinction, the probability a behavior will decrease in frequency goes down." Perhaps a semicolon instead of a comma, or "as frequency of rewards ... " instead of "in frequency ...", was intended?
"show that one serving of Soylent 1.5 can expose a consumer to a concentration of lead that is 12 to 25 times above California's Safe Harbor level for reproductive health"
Concentration, or amount? It seems to me that that is a rather important distinction, and it is worrying that As You Sow doesn't seem to recognize it.
I'm not sure you understand what "iid" means. It means that each sample is drawn from the same distribution, and is independent of the others. The term "iid" isn't doing any work in your statement; you could just say "It's not from the distribution you really want to sample", and it would be just as informative.
"This isn't an example of overfitting, but of the training set not being iid."
Upvote for the first half of that sentence, but I'm not sure how the second applies. The set of tanks is iid; the issue is that the creators of the training set allowed tank/not tank to be correlated with an extraneous variable. It's like having a drug trial where the placebos are one color and the real drug is another.
Perverse incentives.
I realize that no analogy is perfect, but I don't think your sleeper cell hypothetical is analogous to AI. It would be a more accurate analogy if someone were to point out that, gee, a sleeper cell would be quite effective, and it's just a matter of time before the enemy realizes this and establishes one. There is a vast amount of Knightian uncertainty that exists in the case of AI, and does not exist in your hypothetical.
You paid a karma toll to comment on one of my most unpopular posts yet
My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.
to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"?
I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as y...
The set of possible Turing Machines is infinite. Whether you consider that to satisfy your personal definition of "seen" or "in reality" isn't really relevant.
If you think that everyone is using a term for something other than what it refers to, then you don't understand how language works. And a discussion of labels isn't really relevant to the question of whether it's a straw man. Also, your example shows that what you're referring to as a sunk cost fallacy is not, in fact, a fallacy.
(a) Prison operators are not currently incentivized to be experts in data science (b) Why? And will that fix things? There are plenty of examples of industries taking advantage of vulnerabilities, without those vulnerabilities being fixed. (c) How will it be retrained? Will there be a "We should retrain the model" lobby group, and will it act faster than the prison lobby?
Perhaps we should have a futures market in recidivism. When a prison gets a new prisoner, they buy the associated future at the market rate, and once the prisoner has been out of...
An example of a sense would be to define some quantification of how good an algorithm is, and then show that a particular algorithm has a large value for that quantity, compared to SI. In order to rigorously state that X approaches Y "in the limit", you have to have some index n, and some metric M, such that |M(Xn)-M(Yn)| -> 0. Otherwise, you're simply making a subjective statement that you find X to be "good". So, for instance, if you can show that the loss in utility in using your algorithm rather than SI goes to zero as the size of the dataset goes to infinity, that would be an objective sense in which your algorithm approximates SI.
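To restate the convergence criterion from the comment above compactly (same M, X, Y, and index n as in the text; this is just the same condition written out, not a new claim):

```latex
\lim_{n \to \infty} \left| M(X_n) - M(Y_n) \right| = 0
```

Absent some such M and index, "approaches SI in the limit" has no objective content.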
You can't do an exhaustive search on an infinite set.
Consequentialist thinking has a general tendency to get one labeled an asshole.
e.g.
"Hey man, can you spare a dollar?" "If I did have a dollar to spare, I strongly doubt that giving it to you would be the most effective use of it." "Asshole."
Although I think that it's dangerous to think that you can accurately estimate the cost/benefit of tact; I think most people underestimate how much effect it has.
There's a laundry section, with detergent, fabric softeners, and other laundry-related products. I don't think the backs generally say what the product is, and even if they do, that's not very useful. And as I said, most laundry brands have non-detergent products. Not labeling detergent as detergent trains people to not look for the "detergent" label, which means that they don't notice when they're buying fabric softener or another product.
As I said, that is not what the sunk cost fallacy is. If you've spent $100, and your expected net returns are -$50, then the sunk cost fallacy would be to say "If I stop now, that $100 will be wasted. Therefore, I should keep going so that my $100 won't be wasted."
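To make the arithmetic behind that example explicit, here's a minimal sketch with made-up numbers matching the $100 spent / -$50 expected net figures above (the dollar values are illustrative only):

```python
# Illustrative numbers only: $100 already spent (sunk), and continuing
# has an expected net return of -$50 going forward.
sunk = 100
net_if_continue = -50  # expected future benefit minus future cost
net_if_stop = 0        # stopping costs nothing further

# Rational choice compares only the futures: stop, since 0 > -50.
# The sunk $100 is lost either way.
rational_choice = "stop" if net_if_stop > net_if_continue else "continue"

# The fallacy folds the sunk cost back in: "stopping wastes $100,
# so keep going" -- treating -50 + 100 'saved' as better than stopping.
```

The point is that `sunk` never appears in the rational comparison; the fallacy is precisely to let it.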
While it is a fallacy to just add sunk costs to future costs, it's not a fallacy to take them into account, as your scenario illustrates. I don't know of anyone who recommends completely ignoring sunk costs; as far as I can tell you are arguing against a straw man in that sense.
Also, it's "i.e.", rather than "i/e".
What's the deal with laundry detergent packaging? For instance, take a look at this http://news.pg.com/sites/pg.newshq.businesswire.com/files/image/image/Tide_Liquid_Detergent.jpg Nowhere on the package does it actually say it's detergent! I guess they're just relying on people knowing that Tide is a brand of detergent? Except that Tide also makes other products, such as fabric softener. And it's not just Tide. http://www.freestufffinder.com/wp-content/uploads/2014/03/all-laundry.jpg http://dgc.imageg.net/graphics/product_images/pDGC1-10603813v380.jpg htt...
Suppose we have a set S of n elements, and we ask people to memorize sequences of these elements, and we find that people can generally easily memorize sequences of length k (for some definition of "generally" and "easily"). If we then define a function f(S) := k log n, how will f depend on S? Have there been studies on this issue?
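A quick sketch of what f would look like, with hypothetical (n, k) pairs (the specific values below are placeholders, not results from any study):

```python
import math

# f(S) := k * log2(n): bits of information in the longest sequence over
# an alphabet of size n that people can "generally easily" memorize (k).
def f(n, k):
    return k * math.log2(n)

# Hypothetical illustrative values, NOT empirical findings:
digits = f(10, 7)   # e.g. ~7 random decimal digits
letters = f(26, 6)  # e.g. ~6 random letters
```

The interesting empirical question is whether f is roughly constant across sets S (i.e. whether capacity is measured in bits) or tracks k instead (capacity in "chunks").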
Estimator asked in what sense SI is approximated, not, given a sense, how SI is approximated in that sense. Can you give a metric for which the value is close to SI's value?
Suppose you were given two options, and told that whatever money results would be given to an EA charity. Would you find it difficult to choose a 1% shot at $1000 over a sure $5? What if you were told that there are a thousand people being given the same choice? What if you're not told how the gamble turns out? What if all the gambles are put in a pool, and you're told only how many worked out, not whether yours did?
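The arithmetic behind the thought experiment, as a hedged sketch (the pool size and payoffs are just the numbers from the comment above):

```python
import random

# Sure $5 vs. a 1% shot at $1000, per chooser.
ev_sure = 5.0
ev_gamble = 0.01 * 1000  # expected value per chooser: $10

# Pooled across 1000 choosers, the gamble's total concentrates near
# n * ev_gamble = $10,000, vs. exactly n * ev_sure = $5,000 for the
# sure thing -- which is the intuition the pooling framing targets.
random.seed(0)  # arbitrary seed; the simulation is purely illustrative
n = 1000
pool_winnings = sum(1000 for _ in range(n) if random.random() < 0.01)
```

With the charity as the common beneficiary, the pooled framing makes the expected-value comparison feel as obvious as it is.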
No. If you forecast that the price of gold will go up, and the price instead goes down, then being honest about your forecast loses you money. Prediction markets reward people for making accurate predictions. Whether those predictions were an accurate reflection of beliefs is irrelevant.
Most people consider causality to be a rather serious argument. If you're going to unilaterally declare certain lines of argument illegitimate, and then criticize people for failing to present a "legitimate" argument, and declaring that any opinions that disagree with you don't improve the discussion, that's probably going to piss people off.
You clearly expect estimator to agree that the other arguments are fallacious. And yet estimator clearly believes that zir argument is not fallacious. To assert that they are literally the same thing, that they are similar in all respects, is to assert that estimator's argument is fallacious, which is exactly the matter under dispute. This is begging the question. I have already explained this, and you have simply ignored my explanation.
All the similarities that you cite are entirely irrelevant. Simply noting similarities between an argument, and a differe...
Hopefully, I'm not just feeding the troll, but: just what exactly do you think "the sunk cost fallacy" is? Because it appears to me that you believe that it refers to the practice of adding expenses already paid to future expected expenses in a cost-benefit analysis, when in fact it refers to the opposite, of subtracting expenses already paid from future expected expenses.
"Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins"
Shouldn't that be "at most"?
What does that mean, "You're not going to just happen to be in one of the first twenty years"? There are people who have survived more than one billion seconds past their twenty-first birthdays. And each one, at one point, was within twenty seconds of their twenty-first birthday. What would you say to someone whose twenty-first birthday was less than twenty seconds ago who says "I'm not going to just happen to be in the first twenty seconds"?
Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.
"Species can't evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window."
Listing arguments that you find unconvincing, and simply declaring that you find your opponent's argument to be similar, is not a valid line of reasoning, isn't going to make anyone change their mind, and is kind of a dick move. This is, at its heart, simply begging the question: the similarity that you thin...
There was no "mockery", just criticism and disagreement. It's rather disturbing that you saying that criticism and disagreement is "not acceptable" has been positively received. And estimator didn't say that the argument is closed, only that zie has a solid opinion about it.
"Countries with a lot of specialization are richer, therefore, within a country, the richest people should be people who specialize."
::sigh::
You said "More like the first definition." The first definition is "to name, write, or otherwise give the letters, in order, of (a word, syllable, etc.)". Thus, I conclude that you are saying that it is impossible to name, write, or otherwise give the letters, in order, of the word "complexity". I have repeatedly seen people in this community talk of "verified debating", in which it is important to communicate with other people what your understanding of their statements is, and ask them whether that is accurate. And yet when I do that, with an interpretation that looks quite straightforward to me, I get downvoted, and your only response is "no", with no explanation.
"Everyone on this site obviously has an interest in being, on a personal level, more rational."
Not in my experience. In fact, I was downvoted and harshly criticized for expressing confusion at gwern posting on this site and yet having no apparent interest in being rational.