All of ThisSpaceAvailable's Comments + Replies

"Everyone on this site obviously has an interest in being, on a personal level, more rational."

Not in my experience. In fact, I was downvoted and harshly criticized for expressing confusion at gwern posting on this site and yet having no apparent interest in being rational.

"Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me")."

Those aren't really mutually exclusive. "Talking like that just to confuse his listeners is just how he talks." It could be an attribution not of any specific malice, but of generalized snootiness.

0Sable
True.

This may seem pedantic, but given that this post is on the importance of precision:

"Some likely died."

Should be

"Likely, some died".

Also, I think you should more clearly distinguish between the two means, such as by saying "sample average" rather than "your average", or by using x̄ and μ.

The whole concept of confidence is rather problematic: on the one hand, it's one of the most common statistical measures presented to the public; on the other, it's one of the most difficult concepts to understand.

What ma... (read more)

0Bobertron
I don't get this (and I don't get Benquo's OP either; I don't really know any statistics, only some basic probability theory). "The process has a 95% chance of generating a confidence interval that contains the true mean": I understand this to mean that if I run the process 100 times, 95 times the resulting CI contains the true mean. Therefore, if I look at a random CI among those 100, there is a 95% chance that the CI contains the true mean.
0Tyrrell_McAllister
Is it incorrect for a Bayesian to gloss this as follows? I could imagine a frequentist being uncomfortable with talk of the "chance" that the true mean (a certain fixed number) is between two other fixed numbers: "The true mean either is or is not in the CI. There's no chance about it." But is there a deeper reason why a Bayesian would also object to that formulation?
2waveman
Not just the public, but scientists and medical professionals have trouble with it. People tend to interpret frequentist statistics as if they were the Bayesian equivalents e.g. they interpret confidence intervals as Credible Intervals.
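A minimal simulation sketch of the point under discussion (not from the thread; the distribution parameters and sample sizes are arbitrary): the 95% belongs to the interval-generating procedure, not to any single interval.

```python
# ~95% of normal-theory confidence intervals, generated repeatedly,
# contain the true mean. Parameters here are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, sigma, n, trials = 10.0, 3.0, 50, 10_000

hits = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    # 95% t-interval from the sample mean and sample standard error
    lo, hi = stats.t.interval(0.95, df=n - 1,
                              loc=sample.mean(),
                              scale=stats.sem(sample))
    hits += lo <= true_mean <= hi

print(hits / trials)  # ~0.95: the *procedure* has the 95% property
```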

By how many orders of magnitude? Would you play Russian Roulette for $10/day? It seemed to me that implicit in your argument was that even if someone disagrees with you about the expected value, an order of magnitude or so wouldn't invalidate it. There's a rather narrow set of circumstances where your argument doesn't apply to your own situation. Simply asserting that you will sign up soon is far from sufficient. And note that many conditions necessitate further conditions; for instance, if you claim that your current utility/dollar ratio is ten times what... (read more)

0Jiro
Everyone plays Russian Roulette for $10 per day, assuming that probabilities lower than 1 out of 6 count as Russian Roulette. Just walking out of the house increases my chance of dying, never mind actually driving to some place that is not necessary for staying alive.
1The_Jaded_One
Back of the envelope I would say my chances of dying in the next 6 months and also being successfully cryopreserved (assuming I magically completed the signup process immediately) are about 1 in 10000. That trades off against using my time and money at a time when I'm short of both.
0The_Jaded_One
Since I was unemployed with no assets, I wasn't (until very recently, i.e. yesterday) eligible for any kind of personal loan. Mortality rate in your late 20s is low, and the fact that accidents, sudden deaths and murder are already very bad for cryo compounds that further. Then you have the problem that I'm not in the USA (I plan to eventually move, once my career is strong enough to score the relevant visa); being in the US is the best way to ensure a successful, timely suspension. If you are in Europe, you have to pay more for transport and you will be damaged more by the long journey, assuming you die unexpectedly in Europe. Well, obviously it is worth more to mitigate death if your death is more likely, especially when the kinds of ways you die when young are bad for your cryo chances.
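A rough back-of-the-envelope sketch using the 1-in-10,000 figure above; the cost and value-of-revival numbers are hypothetical placeholders, not claims from the thread.

```python
# EV sketch for "sign up now vs. later", using The_Jaded_One's figure:
# ~1 in 10,000 chance of dying AND being successfully preserved in 6 months.
p_preserved_death = 1 / 10_000
cost_6_months = 500.0          # hypothetical: dues + insurance for 6 months
value_if_revived = 10_000_000  # hypothetical: dollar-equivalent utility of revival

expected_benefit = p_preserved_death * value_if_revived  # $1000 under these numbers
print(expected_benefit > cost_6_months)  # whether signing up now "pays"
```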

"Also there are important risks that we are in simulation, but that it is created not by our possible ancestors"

Do you mean "descendants"?

0turchin
Yes, surely, thanks

What about after the program, if you don't get a job, or don't get a job in the data science field?

0The_Jaded_One
The deal I was given is that if you earn less than $40k for the next year, you get the whole program for free. If you earn $lots as a painter, porn star, film producer - whatever - you still pay your 10% of what you earn above $40k capped at $250k. But if you plan on having a very lucrative non-data science career in the next year, then why are you on the program?
0The_Jaded_One
No, it's not 1% of the bet. My income goes up in the future, meaning that the utility of money goes down. My mortality rate will go up over time (since I am young now), so the value of cryonics goes up.

They should have some statistics, even if they're not completely conclusive.

As I understand it, the costs are:

$1400 for lodging (commuting would cost even more)
$2500 deposit (not clear on the refund policy)
10% of next year's income (with the deposit going towards this)

I wouldn't characterize that as "very little". It's enough to warrant asking a lot of questions.

How would you characterize the help you got getting a job? Getting an interview? Knowing what to say in an interview? Having verifiable skills?

0The_Jaded_One
Well, they taught me R and they helped me (along with some kind alumni) to go a bit further with neural networks than I otherwise would have. Having spent time hacking away at neural networks allowed me to pass the interview at the job I just got. Knowing R caused me to get another generous offer that I have had to turn down. Interview skills training with Robert was valuable, especially at the beginning. Robert seems to have a fairly sound understanding of how to optimise the process.
0The_Jaded_One
Well, that's only a cost if (as in my case) you had to keep your normal home empty and thereby double-pay accommodation for that period. Also, some people on the course were local. I was told that this is fully refundable if you don't like the course within the first week, though I am not sure they would extend that to anyone (but you can ask).

Are your finances so dire that if someone offered you $1/day in exchange for playing Russian Roulette, you would accept? If not, aren't you being just as irrational as you are accusing those who fail to accept your argument of being?

0The_Jaded_One
No, that doesn't work if I expect to sign up soon.

You might want to consider what the objective is, and whether you should have different resources for different objectives. Someone who's in a deeply religious community who would be ostracized if people found out they're an atheist would need different resources than someone in a more secular environment who simply wants to find other atheists to socialize with.

I think I should also mention your posting a URL but not making it clickable. You should put anchors in your site. For instance, there should at the very least be anchors at "New atheists"... (read more)

"Just a "Survival Guide for Atheists" "

Are you referring to the one by Hemant Mehta?

"not-particularly-deep-thinking theist."

Typo?

I suppose this might be a better place to ask than trying to resurrect a previous thread:

What kind of statistics can Signal offer on prior cohorts? E.g. percentage with jobs, percentage with jobs in data science field, percentage with incomes over $100k, median income of graduates, mean income of graduates, mean income of employed graduates, etc.? And how do the different cohorts compare? (Those are just examples; I don't necessarily expect to get those exact answers, but it would be good to have some data and have it be presented in a manner that is at leas... (read more)

0VipulNaik
One relevant consideration in such an evaluation is that Signal's policies with respect to various things (like percentage of income taken, initial deposit, length of program) may have changed since the program's inception. Of course, the program itself has changed since it started. Therefore, feedback or experiences from students in initial cohorts needs to be viewed in that light.

Disclosure: I share an apartment with Jonah Sinick, co-founder of Signal. I have also talked extensively about Signal with Andrew J. Ho, one of its key team members, and somewhat less extensively with Bob Cordwell, the other co-founder.

ETA: I also conducted a session on data science and machine learning engineering in the real world (drawing on my work experience) with Signal's third cohort on Saturday, August 20, 2016.
2JonahS
Hello! I'm a cofounder of Signal Data Science. Because our students have come into the program from very heterogeneous backgrounds (ranging from high school dropout to math PhD with years of experience as a software engineer), summary statistics along the lines that you're looking for are less informative than might seem to be the case prima facie. In particular, we don't yet have meaningfully large sample of students who don't fall into one of the categories of (i) people who would have gotten high paying jobs anyway and (ii) people who one wouldn't expect to have gotten high paying jobs by now, based on their backgrounds. If you're interested in the possibility of attending the program, we encourage you to fill out our short application form. If it seems like it might be a good fit for you, we'd be happy to provide detailed answers to any questions that you might have about job placement.
0The_Jaded_One
Well I got a job out of it. As for statistics, they're new enough that you'd want to wait a bit. IMO Signal is worth the ~very little that you have to pay for it unless you already are getting job offers or already are very good with R (but then why do you want a bootcamp?)
0The_Jaded_One
I can answer the deposit one: Signal told me personally that they'd refund it in the first week if I wanted to quit due to it being a bad program. In reality it was good. I cannot guarantee that they'd extend this to anyone but you can ask.

"We're planning another one in Berkeley from May 2nd – July 24th."

Is that June 24th?

0JonahS
Yes, that was supposed to be June 24th! We have a third one from July 5th – August 24th. There are still spaces in the program if you're interested in attending.

Isn't that fraud? That is, if you work for a company that matches donations, and I ask to give you money for you to give to MIRI, aren't I asking you to defraud your company?

6ChristianKl
There are a lot of frauds built on the idea that the recipient of the fraud also engages in an illegal action. If the victims of fraud think they themselves acted illegally, it's less likely that they will go to the authorities. That's why we have the saying "You can't cheat an honest man".
2Silver_Swift
Correct, but it is a kind of fraud that is hard to detect and easy to justify to oneself as being "for the greater good" so the scammer is hoping that you won't care.

It does mean that not-scams should find ways to signal that they aren't scams, and the fact that something does not signal not-scam is itself strong evidence of scam.

0Tem42
Surely scammers will be more motivated to find good signals, and will have more opportunity to experiment with what works and what does not. Someone effectively signaling that they are a non-scam should be a hallmark of a scam.... which is why smart people like us need a long thread like this to explain to us how the scam works.
0Silver_Swift
It might not be easy to figure out good signals that can't be replicated by scammers, though. More importantly, and what I think MarsColony_in10years is getting at, even if you can find hard-to-copy signals, they are unlikely to be without costs of their own, and it is unfortunate that scammers are forcing these costs on legitimate charities.

Isn't the whole concept of matching donations a bit irrational to begin with? If a company thinks that MIRI is a good cause, they should give money to MIRI. If they think that potential employees will be motivated by them giving money to MIRI, wouldn't a naive application of economics predict that employees would value a salary increase of a particular amount at a utility equal to or greater than the utility of that same amount being donated to MIRI? An employee can convert a $1000 salary increase into a $1000 MIRI donation, but not the reverse. Either the company is being irrational, or it is expecting its employees to be irrational.

0ChristianKl
There's no reason to call a practice irrational simply because you don't understand it and it's not explained by a naive application of classical economics. It seems to be good for branding. We know that spending money on others makes people happier. Employees who donate are happier than those who don't. That's valuable to the company.
4gjm
In some jurisdictions it may be cheaper for the company to donate a given amount to charity than to pay it to an employee (because of tax rules intended to incentivize charitable donations). The company may value both employee-motivation and helping charities. The company may value being seen as the sort of company that helps charities.
0CCC
An employee can convert a $1000 increase into a $1000 MIRI donation, but that requires the employee to get up and do something (i.e. log into his bank account and do a transfer). There's a chance for procrastination, laziness, and mental inertia to prevent that donation; employees who really want MIRI to get the donation might appreciate the company handling the actual getting up and doing it part (which the company can do rather more efficiently - making one big transfer instead of hundreds of little ones, with a corresponding decrease in both individual effort and bank fees). Also, since we're talking potential employees, then it might be a strategic move by the company to more strongly attract potential employees who strongly value MIRI donations and reduce the company's attraction to potential employees who do not value MIRI donations.

Shouldn't we first determine whether the amount of effort needed to figure out the costs of the tests is less than the expected value of (cost of doing tests - expected gain), conditional on the cost of doing tests exceeding the expected gain?

0Jiro
No, because you don't want to get into an infinite regress about figuring out the cost of figuring out the cost, etc. So you need to have a general policy about when to stop the regress, and ensure that your policy is good in most of the cases you'll actually run into in practice, while acknowledging that it may not be optimal 100% of the time.

And if this is presented as some sort of "competition" to see whether LW is less susceptible than the general populace, then if anyone has fallen for it, that can further discourage them from reporting it. A lot of this is exploiting the banking system's lack of transparency as to just how "final" a transaction is; for instance, if you deposit a check, your account may be credited even if the check hasn't actually cleared. So scammers take advantage of the fact that most people aren't familiar with all the intricacies of banking, and think that when their account has been credited, it's safe to send money back.

It is somewhat confusing, but remember that surjectivity is defined with respect to a particular codomain; a function is surjective if its range is equal to its codomain, and thus whether it's surjective depends on what its codomain is considered to be. Every function maps its domain onto its range. "f maps X onto Y" means that f is surjective with respect to Y. So, for instance, the exponential function maps the real numbers onto the positive real numbers: it's surjective with respect to the positive real numbers. Saying "the exponential... (read more)
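In symbols (the standard definition, added here for reference):

```latex
% "f maps X onto Y" <=> f is surjective with respect to the codomain Y
f\colon X \to Y \text{ is surjective} \iff \forall y \in Y \;\exists x \in X : f(x) = y
% e.g. \exp\colon \mathbb{R} \to \mathbb{R}_{>0} is surjective,
% but \exp\colon \mathbb{R} \to \mathbb{R} is not.
```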

So, we have

  1. We don't have both “Either K or A” and “Either Q or A”
  2. Therefore, we either have “Neither K nor A” or “Neither Q nor A”
  3. Since both of the possibilities involve “no A”, there can be no A.

Your post seems to be a rather verbose way of showing something that can be shown in three lines. I guess you're trying to illustrate some larger framework, but it's rather unclear what it is or how it adds anything to the analysis, and you haven't given the reader much reason to look into it further.

The reason that someone might think an Ace would be a good... (read more)

1ScottL
I have rewritten the header to this post to make it clear that you should read the post in main first and only look at this one if it is required. Technically the problem is very simple, but it does frequently fool people. If you write out the logic of it like in the above post, then people will very easily get the right answer. This post is meant to be a verbose explanation of the solution for people who don't believe that you should choose the king. You can read this post if you want to know why people get fooled by this simple problem. This is the example as it is written in the academic literature:

Only one statement about a hand of cards is true:

1. There is a King or Ace or both.
2. There is a Queen or Ace or both.

Which is more likely, King or Ace? Most people say Ace.
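A brute-force check of the three-line derivation above (a minimal sketch; hands are modeled only by the presence or absence of each rank):

```python
# Enumerate all hands by presence/absence of King, Queen, Ace, and keep
# only those where exactly one of the two statements is true.
from itertools import product

consistent = []
for king, queen, ace in product([False, True], repeat=3):
    s1 = king or ace   # "There is a King or Ace or both"
    s2 = queen or ace  # "There is a Queen or Ace or both"
    if s1 != s2:       # exactly one statement is true
        consistent.append((king, queen, ace))

print("hands with a King:", sum(k for k, q, a in consistent))  # 1 (of 2)
print("hands with an Ace:", sum(a for k, q, a in consistent))  # 0
```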

I think that the first step is to unpack "annihilate". How does one "annihilate" a universe? You seem to be equivocating between destroying a universe, and putting it in a state inhospitable to consciousness.

It also seems to me that once we bring the anthropic principle in, that leads to Boltzmann brains.

0PeterCoin
So yes, annihilation refers specifically to any process that would, at light speed, render the universe lethal to life as we know it. I think of it sort of like living on a bubble that's always bursting (in timelines we don't observe). There's something left over, but it's pretty unrecognizable. Any account of the origin of the universe is probably going to have some anthropic consideration, so Boltzmann brains are not a unique problem. But I think the fragile universe hypothesis may be an asset in solving it. Conventional cosmology calls for a short-lived active universe with an infinitely long-lived remnant after heat death, whereas in the fragile universe hypothesis that remnant is dwarfed in scale by the outcomes of these shattering events, which may well create intelligences that don't suffer the Boltzmann pathology.

Upvote for content, but I think that there's a typo in your second sentence:

"Variable schedules maximize what is known as resistance to extinction, the probability a behavior will decrease in frequency goes down."

Perhaps a semicolon instead of a comma, or "as frequency of rewards ..." instead of "in frequency ...", was intended?

0[anonymous]
Fixed

"show that one serving of Soylent 1.5 can expose a consumer to a concentration of lead that is 12 to 25 times above California's Safe Harbor level for reproductive health"

Concentration, or amount? It seems to me that that is a rather important distinction, and it is worrying that As You Sow doesn't seem to recognize it.

I'm not sure you understand what "iid" means. It means that each sample is drawn from the same distribution, and each sample is independent of the others. The term "iid" isn't doing any work in your statement; you could just say "it's not from the distribution you really want to sample", and it would be just as informative.

"This isn't an example of overfitting, but of the training set not being iid."

Upvote for the first half of that sentence, but I'm not sure how the second applies. The set of tanks is iid; the issue is that the creators of the training set allowed tank/not-tank to be correlated with an extraneous variable. It's like having a drug trial where the placebos are one color and the real drug is another.

0Houshalter
I guess I meant it's not iid from the distribution you really wanted to sample. The hypothetical training set is all possible pictures of tanks, but you just sampled the ones that were taken during daytime.
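A minimal sketch of that failure mode (the synthetic data and feature names are invented for illustration): a classifier trained where the label is confounded with brightness scores well in training, then collapses on deployment data where the confound is gone.

```python
# A "tank detector" that actually learns brightness.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, tanks_in_daylight):
    labels = rng.integers(0, 2, n)  # 1 = tank, 0 = no tank
    if tanks_in_daylight:
        # confound: tanks were photographed by day, so label predicts brightness
        brightness = labels + rng.normal(0, 0.3, n)
    else:
        # deployment: brightness is independent of the label
        brightness = rng.normal(0.5, 0.3, n)
    tank_signal = 0.2 * labels + rng.normal(0, 1.0, n)  # weak genuine feature
    return np.column_stack([brightness, tank_signal]), labels

X_train, y_train = make_images(1000, tanks_in_daylight=True)
X_test, y_test = make_images(1000, tanks_in_daylight=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # high: brightness works
print("test accuracy:", clf.score(X_test, y_test))     # near chance: confound gone
```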

I realize that no analogy is perfect, but I don't think your sleeper cell hypothetical is analogous to AI. It would be a more accurate analogy if someone were to point out that, gee, a sleeper cell would be quite effective, and it's just a matter of time before the enemy realizes this and establishes one. There is a vast amount of Knightian uncertainty that exists in the case of AI, and does not exist in your hypothetical.

-1ahbwramc
Well, it depends on what you mean, but I do think that almost any AGI we create will be unfriendly by default, so to the extent that we as a society are trying to create AGI, I don't think it's exaggerating to say that the sleeper cell "already exists". I'm willing to own up to the analogy to that extent. As for Knightian uncertainty: either the AI will be an existential threat, or it won't. I already think that it will be (or could be), so I think I'm already being pretty conservative from a Knightian point of view, given the stakes at hand. Worst case is that we waste some research money on something that turns out to be not that important. (Of course, I'm against wasting research money, so I pay attention to arguments for why AI won't be a threat. I just haven't been convinced yet.)

You paid a karma toll to comment on one of my most unpopular posts yet

My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.

to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"?

I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as y... (read more)

-4OrphanWilde
You could be correct there. There's a conditional in the sentence that specifies "everybody". "So if I'm arguing against a straw man..."

I don't think I -am- arguing against a straw man. As I wrote directly above that, I think your understanding is drawn entirely from the examples you've seen, rather than the definition, as written on various sites - you could try Wikipedia, if you like, but it's what I checked to verify that the definition I used was correct when you suggested it wasn't. I will note that the "Sunk Cost Dilemma" is not my own invention, and was noted as a potential issue with the fallacy as it pertains to game theory long before I wrote this post - and, indeed, shows up in the aforementioned Wikipedia. I can't actually hunt down the referenced paper, granted, so whether or not the author did a good job elaborating the problem is a matter I'm uninformed about.

"Illogical" and "Absurd" are distinct, which is what permits common fallacies in the first place.

Are you attempting to dissect what went wrong with this post? Well, initially, the fact that everybody fought the hypothetical. That was not unexpected. Indeed, if I include a hypothetical, odds are it anticipates being fought. It was still positive karma at that point, albeit modest. The negative karma came about because I built the post in such a way as to utilize the tendency on Less Wrong to fight hypotheticals, and then I called them out on it in a very rude and condescending way, and also because at least one individual came to the conclusion that I was actively attempting to make people less rational.

Shrug. It's not something I'm terribly concerned with, on account that, in spite of the way it went, I'm willing to bet those who participated learned more from this post than they otherwise would have.

I'll merely note that your behavior changed. You shifted from a hit-and-run style of implication to over-specific elaboration and in-depth responses. This post appears designed to prove t

The set of possible Turing Machines is infinite. Whether you consider that to satisfy your personal definition of "seen" or "in reality" isn't really relevant.

If you think that everyone is using a term for something other than what it refers to, then you don't understand how language works. And a discussion of labels isn't really relevant to the question of whether it's a straw man. Also, your example shows that what you're referring to as a sunk cost fallacy is not, in fact, a fallacy.

-3OrphanWilde
Wait. You paid a karma toll to comment on one of my most unpopular posts yet to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"? Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now.

Let's pull up the definition you offered. You're not even getting the -populist- definition of the fallacy right. Your version, as-written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5; a negative cost is a positive inlay, which means: Yay, free money?

Why didn't I bring that up before? Because I'm not here to score points in an argument. Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument, a trait which I think is overemphasized and over-rewarded on Less Wrong. I can't fix that, but I can express my disdain for the behavior: Your games of trivial social dominance bore me.

I believe it's your turn. You're slated to deny that you're playing any such games. Since I've called your turn, I've changed it, of course; it's a chaotic system, after all. I believe the next standard response is to insult me. Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do, and then to say that by calling attention to it, I've left you in confusion as to what game you're even supposed to be playing against me.

We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?

(a) Prison operators are not currently incentivized to be experts in data science.
(b) Why? And will that fix things? There are plenty of examples of industries taking advantage of vulnerabilities without those vulnerabilities being fixed.
(c) How will it be retrained? Will there be a "We should retrain the model" lobby group, and will it act faster than the prison lobby?

Perhaps we should have a futures market in recidivism. When a prison gets a new prisoner, they buy the associated future at the market rate, and once the prisoner has been out of... (read more)

2lululu
re: futures market in recidivism - http://freakonomics.com/2014/01/24/reducing-recidivism-through-incentives/

An example of a sense would be to define some quantification of how good an algorithm is, and then show that a particular algorithm has a large value for that quantity, compared to SI. In order to rigorously state that X approaches Y "in the limit", you have to have some index n and some metric M such that |M(X_n) - M(Y_n)| → 0. Otherwise, you're simply making a subjective statement that you find X to be "good". So, for instance, if you can show that the loss in utility from using your algorithm rather than SI goes to zero as the size of the dataset goes to infinity, that would be an objective sense in which your algorithm approximates SI.
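Written out, the convergence criterion being requested is:

```latex
\lim_{n \to \infty} \bigl| M(X_n) - M(Y_n) \bigr| = 0
% where n indexes e.g. dataset size, and M is the chosen quality metric
```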

2jacob_cannell
It should be obvious that SGD over an appropriately general model - with appropriate random inits and continuous restarts, keeping the max-score solution found - will eventually converge on the global optimum, and will do so in expected time similar to or better than any naive brute-force search such as SI. In particular, SGD is good at exploiting any local smoothness in the solution space.

You can't do an exhaustive search on an infinite set.

1Lumifer
I haven't seen any infinite sets in reality.

Consequentialist thinking has a general tendency to get one labeled an asshole.

e.g.

"Hey man, can you spare a dollar?" "If I did have a dollar to spare, I strongly doubt that giving it to you would be the most effective use of it." "Asshole."

Although I think that it's dangerous to think that you can accurately estimate the cost/benefit of tact; I think most people underestimate how much effect it has.

0[anonymous]
Except if the priority of consequences is ranked as (1) prevent x-risk, (2) be popular, and you act accordingly :)
0chaosmage
I agree and you shouldn't be downvoted. The flip side is that if you're not consequentialist, consequentialists will label you a fool. When I'm labeling myself, "fool" feels less punishing than "asshole", but I think when it's coming from others I'd rather look like an asshole than like a fool. I do wonder how much that is an influence on my consequentialist leanings.

There's a laundry section, with detergent, fabric softeners, and other laundry-related products. I don't think the backs generally say what the product is, and even if they do, that's not very useful. And as I said, most laundry brands have non-detergent products. Not labeling detergent as detergent trains people to not look for the "detergent" label, which means that they don't notice when they're buying fabric softener or another product.

0bortels
Actually - I took a closer look. The explanation is perhaps simpler. Tide doesn't make a stand-alone fabric softener. Or if they do, Amazon doesn't seem to have it? There's Tide, and Tide with Fabric Softener, and Tide with a dozen other variants - but nothing that's not detergent-plus. So - no point in differentiating. The little ad-man in my head says "We don't sell mere laundry detergent - we sell Tide!" To put it another way - did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So - the concern is perhaps unfounded.

As I said, that is not what the sunk cost fallacy is. If you've spent $100, and your expected net returns are -$50, then the sunk cost fallacy would be to say "If I stop now, that $100 will be wasted. Therefore, I should keep going so that my $100 won't be wasted."

While it is a fallacy to just add sunk costs to future costs, it's not a fallacy to take them into account, as your scenario illustrates. I don't know of anyone who recommends completely ignoring sunk costs; as far as I can tell you are arguing against a straw man in that sense.

Also, it's "i.e.", rather than "i/e".

0OrphanWilde
Taking them into account is exactly what the sunk cost fallacy is; including sunk costs with prospective costs for the purposes of making decisions. I think you confuse the most commonly used examples of the sunk cost fallacy with the sunk cost fallacy itself. (And it would be e.g. there, strictly speaking.) ETA: So if I'm arguing against a straw man, it's because everybody is silently ignoring what the fallacy actually refers to in favor of something related to the fallacy but not the fallacy entire.

What's the deal with laundry detergent packaging? For instance, take a look at this: http://news.pg.com/sites/pg.newshq.businesswire.com/files/image/image/Tide_Liquid_Detergent.jpg Nowhere on the package does it actually say it's detergent! I guess they're just relying on people knowing that Tide is a brand of detergent? Except that Tide also makes other products, such as fabric softener. And it's not just Tide:
http://www.freestufffinder.com/wp-content/uploads/2014/03/all-laundry.jpg
http://dgc.imageg.net/graphics/product_images/pDGC1-10603813v380.jpg
htt... (read more)

0Manfred
I dunno, the container I have says "detergent" on the front 3 times. In fact I think all of your pictures other than the Tide one contain "Detergent" in small print after the brand name, or at the bottom of the label.
5Alicorn
I had this problem with soap for a while (there was a "Dove isn't soap!" campaign that didn't say what it... like... was... and I switched to Ivory because I wanted soap.)
2bortels
There's a label on the back as well with details. The front label is a billboard, designed to get your attention and take advantage of brand loyalty, so yes - you are expected to know it's detergent, and they are happy to handle the crazy rare edge-case person who does not recognize the brand. I suspect they also expect the supermarket you buy it at to have it in the "laundry detergents" section, likely with labels as well, so it's not necessary on the front label.

Suppose we have a set S of n elements, and we ask people to memorize sequences of these elements, and we find that people can generally easily memorize sequences of length k (for some definition of "generally" and "easily"). If we then define a function f(S) := k log n (the information content, in bits when the log is base 2, of a length-k sequence over an n-element alphabet), how will f depend on S? Have there been studies on this issue?

1[anonymous]
Why k log n? I imagine n would be largely independent of k, so f(S) would become arbitrarily large just by using bigger and bigger sets.
0[anonymous]
Sorry, this was a useless post, so now it's gone.

Estimator asked in what sense SI is approximated, not, given a sense, how SI is approximated in that sense. Can you give a metric for which the value is close to SI's value?

0Houshalter
I don't really understand the question, or what "sense" means in this context. Are you asking how close the approximation is to the ideal? I would say that depends on the amount of computing power available, and that it approaches the ideal in the limit. But it gives reasonable answers on realistic computers, whereas SI does not. There is also some loss based on the information required to build a finite Turing machine of the right size, as opposed to the infinite number of other structures you can build with logic gates; e.g. a machine that is exactly like a finite Turing machine, but the 15th memory cell is corrupted, etc. I don't think this problem is unsolvable, though. There are, for example, Neural Turing Machines, which give it access to an infinite differentiable memory.

Suppose you were given two options, and told that whatever money results would be given to an EA charity. Would you find it difficult to choose a 1% shot at $1000 over a sure $5? What if you were told that there are a thousand people being given the same choice? What if you're not told how the gamble turns out? What if all the gambles are put in a pool, and you're told only how many worked out, not whether yours did?
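For concreteness, the implicit arithmetic in the first question (assuming the intended comparison is expected value):

```latex
\mathbb{E}[\text{gamble}] = 0.01 \times \$1000 = \$10
\;>\;
\$5 = \mathbb{E}[\text{sure thing}]
```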

No. If you forecast that the price of gold will go up, and the price instead goes down, then being honest about your forecast loses you money. Prediction markets reward people for making accurate predictions. Whether those predictions were an accurate reflection of beliefs is irrelevant.

-1Lumifer
Pretty much everything in life "reward[s] people for making accurate predictions". That's not the issue. The problem is that to "supply accurate information" you need to know what is "accurate" ex ante and you don't. At the time you submit your bet to a prediction market you're operating on the basis of expectations -- you have no access to the Truth about the outcome, you only have access to your own beliefs. Accordingly, you don't tell the prediction market what is the correct choice, you tell it what you believe is the correct choice. Prediction markets aggregate beliefs, not truth values.

Most people consider causality to be a rather serious argument. If you're going to unilaterally declare certain lines of argument illegitimate, then criticize people for failing to present a "legitimate" argument, and declare that any opinions that disagree with you don't improve the discussion, that's probably going to piss people off.

You clearly expect estimator to agree that the other arguments are fallacious. And yet estimator clearly believes that zir argument is not fallacious. To assert that they are literally the same thing, that they are similar in all respects, is to assert that estimator's argument is fallacious, which is exactly the matter under dispute. This is begging the question. I have already explained this, and you have simply ignored my explanation.

All the similarities that you cite are entirely irrelevant. Simply noting similarities between an argument, and a differe... (read more)

Hopefully, I'm not just feeding the troll, but: just what exactly do you think "the sunk cost fallacy" is? Because it appears to me that you believe that it refers to the practice of adding expenses already paid to future expected expenses in a cost-benefit analysis, when in fact it refers to the opposite: subtracting expenses already paid from future expected expenses.

0OrphanWilde
The Sunk Cost Fallacy is the fallacy of considering sunk costs (expenses already paid) when calculating expected returns. I/e, if I've already spent $100, and my expected returns are $50, then it would be the sunk cost fallacy to say it is no longer worth continuing, since my expected return is negative - I should instead, to avoid the fallacy, only consider the -remaining- expenses to get that return. Which is to say, to avoid the fallacy, sunk costs must be ignored.

The post is about the scenario when prior-cost insensitivity (avoiding the sunk cost fallacy) opens you up to getting "mugged", a situation referred to as the Sunk Cost Dilemma, of which surprisingly little has been written; one hostile agent can extract additional value from another, sunk-cost-insensitive, agent by adding additional costs at the back-end.

(There was no "trolling". Indeed, I wasn't even tricking anybody - my "mugging" of other people was conceptual, referring to the fact that any "victim" agent who continued to reason the way the people here were reasoning would continue to get mugged in a real-life analogue, again and again for each time they refused to update their approach or understanding of the problem.)
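A minimal sketch of the two decision rules being contrasted (the $100 spent and $50 expected return come from the comment; the $20 remaining cost is a hypothetical figure added for illustration):

```python
# Decision rule comparison for the example above.
sunk_cost = 100.0        # already spent, unrecoverable
remaining_cost = 20.0    # hypothetical: what finishing would still cost
expected_return = 50.0   # payoff if the project is finished

# Rule that avoids the fallacy: sunk costs are ignored entirely.
continue_correct = expected_return > remaining_cost                 # True: 50 > 20

# Rule committing the fallacy (as described above): count sunk costs,
# making the overall return look negative.
continue_fallacious = expected_return > remaining_cost + sunk_cost  # False: 50 < 120

print(continue_correct, continue_fallacious)
```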

"Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins"

Shouldn't that be "at most"?

2Richard_Kennaway
Either could be used, with the same meaning. Humans are related at least as closely as that; humans are related at most as distantly as that. If you want the text to be more explicit, it is not enough to substitute one word for the other. One would have to include an indication of the direction one is facing, an indication of which side of that single point human relatedness is being said to lie on.
0chaosmage
Maybe. But the number is wrong anyway.

What does that mean, "You're not going to just happen to be in one of the first twenty years"? There are people who have survived more than one billion seconds past their twenty-first birthdays. And each one, at one point, was within twenty seconds of their twenty-first birthday. What would you say to someone whose twenty-first birthday was less than twenty seconds ago who says "I'm not going to just happen to be in the first twenty seconds"?

2DanielLC
Yes, but at many more points they were not. I'd tell them that they're even less likely to hallucinate evidence that suggests they are. Every day, at some point it's noon, to the second. If you looked at your watch and it had a second hand, and it was noon to the second, you'd still find that a pretty big coincidence, wouldn't you?

Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.

"Species can't evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window."

Listing arguments that you find unconvincing, and simply declaring that you find your opponent's argument to be similar, is not a valid line of reasoning, isn't going to make anyone change their mind, and is kind of a dick move. This is, at its heart, simply begging the question: the similarity that you thin... (read more)

1Fivehundred
Sure, but I found the analogy useful because it is literally the exact same thing. Both draw a line between a certain mechanism and a broader principle with which it appears to clash if the mechanism were applied universally. Both then claim that the principle is very well established and that they do not need to condescend to address my theory unless I completely debunk the principle, even though the theory is very straightforward. I was sort of hoping that he would see it for himself, and do better. This is a rationality site after all; I don't think that's a lot to ask.

There was no "mockery", just criticism and disagreement. It's rather disturbing that you saying that criticism and disagreement is "not acceptable" has been positively received. And estimator didn't say that the argument is closed, only that zie has a solid opinion about it.

-1Fivehundred
He didn't bother with a serious argument, only an appeal to "causality." I don't go around posting my opinions on random threads unless I really can improve the discussion.

"Countries with a lot of specialization are richer, therefore, within a country, the richest people should be people who specialize."

::sigh::

0[anonymous]
No. Specialization, as such, increases efficiency both on the individual level and collectively.

You said "More like the first definition." The first definition is "to name, write, or otherwise give the letters, in order, of (a word, syllable, etc.)". Thus, I conclude that you are saying that it is impossible to name, write, or otherwise give the letters, in order, of the word "complexity". I have repeatedly seen people in this community talk of "verified debating", in which it is important to communicate with other people what your understanding of their statements is, and ask them whether that is accurate. And yet when I do that, with an interpretation that looks quite straightforward to me, I get downvoted, and your only response is "no", with no explanation.
