This is what I meant by something being a proven truth: within the given set of rules, one can find outcomes which are axiomatically impossible or necessary. The process itself may be random, but calling it random when something impossible didn't happen seems odd to me. The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect is the heart of the issue.
The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect is the heart of the issue.
If 1 isn't quite certain then neither is 0 (if something happens with probability 1, then the probability of it not happening is 0). It's one of those things that pops up when dealing with infinity.
It's best illustrated with an example. Let's say we play a game where we flip a coin and I pay you $1 if it's heads and you pay me $1 if it's tails. With probability 1, one of us will eventually go broke (see Gambler's ruin). It's easy to think of a sequence of coin flips where this never happens; for example, one where heads and tails alternate forever. The theory holds that such a sequence occurs with probability 0. Yet this does not make it impossible.
It can be thought of as the result of a limiting process. If I looked at sequences of N coin flips, counted the ones where no one went broke, and divided this by the total number of possible sequences, then as I let N go to infinity this ratio would go to zero. This event occupies a region with area 0 in the sample space.
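A quick simulation can make this limiting ratio tangible. The sketch below is my own illustration, not anything from the discussion: it assumes, arbitrarily, that each player starts with a $10 bankroll, and estimates the fraction of length-N flip sequences in which neither player goes broke.

```python
import random

def fraction_never_broke(n_flips, bankroll=10, trials=20_000, seed=0):
    """Estimate the fraction of length-n_flips fair-coin games in which
    neither player's $bankroll is exhausted."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        balance = 0  # my net winnings; I go broke at -bankroll, you at +bankroll
        for _ in range(n_flips):
            balance += 1 if rng.random() < 0.5 else -1
            if abs(balance) >= bankroll:
                break
        else:
            # for/else: runs only if the inner loop never hit the break,
            # i.e. no one went broke in this sequence
            survived += 1
    return survived / trials

for n in (10, 100, 1000, 10_000):
    print(n, fraction_never_broke(n))
```

As N grows, the estimated fraction shrinks toward zero, which is exactly the limiting ratio described above: "no one ever goes broke" occupies a vanishing share of the sample space, yet no individual such sequence is impossible.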
Yeah. But that's not really the point. The reason low carb diets lead to weight loss is because they restrict calories. I'm aware of many dieting tricks that can assist, but a calorie deficit must be created in order for weight to be lost.
This may seem self-evident, but there is still debate about it. Carbs are not magically evil; they are just a macronutrient that happens to supply a large share of the calories in a typical Western diet. That's it. No magic.
If you don't eat after 6pm, never eat dessert or fast food, eat a larger breakfast, have a salad or X raw vegetables every day, drink X water every day: these can all help you lose weight. But there is nothing magical, or even scientific, about any of these tactics. It's all stuff we've known for 100 years.
Likewise, if you walk 2 miles a day, every day, for a year, you'll burn X calories that will lead to X weight loss. It's just math.
My point was to specifically disparage diets like the Atkins Diet. It does nothing apart from restricting calories, yet libraries have been written about the magic of how and why it works. It's all just noise aimed at selling books, etc. to people who are looking for help.
The reason low carb diets lead to weight loss is because they restrict calories. I'm aware of many dieting tricks that can assist, but a calorie deficit must be created in order for weight to be lost.
No one in this thread is disputing that you need a calorie deficit to lose weight. My contention is that this is merely the beginning, not the end. Let's refer to the following passage from the linked article:
Translation of our results to real-world weight-loss diets for treatment of obesity is limited since the experimental design and model simulations relied on strict control of food intake, which is unrealistic in free-living individuals.
A diet should be realistic for free-living individuals. An obese person who wants to lose 50+ lb. could expect to be at it for the better part of a year. A diet that leaves you hungry all day is doomed to fail: it's unrealistic to expect pure willpower to last that long. That is the point of my post about hunger control. Disregarding it or dismissing it as a mere trick is to ignore that a very important part of dieting is making sure the dieter sticks to the diet.
My point was to specifically disparage diets like the Atkins Diet. It does nothing apart from restricting calories, yet libraries have been written about the magic of how and why it works. It's all just noise aimed at selling books, etc. to people who are looking for help.
Quite the contrary. The Atkins Diet is not just about losing the weight. It also includes a plan to keep it off. Maintaining weight loss is generally harder than losing the weight in the first place. Yo-yo dieting is a very real problem. The problem with naive calorie restriction is that it doesn't instill good eating habits that can be maintained once the weight-loss period ends. The Atkins Diet addresses this and is designed to ease one into eating habits that will maintain the weight loss.
This article is interesting to me because I have this belief that weight loss is basically about eating less (and exercising more). And some extremely high percentage of everything said about dieting, etc. beyond that is just irrational noise. And that the diets that work don't work because of the reasons their proponents say they work, but only because they end up restricting calories as a byproduct.
this study is NOT a blow to low-carb dieting, which can be quite effective due to factors such as typically higher protein and more limited junk food options.
This line is the funniest to me. This is why I think low carb diets work: if you eliminate the primary source of calories in a person's diet (carbs, which can be 50%+ of many people's diets), they will eat significantly fewer calories overall by restricting themselves to only protein and fat. But people have, instead, made up all sorts of fancy, science-y sounding reasons why carbs were evil.
Hunger is the big diet killer. It's very hard to maintain a diet if you walk around hungry all day and eat meals that fail to sate your appetite. Losing weight is a lot easier once you find a way to manage your hunger. One of the strengths of the low-carb diet is that fat and protein are a lot better than carbs at curbing hunger.
So how to solve the problem of scientific misconduct? I don't have any good answers. I can think of things like "Stop rewarding people for mere number of publications" and "Gauge the actual impact of science rather than empty metrics like number of citations or impact factor." But I can't think of any good way to do these things. Some alternatives, like using social media to gauge the importance of a scientific discovery, would almost certainly lead to a worse situation than we have now.
If you go up the administration, at some point you reach someone who simply isn't equipped to evaluate a scientist's work. This may even just be the department head not being familiar with some subfield. Or it might be the Dean, trying to evaluate the relative merits of a physicist and a chemist. It's the rare person who knows enough about both fields to render good judgment. That's where metrics come in. It's a lot easier if you can point to some number as the basis for a decision. Even if it's agreed that number of publications or impact factor aren't good numbers to use, they're still convenient.
As someone who doesn't know much beyond basic statistics, in what sense are 0 and 1 probabilities? Isn't it just axiomatic truth at that point? As far as I understand it, saying zero and one are probabilities is just saying 'certain' or 'impossible'. Situations where an event will definitely or definitely not occur don't seem consistent with the idea of randomness, which I've understood probability to revolve around.
I suppose the alternative would be that we'd have to assume every mathematical proof has infinite evidence if we wanted to get anywhere productive; after all, axioms are assumed to be true. It doesn't make much sense to need evidence in that scenario, except perhaps for the probability of error and mistake? That isn't particularly calculable and would actually change from person to person.
Using one and zero makes sense to me as a matter of assumed or proven truths, but I'm still unsure how that makes it a probability.
Situations where an event will definitely or definitely not occur don't seem consistent with the idea of randomness, which I've understood probability to revolve around.
"Event" is a very broad notion. Let's say, for example, that I roll two dice. The sample space is just a collection of pairs (a, b) where "a" is what die 1 shows and "b" is what die 2 shows. An event is any sub-collection of the sample space. So, the event that the numbers sum to 7 is the collection of all such pairs where a + b = 7. The probability of this event is simply the fraction of the sample space it occupies.
If I rolled eight dice, then they could never sum to seven, and I'd say that event occurs with probability 0. If I secretly rolled an unknown number of dice, you could reasonably ask me the probability that they sum to seven. If I answer "0", that just means that I didn't roll between two and seven dice. It doesn't make the process less random nor the question less reasonable.
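A small sketch of this view of events as sub-collections of a sample space (the function name is mine, purely for illustration): the probability of an event is just the fraction of the sample space it occupies.

```python
from itertools import product

def prob_sum_is(target, n_dice):
    """Fraction of the n_dice sample space whose faces sum to target."""
    space_size = 6 ** n_dice  # all ordered outcomes of n_dice six-sided dice
    event_size = sum(
        1
        for roll in product(range(1, 7), repeat=n_dice)
        if sum(roll) == target
    )
    return event_size / space_size

print(prob_sum_is(7, 2))  # 6/36: the six pairs (1,6), (2,5), ..., (6,1)
print(prob_sum_is(7, 1))  # 0.0: one die cannot show a seven
print(prob_sum_is(7, 8))  # 0.0: eight dice sum to at least eight
```

Both probability-0 answers are perfectly meaningful statements about a random process; they simply pick out an empty slice of the sample space.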
If you treat an event as some question you can ask about the result of a random process, then 1 and 0 make a lot more sense as probabilities.
For the mathematical theory of probability, there are plenty of technical reasons why you want to retain 1 and 0 as probabilities (and once you get into continuous distributions, it turns out that probability 1 just means "almost certain").
Would I pay $24k to play a game where I had a 33/34 probability of winning an extra $3k? Let's consult our good friend the Kelly Criterion.
We have a bet that pays 1/8:1 with a 33/34 probability of winning, so Kelly suggests staking ~73.5% of my bankroll on the bet. This means I'd have to have an extra ~$8.7k I'm willing to gamble with in order to choose 1b. If I'm risk-averse and prefer a fractional Kelly scheme, I'd need to start with ~$20k for a three-fourths Kelly bet and ~$41k for a one-half Kelly bet. Since I don't have that kind of money lying around, I choose 1a.
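For concreteness, here is the arithmetic behind those figures as a sketch, using the standard Kelly formula f* = (pb - q)/b (the function names are mine):

```python
def kelly_fraction(p, b):
    """Optimal stake as a fraction of bankroll for a bet that pays b:1
    with win probability p: f* = (p*b - q) / b, where q = 1 - p."""
    q = 1 - p
    return (p * b - q) / b

# The bet risks $24,000 to win an extra $3,000, so odds are b = 3/24 = 1/8,
# with win probability p = 33/34.
p, b = 33 / 34, 3 / 24
f = kelly_fraction(p, b)
print(f"full Kelly stake: {f:.1%} of bankroll")

# Bankroll needed so that the $24k stake equals the suggested fraction,
# for full, three-fourths, and one-half Kelly schemes:
for scale in (1.0, 0.75, 0.5):
    bankroll = 24_000 / (scale * f)
    print(f"{scale:.2f} Kelly: bankroll ${bankroll:,.0f}, "
          f"extra beyond the stake ${bankroll - 24_000:,.0f}")
```

Full Kelly comes out to 25/34 of bankroll, about 73.5%, which reproduces the roughly $8.7k, $20k, and $41k figures above.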
In case 2, we come across the interesting question of how to analyze the costs and benefits of trading 2a for 2b. In other words, if I had a voucher to play 2a, when would I be willing to trade it for a voucher to play 2b? Unfortunately, I'm not experienced with such analyses. Qualitatively, it appears that if money is tight then one would prefer 2a for the greater chance of winning, while someone with a bigger bankroll would want the better returns on 2b. So, there's some amount of wealth where you begin to prefer 2b over 2a. I don't find it obvious that this should be the same as the boundary between 1a and 1b.
This is a problem because the 2s are equivalent to a 34% chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.
Equivalence is tricky business. If we look at the winnings distribution over several trials, the 1s look very different from the 2s and it's not just a matter of scale. The distributions corresponding to the 2s are much more diffuse.
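One rough way to see the diffuseness is to compare means and standard deviations, treating each gamble as a single win-or-nothing payoff (the $24,000 and $27,000 figures come from the setup above; the function is my own sketch):

```python
import math

def binary_gamble_stats(p, payoff):
    """Mean and standard deviation of winning `payoff` with probability p,
    else nothing."""
    mean = p * payoff
    sd = payoff * math.sqrt(p * (1 - p))
    return mean, sd

gambles = {
    "1a": (1.0, 24_000),            # certain $24k
    "1b": (33 / 34, 27_000),        # 33/34 chance of $27k
    "2a": (0.34, 24_000),           # 1a played with 34% probability
    "2b": (0.34 * 33 / 34, 27_000), # 1b played with 34% probability
}
for name, (p, payoff) in gambles.items():
    mean, sd = binary_gamble_stats(p, payoff)
    print(f"{name}: mean ${mean:,.0f}, sd ${sd:,.0f}")
```

The 1s have zero or modest spread, while the standard deviations of the 2s are several times larger than 1b's, which is the sense in which their winnings distributions are much more diffuse.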
Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?
A certain bet has zero volatility. Since much of the theory of gambling has to do with managing volatility, I'd say certainty counts for a lot.
It is possible that wearing uniforms not only is evidence that your group is a cult (though it might be weak evidence, as you say) but also may contribute to your group becoming a cult. I can think of at least three somewhat plausible mechanisms. (1) Having a uniform may attract people who want to be in a cult and/or scare off people who very much want not to be in one. (2) Having a uniform may foster a sense of unity and conformity that makes cultishness come more naturally. (3) Having agreed to do something silly (like wearing a uniform) may put you in a frame of mind where you're more likely to agree to other silly things the leader of the group asks you to do later.
(3) Having agreed to do something silly (like wearing a uniform) may put you in a frame of mind where you're more likely to agree to other silly things the leader of the group asks you to do later.
Why are uniforms necessarily silly? Let's take military dress uniforms. In the US, you can tell a military member's rank and branch of service, and even get an idea of their service record, just by looking at their dress uniform. To insiders, all of this can be gleaned rapidly by looking at someone from across a room. With millions of members, individuals cannot possibly be expected to know everybody else, and so the uniform serves a useful function.
Recently came across Valiant's A Theory of the Learnable. Basically, it covers a method of machine learning in the following way: if there's a collection of objects which either possess some property P or do not, then you can teach a machine to recognize this with arbitrarily small error simply by presenting it with randomly selected objects and saying whether they possess P. The learner may give false negatives, but will not give a false positive. Perhaps the following passage best illustrates the concept:
Consider a world containing robots and elephants. Suppose that one of the robots has discovered a recognition algorithm for elephants that can be meaningfully expressed in k-conjunctive normal form. Our Theorem A implies that this robot can communicate its algorithm to the rest of the robot population by simply exclaiming "elephant" whenever one appears.
The mathematics are done in terms of Boolean functions and "k-conjunctive normal form" is a certain technical condition.
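As a toy illustration of the elimination idea (much simpler than the paper's k-CNF setting: this learns a monotone conjunction from positive examples only, and all names and the 5-feature "elephant" concept are my own invention):

```python
import random

def learn_conjunction(positive_examples, n_features):
    """Start by requiring every feature, then drop any feature that some
    positive example lacks. The hypothesis only ever becomes more permissive."""
    required = set(range(n_features))
    for example in positive_examples:
        required &= {i for i in range(n_features) if example[i]}
    return required

def predict(required, example):
    return all(example[i] for i in required)

# Hypothetical target concept: "elephant" means features 0 and 2 are both set.
target = {0, 2}
rng = random.Random(0)
examples = [tuple(rng.randint(0, 1) for _ in range(5)) for _ in range(200)]
positives = [e for e in examples if all(e[i] for i in target)]

h = learn_conjunction(positives, 5)
# Errors are one-sided for this learner: the learned set `required` always
# contains the target's features, so anything it accepts the target accepts.
assert all(all(e[i] for i in target) for e in examples if predict(h, e))
print(sorted(h))
```

Note that the learner recovers the concept without ever being handed a definition of "elephant"; it only hears "elephant" exclaimed over positive examples, which is the striking feature discussed below.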
What struck me was that the learning could take place without the learner knowing the definition of the concept to be learned: a thing could be identified with probability arbitrarily close to 1 without the learner ever being able to formulate a definition. I was reminded of the judge who said that he could not define pornography, but he knew it when he saw it. There are plenty of other concepts I can think of where identification is easy (most of the time, at least) but which defy precise definition.
I'm usually wary of applying scientific results to philosophy, especially where I'm not an expert. Any expert input on whether this is a fair interpretation of the subject would be appreciated.
Contradiction is all well and good, but I think you can do better; can you name three new technologies, invented in the last 50 years and freely available to all civilian Americans, each of which causes up to 30,000 deaths and 2 million injuries annually?
High fructose corn syrup and its ilk have been rather devastating.
It isn't unrealistic to create a reasonable calorie deficit for a while... and I have no idea what a "free-living" individual is. It may be difficult to lose weight, but it's like anything else that is difficult: it requires focused effort over time. Habits can be hard to change. There are plenty of tricks and hacks to help. Avoiding carbs is a good one because it will automatically eliminate 25-60% of an individual's daily calorie consumption. That's all it will do. You could avoid fat, too. Same effect. Fat and carbs = calories. No magic.
Yeah, but strawman. Dieting involves some hunger. It's not going to kill you. It's just part of the adjustment to a more healthy level of consumption.
Naive calorie restriction is just regular calorie restriction with a negative name. Good eating habits entail calorie control. That's not naive. It's basic.
Weight loss is generally really simple. We should be grateful that this is so. Every discussion I've seen on LW makes dieting much more complicated than it need be. It's very hard for many people, but that doesn't mean it's complicated.
By "naive" I just mean calorie restriction without any other consideration. For example, a diet where one replaces a large pizza, a 2-liter bottle of Coca-Cola, and a slice of chocolate cake with half a large pizza, 1 liter of Coca-Cola, and a smaller slice of chocolate cake is what I'd consider naive calorie restriction. I don't know that anyone would seriously argue that the restricted version even remotely resembles good eating habits.
Lest you accuse me of straw-manning, let it be noted that many obese people subsist on a diet consisting of fast food and junk food. In fact, malnutrition is a very real problem among the obese. That's right: you can eat 5k+ Calories a day and still exhibit signs of malnutrition if all you eat is junk. When I speak of instilling good eating habits, I have in mind people who exhibit severe ignorance or misconception of basic nutrition.
A low-carb diet is not just a matter of eating what you normally eat, minus the carbohydrates. That's going to end about as well as a vegetarian diet where you simply cut out the meat from your normal diet. You run into a micronutrient deficiency that can end up causing problems if the new diet is sustained for several months.
It's an empirical fact that some foods are more filling than others and keep you feeling full for a longer period of time, even if the number of calories consumed is the same. That's why people care about the glycemic index. I have tried losing weight several times over the last seven years or so. There are diets where you feel satisfied most of the time, then there are diets where you finish a meal feeling as hungry as you did when you started. The psychological difference between the two is quite profound and hardly warrants the charge of "strawman".