Comment author: erratim 07 January 2015 12:51:10PM *  1 point [-]

This is true in the short term, but in the long term, the dynamic changes for producers:

  • The producers that know how to make chickens for $8 scale up or their production strategy is replicated by others.
  • The marginal cost of production (and hence price) keeps falling until all producers are making no profit (relative to the opportunity cost of capital).
  • The industry can scale up/down (in the long term) to meet changing demand, but it can't drive prices any lower. If prices were any higher the industry would scale up in the short term and keep expanding until the price fell back to the Cost in the long term.

The elasticity of the demand curve changes less than that of the supply curve in the super long term, but if you agree with me that the supply curve is virtually flat at that point, then the elasticity of the demand curve is negligible: as the supply curve shifts left and right, the only point on the demand curve that matters is the quantity at price = cost (the supply price).
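The entry/exit story above can be sketched as a toy adjustment process. All the numbers here are made up for illustration (only the $8 cost comes from the thread): firms enter while price exceeds cost and exit while it falls below, so price converges to cost and demand alone sets the long-run quantity.

```python
# Toy long-run adjustment: entry when price > cost, exit when price < cost.
COST = 8.0  # the replicable $8 production cost from the comment

def demand_price(quantity):
    """Hypothetical linear inverse demand: price buyers pay at a quantity."""
    return max(0.0, 20.0 - 0.2 * quantity)

quantity = 10.0
for _ in range(10_000):
    quantity += 0.01 * (demand_price(quantity) - COST)  # entry/exit step

price = demand_price(quantity)
print(round(price, 2), round(quantity, 1))  # -> 8.0 60.0: price pinned at cost
```

Shifting the demand curve changes only the converged quantity, not the converged price, which is the "virtually flat supply curve" claim.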

Comment author: Furslid 07 January 2015 06:03:00PM 1 point [-]

No. It's true long term as well.

What you have listed are forces that drive the cost of production down. However, they cannot flatten all costs. For example, some locations are better for producing chickens than others. Better weather, cheaper labor market, ease of transportation to slaughter, etc. These factors cannot be cloned.

It's only the marginal producers that have costs at or just below the price.

Comment author: Furslid 06 January 2015 07:48:24PM 4 points [-]

Basic economics explains why the cost of chicken will drop. You are ignoring supply curves, which exist because not all producers are identical. What drives the change in price is competition among chicken producers.

There is a price for chicken, say $10 per unit. To make a profit, each producer must produce chicken at less than that price. However, not all producers are making chicken at the same cost. Some are more efficient than others. Some spend $9 making a unit, some spend $8. Some could produce chicken for $10 a unit and don't. When demand for chicken drops, the business with $9 costs lowers production or leaves the industry before the business with $8 costs. The drop in production is concentrated in the marginal producers. Similarly, if the price rose, the potential producer with $10 costs would start producing.
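This producer-side story can be sketched directly with the hypothetical $8/$9/$10 producers from the paragraph above, each making one unit only when the price covers its cost:

```python
# Each hypothetical producer makes one unit iff the price exceeds its cost.
costs = [8, 9, 10]

def active_producers(price):
    return [c for c in costs if c < price]  # who operates at this price

print(active_producers(10))  # -> [8, 9]: both profitable producers operate
print(active_producers(9))   # -> [8]: the marginal $9 producer exits first
print(active_producers(11))  # -> [8, 9, 10]: higher price draws in the $10 producer
```

The upward-sloping supply curve is just this sorted list of costs: each price increase activates the next-cheapest potential producer.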

There is a mirror process among consumers.

Comment author: cousin_it 27 August 2014 12:39:26PM *  4 points [-]

Colloquial bets are offered by skeevy con artists who probably know something you don't. Bayesian bets, on the other hand, are offered by nature.

That distinction seems a bit unclear, since con artists are a part of nature, and nature certainly knows something you don't.

Here's a toy situation where a Bayesian is willing to state their beliefs, but isn't willing to accept bets on them. Imagine that I flip a coin, look at the result, but don't tell it to you. You believe that the coin came up heads with probability 1/2, but you don't want to make a standing offer to accept either side of the bet, because then I could just take your money.
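The exploit in that toy situation can be simulated: against a standing two-sided even-odds offer, someone who has already seen the coin always picks the winning side, so the honest 1/2-credence bettor loses every round.

```python
# Simulating the standing-offer exploit from the coin-flip example.
import random

random.seed(0)
rounds, bettor_losses = 1000, 0
for _ in range(rounds):
    coin = random.choice(["heads", "tails"])
    informed_pick = coin                 # the flipper already knows the outcome
    bettor_losses += informed_pick == coin

print(bettor_losses)  # -> 1000: the informed bettor wins every single round
```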

In the general case, what should a Bayesian do when they're offered a bet? I think they should either accept it, or update to a state of belief that makes the bet unprofitable ("you offered the bet because you know the coin came up heads, so I won't take it"). That covers both bets offered by nature and bets offered by con artists. Also it's useful in arguments, you can offer a bet and force your opponent to either accept it or publicly update their beliefs.

Comment author: Furslid 31 August 2014 03:02:51AM *  1 point [-]

No, the difference is that con artists are another intelligence, and you are in competition with them. Any time you are competing against a better, more expert intelligence, that is an important difference.

The activities of others are important data, because they are often rationally motivated. If a con artist offers me a bet, that tells me that he values his side of the bet more. If an expert investor sells a stock, they must believe the stock is worth less than some alternate investment. So when playing against them, assume the odds are bad enough to justify their actions.

Comment author: gwern 09 July 2014 04:01:52AM 5 points [-]

And we can’t explain away all of this low success rate as the result of illusory correlations being thrown up by the standard statistical problems with findings such as small n, sampling error (A & B just happened to sync together due to randomness), selection bias, publication bias, etc. I’ve read about those problems at length, and despite knowing about all that, there still seems to be a problem: correlation too often ≠ causation.

Comment author: Furslid 09 July 2014 04:21:17AM 3 points [-]

I'm pointing out that your list isn't complete, and not considering this possibility when we see a correlation is irresponsible. There are a lot of apparent correlations, and your three possibilities provide no means to reject false positives.

Comment author: Furslid 09 July 2014 03:53:53AM 6 points [-]

You're missing a 4th possibility: A & B are not meaningfully linked. This is very important when dealing with large sets of variables. Your measure of correlation will have a certain percentage of false positives, and accounting for them is important. If the probability of a false positive is 1/X, you should expect one false correlation for every X comparisons.

XKCD provides an excellent example: jelly beans.
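The one-false-positive-per-X-comparisons point can be demonstrated with made-up data: correlate many pairs of independent random variables and some still clear a conventional significance cutoff, even though no pair is meaningfully linked.

```python
# Correlating pairs of purely random (unlinked) variables: some still
# look "significant", which is exactly the fourth possibility above.
import random, statistics

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

n_pairs, n = 200, 30
false_positives = sum(
    abs(corr([random.gauss(0, 1) for _ in range(n)],
             [random.gauss(0, 1) for _ in range(n)])) > 0.36  # ~p < 0.05 cutoff for n = 30
    for _ in range(n_pairs)
)
print(false_positives)  # a handful of "significant" correlations, none real
```

With a 5% false-positive rate, roughly 10 of the 200 unlinked pairs should cross the threshold, mirroring the jelly-bean comic's 1-in-20 hit.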

Comment author: timtyler 10 November 2013 12:12:39PM *  9 points [-]

On the other hand, I think the evolutionary heuristic casts doubt on the value of many other proposals for improving rationality. Many such proposals seem like things that, if they worked, humans could have evolved to do already. So why haven't we?

Most such things would have had to evolve by cultural evolution. Organic evolution makes our hardware, cultural evolution makes our software. Rationality is mostly software - evolution can't program such things in at the hardware level very easily.

Cultural evolution has only just got started. Education is still showing good progress - as manifested in the Flynn effect. Our rationality software isn't up to speed yet - partly because it hasn't had enough time to culturally evolve its adaptations.

Comment author: Furslid 10 November 2013 08:02:46PM 5 points [-]

I think that this is an application of the changing circumstances argument to culture. For most of human history the challenges faced by cultures were along the lines of "How can we keep 90% of the population working hard at agriculture?" "How can we have a military ready to mobilize against threats?" "How can we maintain cultural unity with no printing press or mass media?" and "How can we prevent criminality within our culture?"

Individual rationality does not necessarily solve these problems in a pre-industrial society better than blind duty, conformity and superstitious dread. It's been less than 200 years since these problems stopped being the most pressing concerns, so it's not surprising that our culture hasn't evolved to create rational individuals.

Comment author: NancyLebovitz 09 November 2013 02:37:18PM 0 points [-]

Would 53 not being prime break mathematics?

Comment author: Furslid 09 November 2013 07:11:10PM 1 point [-]

It would more likely be user error. I believe 53 is prime. If it isn't, then either mathematics is broken or I have messed up in my reasoning. It is much more likely that I made an error or accepted a bad argument.

53 not being prime while having no integer factors other than 1 and itself would break mathematics.
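The claim itself takes one line to check by trial division:

```python
# Checking directly: 53 has no integer factors other than 1 and itself.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime(53))  # -> True
```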

Comment author: somervta 08 November 2013 10:12:55PM 0 points [-]

LNC, not the law of identity, I think.

Comment author: Furslid 08 November 2013 10:59:37PM 0 points [-]

Oops, right. Non-contradiction.

Comment author: Furslid 08 November 2013 08:51:01PM *  4 points [-]

Your list actually doesn't go far enough. There is a fourth, and scarier, category: things which would, if true, render probability useless as a model. "The chance that probabilities don't apply to anything" is in the fourth category. I would also place in it anything that violates such basic things as the consistency of physics or the existence of the external world.

For really small probabilities, we have to take into account some sources of error that just aren't meaningful in more normal odds.

For instance, if I shuffle and draw one card from a new deck, what is the chance of drawing the ace of spades? I disregard any chance of the deck being defective, any chance of my model of the universe being wrong, and any chance of the law of identity being violated. Any such probabilities are eclipsed by the normal probabilities of drawing cards. (category 1)

If I shuffle and draw two cards without replacement from a new deck, what is the chance of them both being aces of spades? Now I have to consider other sources of error. There could have been a factory error or the deck may have been tampered with. (category 2)

If I shuffle and draw one card from a new deck, what is the chance of it being a live tiger? Now I have to consider my model of the universe being drastically wrong. (category 3)

If I shuffle and draw one card from a new deck, what is the chance of it being both the ace of spades and the two of clubs? Not a misprint, and not two cards, but somehow both at the same time. Now I have to consider the law of identity being violated. (category 4)
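The ordinary (category 1) arithmetic for these examples is easy to make explicit. Within the standard card-drawing model, the later categories contribute exactly zero, so any nonzero estimate for them has to come from error sources outside the model:

```python
# Exact in-model probabilities for the card examples above.
from fractions import Fraction

p_ace_of_spades = Fraction(1, 52)                         # one such card in the deck
p_two_aces_of_spades = Fraction(1, 52) * Fraction(0, 51)  # no second copy exists
print(p_ace_of_spades, p_two_aces_of_spades)  # -> 1/52 0
```

The in-model answer of exactly 0 for the second draw is why factory errors and tampering (category 2) dominate that estimate.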

Comment author: Error 27 August 2013 09:30:14PM 0 points [-]

It has the weird aspect of putting consciousness on a continuum,

I find I feel less confused about consciousness when thinking of it as a continuum. I'm reminded of this, from Heinlein:

"Am not going to argue whether a machine can 'really' be alive, 'really' be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you, tovarishch, but I am."

Comment author: Furslid 29 August 2013 06:18:17AM 1 point [-]

Absolutely. I do too. I just realized that the continuum provides another interesting question.

Is the following scale of consciousness correct?

Human > Chimp > Dog > Toad > Any possible AI with no biological components

The biological requirement seems to imply this. It seems wrong to me.
