Advocating population control as the single most important priority damages efforts at vaccination.
If it's plausible that your morals would permit giving vaccinations in a way that damages human reproductive capacity, your effort to vaccinate people against important diseases runs into trouble.
There are enough conspiracy theorists out there claiming that the UN cares about population control enough to vaccinate in a way that reduces reproductive capacity that this is a real issue. It's valuable to signal that you care more about saving lives than you can abo...
Resources inside a light cone grow as T cubed, while population growth is exponential; thus we see resource limitation ubiquitously. Malthus was (essentially) correct.
Maybe "T cubed" will turn out to be completely wrong, and there will be some way of getting hold of exponential resources - but few will be holding their breath for news of this.
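The cubic-versus-exponential point above can be made concrete with a toy calculation. Everything here is an assumption chosen purely for illustration (the constant, the 2% growth rate); the only thing that matters is that an exponential eventually overtakes any polynomial, so some crossover year always exists.

```python
# Toy comparison (all constants assumed, not from the comment above):
# reachable resources inside a light cone grow ~ t^3, while an
# unchecked population's demand grows ~ (1 + r)^t. Whatever the
# constants, the exponential eventually wins.

def resources(t, c=1e12):
    """Resources reachable by year t, proportional to t cubed (assumed constant c)."""
    return c * t**3

def population_demand(t, p0=1.0, r=0.02):
    """Demand from a population growing 2% per year (assumed rate)."""
    return p0 * (1 + r) ** t

# Find the crossover year under these toy constants.
t = 1
while population_demand(t) <= resources(t):
    t += 1
print(f"demand exceeds t^3 resources at t = {t}")
```

Changing the constants only moves the crossover year; it cannot remove it.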
The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the "safety zone" between expansion of carrying capacity and population growth.
The Jevons paradox: technological improvements make each unit of natural resources more useful, increasing the rate at which they are used up. (Though I'm not convinced that most environmentalists actually are opposed to all relevant technological improvements. I've definitely never heard any complain about so...
The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement
Individual technological advances can increase the efficiency of resource utilization, but presently and historically higher levels of technological development are correlated with higher per capita resource consumption.
Anyway, even if future technologies could lower per capita resource consumption, how do you accelerate the rate of technological advancement?
...Moreover, we are close to a level of technology th
Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone
I'd be suspicious of that 'many' unless you plan on moving lots of asteroids in-system. Earth is some prime real estate for humans.
To me this looks like a very familiar mulberry bush around which plenty of people have been going since the early 1970s.
Are you claiming something different from the classic population-bomb limits-to-growth arguments? Because if you do not, there seems little reason to revisit this well-trampled territory.
What conditions must functions f(m,t), k(x), and p(x) satisfy in order to ensure that p(x) - f(m,t) > 0 for all x > today()?
Did you mean to ask "What conditions must functions f(m,t), k(x), and p(x) satisfy in order to ensure that p(x) - f(m,k(x)) > 0 for all x > today()?"
If so, that still leaves m as a free variable.
There are quite a few who argue that we are already overshooting the carrying capacity. One way to measure it is the global hectare: http://en.wikipedia.org/wiki/Global_hectare
And according to the Footprint Network we are already using 150% of Earth's carrying capacity: http://www.footprintnetwork.org/en/index.php/GFN/page/basics_introduction/ and are thus using up the available resources faster than they are (re)generated.
We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.
I think it's fair to say that the danger of grey goo is greater now than it was in the 1980s. How well does the engineering mindset work for that problem?
On the other hand, when it comes to overpopulation, political solutions such as the Chinese one have made massive amounts of progress.
Oh, by the way, I thought of a few practical benefits I can hope to achieve with this discussion:
Next time someone who has read enough of this post wanders into a debate about global warming or deforestation or whatever, they will be armed with a constructive alternative to the standard green vs. blue talking points.
Conversely, you can find here arguments for full-steam-ahead technological progress that luddites won't be expecting because it follows directly from some of their favorite "we're all doomed" arguments. I even suspect the rea
You're asserting a highly nonobvious result (seven billion looks fine from here) as though it were an obvious fact.
If eugenics = Nazi, it's time to re-evaluate all this talk of FAI and transhumanism.
Eugenics can be negative (breed out) or positive (breed in / maintain), and it can be state-run or individually run. The line between birth control / family planning and eugenics is like the line between erotica and porn; the good things are good because they are good, not because of any quantifiable property in the thing itself.
Your assumptions and questions point to a desire for future generations to be as or more healthy and happy as we are today, and there is a name for that. A name t...
What's the connection to re-evaluating FAI and transhumanism?
I didn't say I think eugenics = Nazi. I just said Nazis advocated a particularly murderous and arbitrary form of eugenics, so now that's all that comes to mind for most people today when they think about eugenics, if they do at all.
With a lot of work, though, we may eventually make that issue moot through in-vivo gene therapy.
So... a majority or at least a vocal plurality of us believe that technology is not necessary for preventing population from overshooting the planet's carrying capacity?
Or are you so vehemently opposed to the very concept of limiting conditions that it discredits any argument it is part of, regardless of the rest of the argument?
These discussions, even when they use biological terminology like "carrying capacity", never seem to take biology into account as anything but a static force.
Malthus assumed that agriculture only increased production arithmetically, something that the Green Revolution disproves as it continues to increase crop yields and the percentage of arable land worldwide much faster than our population has grown. And it's not exactly like we were in danger of hitting our upper limits before; even in the US you can see overgrown fallow fields ...
"the Green Revolution disproves"
"the technology to use their fields efficiently"
"developing plants and irrigation methods"
"with modern technology it is almost completely renewable"
This illustrates precisely what I'm trying to say. The reason we haven't experienced a Malthusian Crunch is not that the concept itself is impossible or absurd, but because we develop new technologies fast enough to continually postpone it.
This has some implications:
If technological development is derailed by cultural backlash, prolonged recession, or political lunacy, we may find ourselves having to cope with population overshoot on top of whatever the original problem was.
Responsible global citizens need to defend and promote technological progress with every bit of the same zeal they currently have for the natural environment.
Extrapolations of continued technological progress based on past performance are inherently unreliable. So if our expectation of not having to worry about overshoot rests on those extrapolations, it is itself unreliable, and we cannot afford complacency.
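The dynamic described above can be sketched as a toy simulation. All the numbers (growth rates, starting values, the stall year) are assumptions chosen for illustration only; the point is just that a population growing slower than its technology-expanded ceiling still hits that ceiling once the expansion stalls.

```python
# Toy illustration (all numbers assumed): population grows toward a
# carrying capacity that technology expands each year. If technological
# progress stalls at some year, growth eventually runs into the now
# fixed ceiling.

def simulate(stall_year=None, years=300):
    pop, capacity = 1.0, 10.0
    history = []
    for year in range(years):
        if stall_year is None or year < stall_year:
            capacity *= 1.03          # tech expands capacity 3%/yr (assumed)
        pop = min(pop * 1.02, capacity)  # 2%/yr growth, crudely capped at the limit
        history.append((year, pop, capacity))
    return history

no_stall = simulate()
stalled = simulate(stall_year=100)
print("final pop, no stall :", no_stall[-1][1])
print("final pop, stalled  :", stalled[-1][1])
```

With these assumed rates the unstalled run never touches its ceiling, while the stalled run ends pinned exactly at the frozen capacity.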
I think that probably the most effective means of population control, historically speaking, have been (in no particular order):
-Increased education (especially of females)
-Improved access to birth control
-Feminism, increased women's rights
-Creating a society where women are allowed and encouraged to work outside the home
-Improved economics; getting out of a third-world economic state is vital
-Lowered childhood mortality rates
-Longer life-spans in general
Top-down population controls (like China's) have much more severe side effects, and are probably less effective in the long run.
First, let me point out that I put a fair amount of work into pointing out all those flaws and holes in your last best citation, and I'm a little annoyed that you completely ignored all of them in favor of saying "but Buffett is so high-status and I like him so much".
If you want I'll go through them point by point.
I don't see any mention of how they were audited (Buffett merely says that they 'were audited', no mention of by whom, when, what the audits said, whether he saw the results, etc, and offers as reassurance that checks were paid for the appropriate amounts, which is not my problem here)
Presumably you can see the difference between your stating that these are NOT audited and then, when it is pointed out that they are, backing off to this.
The results of the audit are the results in this article. That is, these are results reported which survived the audits.
In many of the cases, the audits are "typical" of the investment advisory business, but I do not know what that means exactly. But it is a level playing field against all other investment advisers.
Also, a few (not all) of the investors cited here ran public investment businesses for decades. Isn't the preponderance of your Bayesian posterior that, if even some members of this widely read, cited, and discussed "superinvestors" article were just wrong, this would at least have led to traceable reports of the discrepancies on the internet, findable with a Google search?
To the extent your objections amount to "Buffett could be an idiot and a fraud, either not knowing or not caring what it means to make these claims", my answer is going to be that we have five decades of impeccable record. If you think Buffett is that unreliable, then generally there is no arguing with you, since you will dismiss anybody who says something you disagree with as an idiot or a fraud. If you cannot tell that Buffett is not an idiot or a fraud, or have not followed him well enough to be sure one way or the other, then I would suggest you have no business weighing in on the subtle subject of whether the market is so efficient that the best investors in the world are just coin-flippers.
What you think about Buffett's "character" is irrelevant to me, and for me, further emphasizes your extremely poor reasoning in this area - that when pushed back, you resort to one man and your beliefs about his "character".
I suggest relying upon Buffett because you and everyone else out there who can read has infinitely more reason to rely upon Buffett than to rely upon me. And further, what is needed in the discussion of EMH vs non-EMH is not some brilliant new insight that I can provide that you haven't seen somewhere else already. EMH vs non-EMH is a subtle question: is the market so efficient that Buffett can't consistently beat it without committing a crime, either insider trading or some other information-twisting fraud, or is it just a little less efficient than that? The "insight" I have is that what pushes it towards efficiency is competing analyses on opposite sides of each trade. The "insight" I have is that every bit of evidence suggests that in business some people have superior skill or algorithms or SOMETHING and are more successful than others. And they can do it serially, command high prices in very competitive markets, blah blah blah, and show EVERY BIT as much evidence of being "real" as do great pitchers or tennis players or tenors or talk show hosts or porn stars. And your case is that no, with investing it is different: the people who do the work are so smart that they get it right in an unbeatable way, but so stupid that they don't even realize they would be better off free-riding.
What is needed is not any great insight from one or the other of us, I don't think, but evidence that is hard to deny that yes, the market can be beat. I think that evidence would consist of market beaters coming from a narrowly defined group of people who set out to beat the market by studying it and allowing evidence to drive their future hypotheses and efforts. And what do we find in the market? Exactly that, market beaters are smart and talk in terms of causality, of what makes a business great, of where the momentum traders and the chartists missed the boat.
But my causal chain of how the market could be merely VERY efficient has been, I hope, presented by now. Let me know if it hasn't.
Markets are very different from electronic circuits or particle physics or philosophy or engineering. Circuits don't care if you found a more efficient way to design them. The properties of steel will not change when you discover it lets you build profitable bridges.
As much as you might hypothesize that we will not see securities markets make the same mistakes they have made in the past, does the evidence support that? And in any case, the idea that markets do learn or have learned SOMETHING supports only the VEMH, the very efficient market hypothesis, which is not controversial. By this I mean the hypothesis that it is hard to beat the market, because all the easy stuff has been figured out and is properly accounted for by the bulk of the traded money in the market.
I tracked Chipotle stock on and off from around 2000 forward. There were two classes of shares, A and B, with the B's trading at a very consistent 10% discount to the A's. I would check once or twice a year to see if this difference persisted, and it did. The surprising thing was that the company's documentation explained that these shares had equal value and represented identical fractions of the total company. I never saw an explanation of why they traded at a 10% difference, and I always questioned whether there was some detail I was missing. Here, in late 2007, is documentary evidence that the difference persisted. Here, two years later, is Chipotle's report that they were eliminating the two classes in favor of one class, and that the exchange rate would be 1:1, just as I had always believed.
In my case, I am an electrical engineer/physicist, trying to concentrate on building new cell phone algorithms for at least a few hours a day. Instead of organizing the financing to exploit this weird inefficiency at low cost, I just checked in on it every year or two. Wanting to see if I was right. Had I been a professional trader, I would have looked more at creating an arbitrage on the A and B shares and capturing the collapse of the arbitrary pricing difference. As an amateur I didn't know if it would ever collapse, and the brokers are neither smart enough nor dumb enough to let me buy the As and short the Bs without a lot of capital in my account to anchor what they see as two uncorrelated risky bets.
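For what it's worth, the convergence trade described above can be sketched numerically. The prices below are invented for illustration; the point is that once both classes exit at a common price, the captured profit equals the entry spread, regardless of what that common price is.

```python
# Hedged sketch of the convergence trade the comment describes: two share
# classes with identical economic claims, B trading at a 10% discount to A.
# Buy the cheap B, short the expensive A; if the prices converge, the
# spread is captured no matter where the combined price ends up.
# All prices below are made up for illustration.

def convergence_pnl(a_entry, b_entry, a_exit, b_exit, shares=100):
    short_a = (a_entry - a_exit) * shares   # gain if A falls toward B
    long_b = (b_exit - b_entry) * shares    # gain if B rises toward A
    return short_a + long_b

# Entry: A at $100, B at $90. Exit after a 1:1 share-class merger,
# with both classes at the same price.
print(convergence_pnl(100, 90, 97, 97))
```

Running it with any other common exit price (say 50 instead of 97) gives the same result, which is what makes this a bet on the spread collapsing rather than a bet on the stock itself.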
My point here is this is just ONE of MANY possible stories of moderate sized inefficiencies I have seen with my own eyes. Others I have traded. Yes, every one of them is an anecdote. The plural of anecdote is not data. But a bunch of anecdotes like that creates, it would seem, market beating performance for many traders trading different stocks.
Maybe markets COULD be different from circuits and so on, and maybe as computers and AI take over more and more, they will get more and more efficient. But even then, the most powerful AIs will be beating the market, even as they essentially set the prices at levels that make it incredibly hard for anybody else to beat the market. The thing that drives market makers is not their stupidity, but their intelligence and rationality. Seems to me.
Er, yes, there is. That's kind of the point of the efficient markets concept! Markets are unusual and special in that the attempt to find predictable regularities leads to the exploitation of the regularities and their disappearance.
THIS is a hypothesis. And the only word in that hypothesis I will argue with is the last one: disappearance. The predictable regularities don't disappear from the time-stream of prices, if there is a mispricing at 2:31 PM on Thursday it is frozen there in the permanent record. What changes is how long it takes for the record to close those various gaps. Maybe before computers a broad class of inefficient prices were never traded away. Maybe in the 1980s a broad class of inefficiencies were capitalized upon by people with computers over the course of a two week period. Maybe by the 2000s those same inefficiencies were traded away within hours or minutes.
But my points are: 1) we are not arguing efficiency vs inefficiency, we are arguing too efficient to beat vs nearly too efficient to beat and 2) without the inefficiencies, no one would be there to pay the actors making the market more efficient by trading the inefficiencies, and that no, it is not their stupidity that keeps them working for free.
I hope this is what you wanted when you suggested I was ignoring your point and merely arguing pro hominem, citing people who I thought should be much more believable than I am. If I missed anything that still seems critical, flag it to me and I'll answer it.
In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.
TLDR mathy version:
let f(m,t) be the population that can be supported using the fraction m of Earth's theoretical resource limit that we can exploit at technology level t
let t = k(x) be the technology level at year x
let p(x) be the population at year x
What conditions must the constant m and the functions f, k, and p satisfy in order to ensure that p(x) - f(m,k(x)) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
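One way to make the question concrete is to pick toy functional forms and see where the inequality flips. Every form and constant below is an assumption for illustration only, not a claim about the real f, k, or p; the qualitative point is that if f grows polynomially in x while p grows exponentially, overshoot is eventually forced.

```python
# Toy instantiation of the question above. All functional forms and
# constants are assumed purely for illustration.

import math

def k(x):
    """Technology level at year x (assumed linear growth)."""
    return 1.0 + 0.01 * x

def f(m, t):
    """Supportable population (assumed linear in m and t)."""
    return 2e10 * m * t

def p(x, p0=7e9, r=0.011):
    """Population at year x, ~1.1%/yr growth (assumed)."""
    return p0 * math.exp(r * x)

m = 0.5  # fraction of the theoretical resource limit we can exploit (assumed)

# Under these forms f(m, k(x)) grows linearly while p(x) grows
# exponentially, so p(x) - f(m, k(x)) > 0 must eventually hold.
x = 0
while p(x) - f(m, k(x)) <= 0:
    x += 1
print(f"overshoot (p > f) from year x = {x} under these assumptions")
```

Swapping in a faster-growing k(x) pushes the crossover year out; only a k(x) that keeps f growing at least as fast as p avoids it entirely.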
Long version:
Here I would like to explore the evidence for and against the possibility that the following assertions are true:
Please note: I'm not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:
Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can't come true, so we don't ignore them. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.
Shouldn't we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation[2] of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?
Footnotes:
1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalination plants).
2: This is a hard question. I'm not asking which catastrophe is the most likely to happen ever while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I'm asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are the upper bound on the curve representing the Malthusian Crunch, and which curves are the lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that's worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don't have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
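The survival-curve framing in this footnote can be sketched directly: S(t) = exp(-cumulative hazard), so each hazard function yields one curve, and curves from different hazard shapes can cross, which is exactly why the biggest risk over the next 50 years need not be the biggest over the following 50. The hazard rates below are invented for illustration.

```python
# Sketch of the footnote's survival-curve framing with made-up hazards:
# S(t) = exp(-integral of hazard up to t). A constant hazard and a
# rising hazard produce curves that can cross, so the relative ordering
# of risks depends on the time horizon.

import math

def survival(hazard, horizon=100, dt=1.0):
    """Fraction of branches with no catastrophe yet, year by year."""
    cum, curve = 0.0, []
    for t in range(horizon):
        cum += hazard(t) * dt        # crude rectangle-rule integration
        curve.append(math.exp(-cum))
    return curve

flat = survival(lambda t: 0.002)          # constant hazard (assumed rate)
rising = survival(lambda t: 0.00005 * t)  # hazard grows with time (assumed)

# Early on, the constant-hazard risk dominates (its curve is lower);
# by the end of the horizon, the rising-hazard curve has crossed below it.
print(flat[10], rising[10])
print(flat[99], rising[99])
```

Which curve bounds which, and where they cross, is exactly the comparison the footnote asks readers to draw for the Malthusian Crunch versus other existential risks.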