All of taw's Comments + Replies

taw00

Your argument depends on choosing the "central" or "archetypal" example, and that choice is completely arbitrary, since "central" doesn't seem to mean "most common" or anything else objective.

It really falls apart on that.

taw10

Some counterpoints:

  • "Behavioural modernity" is a hypothesis which is very far from being universally accepted. Many features supposedly of behavioural modernity have some reasonable evidence of existence far earlier.
  • Any hypothesis linking behavioral modernity with language (the only plausible common cause) is on extremely shaky grounds since as far as we know Neanderthals had language just as well, and that pushes language to nearly 1mya.
  • Behavioural modernity without common cause like language, and without any definite characteristics that wer

... (read more)
taw00

Well, we know pretty well that even when societies were in very close contact, they rarely adopted each other's technology unless it was already similar to what they had been doing.

See this for example:

Agriculture probably initially expanded because farmers pressed north through the continent, not because hunter-gatherers adopted the practice on their own, Scandinavian scientists say.

If agriculture didn't spread in this close-contact scenario, it's a huge stretch to expect very low-level contact to make it happen.

0TheOtherDave
(nods) Yup, if that theory is true, then the observed multiple distinct onset points of agriculture become more mysterious.
taw-10

All theories of the emergence of agriculture I'm aware of pretend it happened just once, which is totally wrong.

Is there any even vaguely plausible theory explaining how different populations, in very different climates, with pretty much no contact with each other, didn't develop anything like agriculture for a very long time, and then it happened multiple times nearly simultaneously?

Any explanation involving "selection effects" is wrong, since these populations were not in any kind of significant genetic contact with each other for a very long time before that happened (and such explanations for culture are pretty much always wrong as a rule - it's a second coming of "scientific racism").

1TheOtherDave
Backing up a step from this, actually... how confident are we of the "no contact with each other" condition? Speaking from near-complete ignorance, I can easily imagine how a level of contact sufficiently robust to support "hey, those guys over there are doing this nifty thing where they plant their own food, maybe we could try that!" once or twice a decade would be insufficient to otherwise leave a record (e.g., no commerce, no regular communication, no shared language, etc.), but there might exist plausible arguments for eliminating that possibility.
3DaFranker
Hmm. The more you know. Should I take this to imply that what I learned in high school and Wikipedia is wrong, or very poorly understood?

From what I know, throughout the Paleolithic, populations started developing the knowledge and techniques for sedentary lifestyles, food preservation, and growing plants, while at the same time spreading out across the globe. Then came the end of the ice age, and these populations started slowly applying this knowledge at various points in time, with a difference on the order of 10^4 years between the earliest and latest populations.

That looks very much like the human species had "already been selected" before it was completely split into separate populations, though admittedly that alone, as described in my previous comment, isn't enough to explain how close they came to one another on the timeline (I would have expected a variance of ~50-80k years or so, if that were the only factor, rather than 10-11k).

Edit: I only realized after posting both comments that I have a very derogatory / adversarial / accusational tone. This is not (consciously) intentional, and I'm really grateful you brought up this point. I'm learning a lot from these comments.
taw00

How does that reinforce Robin's model? It goes against it, if anything. Imagine if humans, dolphins, bats, bears, and penguins had nearly simultaneously developed language on separate continents. It would be a major unexplained WTF.

You can start here, but Wikipedia has pretty bad coverage of that.

8Stuart_Armstrong
Robin's model makes more sense if we think of it as "some process we don't understand is behind all these repeated patterns". If agriculture indeed arises at a specific point for reasons we don't understand, it makes Robin's model more likely - and it makes it harder for us to counterfactually mess with the data.
taw-10

Agriculture developed very far from regions most affected by glaciation, and in very diverse climates, so any climatic common cause is pretty dubious.

taw160

It seems rather easy to mess with the inputs. Weather conditions or continental drifts could confine pre-agricultural humans to hunting essentially indefinitely

This is sort of amazing, but after a couple million years of hunting and gathering humans developed agriculture independently within a few thousand years in multiple locations (the count is at least 7, possibly more).

This really doesn't have a good explanation; it's too ridiculous to be a coincidence, and there's nothing remotely like a plausible common cause.

0johnlawrenceaspden
Hmm, apparently 'behavioural modernity', 'most recent common ancestor' and 'out of Africa' are all around 50 000 years ago. Until about 10 000 years ago a great deal of the world was under thick ice sheets, and probably a lot of the rest was cold, so there probably weren't that many humans alive. If you give each living person a tiny chance of 'inventing agriculture', then "multiple recent inventions thousands of years apart" sounds about right to me. I realize that that's a completely implausible model, but I'm not sure why a more realistic one would make it 'too ridiculous to be a coincidence', and if you require plant evolution as part of the scheme, that will push the expected dates later.
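A minimal sketch of that admittedly toy model, with entirely made-up parameter values (the region count, population, and per-person chance below are illustrative assumptions, not data): if each region's first invention arrives after an exponential waiting time, the spread between earliest and latest does come out in the thousands of years.

```python
import random

# Toy version of the per-person invention model sketched above.
# All parameters are invented for illustration; nothing here is data.
random.seed(0)

REGIONS = 7           # independent centres of domestication
POPULATION = 500_000  # assumed population per region
P_INVENT = 1e-9       # assumed per person-year chance of "inventing agriculture"

rate = POPULATION * P_INVENT  # expected inventions per region per year

# With a constant tiny rate, the waiting time to a region's first
# invention is exponentially distributed.
times = sorted(random.expovariate(rate) for _ in range(REGIONS))
print("years to first invention, per region:", [round(t) for t in times])
print(f"spread between earliest and latest: {times[-1] - times[0]:.0f} years")
```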
endoself260

There's a very plausible common cause. Humans likely developed the traits that allowed them to easily invent agriculture during the last glacial period. The glacial period ended 10 000 years ago, so that's when the climate became amenable to agriculture.

7DaFranker
Odd. Last I checked there were a dozen or two prominent theories on this, and at least twice as many hypotheses in general for why we would observe this. Most of these I find plausible, and rather adequate considering the amount of information we have.

One of my favorites is that long before this happened, some individuals learned how to do it, but could not transfer a sufficient portion of this knowledge to others, until selection effects made these individuals more frequent and improvements in communication crossed a threshold where it suddenly wasn't so prohibitively expensive anymore to teach others how to plant seeds and make sure they grew into harvestable plants. Once evolution had done its job and most ancestors were capable of transmitting enough knowledge between each other to learn basic agriculture, it seems almost inevitable that over several dozen generations, for any given tribe, there will be at least one individual who stumbles upon this knowledge and eventually teaches it to the rest of the tribe.

Naturally, testing these hypotheses isn't exactly easy, so one could reasonably claim that there is no "good" explanation here. However, I wouldn't cry "Amazing Anthropomorphic Coincidence That Trumps Great Filter!" either, as you say, and I'm not quite sure where you were going with this other than "oooh, shiny unanswered question!", if anywhere.
6Stuart_Armstrong
Interesting! That certainly reinforces Robin's model. Do you have source for that?
taw20

But in a field like AI prediction, where experts lack feedback for their pronouncements, we should expect them to perform poorly, and for biases to dominate their thinking.

And that's pretty much the key sentence.

There is little difference between experts and non-experts.

Except there's no such thing as an AGI expert.

6Stuart_Armstrong
There are classes of individuals that might be plausibly effective at predicting AGI - but this now appears to not be the case.
taw10

This is a far more sensible judgement of Kurzweil's predictions than the OP's.

taw10

Your judgment on all of these is ridiculously positive. Just about everything you claim as true or partly true seems to me to range from mostly false to totally false.

5Stuart_Armstrong
Interesting. I was convinced I was erring on the other side. Which is another indication of how bloody subjective assessing these predictions is.
7satt
I kind of agree. I ticked off the predictions in my own head before scrolling down to see everyone else's assessment, and here's what I decided. (I didn't consult Google or look things up, so take this with a pinch of AFAIK.)

  • 5: false. Wired mice, displays, and printers remain common, more common than their wireless equivalents in my experience. In absolute terms there are surely more wired computer components out there than in 1999.
  • 7: false. Even if the technology exists, I'm almost certain more text is still created by typing than CSR. And CSR is still less accurate than (sufficiently careful) human transcription; I vaguely remember Google recently beating Siri on this count.
  • 8: false. Even if one counts these LUIs as ubiquitous, they aren't frequently combined with animated personalities, and interacting with Siri et al. isn't much like talking to a person through video conferencing. I can't recall using LUIs or anything like them for simple business transactions (when I call businesses on the phone, for instance, it's usually a human, a recording, or a press-one-for-this-press-two-for-that menu that answers). Worse, calling my local cinema and navigating their non-CSR LUI shows that even when recognising simple phrases (like my town's name or a film's name) from a circumscribed list of possibilities, some LUIs remain unresponsive and imprecise.
  • 18: weakly true. Computers aren't used in every classroom lesson and they're not in every classroom, but they're in almost every school and kids routinely use them for writing essays, learning through educational games, and doing research. Nowadays, they probably do learn more from school computers than home computers.
  • 20: false. Students now typically have a computer of their own but they aren't all smartphones. Those who do interact with smartphones don't mainly rely on styluses or speech, and most of the text they enter is done with a keyboard (whether real or displayed virtually on-screen).
  • 26:
taw-30

... and then it becomes incomputable both in theory (even given unbounded resources) and in practice via any kind of realistic approximation.

It's a dead end. The only interesting thing about it is realizing why precisely it is a dead end.

-1Abe Dillon
This is a pretty lame attitude towards mathematics. If William Rowan Hamilton showed you his discovery of quaternions, you'd probably scoff and say "yeah, but what can that do for ME?". Occam's razor has been a guiding principle for science for centuries without any proof of why it's a good policy. Now Solomonoff comes along and provides a proof, and you're unimpressed. Great.
taw20

Bushmen lived in contact with pastoralist and then agricultural societies nearby for millennia. The idea that they represent some kind of pre-contact human nature is baseless.

"Industrialized" or not isn't relevant.

taw10

People make all kinds of claims about how humans supposedly lived in a "natural state" with absolute certainty, and we know just about nothing about it, other than some extremely dubious extrapolations.

A fairly safe extrapolation is that humans were always able to live in very diverse environments, so even if we somehow found one unpolluted sample (by time travel, most likely...), it would give us zero knowledge of "typical" Paleolithic humans.

The label has also been used on countless modern and fairly recent historical societies which ... (read more)

4wedrifid
Assuming your premises, how the heck would you know?
taw30

Dear everyone, please stop talking about "hunter gatherers". We have precisely zero samples of any real Paleolithic societies unaffected by extensive contact with Neolithic cultures.

-2stcredzero
http://www.crinfo.org/articlesummary/10594/ Please explain to me how Bushmen picked up the above from industrialized society. It strikes me as highly unlikely that this pattern of behavior didn't predate the industrial era. Did you consider precisely what you were objecting to, or was this a knee-jerk reaction to a general category?
5Nisan
Can you elaborate on this? I mean, can you give me a reason that using the phrase "hunter-gatherer" is a mistake? I understand your second sentence but I don't understand why that's a reason.
taw50

It's not at all obvious if they really believed it. People say stuff they don't believe all the time.

taw00

I probably have a very different sense of what's moral and what isn't from the author (who claims to be an American liberal), but I agree with pretty much everything the author says about meta-morality.

1prase
The author doesn't claim to be American and in fact is, as far as I know, Finnish.
taw20

That's a difficult question to answer, since the amount of Internet use is correlated with age, wealth, education level, location, language used, employment status, and a lot of other things which might have a very big impact on people's happiness.

I could give the cached answer that "if it didn't make them happier they wouldn't be using the Internet", but there are obvious problems with this line of reasoning.

2BlazeOrangeDeer
Especially since the cached answer fails to explain addiction, which is quite possible with the Internet.
taw40

I actually know various chans quite well, and they all pretend to be totally ridiculous anything-goes places, but when you actually look at them, >90% of threads are perfectly reasonable discussions of perfectly ordinary subjects, especially outside /b/. This generated far more interest on 4chan than all the gore threads put together.

2Strange7
That's still not the same thing as a "total lack of interest."
taw60

Total number of hours per lifetime people in literally every utopia ever printed spend watching videos of kittens doing cute things: 0.

Total number of hours per lifetime people in any real utopia would want to spend watching videos of kittens doing cute things: 100s or more.

Anecdotal evidence: have you seen the Internet?

More seriously, Internet shows a lot about what people truly like, since there's so much choice, and it's not constrained by issues like practicality and prices. Notice total lack of interest in realistic violence and gore and anything more tha... (read more)

Gastogh220

More seriously, Internet shows a lot about what people truly like, since there's so much choice, and it's not constrained by issues like practicality and prices. Notice total lack of interest in realistic violence and gore and anything more than one standard deviation outside of sexual norms of the society, and none of these due to lack of availability.

Eh? Total lack of interest? Have you ever been on 4chan? Realistic violence threads crop up regularly over there, and it's notorious for catering to almost any kind of sexual deviance the average person c... (read more)

0John_Maxwell
Would you expect that people who use the Internet more also tend to be happier?
taw190

Both this post and the one linked seem to be about both fictional utopias for literature and actual optimal future utopias. These are completely unrelated issues, in the same way that good fictional international conflict resolution is WW3, while good real-world international conflict resolution is months of WTO negotiations over the details of some boring legal document between 120+ countries.

0John_Maxwell
Care to provide more than an argument by analogy to support that? What are specific mistakes you suspect fictional utopia designers would make? By the way, if this is really true then instead of designing Fun Theory, we should work on Eutopia Interim Protocol, which could be something like everyone getting split into lots of little parallel universes to see what seems to be the most fun empirically. Or things change slowly and only by majority vote. Or something like that.
taw-30

Well, then I'm puzzled why you didn't reply to these misguided assertions.

Sadly there are many blind spots here where groupthink rules, and people will just happily downvote anybody who has a different opinion. They are not worth replying to. I see the downvote brigade found this thread as well.

In any case, the paper you cite may well be correct point-by-point, but on the whole it's a lawyerly argument that tries to overwhelm and mislead readers by amassing a pile of hand-picked one-way evidence that will dazzle them and make them lose sight of the overall balance of evidence. As I wrote in that earlier comment thread in response to similar points:

As for heritability studies, you are certainly right that there is a lot of shoddy work, and by necessity they make a w

... (read more)
taw-40

You're too lazy, no shortcuts this time.

Caplan's claim doesn't depend on this line of argumentation, but if it were true (which it's not) it would support his point extremely strongly. The weaker claim, that normal parenting styles don't affect outcomes much because the rest of the environment (and genes) together have a much greater impact, is perfectly defensible.

taw20

As we know from the natural experiment of the Dutch famine of 1944, a mother's health is extremely important. This brief event had significant effects on two generations.

0NancyLebovitz
I get the impression that multi-generational effects don't get into the popular press much. I'm guessing that people don't want to think about problems which would take a long time to get better. Do you know whether two generations was enough to undo all the effects of the famine?
taw00

Caplan's arguments are totally wrong, but that doesn't make his thesis wrong. I'd expect his thesis to be very likely at least mostly correct.

7Vladimir_M
I recommend Neven Sesardic's book Making Sense of Heritability for a good treatment of this issue by an analytic philosopher. Sesardic shows pretty convincingly, in my opinion at least, that the intellectual shenanigans that have so badly confused and poisoned the debates about these issues have been mainly committed by anti-hereditarians. The book is short and very well written, although unfortunately it's a small academic edition that's likely to be quite expensive unless you're lucky to find a cheap used copy or have access to a university library. Regarding the specific points made by taw, some of them have already been answered in the discussion thread following this post.
taw50

The way I see it, all heredity studies (adoption, twins, etc.) are pretty much universally worthless due to ridiculously wrong methodology (see this for details).

It is trivially observable that populations change drastically in every conceivable way without any genetic change, including along every single behavioral axis claimed to be "highly hereditary" (and the same even applies to many physical features like height, but not to others like skin or eye color). Heredity studies are entirely incompatible with this macro reality, regardless of their (un... (read more)

0jsalvatier
Do you just mean that if a feature is close to 100% heritable, then there shouldn't be big differences in that feature? Or do you have something else in mind?
3gjm
Looks (though I've barely skimmed it) like good evidence that twin studies say less than one might naively think. Doesn't say anything about Caplan. Care to say a thing or two about what Caplan thinks twin studies say and how it differs from what analysis like that reveals that they say? (Perhaps I'm just unduly lazy; I was hoping to find an easier way of assessing your claim versus Caplan's than by procuring a copy of Caplan's book, reading it carefully, reading a technical paper on twin studies, examining the particular studies on which Caplan's claims depend, and comparing his use of them with the analysis in the aforementioned technical paper. Of course that's the only way if I want to be really sure, but ... well, I'm lazy and was hoping there might be a shortcut :-).)
6NancyLebovitz
Also, twins share their uterine environment. This wouldn't apply to IVF twins reared apart, but I doubt there's much of that in the studies.
taw30

Caplan is drastically overinterpreting the evidence for heritability of features, and his main thesis relies on that evidence far too much.

1jsalvatier
As I understand it, the strongest evidence for his thesis comes from adoption studies, do you disagree?
2gjm
This seems plausible on the face of it, but do you have some evidence or argument to back it up?
taw00

Solar panel prices are on a long-term downward trend, but over the last few years the short-term picture was very far from smooth, with very rapid increases and decreases as demand and production capacity mismatched in both directions.

This issue isn't specific to solar panels; all commodities, from oil to metals to food to RAM chips, had massive price swings over the last few years.

There's no long-term problem, since we can make solar panels from just about anything - materials like silicon are available in essentially infinite quantities (manufacturing capacity is the issue, not raw materials), and for thin film you need only small amounts of materials.

-2MugaSofer
So, you consider this notion of "causality" more important than actually succeeding? If I showed up in a time machine, would you complain I was cheating? Also, dammit, karma toll. Sorry, anyone who wants to answer me.
Wei Dai100

I actually have some sympathy for your position that Prisoner's Dilemma is useful to study, but Newcomb's Paradox isn't. The way I would put it is, as the problems we study increase in abstraction from real world problems, there's the benefit of isolating particular difficulties and insights, and making it easier to make theoretical progress, but also the danger that the problems we pay attention to are no longer relevant to the actual problems we face. (See another recent comment of mine making a similar point.)

Given that we have little more than intuiti... (read more)

Prisoner's Dilemma relies on causality, Newcomb's Paradox is anti-causality.

The contents of Newcomb's boxes are caused by the kind of agent you are -- which is (effectively by definition of what 'kind of agent' means) mapped directly to the decision you will take.

Newcomb's paradox can be called anti-causality only in some confused anti-compatibilist sense in which determinism is opposed to free will, and therefore "the kind of agent you are" must be opposed to "the decisions you make" -- instead of absolutely correlating with them.

Ezekiel110

In what way is Newcomb's Problem "anti-causality"?

If you don't like the superpowerful predictor, it works for human agents as well. Imagine you need to buy something but don't have cash on you, so you tell the shopkeeper you'll pay him tomorrow. If he thinks you're telling the truth, he'll give you the item now and let you come back tomorrow. If not, you lose a day's worth of use, and so some utility.

So your best bet (if you're selfish) is to tell him you'll pay tomorrow, take the item, and never come back. But what if you're a bad liar? Then you... (read more)

taw-40

Philosophy contains some useful parts, but it also contains massive amounts of bullshit. Starting let's say here.

Decision theory is studied very seriously by mathematicians and others, and they don't care at all for Newcomb's Paradox.

taw-10

Not counting philosophers, where's this academic interest in Newcomb's paradox?

7Douglas_Knight
Newcomb himself was not a philosopher. I think Newcomb introduced it as a simplification of the prisoner's dilemma. The game theory party line is that you should 2-box and defect. But the same logic says that you should defect in iterated PD, if the number of rounds is known. This third problem is popular in academia, outside of philosophy. It is not so popular in game theory, but the game theorists admit that it is problematic.
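For readers who haven't seen it, here is a sketch of that backward-induction argument, using the conventional assumed Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0 — standard textbook values, not from the comment):

```python
# Backward induction in a known-length iterated Prisoner's Dilemma.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}  # (my move, their move) -> my payoff

def one_shot_best_reply(their_move):
    # In a single round, defecting strictly dominates, whatever they do.
    return max("CD", key=lambda move: PAYOFF[(move, their_move)])

assert one_shot_best_reply("C") == "D"
assert one_shot_best_reply("D") == "D"

# The final round is therefore "defect" regardless of history, so play in
# the round before it cannot influence anything and reduces to a one-shot
# round too -- and the argument unravels all the way back to round 1.
```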
APMason100

Why are we not counting philosophers? Isn't that like saying, "Not counting physicists, where's this supposed interest in gravity?"

-2[anonymous]
CDT eats the donut "just this once" every time and gets fat. TDT says "I shouldn't eat donuts" and does not get fat.
0A1987dM
Yeah, assuming a universe where causality only goes forward in time and where your decision processes are completely hidden from outside, CDT works; but humans are not perfect liars, so they leak information about the decision they're about to make before they start to consciously act on it, so the assumptions of CDT are only approximately true, and in some cases TDT may return better results.

Err, this would also predict no academic interest in Newcomb's Problem, and that isn't so.

taw20

The diagram comes from Wikipedia (tineye says this), but it seems they recently started merging and reshuffling content in all the energy-related articles, so I can no longer find it there.

That's total energy available, of course, not any 5-year projection.

3Mercurial
Thank you! Do you happen to know anything about the claim that we're running out of the supplies we need to build solar panels needed to tap into all that wonderful sunlight?
taw90

Wikipedia didn't get hundreds of millions of visitors until after it got so big.

I know it's hard to believe, but when we started in 2001, it was a very tiny, very obscure website that people commonly made fun of, and we were excited by any coverage we could get (and getting omg slashdotted - that was like news of the month).

taw40

No; humans living in very poor countries or in the remote past also always tried to have at least a basic understanding of the neighbouring tribe's language. It's hard to come by hard data, but modern nation states are probably about the only large monolingual societies in history, other than small and very isolated places.

In modern Africa it's entirely normal for people to speak 3+ languages (not necessarily to a very high standard, just enough to get by).

taw150

Evidence that this works better than other methods being...

Seriously, with such a huge number of people trying to learn a second language (like 90% of all humans) we should have some proper studies by now.

-13[anonymous]
Mercurial120

Can you pretty, pretty please tell me where this graph gets its information from? I've seen similar graphs that basically permute the cubes' labels. It would also be wonderful to unpack what they mean by "solar", since the raw amount of sunlight power hitting the Earth's surface is very different from the energy we can actually harness as an engineering feat over the next, say, five years (due to materials needed to build solar panels, efficiency of solar panels, etc.).

And just to reiterate, I'm really not arguing here. I'm honestly confu... (read more)

taw-10

The strong orthogonality hypothesis is definitely wrong - not being openly hostile to most other agents has an enormous instrumental advantage. That's what's holding modern human societies together: agents like humans, corporations, and states have mostly managed to keep their hostility low. Those that are particularly belligerent (and the historical median has been far more belligerent towards strangers than all but the most extreme cases today) don't do well by instrumental standards at all.

Of course you can make a complicated argument why it doesn't matter (so... (read more)

1Kindly
I actually think this "complicated argument", either made or refuted, is the core of this orthogonality business. If you ask the question "Okay, now that we've made a really powerful AI somehow, should we check if it's Friendly before giving it control over the world?" then you can't answer it just based on what you think the AI would do in a position roughly equal to humans. Of course, you can just argue that this doesn't matter because we're unlikely to face really powerful AIs at all. But that's also complicated. If the orthogonality thesis is truly wrong, on the other hand, then the answer to the question above is "Of course, let's give the AI control over the world, it's not going to hurt humans and in the best case it might help us."
taw-20

Everything you say is ahistorical nonsense. Transatlantic trade on a massive scale was happening back in the 19th century, so wood imports from the New World (or Scandinavia, or any other place) could have easily happened. The energy densities of charcoal and of coal are very similar, so one could just as easily be imported as the other.

Or industries could have been located closer to major sources of wood, the same way they were located closer to major sources of coal. This was entirely possible.

4Furslid
Would you mind explaining how what I have said is ahistorical nonsense?

Yes, at the end of the 18th century there was transatlantic trade. However, it was not cheap. It was sail powered and relatively expensive compared to modern shipping. Coal was generally not part of this trade; shipping was too expensive. English industry used English-mined coal. Same with American and German industry. If shipping coal was too expensive, why would charcoal be economical? You have jumped from "transportation existed" to "the costs of transportation can be ignored."

As for why industries weren't located by sources of wood, I can think of several reasons. First, they sometimes were located by sources of wood, and that contributed to the deforestation. Second, there are no sources of wood as geographically concentrated as sources of coal. There is no 10-mile-square wood-producing district that can provide as much energy consistently over time as a 10-mile-square coal-mining district. Third, timber was inconveniently located. There were coal-producing areas that were better located for shipping and labor than timber-producing areas. Are you seriously suggesting that an English-owned factory with English labor might have set up in rural Sweden rather than Birmingham as an almost-as-good alternative?

I thought that we would have been total idiots to leave a resource like coal unused.
taw20

The disease killed an estimated 400,000 Europeans per year during the closing years of the 18th century

So? 400,000 people a year is what % of total mortality?

As recently as 1967, the World Health Organization (WHO) estimated that 15 million people contracted the disease and that two million died in that year.

In an important way, diseases don't kill people; poverty, hunger, and lack of sanitation kill people. The deaths were almost all happening in the poorest and most abused parts of the world - India and Africa.

3Alsadius
World population in 1800 was about a billion, and we'll ballpark 1/5th of the population being in Europe and 1/40th of them dying per year (which is probably a better life expectancy than the world had, but about right for Europe). That means about 5 million deaths per year, so 400k would be 8%. And it's not like smallpox was the only plague around, either.

In an even more important way, diseases kill people. Yes, if smallpox came back today (or a non-vaccinable equivalent) it'd kill a lot fewer people than it used to, because of better quarantine, sanitation, and all that fun stuff. Same way AIDS is a minor problem here and a world-ender in sub-Saharan Africa. But it's not like we lack for infectious disease in the developed world.
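A quick check of that arithmetic (every input is the ballpark figure assumed in the comment above, not sourced data):

```python
# Rough mortality arithmetic from the comment above, taken at face value.
world_pop_1800 = 1_000_000_000
europe_pop = world_pop_1800 / 5      # ballpark: ~1/5 of humanity in Europe
deaths_per_year = europe_pop / 40    # ballpark: ~1/40 of the population dying yearly
smallpox_deaths = 400_000

print(f"European deaths/year: {deaths_per_year:,.0f}")             # 5,000,000
print(f"smallpox share: {smallpox_deaths / deaths_per_year:.0%}")  # 8%
```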
taw00

Wood ran out because forests weren't properly managed, not because photosynthesis is somehow insufficiently fast at growing forest - and in any case there are countless agricultural alternative energy sources, like ethanol from sugar cane.

In 1990, 3.5 billion m^3 of wood were harvested. With a density of about 0.9 tonnes per cubic meter and an energy content of about 15 MJ/kg, that's about 47 trillion MJ (if we burned it all, which we're not going to).

All coal produced in 1905 came to about 0.9 billion tons, or about 20 trillion MJ.
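Redoing that wood-vs-coal arithmetic explicitly, taking the comment's figures at face value (the ~22 MJ/kg for coal is back-derived from the stated 20 trillion MJ, an assumption):

```python
# Wood vs. coal energy arithmetic, using the figures quoted above.
wood_m3 = 3.5e9      # m^3 of wood harvested in 1990
wood_density = 900   # kg/m^3, i.e. ~0.9 tonnes per cubic meter
wood_mj_per_kg = 15
wood_total = wood_m3 * wood_density * wood_mj_per_kg
print(f"wood, if all burned: {wood_total:.1e} MJ")  # ~4.7e13 = 47 trillion MJ

coal_kg = 0.9e9 * 1000  # 0.9 billion tonnes mined in 1905
coal_mj_per_kg = 22     # assumed; implied by the stated ~20 trillion MJ
print(f"coal mined in 1905: {coal_kg * coal_mj_per_kg:.1e} MJ")  # ~2.0e13 MJ
```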

In 2010 worldwide biofuel production reached 105 bi... (read more)

5Furslid
The key point of economics you are missing here is that the price of wood was driven up by increased demand. Wood never ran out, but it did become so expensive that some uses became uneconomical. This allowed substitution of the previously more expensive coal. This did not happen because of poor management of forests. Good management of forests might have encouraged it, by limiting the amount of wood taken for burning.

This is especially true because we are not talking about a modern globalized economy where cheap sugar from Brazil, corn from Kansas, or pine from the Rockies can come into play. We are talking about the 19th century in industrializing Europe. The energy use of England could not have been met by better forestry. All stats from 200 years later are a red herring.

If there were other alternatives that were almost as good, please produce them. Not now, but at the time being discussed.
taw00

Which part of "Europe" are you talking about? Western peripheries of Roman Empire got somewhat backwards, and that was after massive demographic collapse of late Antiquity, the rest of Europe didn't really change all that drastically, or even progressed quite a lot.

taw70

This argument is only convincing to people who never bothered to look at the timeline of historical events in technology. No country had any significant amount of coal mining before, let's say, the UK from the 1790s onward, and even there it was primarily a replacement for wood and charcoal.

The technologies we managed to build by then were absolutely amazing. Until 1870 the majority of locomotives in the USA operated on wood, and canal transport was as important as railroads while being even less dependent on dense fuels, so transportation was perfectly fine.

Entire industries opera... (read more)

7JoshuaZ
Most of your analysis seems accurate, but there do seem to be some issues.

While you are correct that until 1870 the majority of locomotives in the USA operated on wood, the same article you linked to notes that this was phased out as the major forests were cut down and demand went up. This was not a long-term sustainable process that was converted over to coal simply because coal was more efficient. Even if one had forests grow back to pre-industrial levels (a not completely unlikely possibility if most of humanity has been wiped out), you don't have that much time to use wood on a large scale before you need to switch over.

You also are underestimating the transformation that occurred in the second half of the 19th century. In particular, while it is true that industries operated on water power, the total number of industries, and the energy demands they made, were much smaller. Consider for example chip-making plants, which have massive energy needs. One can't run a modern economy on water power because there wouldn't be nearly enough water power to go around. This is connected to how, while in the US in the 1870s and 1880s many of the first power plants were hydroelectric, support of a substantial grid required the switch to coal, which could both provide more power and have plants built at the most convenient locations. This is discussed in Maggie Koerth-Baker's book "Before the Lights Go Out", which has a detailed discussion of the history of the US electric grids.

And while it is true that no country had major coal mining before 1790 by modern standards, again, the replacement of wood and charcoal occurred to a large extent because they were running out of cheap wood, and because increased industry substantially benefited from the increased energy density. And even well before that, coal was already used in the late Middle Ages for specialized purposes, such as working metals that required high temperatures. While not a large industry, it was l
taw90

That reasoning is just extremely unconvincing - essentially 100% wrong and backwards.

Renewable energy available annually is many orders of magnitude greater than all the fossil fuels we're using, and renewables were the primary source of energy for almost all of history up to the industrial revolution. Biomass for everything, animal muscle power, wind and gravity for water transport, charcoal for smelting etc. were used successfully at massive scale before anybody even thought of oil or gas or made much use of coal.
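For scale, here is a rough version of that "orders of magnitude" claim, using standard ballpark figures that are my own assumptions rather than anything from the comment:

```python
# Solar influx vs. fossil-fuel use -- ballpark figures, assumed.
solar_watts = 1.7e17    # ~170,000 TW of sunlight intercepted by Earth
fossil_watts = 1.5e13   # ~15 TW of fossil primary-energy consumption

print(f"solar influx exceeds fossil use by ~{solar_watts / fossil_watts:,.0f}x")
# -> ~11,000x, i.e. about four orders of magnitude (solar alone,
#    before counting wind, hydro, or biomass)
```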

Other than energy, most other resources - like... (read more)

7Mercurial
Okay, this has been driving me bonkers for years now. I keep encountering blatantly contradictory claims about what is "obviously" true about the territory. taw, you said:

And you might well be right. But the people involved in transition towns insist quite the opposite: I've been explicitly told, for one example, that it would take the equivalent of building five Three Gorges Dams every year for the next 50 years to keep up with the energy requirements provided by fossil fuels.

By my reading, these two facts cannot both be correct. One of them says that civilization can rebuild just fine if we run out of fossil fuels, and the other says that we may well hit something dangerously close to a whimper.

I'm not asking for a historical analysis here about whether we needed fossil fuels to get to where we are. I'd like clarification on a fact about the territory: is it the case that renewable forms of energy can replace fossil fuels without modern civilization having to power down? I'm asking this as an engineering question, not a political one.
3JoshuaZ
Right, and the energy demands of those societies were substantially lower than those of later societies which used oil and coal. The industrial revolution would likely not have been possible without the presence of oil and coal in easily accessible locations. Total energy isn't all that matters - the efficiency of the energy, ease of transport, and energy density all matter a lot as well. In those respects, fossil fuels are substantially better and more versatile.
0A1987dM
I'm a bit sceptical about that. Compare the technological level of Europe in AD 100 with that of Europe in AD 700.
taw40

The thing is, countries would not really be poorer. Properly treated HIV isn't much worse than smoking (I mean the part before lung cancer) or diabetes for most of a person's life. Countries already differ a lot on these, without any apparent drastic differences in economic outcomes.

By the time people are already very old they might live a few years less, but they're not terribly productive at that point anyway.

taw30

That's already old data by the standards of modern medical progress, and the groups that tend to get HIV are highly non-random: they typically engage in other risky activities like unprotected promiscuous sex and intravenous drug use, and are poorer and blacker than average, so their baseline life expectancy is already much lower than the population average.

taw20

Smallpox wasn't that bad if you look at the statistics, and the Spanish flu happened at a time when humans had been murdering each other at an unprecedented rate and normal society was either suspended or had collapsed altogether everywhere.

Usually the chance of getting infected is inversely correlated with the severity of symptoms (by the laws of epidemiology), and nastiness is inversely correlated with broad range (by the laws of biology), so you get diseases that are really extreme by any one criterion, but they tend to be really weak by some other criterion.

And in any case we're getting amazingly better at this.

3Alsadius
Not that bad? I agree that there were aggravating factors, particularly in the Spanish flu case, and that tradeoffs between impact and spread generally form a brake. But nasty diseases do exist, and our medical science is sufficiently imperfect that the possibility of one slipping through even in the modern world is not to be ignored. Fortunately, it's a field we're already pouring some pretty stupendous sums of money into, so it's not a risk we're likely to be totally blindsided by, but it's one to keep in mind.
taw70

There's no particular reason to believe this is going to make global thermonuclear war any less likely. Russia and the United States aren't particularly likely to start global thermonuclear warfare anytime soon, and in the longer term any major developed country, if it wanted, could build nuclear arsenals sufficient to make a continent uninhabitable within a few years.

There's also this argument that mutually assured destruction was somehow stabilizing and preventing nuclear warfare - the only use of nuclear weapons so far happened when the other side had ... (read more)
