According to Wikipedia, the threshold for fibrillation is 60 mA for AC, 300-500 mA for DC. On reflection, it seems I'd previously cached the AC value as the value for all currents, so that was skewing my argument.

Given these figures, a 1k Ohm total resistance (internal plus skin plus body) across the battery's 12 volts would lead to a 12 mA current (painful but not fibrillation-inducing), whereas a total resistance of 200 Ohms / 40 Ohms would be required for 12 VAC / VDC to be potentially lethal. So, yeah, now that I think about it, a car battery probably couldn't be lethal unless conductors were actually puncturing the skin and touching the bloodstream directly (or covering a HUGE amount of surface area). I retract my claim.
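For anyone who wants to fiddle with the numbers, here's a minimal Python sketch of the Ohm's-law arithmetic above. The thresholds are the Wikipedia figures quoted above, and the resistances are the illustrative values from this paragraph, not measurements:

```python
# Minimal sketch of the Ohm's-law arithmetic above.
# Thresholds are the Wikipedia figures; resistances are illustrative.

FIBRILLATION_MA = {"AC": 60.0, "DC": 300.0}  # rough fibrillation thresholds, mA

def current_ma(volts, ohms):
    """Ohm's law, I = V / R, returned in milliamps."""
    return volts / ohms * 1000.0

for r in (1000.0, 200.0, 40.0):  # total resistance: internal + skin + body
    i = current_ma(12.0, r)
    risks = [name for name, mA in FIBRILLATION_MA.items() if i >= mA]
    print(f"12 V across {r:4.0f} Ohm -> {i:3.0f} mA; "
          f"fibrillation risk: {', '.join(risks) or 'none'}")

# 12 V across 1000 Ohm ->  12 mA; fibrillation risk: none
# 12 V across  200 Ohm ->  60 mA; fibrillation risk: AC
# 12 V across   40 Ohm -> 300 mA; fibrillation risk: AC, DC
```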

Edit: OH! Except that Wikipedia says the threshold for fibrillation is a mere 10 µA if the current is from electrodes that establish a circuit through the heart. THAT's the figure I'd seen before and cached in my head. Still, that's not a likely situation to arise when using jumper cables, so my claim remains retracted.

It's worth noting that the reason we use clamps on the ends of jumper cables is that pressure increases the surface area in contact, which decreases resistance, for the simple reason of Ohm's law applied to parallel resistors. (Three 1k Ohm resistors have a parallel resistance of only 333 Ohms. It's meaningless to give a single figure for copper -> wet skin resistance without also giving the surface area for which the figure is valid.)
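A tiny sketch of that parallel-resistor arithmetic, using the illustrative 1k Ohm figure from above:

```python
# Parallel resistance: 1/R_total = 1/R_1 + 1/R_2 + ... More contact area
# behaves like more resistors in parallel, so more pressure (more area)
# means less total resistance.

def parallel(*ohms):
    return 1.0 / sum(1.0 / r for r in ohms)

print(parallel(1000.0))                  # 1000.0 -- a single contact patch
print(parallel(1000.0, 1000.0, 1000.0))  # ~333.3 -- three patches in parallel
```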

This means that incidental touching of metal is extremely unlikely to kill anyone, but accidentally clamping your finger, gripping metal tightly, or anything else that applies pressure to your skin will dramatically raise the risk.

It does if the skin is wet. Once you're through the skin, the human body's resistance is quite low, in the single-digit kiloohm range at most, because the human body is mostly salt water (a fantastically good conductor by non-metallic standards). The biggest barrier to current is the upper layer of dead, dry cells on the epidermis. And lead-acid batteries have a fairly low internal resistance, which allows them to produce high currents if the load resistance is also low (a required feature for cranking the engine).

It's worth noting that, while 12 volts won't normally penetrate dry skin, you really do need to be careful. Pressure increases surface-to-surface contact, which decreases resistance, which lowers the voltage threshold; moisture, even small amounts of sweat, does the same. And a car battery can supply sufficient current to injure or kill a human being quite easily. (Voltage is what penetrates insulators; current is what actually does damage. The zap you get from static electricity is in the range of thousands of volts, but the current is negligible.)

I was taught a slightly different procedure, which is the same as the one listed as the first result on Google for "jumper cables":

  1. Line up the cars, pop the hood on both cars, get out the jumper cables, make sure both cars have their engines turned off, check that the dead battery looks safe (no cracks, leaks, or swelling), and try to scrape off any corrosion on the terminals.
  2. Connect one red clip to the positive (+) terminal of the dead battery.
  3. Connect the other red clip to the positive (+) terminal of the good battery.
  4. Connect one black clip to the negative (-) terminal of the good battery.
  5. Connect the other black clip to the exposed metal of the engine or chassis of the car with the dead battery. The chassis is connected to the negative terminal ("grounded"), so this will complete the circuit while minimizing sparks near the battery itself. A malfunctioning battery might be venting flammable, potentially explosive hydrogen gas, so don't risk sparks near the battery.
  6. Start the "donor" car. Let it run for a minute or two.
  7. Start the "acceptor" car. It should crank and run normally.
  8. Disconnect the cables in the reverse order (undo steps 5, 4, 3, 2). If the order is reversed exactly, then the cables can be disconnected from the two running cars with no sparking near the battery. You'll get some sparks when you disconnect from the chassis, but that's OK.
  9. Wait a few minutes (3 to 5). The acceptor car should continue to run. If it dies a few minutes after disconnecting the cables, then it's a problem with the alternator and not just the battery.
  10. Put the cables away, close the hoods, and thank the owner of the donor car (who can now leave).
  11. Leave the acceptor running for a while. You can drive it as much as you like during this period; just don't shut off the engine until the alternator has had time to recharge the battery (say, 10 to 15 additional minutes).

The site I linked to makes the point that steps 6-7-8 in my procedure can damage the donor's alternator, since alternators aren't built to supply cranking-level currents. It recommends letting the donor run for a bit longer than my step 6 requires, then (8a) shutting off the donor, (8b) disconnecting the cables entirely, and only then (7) starting the acceptor. Whether or not this method works would depend on the state of the battery (it may fail for a poor but working battery) and the weather (it may fail below, say, 10°F / -12°C).

(Note: lead-acid batteries are damaged by letting them discharge fully, because the cathodes and anodes are both transformed into the same material, lead sulfate. Once that happens, it becomes far more difficult to recharge the battery and you're better off just buying a new one. Even if your battery won't take a charge, a jump start can get you to a store that sells new automotive batteries -- the battery is only needed to turn the engine through the first few cycles, and the alternator will provide all needed electricity once the engine is turning fast enough.)

Washing bacteria down the drain is by far the primary benefit of using soap, but surfactants like soap also kill a few bacteria by lysis (disruption of the cell membrane, causing the cells to rapidly swell with water and burst). In practice, this effect is so minor it's not worth paying attention to: bacteria have a surrounding cell wall made of a sugar-protein polymer that resists surfactants (among other things), dramatically slowing down the process to the point that it's not practical to make use of it.

(Some bacteria are more vulnerable to surfactant lysis than others. Gram-negative bacteria have a much thinner cell wall, which is itself surrounded by a second, more exposed membrane. But Gram-positive bacteria have a thick wall with nothing particularly vulnerable on the outside, and even with Gram-negative bacteria the scope of the effect is minor.)

In practice, the big benefit of soap is (#1) washing away oils, especially skin oils, and (#2) dissolving the biofilms produced by the bacteria to anchor themselves to each other and to biological surfaces (like skin and wooden cutting boards). Killing the bacteria directly with soap is a distant third priority.

For handwashing, hot water is in a similar boat: even the hottest water your hands can stand is merely enough to speed up surfactant action, not to kill bacteria directly. For cleaning inanimate surfaces, sufficiently hot water is quite effective at killing bacteria, but most people's hot water only goes up to 135°F (about 57°C) or thereabouts, which is not hot enough to do the job instantly.

For directly killing bacteria via non-heat means, alcohol and bleach are both far more effective than soap. Alcohol very rapidly strips off the cell wall and triggers immediate lysis, while bleach acts both as a saponifier (it turns fatty acids into soap) and a strong oxidizer (directly attacking the chemical structure of the cell wall and membrane, ripping it apart like a rapid-action biological parallel to rusting iron).

Fun trivia: your hand feels slippery or "bleachy" after handling bleach (or any reasonably strong base) because the outermost layer of your skin has been converted into soap.

I'm a bit irked by the persistence of "LHC might destroy the world" noise. Given no evidence, the prior probability that microscopic black holes can form at all, across all possible systems of physics, is extremely small. The same theory (String Theory[1]) that leads us to suggest that microscopic black holes might form at all is also quite adamant that all black holes evaporate, and equally adamant that microscopic ones evaporate faster than larger ones, with lifetime scaling as the cube of the mass. If we think the theory is talking complete nonsense, then the posterior probability of an LHC disaster goes down, because we fall back on the ignorant prior of a universe where microscopic black holes don't exist at all.

Thus, the "LHC might destroy the world" noise boils down to the possibility that (A) there is some mathematically consistent post-GR, microscopic-black-hole-predicting theory with massively slower evaporation, (B) this unnamed and possibly non-existent theory is less Kolmogorov-complex and hence more posterior-probable than the one scientists are currently using[2], and (C) scientists have completely overlooked this unnamed and possibly non-existent theory for decades, strongly suggesting that it has a large Levenshtein distance from the currently favored theory. The simultaneous satisfaction of these three criteria seems... pretty f-ing unlikely, since each tends to reject the others.

A/B: it's hard to imagine a theory that predicts post-GR physics with LHC-scale microscopic black holes yet is more Kolmogorov-simple than String Theory, which can actually be specified pretty damn compactly.

B/C: people have already explored the Kolmogorov-simple space of post-Newtonian theories pretty heavily, and even the simple post-GR theories are pretty well explored, making it unlikely that even a theory with a large edit distance from either ST or SM+GR has been overlooked.

C/A: it seems like a hell of a coincidence that a large-edit-distance theory, i.e. one extremely dissimilar to ST, would just happen to also predict the formation of LHC-scale microscopic black holes, then go on to predict that they're stable on the order of hours or more by throwing out the mass-cubed rule[3], then go on to explain why we don't see them by the billions despite their claimed stability. (If cosmic-ray collisions impart so much momentum that the resulting black holes zip straight through Earth, why haven't they eaten Jupiter, the Sun, or other nearby stars yet? Bombardment by cosmic rays is not unique to Earth, and there are plenty of celestial bodies heavy enough to capture the products.)

[1] It's worth noting that our best theory, the Standard Model with General Relativity, does not predict microscopic black holes at LHC energies. Only String Theory does: ST's 11-dimensional compactified space is supposed to suddenly decompactify at high energy scales, making gravity much more powerful at small scales than GR predicts, thus allowing black hole formation at abnormally low energies, i.e. those accessible to the LHC. And naked GR (minus the SM) doesn't predict microscopic black holes. At all. Instead, naked GR only predicts supernova-sized black holes and larger.

[2] The biggest pain of SM+GR is that, even though we're pretty damn sure that that train wreck can't be right, we haven't been able to find any disconfirming data that would lead the way to a better theory. This means that, if the correct theory were more Kolmogorov-complex than SM+GR, then we would still be forced as rationalists to trust SM+GR over the correct theory, because there wouldn't be enough Bayesian evidence to discriminate the complex-but-correct theory from the countless complex-but-wrong theories. Thus, if we are to be convinced by some alternative to SM+GR, either that alternative must be Kolmogorov-simpler (like String Theory, if that pans out), or that alternative must suggest a clear experiment that leads to a direct disconfirmation of SM+GR. (The more-complex alternative must also somehow attract our attention, and also hint that it's worth our time to calculate what the clear experiment would be. Simple theories get eyeballs, but there are lots of more-complex theories that we never bother to ponder because that solution-space doesn't look like it's worth our time.)

[3] Even if they were stable on the order of seconds to minutes, they wouldn't destroy the Earth: the resulting black holes would be smaller than an atom, in fact smaller than a proton, and since atoms are mostly empty space the black hole would sail through atoms with low probability of collision. I recall that someone familiar with the physics did the math and calculated that an LHC-sized black hole could swing like a pendulum through the Earth at least a hundred times before gobbling up even a single proton, and the same calculation showed it would take over 100 years before the black hole grew large enough to start collapsing the Earth due to tidal forces, assuming zero evaporation. Keep in mind that the relevant computation, t = (5120 × π × G^2 × M^3) ÷ (ℏ × c^4), shows that a 1-second evaporation time corresponds to a mass of 2.28e8 grams[3a], i.e. 250 tons, and that the corresponding Schwarzschild radius, r = 2 × G × M ÷ c^2, is 3.39e-22 meters[3b], or about 0.4 millionths of a proton radius[3c]. That one-second-duration black hole, despite being tiny, is vastly larger than the ones that might be created by the LHC -- 10^28 times larger by mass, in fact[3d]. (FWIW, the Schwarzschild radius calculation relies only on GR, with no quantum stuff, while the time-to-evaporate calculation depends on some basic QM as well. String Theory and the Standard Model both leave that particular bit of QM untouched.)

[3a] Google Calculator: "(((1 s) h c^4) / (2pi 5120pi G^2)) ^ (1/3) in grams"

[3b] Google Calculator: "2 G 2.28e8 grams / c^2 in meters"

[3c] Google Calculator: "3.3856695e-22 m / 0.8768 femtometers", where 0.8768 femtometers is the experimentally accepted charge radius of a proton

[3d] Google Calculator: "(2.28e8 g * c^2) / 14 TeV", where 14 TeV is the LHC's maximum energy (7 TeV per beam in a head-on proton-proton collision)
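For anyone who'd rather not wrestle with Google Calculator syntax, here's the same arithmetic as a small Python sketch. The constants are rounded CODATA values, and the formulas are exactly the evaporation-time and Schwarzschild-radius expressions from [3]:

```python
import math

# Rounded CODATA values for the physical constants, SI units.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant (h / 2pi), J s
c    = 2.9979e8    # speed of light, m / s

# [3a] Invert t = 5120 * pi * G^2 * M^3 / (hbar * c^4) for t = 1 second.
M = (1.0 * hbar * c**4 / (5120 * math.pi * G**2)) ** (1.0 / 3.0)  # kg
print(f"mass evaporating in 1 s:   {M * 1000:.3g} g")       # ~2.28e8 g

# [3b] Schwarzschild radius r = 2 * G * M / c^2.
r = 2 * G * M / c**2
print(f"Schwarzschild radius:      {r:.3g} m")              # ~3.39e-22 m

# [3c] Compare to the proton charge radius, 0.8768 fm.
print(f"fraction of proton radius: {r / 0.8768e-15:.2g}")   # ~3.9e-7

# [3d] Mass-energy of that hole vs. the LHC's 14 TeV collisions.
joules_per_TeV = 1.602e-19 * 1e12
print(f"ratio to LHC energy:       {M * c**2 / (14 * joules_per_TeV):.2g}")
```

The [3d] ratio comes out to roughly 9e27, which is where the "10^28 times larger by mass" figure above comes from.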

I'm afraid I can't say much beyond what I've already said, except that Google places a fairly high value on detecting fraudulent activity.

I'd be surprised if I discovered that no bad guys have ever tried to simulate the search behavior of unique users. But (a) assuming those bad guys are a problem, I strongly suspect that the folks worried about search result quality are already on to them; and (b) I suspect the bad guys who try such techniques give up in favor of the lower-hanging fruit of more traditional bad-guy SEO.

I think it's interesting to note that this is the precise reason why Google is so insistent on defending its retention of user activity logs. The logs contain quality proxies that are under the control of the end user, rather than of the content producer, and thus allow a clean estimate of (the end user's opinion of) search result quality. This lets Google spot manipulation after the fact, and thus experiment with new algorithm tweaks that would have counterfactually improved the quality of results.

(Disclaimer: I currently work at Google, but not on search or anything like it, and this is a pretty straightforward interpretation starting from Google's public statements about logging and data retention.)

"And while some of their costs are borne by others, a lot of their taxes going to roads are also wasted."

This doesn't make sense, because dollars are fungible. If WM reaps a greater monetary value from the highway system than it spends on the highway system via taxes, WM comes out ahead.

So I don't see how this is an indictment of WM -- the harm lies in the shift of the structure of production to a less efficient one, not in a transfer of wealth to the Waltons.

Then we're in violent agreement. I didn't intend the highway bit to be an indictment of WM, but a rebuttal of taw's comment:

"And yet, in spite of the genuine diseconomies of scale which you mention, economies of scale for Wall-Mart seem ever larger, as it successfully competes in open market"

I was attempting to convey the idea that Wal-mart's current (but quite likely ephemeral) success is due more to political accidents than to "economies of scale". The only "economy of scale" operating at Wal-mart is logistics and trucking, which doesn't scale very much: the planning scales somewhat, the trucking has already scaled as far as it can, and the trucking is on more precarious footing than it looks.

Labor doesn't scale: making a Wal-mart store twice as big requires twice as many workers to keep the shelves full.

Sales don't scale: selling twice as many goods provides economies of scale to the manufacturers, not to Wal-mart itself. If manufacturing economies of scale were at play, all retail prices would fall to equal those of Wal-mart: with their new infrastructure paid for, the manufacturers can turn around and sell their cheaper products to Wal-mart's competitors just as easily as they can sell to Wal-mart.

The oligopsony price bullying (i.e. the Vlasic example) is not a proper "economy of scale" in this sense. If Wal-mart had a competitor of equal size, but Wal-mart's size remained unchanged, Wal-mart's economies of scale would be unchanged but its power to bully costs down would weaken. An economy of scale depends on size, not on market power.
