Comments

It's probably one of the many useful functions of the court jester :)

Even a more sane and more continuously distributed measure could yield that result, depending on how you fit the scale. If you measure the likelihood of making a mistake (so zero would be a perfect driver, and one a rabid lemur), I expect the distribution to be hella skewed. Most people drive in a sane way most of the time. But it's the few reckless idiots you remember - and so does every single one of the thousand other drivers who had the misfortune to encounter them. It would not surprise me if driving mistakes followed more-or-less a Pareto distribution.
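
For what it's worth, here's a toy simulation (arbitrary parameters, not real traffic data) of how a Pareto-ish mistake rate would concentrate most of the observed mistakes in a small fraction of drivers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each driver's mistake rate is Pareto-distributed.
# Shape and sample size are arbitrary illustrative choices, not real data.
n_drivers = 100_000
mistake_rate = rng.pareto(a=1.5, size=n_drivers) + 1.0

# Share of all mistakes caused by the worst 1% of drivers.
sorted_rates = np.sort(mistake_rate)[::-1]
top_share = sorted_rates[: n_drivers // 100].sum() / sorted_rates.sum()
print(f"The worst 1% of drivers account for ~{top_share:.0%} of all mistakes")
```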

There probably was a time when killing Hitler had a significant chance of ending the war by enabling peace talks (allowing some high-ranking German generals/politicians to seize power while plausibly denying having wanted this outcome). The window was probably short, though, and opened only a bit after '42. I'd guess any time between the Battle of Stalingrad (where Germany stopped winning) and the Battle of Kursk (which made Soviet victory inevitable) should've worked - everyone involved should rationally prefer white peace to the very real possibility of a bloody stalemate. Before, Germany would not accept. Afterwards, the Soviets wouldn't.

Yup. Layer 8 issues are a lot harder to prevent than even Layer 1 issues :)

While air gaps are probably the closest thing to actual computer security I can imagine, even that didn't work out so well for the guys at Natanz... And once systems on both sides of the air gap are infected, you can even use esoteric techniques like ultrasound from the internal speaker to open up a low-bandwidth connection to the outside.
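
Purely as an illustration of the principle (not any specific published attack), encoding bits as near-ultrasonic tones takes only a few lines; all the parameters below are arbitrary choices:

```python
import numpy as np
from scipy.io import wavfile

# Toy frequency-shift keying near the upper edge of hearing.
# Frequencies, bit duration, and sample rate are made-up illustrative values.
SAMPLE_RATE = 48_000
F_ZERO, F_ONE = 18_000, 19_000   # Hz, barely audible to most adults
BIT_DURATION = 0.1               # seconds per bit -> ~10 bit/s

def encode(bits):
    t = np.linspace(0, BIT_DURATION, int(SAMPLE_RATE * BIT_DURATION), endpoint=False)
    tones = [np.sin(2 * np.pi * (F_ONE if b else F_ZERO) * t) for b in bits]
    return np.concatenate(tones)

signal = encode([1, 0, 1, 1, 0, 0, 1, 0])
wavfile.write("covert.wav", SAMPLE_RATE, (signal * 32767).astype(np.int16))
```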

And some people would like to make it sit down and write "I will not conjure up what I can't control" a thousand times for this. But I, for one, welcome our efficient market overlords!

Where did you get the impression that European countries do this on a large enough scale to matter*? There are separate bike roads in some cities, but they tend to end abruptly and lead straight into traffic at places where nobody expects cyclists to appear or show similar acts of genius in their design. If you photograph just the right sections, they definitely look neat. But integrating car and bike traffic in a crowded city is a non-trivial problem; especially in Europe where roads tend to follow winding goat paths from the Dark Ages and are way too narrow for today's traffic levels already.

While the plural of anecdote is not data, two of my friends suffered serious head trauma in a bicycle accident they never fully recovered from (without a helmet, they'd likely be dead), while nobody I know personally ever was in a severe car accident. And a quick search also seems to indicate that cycling is about as dangerous as driving (with both of them paling by comparison to motorcycles...).

*with the possible exception of the Netherlands, but even for them I'm not sure.

I know you intended your comment to be a little tongue-in-cheek, but it is actual energy, measured in joules, we're talking about. Exerting willpower depletes blood glucose.

I don't know of studies indicating that introverts drain glucose faster than extroverts when socializing, but that seems like a pretty straightforward thing to measure, and I'd look forward to the results. At least I can tell from personal experience that I need to exert willpower to stay in social situations (especially when there are lots of people close by or when it's loud), and I'm a hardcore introvert. Also, from the observation that lots of people actually like going to these places, while very few people enjoy activities that force them to exert willpower, I can conclude that not everyone feels about it the way I do.

There's another argument I think you might have missed:

Utilitarianism is about being optimal. Instinctive morality is about being fail-safe.

Implicit in all decisions is a nonzero possibility that you are wrong. Once you take that into account, having some "hard" rules like not agreeing to torture here (or in other dilemmas), not pushing the fat guy onto the tracks in the trolley problem, etc., can save you from making horrible mistakes at the cost of slightly suboptimal decisions. Which is, incidentally, how I would want a friendly AI to decide as well - losing a bit in the average case to prevent a really horrible worst case.

That rule alone would, of course, make you vulnerable to Pascal's Mugging. I think the way to go here is to have some threshold at which you round very low (or very high) probabilities off to zero (or one) when their distance from zero (or one) is small compared to the probability that you are simply wrong about the situation. Not only will this protect you against getting your decisions hacked, it will also stop you from wasting computing power on improbable outcomes. This seems to be the reason why Pascal's Mugging usually fails on humans.

Both of these are necessary patches because we operate on opaque, faulty and potentially hostile hardware. One without the other is vulnerable to hacks and catastrophic failure modes, but both taken together are a pretty strong base for decisions that, so far, have served us humans pretty well. In two rules:

1) Ignore outcomes to which you assign a lower probability than to being wrong/mistaken about the situation.
2) Ignore decisions with horrible worst-case scenarios if there are options with a less horrible worst case and a still acceptable average case.

When both of these apply to the same thing, or this process eliminates all options, you have a dilemma. Try to reduce your uncertainty about 1) and start looking for other options in 2). If that is impossible, shut up and do it anyway.
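
A minimal sketch of what these two rules could look like as a decision procedure, assuming each option is just a list of (probability, utility) pairs; the thresholds and the example numbers are made up for illustration, not a worked-out decision theory:

```python
# Toy formalization of the two rules above.

def prune_outcomes(option, p_model_wrong):
    # Rule 1: ignore outcomes less likely than being wrong about the situation.
    return [(p, u) for p, u in option if p >= p_model_wrong]

def choose(options, p_model_wrong=0.01, acceptable_worst=-100):
    pruned = {name: prune_outcomes(o, p_model_wrong) for name, o in options.items()}
    pruned = {n: o for n, o in pruned.items() if o}  # drop emptied options

    # Rule 2: prefer options whose worst case is not horrible.
    safe = {n: o for n, o in pruned.items()
            if min(u for _, u in o) >= acceptable_worst}

    # If every option has a horrible worst case, that's the dilemma case:
    # fall back to all remaining options and do it anyway.
    candidates = safe or pruned
    if not candidates:
        raise ValueError("All options eliminated - reduce uncertainty or find new options")

    # Among the remaining options, pick the best expected utility.
    return max(candidates, key=lambda n: sum(p * u for p, u in candidates[n]))

# Hypothetical trolley-style example with made-up numbers:
options = {
    "push":    [(0.99, +5), (0.01, -200)],  # better on average, horrible worst case
    "refrain": [(1.0, -1)],                 # slightly worse on average, bounded downside
}
print(choose(options))  # -> "refrain"
```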

Exactly. Stocks are almost always better long-term investments than anything else (if mixed properly; single points of failure are stupid). The point of mixing in "slow" options like bonds or real estate is that it gives you something to take money out of when the stocks are low (and replenish it when the stocks are high). That may look suboptimal, but it still beats the alternatives of borrowing money to live on or selling off stocks you expect to rise mid-term. The simulation probably does a poor job of reflecting that.
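
A rough sketch of that mechanic, with entirely made-up returns, balances, and expenses: in years when stocks fall, living expenses come out of the bond bucket; in good years they come out of stocks and the bond bucket gets topped back up.

```python
# Toy illustration of "spend from the slow bucket when stocks are down".
# Returns, expenses, and starting balances are made-up numbers.

def simulate(stock_returns, expenses=40_000, stocks=500_000, bonds=200_000,
             bond_target=200_000, bond_return=0.02):
    for r in stock_returns:
        stocks *= (1 + r)
        bonds *= (1 + bond_return)
        if r < 0:
            # Bad year: draw expenses from bonds, leave stocks alone to recover.
            bonds -= expenses
        else:
            # Good year: draw from stocks and refill the bond bucket.
            stocks -= expenses
            refill = min(max(bond_target - bonds, 0), stocks)
            stocks -= refill
            bonds += refill
    return stocks, bonds

# Hypothetical sequence: two good years, a crash, then recovery.
print(simulate([0.08, 0.10, -0.30, 0.05, 0.12]))
```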
