
Comment author: adamzerner 11 June 2017 04:59:12PM 0 points

I don't. I'm not scope sensitive. The alarm system is working fine; it's just that it gets set off by people cooking (I think). I'm eager to move out ASAP though.

Comment author: evand 20 June 2017 05:32:49AM 0 points

I hope you have renter's insurance, knowledge of a couple evacuation routes, and backups for any important data and papers and such.

Comment author: chaosmage 08 June 2017 10:32:27AM 0 points

Is there a way to get the benefit of including betting into settling arguments, without the shady associations (and possible legal ramifications) of it being gambling?

Comment author: evand 11 June 2017 04:54:39PM 1 point

I'm not aware of any legal implications in the US. US gambling laws basically only apply when there is a "house" taking a cut or betting to their own advantage or similar. Bets between friends where someone wins the whole stake are permitted.

As for the shady implications... spend more time hanging out with aspiring rationalists and their ilk?

Comment author: Lumifer 07 June 2017 06:56:49PM 0 points

> expected utility maximization

You are just rearranging the problem without solving it. Can my utility function include risk aversion? If it can, we're back to square one: a risk-averse Bayesian rational agent.

And that's even besides the observation that being Bayesian and being committed to expected utility maximization are orthogonal things.

> The kind of meta-uncertainty you seem to want, that gets you out of uncomfortable bets, doesn't exist for Bayesians.

I have no need for something that can get me out of uncomfortable bets since I'm perfectly fine with not betting at all. What I want is a representation for probability that is more rich than a simple scalar.

In my hypothetical the two 50% probabilities are different. I want to express the difference between them. There are no sequences involved.

Comment author: evand 08 June 2017 02:24:10PM 0 points

The richer structure you seek for those two coins is your distribution over their probabilities. They're both 50% likely to come up heads, given the information you have. You should be willing to make exactly the same bets about them, assuming the person offering you the bet has no more information than you do. However, if you flip each coin once and observe the results, your new probability estimates for the next flips are now different.

For example, for the second coin you might have a uniform distribution (ignorance prior) over the set of all possible probabilities. In that case, if you observe a single flip that comes up heads, your probability that the next flip will be heads is now 2/3.
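
A minimal sketch of that update, assuming a conjugate Beta prior (the helper name and the Beta(1000, 1000) stand-in for a "known fair" coin are illustrative, not from the thread):

```python
from fractions import Fraction

def posterior_predictive_heads(prior_a, prior_b, heads, tails):
    # Beta(a, b) prior plus observed flips gives a Beta(a + heads, b + tails)
    # posterior; P(next flip = heads) is the posterior mean a / (a + b).
    a = prior_a + heads
    b = prior_b + tails
    return Fraction(a, a + b)

# Coin of unknown bias: uniform Beta(1, 1) prior. One observed head
# moves P(heads) from 1/2 to 2/3, as described above.
print(posterior_predictive_heads(1, 1, 1, 0))               # 2/3

# Coin believed fair: a sharply peaked prior (approximated here by
# Beta(1000, 1000)) barely moves after the same observation.
print(float(posterior_predictive_heads(1000, 1000, 1, 0)))  # ~0.50025
```

Both distributions start with a predictive probability of 1/2, so the bets you'd accept before any flips are identical; the difference only shows up once you condition on evidence.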

Comment author: JacekLach 30 May 2017 05:35:23PM 0 points

Do you have examples of systems that reach this kind of reliability internally?

Most high-9 systems work by taking lots of low-9 components, and relying on not all of them failing at the same time. I.e. if you have 10 95%-reliable systems that fail completely independently, and you only need one of them to work, that gets you about thirteen nines (the chance that all ten fail is 0.05^10 ≈ 10^-13).
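
A quick check of that arithmetic (the helper is mine, not from the comment), using the standard parallel-redundancy formula:

```python
def parallel_reliability(p, n):
    # n independent components in parallel, each working with
    # probability p: the system fails only if all n fail.
    return 1 - (1 - p) ** n

print(parallel_reliability(0.95, 10))  # 0.9999999999999023
print((1 - 0.95) ** 10)                # ~9.8e-14, i.e. about thirteen nines
```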

Expecting a person to be 99% reliable is ridiculous. That's like two sick days per year, ignoring all other possible causes of failing to complete a task. Instead you should build systems and organisations that have slack, so that one person failing at a particular point in time doesn't make a project/org fail.

Comment author: evand 30 May 2017 07:57:56PM 1 point

Well, in general, I'd say achieving that reliability through redundant means is totally reasonable, whether in engineering or people-based systems.

At a component level? Lots of structural components, for example. Airplane wings stay attached at fairly high reliability, and my impression is that while there is plenty of margin in the strength of the attachment, it's not like the underlying bolts are being replaced because they failed with any regularity.

I remember an aerospace discussion about a component (a pressure switch, I think?). NASA wanted documentation for 6 9s of reliability, and expected some sort of very careful fault tree analysis and testing plan. The contractor instead used an automotive component (brake system, I think?), and produced documentation of field reliability at a level high enough to meet the requirements. Definitely an example where working to get the underlying component that reliable was probably better than building complex redundancy on top of an unreliable component.

Comment author: Duncan_Sabien 26 May 2017 10:46:49AM 3 points

Yeah, I think notes saying "do not eat" will suffice; the key is just to get people to use that coin only when it's for a specific plan.

Comment author: evand 26 May 2017 01:11:22PM 5 points

You might also want a mechanism to handle "staples" that individuals want. I have a few foods / ingredients I like to keep on hand at all times, and be able to rely on having. I'd have no objections to other people eating them, but if they did I'd want them to take responsibility for never leaving the house in a state of "no X on hand".

Comment author: Duncan_Sabien 26 May 2017 11:06:14AM 1 point

Y'know, that was the section I was least confident in. I think I'm updating my assertion to something like "will have logged an initial 20 hours, enough to understand the territory and not feel identity-blocked from moving forward if desired."

I suspect you're looking at at least 100 hours to even begin to be competent to do informal contract work in any of those fields, and probably more like 1000+ hours of training. Some of them require certification, as well.

Comment author: evand 26 May 2017 01:06:27PM 1 point

Those numbers sound like reasonable estimates and goals. Having taught classes at TechShop, that first handful of hours is important. 20 hours of welding instruction ought to be enough that you know whether you like it and can build some useful things, but probably not enough to get even an intro-level job. It should give you a clue as to whether signing up for a community college class is a good idea or not.

Also I'm really confused by your inclusion of EE in that list; I'd have put it on the other one.

Comment author: PeterBorah 25 May 2017 10:11:36PM 3 points

Those sound like good ideas for mitigating the corrosive effects I'm worried about.

My personal aesthetic vastly prefers opportunity framings over obligation framings, so my hypothetical version of the dragon army would present things as ideals to aspire to, rather than a code that must not be violated. (Eliezer's Twelve Virtues of Rationality might be a reasonable model.) I think this would have less chance of being corrosive in the way I'm concerned about. However, for the same reason, it would likely have less force.

Re: absolute. I agree that there can be a qualitative difference between 99% and 99.99%. However, I'm skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully. (Again, this may still be just an aesthetic difference, since your proposed system does seem to have fault-tolerance and graceful degradation built in.)

Comment author: evand 25 May 2017 11:54:06PM 6 points

> However, I'm skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully.

On the other hand... look at what happens when you simply demand that level of reliability, put in the effort, and get it. From my engineering perspective, that difference looks huge. And it doesn't stop at 99.99%; the next couple nines are useful too! The level of complexity and usefulness you can build from those components is breathtaking. It's what makes the 21st century work.

I'd be really curious to see what happens when that same level of uncompromising reliability is demanded of social systems. Maybe it doesn't work, maybe the analogy fails. But I want to see the answer!

Comment author: Stuart_Armstrong 13 May 2017 07:26:08AM 0 points

The rational reasons to go to war are to prevent a future competitor and to gain resources. Scorched Earth removes both of those reasons: if you can destroy your own resources AND inflict some damage on the enemy at the same time, then no-one has rational reasons to go to war. Because even a future competitor won't be able to profit from fighting you.

If advanced civilisations have automated disagreement resolving processes, I expect them to quickly reach equilibrium solutions with semi-capable opponents.

Comment author: evand 16 May 2017 05:29:19PM 0 points

What happens when the committed scorched-earth-defender meets the committed extortionist? Surely a strong precommitment to extortion by a powerful attacker can defeat a weak commitment to scorched earth by a defender?

It seems to me this bears a resemblance to Chicken or something, and that on a large scale we might reasonably expect to see both sets of outcomes.
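
For concreteness, here's one way to see both outcomes: model the standoff as Chicken and enumerate the pure-strategy Nash equilibria. The payoffs below are illustrative assumptions, not numbers from the thread:

```python
from itertools import product

# Rows: extortionist; columns: defender. "tough" = committed extortion /
# committed scorched earth; "soft" = cave. Payoffs are (row, col).
payoffs = {
    ("tough", "tough"): (-10, -10),  # both commitments trigger: mutual ruin
    ("tough", "soft"):  (5, -5),     # extortion succeeds
    ("soft",  "tough"): (-5, 5),     # deterrence succeeds
    ("soft",  "soft"):  (0, 0),      # neither commits
}
actions = ("tough", "soft")

def is_pure_nash(r, c):
    # Nash condition: neither player gains by unilaterally switching.
    ru, cu = payoffs[(r, c)]
    return (all(payoffs[(r2, c)][0] <= ru for r2 in actions)
            and all(payoffs[(r, c2)][1] <= cu for c2 in actions))

print([rc for rc in product(actions, actions) if is_pure_nash(*rc)])
# [('tough', 'soft'), ('soft', 'tough')] -- each side winning is an
# equilibrium, so seeing both outcomes at scale is unsurprising.
```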

Comment author: evand 28 April 2017 06:11:17PM 2 points

What's that? If I don't give into your threat, you'll shoot me in the foot? Well, two can play at that game. If you shoot me in the foot, just watch, I'll shoot my other foot in revenge.

Comment author: evand 27 April 2017 04:27:38PM 1 point

On the other hand... what level do you want to examine this at?

We actually have pretty good control of our web browsers. We load random untrusted programs, and they mostly behave ok.

It's far from perfect, but it's a lot better than the desktop OS case. Asking why one case seems to be so much farther along than the other might be instructive.
