Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Stuart_Armstrong 13 May 2017 07:26:08AM 0 points [-]

The rational reasons to go to war are to prevent a future competitor and to gain resources. Scorched Earth removes both of those reasons: if you can destroy your own resources AND inflict some damage on the enemy at the same time, then no one has a rational reason to go to war, because even a future competitor won't be able to profit from fighting you.

If advanced civilisations have automated disagreement resolving processes, I expect them to quickly reach equilibrium solutions with semi-capable opponents.

Comment author: evand 16 May 2017 05:29:19PM 0 points [-]

What happens when the committed scorched-earth-defender meets the committed extortionist? Surely a strong precommitment to extortion by a powerful attacker can defeat a weak commitment to scorched earth by a defender?

It seems to me this bears a resemblance to Chicken or something, and that on a large scale we might reasonably expect to see both sets of outcomes.
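The Chicken-like structure can be made concrete with a toy payoff matrix. A minimal sketch, with purely illustrative payoff numbers (nothing here comes from the thread itself): the committed extortionist and the committed scorched-earth defender are the row and column players, each choosing to back down ("yield") or carry out their commitment ("commit").

```python
# Illustrative Chicken-style payoff matrix: extortionist (row) vs.
# scorched-earth defender (column). Entries are (row payoff, col payoff);
# the numbers are hypothetical and chosen only to give Chicken's structure.
payoffs = {
    ("yield",  "yield"):  (3, 3),   # both back down: peaceful trade
    ("commit", "yield"):  (5, 1),   # extortion succeeds
    ("yield",  "commit"): (1, 5),   # defender's precommitment deters
    ("commit", "commit"): (0, 0),   # both carry through: scorched earth
}

def pure_nash(payoffs, actions=("yield", "commit")):
    """Return the pure-strategy Nash equilibria of a 2x2 game."""
    eqs = []
    for r in actions:
        for c in actions:
            # r is a best response to c, and c is a best response to r
            row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
            col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
            if row_best and col_best:
                eqs.append((r, c))
    return eqs

print(pure_nash(payoffs))  # → [('yield', 'commit'), ('commit', 'yield')]
```

As in Chicken, there are two pure equilibria, one where each side's commitment prevails, which matches the intuition that on a large scale we should expect to observe both outcomes.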

Comment author: evand 28 April 2017 06:11:17PM 2 points [-]

What's that? If I don't give into your threat, you'll shoot me in the foot? Well, two can play at that game. If you shoot me in the foot, just watch, I'll shoot my other foot in revenge.

Comment author: evand 27 April 2017 04:27:38PM 1 point [-]

On the other hand... what level do you want to examine this at?

We actually have pretty good control of our web browsers. We load random untrusted programs, and they mostly behave ok.

It's far from perfect, but it's a lot better than the desktop OS case. Asking why one case seems to be so much farther along than the other might be instructive.

Comment author: Lumifer 27 April 2017 03:40:33PM 0 points [-]

We've mostly solved that problem.

Not quite. We mostly know how to go about it, but we haven't actually solved it -- otherwise there would be no need for QC and no industrial accidents.

It's precisely what's required to solve the problem of a hammer that bends nails and leaves dents, isn't it?

Still nope. The nails come in different shapes and sizes, the materials can be of different density and hardness, the space to swing a hammer can vary, etc. Replicating a fixed set of actions does not solve the general "control of the tool" problem.

I think that's outside the scope of the "hammer control problem"

I don't think it is. If you are operating in the real world you have to deal with anything which affects the real-life outcomes, regardless of whether it fits your models and frameworks. The Iranians probably thought that malware was "outside the scope" of running the centrifuges -- it didn't work out well for them.

they're control problems in the most classical engineering sense

Yes, they are. So if you treat the whole thing as an exercise in proper engineering, it's not that hard (by making-an-AI standards :-D). However, the point of "agenty" tools is to be able to let the tool find a solution or achieve an outcome without you needing to specify precisely how to do it. In that sense, classic engineering control is all about specifying precise actions and "punishing" all deviations from them via feedback loops.

Comment author: evand 27 April 2017 03:58:28PM 0 points [-]

Again, I'm going to import the "normal computer control" problem assumptions by analogy:

  • The normal control problem allows minor misbehaviour, but requires that it not persist over time

Take a modern milling machine. Modern CNC mills can include a lot of QC. They can probe part locations, so that the setup can be imperfect. They can measure part features, in case a raw casting isn't perfectly consistent. They can measure the part after rough machining, so that the finish pass can account for imperfections from things like temperature variation. They can measure the finished part, and reject or warn if there are errors. They can measure their cutting tools, and respond correctly to variation in tool installation. They can measure their cutting tools to compensate for wear, detect broken tools, switch to the spare cutting bit, and stop work and wait for new tools when needed.

Again, I say: we've solved the problem, for things literally as simple as pounding a nail, and a good deal more complicated. Including variation in the nails, the wood, and the hammer. Obviously the solution doesn't look like a fixed set of voltages sent to servo motors. It does look like a fixed set of parts that get made.

How involved in the field of factory automation are you? I suspect the problem here may simply be that the field is more advanced than you give it credit for.

Yes, the solutions are expensive. We don't always use these solutions, and often it's because using the solution would cost more and take more time than not using it, especially for small quantity production. But the trend is toward more of this sort of stuff being implemented in more areas.

The "normal computer control problem" permits some defects, and a greater than 0% error rate, provided things don't completely fall apart. I think a good definition of the "hammer control problem" is similar.

Comment author: Lumifer 27 April 2017 02:59:31PM 0 points [-]

We've (mostly) solved the hammer control problem in a restricted domain.

The "mostly" part is important -- everyone still has QC departments which are quite busy.

Also, I'm not sure that being able to nearly perfectly replicate a fixed set of physical actions is the same thing as solving a control problem.

Air-gapped CNC machinery running embedded OSes (or none at all) is pretty well behaved.

In theory. In practice you still have cosmic rays flipping bits in memory and Stuxnet-type attacks.

However the real issue here is the distinction between "agenty" and "un-agenty". It is worth noting that the type of control that you mention (e.g. "computer-controlled robots") is all about getting as far from "agenty" as possible.

Comment author: evand 27 April 2017 03:25:36PM *  0 points [-]

It bends the nails, leaves dents in the surface and given the slightest chance will even attack your fingers!

We've mostly solved that problem.

I'm not sure that being able to nearly perfectly replicate a fixed set of physical actions is the same thing as solving a control problem.

It's precisely what's required to solve the problem of a hammer that bends nails and leaves dents, isn't it?

Stuxnet-type attacks

I think that's outside the scope of the "hammer control problem" for the same reasons that "an unfriendly AI convinced my co-worker to sabotage my computer" is outside the scope of the "normal computer control problem" or "powerful space aliens messed with my FAI safety code" is outside the scope of the "AI control problem".

It is worth noting that the type of control that you mention (e.g. "computer-controlled robots") is all about getting as far from "agenty" as possible.

I don't think it is, or at least not exactly. Many of the hammer failures you mentioned aren't "agenty" problems, they're control problems in the most classical engineering sense: the feedback loop my brain implements between hammer state and muscle output is incorrect. The problem exists with humans, but also with shoddily-built nail guns. Solving it isn't about removing "agency" from the bad nail gun.
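The "classical engineering sense" of the failure can be sketched as a simple proportional feedback loop: the brain (or nail gun) applies force proportional to the remaining error, and a badly tuned gain produces exactly the overshooting, nail-bending behaviour described. All numbers below are illustrative assumptions, not anything from the thread:

```python
def drive_nail(gain, target=10.0, steps=20):
    """Simulate a proportional controller driving nail depth toward target.

    Each strike, the 'muscle output' is proportional to the remaining
    error (target depth minus current depth). Returns the final absolute
    error. Purely illustrative numbers.
    """
    depth = 0.0
    for _ in range(steps):
        error = target - depth
        depth += gain * error   # feedback: strike harder when further away
    return abs(target - depth)

# A well-tuned loop converges; too high a gain overshoots and oscillates,
# with the error growing every strike -- bent nails and dented surfaces.
print(drive_nail(gain=0.5))   # tiny residual error: good control
print(drive_nail(gain=2.1))   # diverging error: the "bad nail gun"
```

The point is that the fix is retuning the feedback loop, not removing any "agency" from the system.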

Sure, if agency gets involved in your hammer control problem you might have other problems too. But if the "hammer control problem" is to be a useful problem, you need to define it as not including all of the "normal computer control problem" or "AI control problem"! It's exactly the same situation as the original post:

  • The normal control problem assumes that there is no specific agency in the programs (especially not super-intelligent agency)

Comment author: eternal_neophyte 27 April 2017 06:52:05AM *  0 points [-]

They usually don't have any way to leverage their models to increase the cost of not buying their product or service though; so such a situation is still missing at least one criterion.

There is a complication involved, since it's possible to increase the cost to others of not doing business with you in "fair" ways. E.g. the invention of the fax machine reduced effective demand for messenger boys to run between office buildings; as their numbers shrank, the cost of hiring one rose, and with it the operating costs of anyone who refused to buy a fax machine.

Though I don't believe any company long held a monopoly on the fax market, if a company did establish such a monopoly in order to control prices, that again might be construed as extortion.

Comment author: evand 27 April 2017 02:57:59PM 3 points [-]

They usually don't have any way to leverage their models to increase the cost of not buying their product or service though; so such a situation is still missing at least one criterion.

Modern social networks and messaging networks would seem to be a strong counterexample. Any software with both network effects and intentional lock-in mechanisms, really.

And honestly, calling such products a blend of extortion and trade seems intuitively about right.

To try to get at the extortion / trade distinction a bit better:

Schelling gives us definitions of promises and threats, and also observes there are things that are a blend of the two. The blend is actually fairly common! I expect there's something analogous with extortion and trade: you can probably come up with pure examples of both, but in practice a lot of examples will be a blend. And a lot of the 'things we want to allow' will look like 'mostly trade with a dash of extortion' or 'mostly trade but both sides also seem to be doing some extortion'.

Comment author: Lumifer 26 April 2017 03:11:40PM 1 point [-]

however we currently can't even control our un-agenty computers very well

Hah, computers. We can't control anything very well. Take a hammer -- you might think it's amenable to driving nails in straight, but noooo... It bends the nails, leaves dents in the surface and given the slightest chance will even attack your fingers!

How about we solve the hammer control problem first?

This is operant conditioning, but it has not been applied to a whole computer system with arbitrary programs in it.

Applying operant conditioning to malware is problematic for the same reason horses have difficulty learning not to walk into electric fences with a few thousand volts applied to the wires...

Comment author: evand 26 April 2017 11:58:48PM 2 points [-]

We've (mostly) solved the hammer control problem in a restricted domain. It looks like computer-controlled robots. With effort, we can produce an entire car or similar machine without mistakes.

Obviously we haven't solved the control problem for those computers: we don't know how to produce that car without mistakes on the first try, or with major changes. We have to be exceedingly detailed in expressing our desires. Etc.

This may seem like we've just transformed it into the normal computer control problem, but I'm not entirely sure. Air-gapped CNC machinery running embedded OSes (or none at all) is pretty well behaved. It seems to me more like "we don't know how to write programs without testing them" than the "normal computer control problem".

Comment author: evand 26 April 2017 11:47:11PM 1 point [-]

You May Not Believe In Guess[Infer] Culture But It Believes In You

I think this comment is the citation you're looking for.

Comment author: freyley 17 March 2017 11:10:59AM *  17 points [-]

Cohousing, in the US, is the term of art. I spent a while about a decade ago attempting to build a cohousing community, and it's tremendously hard. In the last few months I've moved, with my kids, into a house on a block with friends with kids, and I can now say that it's tremendously worthwhile.

Cohousings in the US are typically built in one of three ways:

  • Condo buildings, with each unit sold as a condominium
  • Condo/apartment buildings, with each apartment sold as a co-op share
  • Separate houses.

The third one doesn't really work in major cities unless you get tremendously lucky.

The major problem with the first plan is that, due to the Fair Housing Act of the 1960s (passed because at the time realtors literally would not show black people houses in white neighborhoods), you cannot pick your buyers. Any attempt to enforce that only rationalists move in is illegal. Cohousings get around this by keeping participation voluntary, but also by accepting that they'll get free riders and have to live with it. Some cohousings I know of have had major problems with investors deciding cohousing is a good investment, buying condos, and renting them to whoever while they wait for the community to make their investment more valuable.

The major problem with the coop share approach is that, outside of New York City, it's tremendously hard to get a loan to buy a coop share. Very few banks do these, and usually at terrible interest rates.

Some places have gotten around this by having a rich benefactor who buys a big building and rents it, but individuals lose out on the financial benefits of homeownership. In addition, it is probably also illegal under the Fair Housing Act to choose your renters if there are separate units.

The other difficulties with cohousing are largely around community building, which you've probably seen plenty of with rationalist houses, so I won't belabor the point on that.

Comment author: evand 18 March 2017 10:22:17PM 1 point [-]

On the legality of selecting your buyers: What if you simply had an HOA (or equivalent) with high dues, that did rationalist-y things with the dues? Is that legal, and do you think it would provide a relevant selection effect?

Comment author: gjm 17 February 2017 01:33:25PM 11 points [-]

I thought the usual claim was not "immigration increases total GDP" but "immigration increases per-capita GDP". Random example: this paper, which (full disclosure) I have not read; I only looked at the linked abstract.

I'm not sure any measure of GDP (total or per capita) is a great way of assessing immigration. Consider the following scenarios:

  • A scenario like Phil's: An immigrant moves from a poor country to a rich country. They somehow become less productive when they do this, and earn less than they did before. But their income is positive, so the total GDP of the rich country goes up.
  • A scenario showing the opposite problem: An immigrant moves from a poor country to a rich country. In the rich country they earn less than the average person there, but more than they did before. The immigrant is better off. The other inhabitants of the rich country are (in total) better off. But per capita GDP has gone down.

It seems to me that to avoid Simpson's-Paradox-like confusion what we really want to know is: when some people migrate from country A to country B, what happens to (1) the total "GDP" of just the people who were already in country B and (2) the total "GDP" of just the people who moved from country A? We might also care about (3) the total "GDP" of those who remain in country B. My guess, FWIW, is that #1 and #2 both go up while #3 goes down.
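The second scenario is easy to check with made-up numbers: a migrant can be better off, native incomes can be untouched, and per-capita GDP of the rich country still falls. A sketch with entirely hypothetical figures:

```python
# Hypothetical numbers for the second scenario (Simpson's-Paradox-style):
# everyone's income is at least as high as before, yet per-capita GDP of
# the rich country goes down.
natives = [50_000] * 100          # rich-country incomes, unchanged by migration
migrant_before = 5_000            # migrant's income in the poor country
migrant_after = 30_000            # migrant's income after moving

per_capita_before = sum(natives) / len(natives)
per_capita_after = (sum(natives) + migrant_after) / (len(natives) + 1)

print(f"migrant income: {migrant_before:,} -> {migrant_after:,}")
print(f"rich-country per-capita GDP: {per_capita_before:,.0f} -> {per_capita_after:,.0f}")
# The migrant's income rose 6x, yet per-capita GDP fell, because the
# migrant earns less than the existing average.
```

This is why tracking each group's total separately, as suggested above, avoids the confusion that a single per-capita figure creates.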

Comment author: evand 19 February 2017 06:35:19PM 1 point [-]

We might also want to compute the sum of the GDP of A and B: does that person moving cause more net productivity growth in B than loss in A?
