
Comment author: turchin 14 July 2017 05:26:25PM 1 point [-]
Comment author: satt 15 July 2017 12:23:13PM 0 points [-]

I found the same article on an ad-blocker-friendly website. And here's a direct link to the academic article in Complexity.

Comment author: waveman 11 July 2017 01:01:53AM 0 points [-]

Trump was saying he would increase trade barriers, so current levels are not the point.

Comment author: satt 15 July 2017 11:47:13AM 0 points [-]

I think in January I read you as amplifying James_Miller's point, giving "tariff and other barriers" as an example of something to slot into his "Government regulations" claim (which is why I thought my comment was germane). But in light of your new comment I probably got your original intent backwards? In which case, fair enough!

Comment author: Lumifer 07 July 2017 01:02:18AM 2 points [-]

Fresh meat (note: fresh) has enough vitamin C to stave off scurvy.

Comment author: satt 08 July 2017 10:34:20AM 0 points [-]

Fair!

Comment author: Zarm 29 June 2017 09:34:54PM 1 point [-]

I hope this is a joke. This is lowbrow even for a response from an average person.

"canine teeth tho"

C'mon, I was expecting more from the lesswrong community.

I'm not saying there is objective morality. If you think it's subjective, I'm not addressing you here.

Comment author: satt 29 June 2017 09:48:44PM 0 points [-]

I hope this is a joke.

Yeah — scurvy's no fun!

Comment author: Jayson_Virissimo 29 June 2017 04:44:25AM *  0 points [-]

You've already been scooped. The "research programme" that Lakatos talks about was designed to synthesize the views of Kuhn and Popper, but Kuhn himself modeled his revolutionary science after constitutional crises, and his paradigm shifts after political revolutions (and, perhaps more annoyingly to scientists, religious conversions). Also, part of what was so controversial (at the time) about Kuhn was the prominence he gave to non-epistemic (normative, aesthetic, and even nationalistic) factors in the history of science.

Comment author: satt 29 June 2017 09:33:46PM 0 points [-]

Did Kuhn (or Popper or Lakatos) spell out substantial implications of the analogy? A lot of the interest would come from that, rather than the fact of the analogy in itself.

Comment author: cousin_it 28 June 2017 09:02:16PM *  0 points [-]

That's a good question, and I'm not sure my thinking is right. Let's say two AIs want to go to war for whatever reason. Then they can agree to some other procedure that predicts the outcome of war (e.g. war in 1% of the universe, or simulated war) and precommit to accept the outcome as binding. It seems like both would benefit from that.

That said, I agree that bargaining is very tricky. Coming up with an extensive form game might not help, because what if the AIs use a different extensive form game? There's been pretty much no progress on this for a decade; I don't see any viable attack.

Comment author: satt 28 June 2017 11:11:11PM 1 point [-]

Let's say two AIs want to go to war for whatever reason. Then they can agree to some other procedure that predicts the outcome of war (e.g. war in 1% of the universe, or simulated war) and precommit to accept the outcome as binding. It seems like both would benefit from that.

My (amateur!) hunch is that an information deficit bad enough to motivate agents to sometimes fight instead of bargain might be an information deficit bad enough to motivate agents to sometimes fight instead of precommitting to exchange info and then bargain.

Coming up with an extensive form game might not help, because what if the AIs use a different extensive form game?

Certainly, any formal model is going to be an oversimplification, but models can be useful checks on intuitive hunches like mine. If I spent a long time formalizing different toy games to try to represent the situation we're talking about, and I found that none of my games had (a positive probability of) war as an equilibrium strategy, I'd have good evidence that your view was more correct than mine.

There's been pretty much no progress on this for a decade; I don't see any viable attack.

There might be some analogous results in the post-Fearon, rational-choice political science literature; I don't know it well enough to say. And even if not, it might be possible to build a relevant game incrementally.

Start with a take-it-or-leave-it game. Nature samples a player's cost of war from some distribution and reveals it only to that player. (Or, alternatively, Nature randomly assigns a discrete, privately known type to a player, where the type reflects the player's cost of war.) That player then chooses between (1) initiating a bargaining sub-game and (2) issuing a demand to the other player, triggering war if the demand is rejected. This should be tractable, since standard, solvable models exist for two-player bargaining.
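To make that concrete, here's a minimal numerical sketch of the standard risk-return-tradeoff version of this kind of game. Note it flips the informational roles relative to my description above (uninformed demander, responder with a private war cost), since that's the textbook case where war actually happens in equilibrium; the parameter values are arbitrary assumptions.

```python
import numpy as np

# Toy version of Fearon's risk-return tradeoff: an uninformed proposer
# divides a pie of size 1 with a responder whose war cost is private.

q = 0.5      # responder's expected share of the pie from fighting
k = 0.1      # proposer's (commonly known) cost of war
c_max = 0.5  # responder's private war cost c ~ Uniform[0, c_max]

def proposer_payoff(s):
    """Proposer's expected payoff from offering the responder share s.

    The responder accepts iff s >= q - c, i.e. iff c >= q - s, so under
    the uniform prior the acceptance probability is 1 - (q - s)/c_max,
    clipped to [0, 1]. Rejection means war, which costs the proposer k.
    """
    p_accept = 1.0 - np.clip((q - s) / c_max, 0.0, 1.0)
    return p_accept * (1.0 - s) + (1.0 - p_accept) * (1.0 - q - k)

offers = np.linspace(0.0, 1.0, 10001)
s_star = offers[np.argmax([proposer_payoff(s) for s in offers])]
p_war = float(np.clip((q - s_star) / c_max, 0.0, 1.0))
print(f"optimal offer s* = {s_star:.3f}, war probability = {p_war:.3f}")
# With these numbers: s* = 0.300 and war occurs with probability 0.400,
# even though war is ex post inefficient -- the proposer knowingly trades
# a risk of war for a bigger share when the demand is accepted.
```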

So far we have private information, but no precommitment. But we could bring precommitment in by adding extra moves to the game: before making the bargain-or-demand choice, players can mutually agree to some information-revealing procedure followed by bargaining with the newly revealed information in hand. Solving this expanded game could be informative.
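As a crude illustration of why, here's the same toy model extended with a reveal-then-bargain option. The point it makes: whether the informed side would ever agree to reveal its cost turns entirely on the bargaining protocol assumed to follow revelation, which the sketch has to stipulate.

```python
import numpy as np

# Same illustrative parameters as the sketch above.
q, k, c_max = 0.5, 0.1, 0.5
s_star = 0.3   # the proposer's optimal screening offer found above

# Monte Carlo over the responder's private war cost c ~ Uniform[0, c_max].
c = np.random.uniform(0.0, c_max, 1_000_000)

# 1. No revelation (screening): war when c < q - s*, else accept s*.
screening = np.where(c < q - s_star, q - c, s_star).mean()

# 2. Reveal c, then the proposer makes a take-it-or-leave-it offer:
#    no war, but the responder is held to its war payoff q - c.
reveal_tioli = (q - c).mean()

# 3. Reveal c, then split the saved war costs (k + c) evenly on top of
#    the war payoffs, Nash-bargaining style: no war, shared surplus.
reveal_nash = (q - c + (k + c) / 2).mean()

print(f"screening:      {screening:.3f}")     # ~0.340 (war 40% of the time)
print(f"reveal + TIOLI: {reveal_tioli:.3f}")  # ~0.250
print(f"reveal + Nash:  {reveal_nash:.3f}")   # ~0.425
```

On these numbers the responder prefers a 40% risk of war to revealing and being squeezed, but prefers revelation when the split shares the saved war costs. That's roughly my hunch above in miniature: whether information exchange is incentive-compatible is itself part of the game.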

Comment author: username2 28 June 2017 10:41:49AM 1 point [-]
  1. The amount of wastage from bitcoin mining pales compared to the GDP spent on traditional forms of trust. Think banking isn't contributing to global warming? Well, all those office buildings have lights and electricity and back-room servers, not to mention the opportunity costs.

  2. If you want to reduce the need for bitcoin, then reduce the need for trustless solutions. This is an open-ended political and social problem, but not one that is likely to remain unsolved forever.

Comment author: satt 28 June 2017 09:51:28PM 4 points [-]

The amount of wastage from bitcoin mining pales compared to the GDP spent on traditional forms of trust. Think banking isn't contributing to global warming? Well, all those office buildings have lights and electricity and back-room servers, not to mention the opportunity costs.

That provoked me to do a Fermi estimate comparing banking's power consumption to Bitcoin's. Posting it in case anyone cares.

Estimated energy use of banking

The service sector uses 7% of global power and produces 68% of global GDP. Financial services make up about 17% of global GDP, hence about 25% of global services' contribution to GDP. If financial services have the same energy intensity as services in general, financial services use about 25% × 7% = 1.8% of global power. World power consumption is of order 15 TW, so financial services use about 260 GW. Rounding that down semi-arbitrarily (because financial services include things like insurance & pension services, as well as banking), the relevant power consumption number might be something like 200 GW.
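The same arithmetic as a few lines of Python, in case anyone wants to substitute their own figures (all inputs are the rough numbers above):

```python
# Banking-side Fermi estimate, using the rough figures quoted above.
world_power_W        = 15e12   # global power consumption, ~15 TW
services_power_share = 0.07    # services: ~7% of global power
services_gdp_share   = 0.68    # services: ~68% of global GDP
finance_gdp_share    = 0.17    # financial services: ~17% of global GDP

# Assume financial services match the average energy intensity of services:
finance_share_of_services = finance_gdp_share / services_gdp_share   # ~25%
finance_power_W = (finance_share_of_services
                   * services_power_share * world_power_W)
print(f"financial services: ~{finance_power_W / 1e9:.0f} GW")   # ~262 GW
```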

Estimated energy use of Bitcoin

A March blog post estimates that the Bitcoin network uses 0.774 GW to do 3250 petahashes per second. Scaling the power estimate up to the network's current hash rate (5000 petahashes/s, give or take) makes it 1.19 GW. So Bitcoin is a couple of orders of magnitude short of overtaking banking.
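And the Bitcoin side of the comparison, assuming power draw scales linearly with hash rate:

```python
# Bitcoin-side estimate: scale the March power figure to the current
# hash rate (both figures quoted above; linear scaling is an assumption).
march_power_GW    = 0.774   # estimated network power draw in March
march_hashrate_PH = 3250    # petahashes per second in March
now_hashrate_PH   = 5000    # petahashes per second now, give or take

now_power_GW = march_power_GW * now_hashrate_PH / march_hashrate_PH
print(f"Bitcoin: ~{now_power_GW:.2f} GW")   # ~1.19 GW vs ~200 GW for banking
```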

Comment author: satt 28 June 2017 08:23:34PM 0 points [-]

You reminded me of a tangentially related post idea I want someone to steal: "Ideologies as Lakatosian Research Programmes".

Just as people doing science can see themselves as working within a scientific research programme, people doing politics can see themselves as working within a political research programme. Political research programmes are scientific/Lakatosian research programmes generalized to include normative claims as well as empirical ones.

I expect this to have some (mildly) interesting implications, but I haven't got round to extracting them.

Comment author: cousin_it 27 June 2017 08:24:46AM *  2 points [-]

I don't believe it. War wastes resources. The only reason war happens is that two agents have different beliefs about the likely outcome of war, which means at least one of them has wrong and self-harming beliefs. Sufficiently rational agents will never go to war; instead they'll agree about the likely outcome of war, and trade resources in that proportion. Maybe you can't think of a way to set up such trade, because emails can be faked etc., but I believe that superintelligences will find a way to achieve their mutual interest. That's one reason why I'm interested in AI cooperation and bargaining.

Comment author: satt 28 June 2017 08:06:18PM 4 points [-]

I'm flashing back to reading Jim Fearon!

Fearon's paper concludes that pretty much only two mechanisms can explain "why rationally led states" would go to war instead of striking a peaceful bargain: private information, and commitment problems.

Your comment brushes off commitment problems in the case of superintelligences, which might turn out to be right. (It's not clear to me that superintelligence entails commitment ability, but nor is it clear that it doesn't.) I'm less comfortable with setting aside the issue of private information, though.

Assuming rational choice, competing agents are only going to truthfully share information if they have incentives to do so, or at least no incentive not to do so, but in cases where war is a real possibility, I'd expect the incentives to actively encourage secrecy: exaggerating war-making power and/or resolve could allow an agent to drive a harder potential bargain.

You suggest that the ability to precommit could guarantee information sharing, but I feel unease about assuming that without a systematic argument or model. Did Schelling or anybody else formally analyze how that would work? My gut has the sinking feeling that drawing up the implied extensive-form game and solving for equilibrium would produce a non-zero probability of non-commitment, imperfect information exchange, and conflict.

Finally I'll bring in a new point: Fearon's analysis explicitly relies on assuming unitary states. In practice, though, states are multipartite, and if the war-choosing bit of the state can grab most of the benefits from a potential war, while dumping most of the potential costs on another bit of the state, that can enable war. I expect something analogous could produce war between superintelligences, as I don't see why superintelligences have to be unitary agents.

In response to Any Christians Here?
Comment author: lmn 15 June 2017 05:23:12AM 2 points [-]

I’m currently atheist; my deconversion was quite the unremarkable event. September 2015 (I discovered HPMOR in February and RAZ then or in March), I was doing research on logical fallacies to better argue my points for a manga forum, when I came across Rational Wiki; for several of the logical fallacies, they tended to use creationists as examples. One thing led to another (I was curious why Christianity was being so hated, and researched more on the site)

So you came to a pseudo-rationalist site (you will find the opinion of Rational Wiki around here is much lower than that of Christianity), discovered that your beliefs are unpopular in certain circles, and decided to change them to fit in.

Honestly, why does it seem like every deconversion narrative I've read always has the stupidest reasons for it?

In response to comment by lmn on Any Christians Here?
Comment author: satt 26 June 2017 12:05:53AM 1 point [-]

(you will find the opinion of Rational Wiki around here is much lower than that of Christianity)

Plausibly people around here talk more smack about RW than about Christianity, but I'm doubtful that we actually think RW worse than Christianity!
