Comment author: knb 04 October 2010 10:26:41PM *  1 point [-]

I thought there were a lot of libertarians on LW! I'm stunned by how unsuccessful this one was!

Incidentally, do you mean GDP per capita would decrease relative to more interventionist economies, or in absolute terms? Since there is an overall increasing trend (in both more and less libertarian economies), that would be very surprising to me.

Somalia is a good example: despite having effectively no government services (not even private property protection or enforcement of contracts), its economy has generally grown year by year.

In response to comment by knb on The Irrationality Game
Comment author: mattnewport 04 October 2010 10:51:41PM 0 points [-]

Incidentally, do you mean GDP per capita would decrease relative to more interventionist economies, or in absolute terms? Since there is an overall increasing trend (in both more and less libertarian economies), that would be very surprising to me.

I wondered about this as well. It seems an extremely strong and unlikely claim if it is intended to mean an absolute decrease in GDP per capita.

Comment author: Vladimir_M 04 October 2010 10:27:41PM 2 points [-]

Perplexed:

It doesn't have to. That is a problem you made up. Other people don't have to buy in to your view on the proper relationship between numbers and physical reality.

You probably wouldn't buy that same argument if it came from a numerologist, though. I don't think I hold any unusual or exotic views on this relationship, and in fact I don't think I have made any philosophical assumptions in this discussion beyond the basic common-sense observation that if you want to use numbers to talk about the real world, they should have a clear connection to something that can be measured or counted in order to make any sense. I don't see how these (otherwise highly interesting) deep questions in the philosophy of math are relevant to any of my arguments.

Comment author: mattnewport 04 October 2010 10:50:08PM 1 point [-]

Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds on a bet? If you're not in the habit of accepting bets, how do you think someone who does this for a living (a bookie, for example) should go about deciding what odds to offer on a given bet?
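For concreteness, here is a minimal sketch (with made-up numbers) of one way such a decision could be made: pin the vague feeling down to a number and compare it against the break-even probability implied by the offered odds.

```python
def implied_probability(decimal_odds):
    """Break-even win probability implied by decimal odds."""
    return 1.0 / decimal_odds

def expected_profit_per_unit(p_win, decimal_odds):
    """Expected profit per unit staked, given a subjective win probability."""
    return p_win * (decimal_odds - 1.0) - (1.0 - p_win)

# Made-up numbers: a vague "quite likely" pinned down to p = 0.7, and a
# bookmaker offering decimal odds of 1.8 (a 1-unit stake returns 1.8 units on a win).
p, odds = 0.7, 1.8
print(implied_probability(odds))          # ~0.556: probability at which the bet breaks even
print(expected_profit_per_unit(p, odds))  # ~0.26 > 0: favourable if you really believe p = 0.7
```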

In response to Why not be awful?
Comment author: mattnewport 04 October 2010 10:41:20PM *  1 point [-]

The two things that seem to work for me most of the time: "will I feel proud / good about myself for doing this?" or, if that fails, "would person X (whose opinion of me is generally important to me) be impressed or disgusted with this behaviour if they knew about it?". Essentially, "is this behaviour consistent with the kind of person I wish myself and (particular) others to perceive me to be?".

Comment author: wedrifid 04 October 2010 04:38:46AM 3 points [-]

I want to upvote each of these points a dozen times. Then another few for the first.

A Singleton AI is not a stable equilibrium

It's the most stable equilibrium I can conceive of, i.e. more stable than if all evidence of life were obliterated from the universe.

Comment author: mattnewport 04 October 2010 04:53:52AM 2 points [-]

I guess I'm playing the game right then :)

I'm curious: do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Comment author: orthonormal 04 October 2010 02:47:39AM 2 points [-]

Ant colonies don't generally exhibit the principal-agent problem. I'd say with high certainty that the vast majority of our trouble with it is due to having the selfishness of an individual replicator hammered into each of us by our evolution.

Comment author: mattnewport 04 October 2010 04:32:14AM *  0 points [-]

I don't know whether ant colonies exhibit principal-agent problems (though I'd expect that they do to some degree), but I know there is evidence of nepotism in queen rearing in bee colonies where individuals are not all genetically identical: workers favour the most closely related larvae when selecting which larvae to feed royal jelly to create a queen.

The fact that ants from different colonies commonly exhibit aggression towards each other suggests limits to how far such high levels of group cohesion can scale. Though supercolonies do appear to exist, they have not come to total dominance.

The largest and most complex examples of group coordination we know of are large human organizations, and these show much greater levels of internal goal conflict than the far simpler and more spatially concentrated insect colonies.

Comment author: [deleted] 03 October 2010 09:29:45PM 0 points [-]

I'm almost certainly missing some essential literature, but what does it mean for a mind to be a stable equilibrium?

In response to comment by [deleted] on The Irrationality Game
Comment author: mattnewport 03 October 2010 09:43:56PM *  6 points [-]

Stable equilibrium here does not refer to a property of a mind. It refers to a state of the universe. I've elaborated on this view a little here before but I can't track the comment down at the moment.

Essentially my reasoning is that in order to dominate the physical universe an AI will need to deal with fundamental physical restrictions such as the speed of light. This means it will have spatially distributed sub-agents pursuing sub-goals intended to further its own goals. In some cases these sub-goals may involve conflict with other agents (this would be particularly true during the initial effort to become a singleton).
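To give a rough sense of scale (back-of-the-envelope figures only, not exact distances), the minimum round-trip signalling delays imposed by the speed of light grow quickly with distance:

```python
SPEED_OF_LIGHT_KM_PER_S = 299_792.458

def round_trip_delay_s(distance_km):
    """Minimum two-way communication delay imposed by the speed of light."""
    return 2.0 * distance_km / SPEED_OF_LIGHT_KM_PER_S

# Approximate, rounded distances.
for label, km in [
    ("Earth to Moon (~384,400 km)", 384_400),
    ("Earth to Mars near closest approach (~78 million km)", 78e6),
    ("Earth to Proxima Centauri (~4.0e13 km)", 4.0e13),
]:
    print(f"{label}: {round_trip_delay_s(km):.3g} s")
# Roughly 2.6 s, 520 s (~9 minutes), and 2.7e8 s (~8.5 years) respectively.
```

Any sub-agent operating at such distances must act for seconds, minutes, or years at a time without fresh instructions from the centre.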

Maintaining strict control over sub-agents imposes restrictions on their design and capabilities, which means they are likely to be less effective at achieving their sub-goals than sub-agents without such restrictions. Conversely, sub-agents given significant autonomy may pursue actions that conflict with the higher-level goals of the singleton.

Human (and biological) history is full of examples of this essential conflict. In military scenarios, for example, there is a tradeoff between tight centralized control and combat effectiveness: units with a degree of authority to take decisions in the field, without the delays or overhead imposed by communication times, are generally more effective than those with very limited freedom to act without direct orders.

Essentially I don't think a singleton AI can get away from the principal-agent problem. Variations on this essential conflict exist throughout the human and natural worlds and appear to me to be fundamental consequences of the nature of our universe.

Comment author: Eugine_Nier 03 October 2010 08:36:49PM 1 point [-]

I agree with your first two, but am dubious about your third.

Comment author: mattnewport 03 October 2010 08:58:04PM *  3 points [-]

Two points that influence my thinking on that claim:

  1. Gains from trade have the potential to be greater the more the values of the two trading agents differ (see the toy sketch below).
  2. Destruction tends to be cheaper than creation. Intelligent agents that recognize this have an incentive to avoid violent conflict.
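
A toy sketch of the first point, with invented numbers: when two agents value the same good very differently, a trade at any price between their valuations creates surplus, and the surplus grows with the gap in valuations.

```python
def surplus_from_trade(seller_valuation, buyer_valuation):
    """Total surplus created if the buyer values the good above the seller and a
    trade occurs at any price between the two valuations (how the surplus is
    split between them depends on the price)."""
    return max(0.0, buyer_valuation - seller_valuation)

# Invented numbers: the wider the gap in valuations, the larger the potential gains.
print(surplus_from_trade(seller_valuation=1.0, buyer_valuation=2.0))   # 1.0
print(surplus_from_trade(seller_valuation=1.0, buyer_valuation=10.0))  # 9.0
```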
Comment author: Vladimir_M 03 October 2010 08:28:13PM -2 points [-]

I addressed this point in another comment in this thread:

http://lesswrong.com/lw/2sl/the_irrationality_game/2qgm

Comment author: mattnewport 03 October 2010 08:44:50PM *  3 points [-]

I agree with most of what you're saying (in that comment and this one), but I still think the ability to give well-calibrated probability estimates for a particular prediction is instrumentally useful, and that it is fairly likely this ability can be improved with practice. I don't take this to imply anything about humans performing actual Bayesian calculations, either implicitly or explicitly.
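As one concrete way this could be practised and measured (an illustration only, with an invented prediction history), a forecaster can log numerical predictions against outcomes and track a proper scoring rule such as the Brier score over time:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes; lower is
    better, 0 is perfect.

    forecasts: iterable of (probability, outcome) pairs, outcome being 1 if the
    predicted event happened and 0 otherwise.
    """
    forecasts = list(forecasts)
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Invented prediction history: improvement over time would show up as a falling score.
history = [(0.9, 1), (0.7, 1), (0.6, 0), (0.8, 1), (0.3, 0)]
print(brier_score(history))  # 0.118
```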

Comment author: mattnewport 03 October 2010 08:24:34PM 0 points [-]

Are we only supposed to upvote this post if we think it is irrational?

Comment author: mattnewport 03 October 2010 08:21:27PM 42 points [-]
  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).
