
Comment author: KatjaGrace 18 November 2014 02:02:24AM 1 point [-]

Note that self-preservation is really a sub-class of goal-content integrity, and is worthless without it.

Comment author: Larks 19 January 2015 04:00:04AM 0 points [-]

This is a total nitpick, but:

Suppose your AI's goal was "preserve myself". Ignoring any philosophical issues about denotation, here self-preservation is worthwhile even if the goal changed. If the AI, by changing itself into a paperclip maximizer, could maximize its chances of survival (say because of the threat of other Clippies) then it would do so. Because self-preservation is an instrumentally convergent goal, it would probably survive for quite a long time as a paperclipper - maybe much longer than as an enemy of Clippy.

Comment author: KatjaGrace 23 December 2014 02:04:02AM 2 points [-]

Was there anything you didn't understand this week?

Comment author: Larks 15 January 2015 03:00:03AM 0 points [-]

I don't really understand why AGI is so different from currently existing software. Current software seems docile - we worry more about getting it to do anything in the first place, and less about it accidentally doing totally unrelated things. Yet AGI seems to be the exact opposite. It seems we think of AGI as being 'like humans, only more so' rather than 'like software, only more so'. Indeed, in many cases it seems that knowing about conventional software actually inhibits one's ability to think about AGI. Yet I don't really understand why this should be the case.

In response to comment by Larks on Ethical Diets
Comment author: FrameBenignly 13 January 2015 04:12:02AM 0 points [-]

Eliminating redistribution to ems will have little impact. As long as labor has a significant value which can be used to purchase capital (i.e. money), ems will be able to produce so much more labor than humans that they will quickly grow to dominate society. They don't need our charity to crush us like bugs.

Comment author: Larks 15 January 2015 02:53:46AM 0 points [-]

If humans kept significant wealth we could live off the interest. There's a big difference between 'no longer dominate society' and 'all die of starvation after having our wealth stripped from us'.
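
A toy version of the 'live off the interest' arithmetic (invented numbers, not figures from the book): as long as consumption stays below the return on retained wealth, the principal never shrinks, even with zero wage income.

```python
# Toy sketch (invented numbers): humans retain wealth W earning return r,
# consume c per period, and earn no wages at all.
W, r, c = 1_000_000.0, 0.05, 40_000.0   # hypothetical starting wealth, return, consumption

for year in range(50):
    W = W * (1 + r) - c                 # interest income minus consumption
print(round(W))                         # still growing, since c < r * W throughout
```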

Comment author: gedymin 23 December 2014 08:33:40PM *  3 points [-]

It would be interesting to see more examples of modern-day non-superintelligent domain-specific analogues of genies, sovereigns and oracles, and to look at their risks and failure modes. Admittedly, this is only inductive evidence that does not take into account the qualitative leap between them and superintelligence, but it may be better than nothing. Here are some quick ideas (do you agree with the classification?):

  • Oracles - pocket calculators (Bostrom's example); Google search engine; decision support systems.

  • Genies - industrial robots; GPS driving assistants.

  • Sovereigns - automated trading systems; self-driving cars.

The failures of automated trading systems are well-known and have cost hundreds of millions of dollars. On the other hand, the failures of human bankers who used ill-suited mathematical models for financial risk estimation are also well-known (the recent global crisis), and may have cost hundreds of billions of dollars.

Comment author: Larks 15 January 2015 02:51:52AM 0 points [-]

The failures of automated trading systems are well-known and have cost hundreds of millions of dollars. On the other hand, the failures of human bankers

I think a better comparison would be with old-fashioned open-outcry pits. These were inefficient and failed frequently in opaque ways. Going electronic has made errors less frequent but also more noticeable, which means we under-appreciate the improvement.

In response to Ethical Diets
Comment author: Larks 13 January 2015 02:17:15AM 5 points [-]

While I think trying to set up equilibria that will be robust against a multipolar takeoff is interesting, I don't think your example is a conclusion you would come to if you weren't already concerned about animal rights.

Much more plausible is that we should start strictly enforcing property rights. Any level of redistribution could result in all wealth held by flesh-and-blood humans being 'redistributed' to uploads in light of their huge number.

Comment author: KatjaGrace 09 January 2015 03:03:16AM 1 point [-]

Which 'option' do you mean?

Comment author: Larks 12 January 2015 04:44:24AM 0 points [-]

'option' in the sense of a financial derivative - the right to buy an underlying security for a certain strike price in the future. In this case, it would be the chance to continue humanity if the future was bright, or commit racial suicide if the future was dim. In general the asymmetrical payoff function means that options become more valuable the more volatile the underlying is. However, it seems that in a bad multipolar future we would not actually be able to (choose not to buy the security because it was below the strike price / choose to destroy the world), so we don't benefit from the option value.
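
A quick way to see the asymmetry described above (a toy Monte Carlo with made-up numbers, nothing from the book): the option payoff max(S - K, 0) grows more valuable as the volatility of S rises, while the unconditional payoff S - K does not, so losing the ability to 'not exercise' destroys exactly the part of the value that volatility creates.

```python
# Toy Monte Carlo (made-up numbers): option value vs. volatility.
# With the right to walk away, payoff = max(S - K, 0); without it, payoff = S - K.
import numpy as np

rng = np.random.default_rng(0)
strike = 100.0                          # arbitrary "acceptable future" threshold

for vol in (5.0, 20.0, 50.0):
    outcomes = rng.normal(loc=100.0, scale=vol, size=1_000_000)   # possible futures
    with_choice = np.maximum(outcomes - strike, 0).mean()         # can decline bad outcomes
    without_choice = (outcomes - strike).mean()                   # stuck with whatever happens
    print(f"vol={vol:5.1f}  with choice={with_choice:6.2f}  without choice={without_choice:6.2f}")
```

The gap between the two columns is the option value; the point above is that in a bad multipolar outcome humanity would effectively be stuck in the 'without choice' column.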

Comment author: KatjaGrace 06 January 2015 06:46:35AM *  3 points [-]

Are you convinced that absent a singleton or some other powerful forces, human wages will go below subsistence in the long run? (p160-161)

Comment author: Larks 08 January 2015 04:45:59AM 3 points [-]

One question is the derivative of fertility with respect to falling wages. If we start to enter a Malthusian society, will people react by reducing fertility because they "can't afford kids", or by increasing it, as many historically did?

Comment author: Larks 08 January 2015 04:32:08AM 4 points [-]

Yay, I finally caught up!

I think the section on resource allocation and Malthusian limits was interesting. Many people seem to think that

  • people with no capital should be given some
  • the repugnant conclusion is bad

Yet by continually 'redistributing' away from groups who restrict their fertility (and towards those who do not), we actually ensure the latter.

Comment author: Larks 08 January 2015 04:44:02AM 0 points [-]

Continual redistribution & population growth might ensure individual capital levels fell so low that no-one had the spare capital to invest in hard AI research, hindering any second transition.
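
A toy version of that dilution argument (hypothetical parameters, not from the text): if total capital is split evenly every period while population grows faster than the capital stock, per-capita holdings shrink geometrically.

```python
# Toy sketch (hypothetical parameters): per-capita capital under continual
# equal redistribution, with population growing faster than capital.
capital, population = 1e12, 1e9
capital_growth, population_growth = 0.02, 0.10     # invented per-period rates

for period in range(60):
    capital *= 1 + capital_growth
    population *= 1 + population_growth
print(capital / population)                        # per-capita share falls from 1000 to roughly 11
```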

Comment author: Larks 08 January 2015 04:42:03AM 3 points [-]

Bostrom argues that much of human art, etc. is actually just signalling wealth, and could eventually be replaced with auditing. But that seems possible at the moment - why don't men trying to attract women just show off the Ernst & Young app on their phone, which would vouch for their wealth, fitness, social skills etc.?

Comment author: Larks 08 January 2015 04:36:06AM 1 point [-]

Bostrom makes an interesting point that multipolar scenarios are likely to be extremely high variance: either very good or (assuming you believe in additivity of value) very bad. Unfortunately it seems unlikely that any oversight could remain in such a scenario that would enable us to exercise this option (or not) in a utility-maximizing way.
