
Comment author: James_Miller 23 February 2015 06:29:52PM 3 points [-]

We have told him that other people will think less of him if he swears, that some words are attacks on groups and hearing these words will cause emotional discomfort to members of these groups, and that his swearing causes his mom some discomfort. I have told him that while I am not inherently bothered by him swearing, I don't want him to do it around me because it will make it more likely that he will swear at school, but he claims that his swearing at home doesn't increase the likelihood of him swearing at school.

Comment author: Larks 24 February 2015 02:07:26AM 0 points [-]

but he claims that his swearing at home doesn't increase the likelihood of him swearing at school

Tough! Many adults fail to understand that they're not perfectly rational agents, and that their habits therefore really matter. I guess on the bright side this could be a good opportunity to teach him that he should not cultivate habits that raise the psychic cost of virtuous behaviour, even if those habits are not inherently vices themselves.

Comment author: Larks 24 February 2015 01:53:08AM 2 points [-]

they are not Buffet type investors either, they keep owning the same shares

Buffett famously doesn't sell shares - this is one feature that makes him very unusual among investors.

Comment author: James_Miller 23 February 2015 05:52:00PM 7 points [-]

How do you get a high-verbal-IQ, boundary-testing, 10-year-old child not to swear? Saying "don't swear" causes him to gleefully list words, asking if they count as swear words. Telling him a word counts as profanity causes him to ask why that specific word is bad. Saying a word doesn't count causes him to use it all the more if he perceives it as bad, and he will happily combine different "legal" words trying to come up with something offensive. All of this is made more difficult by the binding constraint that you absolutely must make sure he doesn't say certain words at school, so in terms of marginal deterrence you need to reserve the highest punishment for his saying those words.

Comment author: Larks 23 February 2015 06:17:19PM 1 point [-]

What is the reason you don't want him to swear? Maybe you could tell him that.

Comment author: ciphergoth 10 February 2015 01:32:20AM *  5 points [-]

I found this exercise surprising and useful. Suppose we accept the standard model that our utility is logarithmic in money. Let's suppose we're paid $100,000 a year, and somewhat arbitrarily use that as the baseline for our utility calculations. We go out for a meal with 10 people where each spends $20 on food. At the end of the meal, we can either all put in $20, or we can randomize it and have one person pay the whole $200. All other things being equal, how much should we be prepared to pay to avoid randomization?

Take a guess at the rough order of magnitude. Then look at this short Python program until you're happy that it's calculating the amount that you were trying to estimate, and then run it to see how accurate your estimate was.

from math import exp, log

w = 100000  # baseline wealth: annual pay
b = 20      # each person's share of the bill
k = 10      # number of diners
# Certainty equivalent of the 1-in-k chance of paying the whole k*b bill,
# subtracted from the sure-thing outcome: the premium you'd pay to avoid randomizing.
print(w - b - exp(log(w - k * b) / k + log(w) * (1 - 1.0 / k)))

Incidentally, I discovered this while working out the (trivial) formula for an approximation to it, following conversations with Paul Christiano and Benja Fallenstein.
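For what it's worth, here is a minimal sketch of one such approximation - my own reconstruction, not necessarily the formula referred to above: for logarithmic utility the risk premium is roughly the variance of the payment divided by twice your wealth, which matches the exact program's output to within rounding.

# Sketch of the usual second-order approximation (my reconstruction, not
# necessarily the formula referred to above): for log utility the risk
# premium is roughly Var(payment) / (2 * wealth).
w, b, k = 100000, 20, 10
p = 1.0 / k
# Your payment is k*b with probability p, else 0; its mean is b.
variance = p * (k * b - b) ** 2 + (1 - p) * (0 - b) ** 2
print(variance / (2 * w))  # about 0.018, i.e. roughly two cents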

EDITED TO ADD: If you liked this, check out Expectorant by Bethany Soule of Beeminder fame.

Comment author: Larks 10 February 2015 01:50:17AM 0 points [-]

I got within 10% of the correct answer!

Yeah, people often run arguments like this without actually considering the magnitude.

Comment author: Larks 30 January 2015 01:06:00AM *  22 points [-]

Donated. Go CFAR!

Comment author: KatjaGrace 18 November 2014 02:02:24AM 1 point [-]

Note that self-preservation is really a sub-class of goal-content integrity, and is worthless without it.

Comment author: Larks 19 January 2015 04:00:04AM 0 points [-]

This is a total nitpick, but:

Suppose your AI's goal was "preserve myself". Ignoring any philosophical issues about denotation, here self-preservation is worthwhile even if the goal changes. If the AI could, by changing itself into a paperclip maximizer, maximize its chances of survival (say, because of the threat of other Clippies), then it would do so. Because self-preservation is an instrumentally convergent goal, it would probably survive for quite a long time as a paperclipper - maybe much longer than as an enemy of Clippy.

Comment author: KatjaGrace 23 December 2014 02:04:02AM 2 points [-]

Was there anything you didn't understand this week?

Comment author: Larks 15 January 2015 03:00:03AM 0 points [-]

I don't really understand why AGI is so different from currently existing software. Current software seems docile - we worry more about getting it to do anything in the first place, and less about it accidentally doing totally unrelated things. Yet AGI seems to be the exact opposite. It seems we think of AGI as being 'like humans, only more so' rather than 'like software, only more so'. Indeed, in many cases it seems that knowing about conventional software actually inhibits one's ability to think about AGI. Yet I don't really understand why this should be the case.

In response to comment by Larks on Ethical Diets
Comment author: FrameBenignly 13 January 2015 04:12:02AM 0 points [-]

Eliminating redistribution to ems will have little impact. As long as labor has a significant value which can be used to purchase capital (i.e. money), ems will be able to produce so much more labor than humans that they will quickly grow to dominate society. They don't need our charity to crush us like bugs.

Comment author: Larks 15 January 2015 02:53:46AM 0 points [-]

If humans kept significant wealth we could live off the interest. There's a big difference between 'no longer dominate society' and 'all die of starvation after having our wealth stripped from us'.

Comment author: gedymin 23 December 2014 08:33:40PM *  3 points [-]

It would be interesting to see more examples of modern-day non-superintelligent, domain-specific analogues of genies, sovereigns and oracles, and to look at their risks and failure modes. Admittedly, this is only inductive evidence that does not take into account the qualitative leap between them and superintelligence, but it may be better than nothing. Here are some quick ideas (do you agree with the classification?):

  • Oracles - pocket calculators (Bostrom's example); Google search engine; decision support systems.

  • Genies - industrial robots; GPS driving assistants.

  • Sovereigns - automated trading systems; self-driving cars.

The failures of automated trading systems are well-known and have cost hundreds of millions of dollars. On the other hand, the failures of human bankers who used ill-suited mathematical models for financial risk estimation are also well-known (the recent global crisis), and may have cost hundreds of billions of dollars.

Comment author: Larks 15 January 2015 02:51:52AM 1 point [-]

The failures of automated trading systems are well-known and have cost hundreds of millions of dollars. On the other hand, the failures of human bankers

I think a better comparison would be with old-fashioned open-outcry pits. These were inefficient and failed frequently in opaque ways. Going electronic has made errors less frequent but also more noticeable, which means we under-appreciate the improvement.

In response to Ethical Diets
Comment author: Larks 13 January 2015 02:17:15AM 5 points [-]

While I think trying to set up equilibria that will be robust against a multipolar takeoff is interesting, I don't think your example is a conclusion you would come to if you weren't already concerned about animal rights.

Much more plausible is that we should establish strict enforcement of property rights. Any level of redistribution could result in all the wealth held by flesh-and-blood humans being 'redistributed' to uploads, in light of their huge numbers.
