
Comment author: gwern 23 July 2015 06:22:01PM *  1 point [-]

Probability and Statistics for Business Decisions, Robert Schlaifer 1959. Surprisingly expensive used, and unfortunately for such a foundational text in Bayesian decision theory, it doesn't seem to be available online. If a digital copy can't be found, does anyone know of a good service or group that would produce a high-quality digital copy from a print edition?

Comment author: SimonF 23 July 2015 10:20:08PM 0 points [-]

They have a copy at our university library. I would need to investigate how to scan it efficiently, but I'm up for it if there isn't an easier way and no one else finds a digital copy.

Comment author: JoshuaZ 19 April 2015 07:28:29PM 3 points [-]

I'm not sure if this piece should go here or in Main (opinions welcome).

Thanks to Mass_Driver, CellBioGuy, and Sniffnoy for looking at drafts, as well as Josh Mascoop and J. Vinson. Any mistakes or errors are my own fault.

Comment author: SimonF 18 May 2015 09:27:10PM 1 point [-]

Definitely Main, I found your post (including the many references) and the discussion very interesting.

Comment author: Richard_Loosemore 05 May 2015 09:45:38PM 1 point [-]

The paper's goal is not to discuss "basic UFAI doomsday scenarios" in the general sense, but to discuss the particular case where the AI goes all pear-shaped EVEN IF it is programmed to be friendly to humans.

That last part (even if it is programmed to be friendly to humans) is the critical qualifier that narrows down the discussion to those particular doomsday scenarios in which the AI does claim to be trying to be friendly to humans - it claims to be maximizing human happiness - but in spite of that it does something insanely wicked.

So, Eli says:

The basic UFAI doomsday scenario is: the AI has vast powers of learning and inference with respect to its world-model, but has its utility function (value system) hardcoded. Since the hardcoded utility function does not specify a naturalization of morality, or CEV, or whatever, the UFAI proceeds to tile the universe in whatever it happens to like (which are things we people don't like), precisely because it has no motivation to "fix" its hardcoded utility function

... and this clearly says that the type of AI he has in mind is one that is not even trying to be friendly. Rather, he talks about how its

hardcoded utility function does not specify a naturalization of morality, or CEV, or whatever

And then he adds that

the UFAI proceeds to tile the universe in whatever it happens to like

... which has nothing to do with the cases that the entire paper is about, namely the cases where the AI is trying really hard to be friendly, but doing it in a way that we did not intend.

If you read the paper, all of this becomes obvious pretty quickly, but if you only skim a few paragraphs you might get the wrong impression. I suspect that is what happened.

Comment author: SimonF 05 May 2015 10:11:05PM *  1 point [-]

I still agree with Eli and think you're "really failing to clarify the issue", and claiming that xyz is not the issue does not resolve anything. Disengaging.

Comment author: Richard_Loosemore 05 May 2015 06:32:43PM 4 points [-]

The paper had nothing to do with what you talked about in your opening paragraph, and your comment:

Please go read some actual scientific material rather than assuming that The Metamorphosis of Prime Intellect is up-to-date with the current literature

... was extremely rude.

I build AI systems, and I have been working in the field (and reading the literature) since the early 1980s.

Even so, I would be happy to answer questions if you could read the paper carefully enough to see that it was not about the topic you thought it was about.

Comment author: SimonF 05 May 2015 09:12:50PM *  3 points [-]

The paper had nothing to do with what you talked about in your opening paragraph

What? Your post starts with:

My goal in this essay is to analyze some widely discussed scenarios that predict dire and almost unavoidable negative behavior from future artificial general intelligences, even if they are programmed to be friendly to humans.

Eli's opening paragraph explains the "basic UFAI doomsday scenario". How is this not what you talked about?

Comment author: Vaniver 14 January 2014 04:54:08PM 13 points [-]

These days, I ignore recommendations about new TV shows and books, preferring not even to learn the premises, thus dodging the temptation entirely.

This may only work if you have the values I do, but I've found that I now view "X show/book is so good" as an anti-recommendation after reading Game of Thrones and Worm and starting to watch Breaking Bad. Generally, what people mean by "good" is "engaging," and "engaging" is orthogonal to what I want from the fiction I consume. If you combine "engaging" with "depressing" or "exasperating," that is enough to give it negative value for me.

Comment author: SimonF 16 January 2014 01:59:02PM 5 points [-]

What's Worm? Oh, wait...

Comment author: wallowinmaya 16 October 2013 10:23:27AM 1 point [-]

That's great, at least there are three of us then :)

Comment author: SimonF 24 October 2013 10:09:47AM 1 point [-]

Awesome, a meetup in Cologne. I'll try to be there, too. :)

Comment author: jkaufman 15 August 2013 07:01:19PM 2 points [-]

How well does this map to the human experience of the game? Do two experienced players need the swap rule for the game to remain interesting?

Comment author: SimonF 20 August 2013 04:04:52PM *  2 points [-]

It depends on the skill difference and the size of the board; on smaller boards the advantage is probably pretty large: Discussion on LittleGolem

Comment author: SimonF 15 August 2013 11:43:21AM 11 points [-]

$66, with some help from a friend.

Comment author: gwern 24 July 2013 09:13:08PM *  41 points [-]

I've been reading Tyler and I read McAfee.

Cowen says some interesting things but I don't think he makes the best case for technological unemployment; not sure what you mean by McAfee - Brynjolfsson is the lead author on Race Against the Machine, not McAfee.

I'm not sure you really address the central point either; why can't the disemployed people find new jobs like in the last four centuries,

As my initial comment implies, I think the automation of the last century is qualitatively different from what came before: earlier, the machines began handling brute-force things, replacing things which offered only brute force & not intelligence, like horses or watermills. But now they are slowly absorbing intelligence, and this seems to be the final province of humans. In Hanson's terms, I think machines switched from being complements to being substitutes in some sectors a while ago.

and why did unemployment drop in Germany once they fixed their labor market, and why hasn't employment dropped in Australia, etcetera?

I don't know nearly enough about Germany to say. They seem to be in a weird position in Europe, which might explain it. I'd guess that Australia owes its success to avoiding a resource curse & profiting heavily off China in extractive industries, along with restricting its supply of labor.

(And note that anything along the lines of 'regional boom' contradicts ZMP and completely outcompeted humans and other explanations which postulate unemployability, not 'unemployable unless regional boom'.)

ZMP (zero marginal product) is about the margin; if the margin changes, ZMPers may change. During booms, a lot of margins might change. And even factors like human capital can change in importance: you can hire more dishonest employees if you switch to automated cash registers which they can't easily steal from. Or even the most dishonest evil wretch can be profitable to hire to stand on the sidewalk in a costume if you're in the middle of a real estate bubble.

Why is the IQ 70 kid not able to do laundry as so many others once did earlier, if the economy is so productive - shouldn't someone be able to hire him in his area of Ricardian comparative advantage?

Ricardian comparative advantage isn't magic pixie dust; it doesn't guarantee there's anything worth hiring him for. Another example: imagine you have this IQ 70 kid who can do laundry - I personally don't know how to do laundry well for anything but my own clothes and would ruin someone else's stuff, but let's assume you spent a few weeks training this kid how to do laundry: how to read the tags, separate clothes correctly, treat lingerie differently, not mix bleach and ammonia, properly treat the different kinds of stains, etc.* - what makes you trust him with your laundry? He can be impulsive and short-sighted, and may not understand other people's emotions or responses. Well, what can he do with your laundry besides clean it that's so bad? Here's a random thought: he could masturbate with your underwear. Question: how much money do you think a random woman would pay to know that the guy doing her laundry is not fishing out her lady-things and masturbating with them? Ask the nearest women, if you dare, how much they would pay. Even allowing for CFAR/MIRI people almost completely lacking the purity moral axis and reasoning consequentially and being highly deviant compared to the general population, I bet the figure is non-zero...

* And until you've actually tried this, don't assume I'm exaggerating here. You live in a high-IQ bubble.

Again, the economy of 1920 seemed to do quite well handling disemployment pressures like this with reemployment, so what changed?

People had many fewer clothes in 1920, for starters: the task was intrinsically simpler. Here's an interesting quote:

In 2008, Americans owned an average of 92 items of clothing, not counting underwear, bras and pajamas, according to Cotton Inc.'s Lifestyle Monitor survey, which includes consumers, age 13 to 70. The typical wardrobe contained, among other garments, 16 T-shirts, 12 casual shirts, seven dress shirts, seven pairs of jeans, five pairs of casual slacks, four pairs of dress pants, and two suits—a clothing cornucopia.

Then the economy crashed. Consumers drew down their inventories instead of replacing clothes that wore out or no longer fit. In the 2009 survey, the average wardrobe had shrunk—to a still-abundant 88 items. We may not be shopping like we used to, but we aren't exactly going threadbare. Bad news for customer-hungry retailers, and perhaps for economic recovery, is good news for our standard of living.

By contrast, consider a middle-class worker's wardrobe during the Great Depression. Instead of roughly 90 items, it contained fewer than 15. For the typical white-collar clerk in the San Francisco Bay Area, those garments included three suits, eight shirts (of all types), and one extra pair of pants. A unionized streetcar operator would own a uniform, a suit, six shirts, an extra pair of pants, and a set of overalls. Their wives and children had similarly spare wardrobes. Based on how rarely items were replaced, a 1933 study concluded that this "clothing must have been worn until it was fairly shabby." Cutting a wardrobe like that by four items—from six shirts to two, for instance—would cause real pain. And these were middle-class wage earners with fairly secure jobs.

There were many more jobs suitable for the mentally handicapped, like agriculture, which was far less automated and scientific than it is now.

Maybe eventually AI will disemploy that kid but right now humans are still doing laundry!

Certainly, but to compare with 1920, laundry got way easier with the invention of washing & drying machines (I spend more time folding my clothes and putting them away than I do 'washing' or 'drying'), and we value our privacy way more than we used to, once one of the luxuries of the rich. Even dry cleaning is more complex than it used to be, as the process has evolved to be more environmentally friendly, among other things.

Quick question: To what extent are you playing Devil's Advocate above and to what extent do you actually think that the robotic disemployment thesis is correct, a primary cause of current unemployment, not solvable with NGDP level targeting and unfixably due to some humans being too-much-outcompeted

See the sibling comment's link. I am of mixed minds about it, but I think your counter-arguments are bad. I don't know how much of current American unemployment is due to it, but if it exists, I think it's pretty much insoluble since there are no remaining IQ boosts left like iodine, the Flynn effect seems to be hollow gains, and so on. We're basically stuck until some miracle happens (AI? Hsu's embryo selection?), and so America would benefit from serious discussion of things like Basic Income and consolidating the current patchwork of welfare, which encourages things like fraudulent disability claims.

Comment author: SimonF 26 July 2013 09:32:16PM *  9 points [-]

Regarding the drop in unemployment in Germany, I've heard it claimed that it is mainly due to changes in how the unemployment statistics are compiled, e.g. people who are in temporary 1€/h jobs and still receiving benefits are counted as employed. If this point is still important, I can look for more details and translate.

EDIT: Some details are here:

It is possible to earn income from a job and receive Arbeitslosengeld II benefits at the same time. [...] There are criticisms that this defies competition and leads to a downward spiral in wages and the loss of full-time jobs. [...]

The Hartz IV reforms continue to attract criticism in Germany, despite a considerable reduction in short and long term unemployment. This reduction has led to some claims of success for the Hartz reforms. Others say the actual unemployment figures are not comparable because many people work part-time or are not included in the statistics for other reasons, such as the number of children that live in Hartz IV households, which has risen to record numbers.

Comment author: Kaj_Sotala 03 January 2013 10:55:22AM 0 points [-]

Fixed, thanks.

Comment author: SimonF 03 January 2013 02:13:25PM 2 points [-]

Nope, it's still broken.
