Comment author: Larks 01 October 2014 02:24:33AM 2 points [-]

I wonder if advances in embryo selection might reduce fertility. At the moment I think one of the few things keeping the birth rate up is people thinking of having children as the default. But embryo selection seems like it could interrupt this: a clearly superior option might make having children naturally feel less like the default, while embryo selection itself is novel enough to seem like a discretionary choice, and therefore doesn't inherit the default status.

Comment author: KatjaGrace 30 September 2014 12:38:40PM 1 point [-]

What did you find least persuasive in this week's reading?

Comment author: Larks 01 October 2014 02:21:48AM 4 points [-]

With the full development of the genetic technologies described above ..., it might be possible to ensure that new individuals are on average smarter than any human who has yet existed

  • p43

This would be huge! Assuming IQ is normally distributed, there have been more than a billion people in history, so we could reasonably expect to have seen a 6-sigma IQ person - a person with an IQ of 190. So making new individuals on average smarter than any human yet would require roughly a 90-point gain in average IQ.
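As a quick sanity check of that arithmetic (a minimal sketch using scipy; the mean-100 / sd-15 IQ scaling is the usual convention, not a figure from the book):

```python
from scipy.stats import norm

# One-tailed probability of a draw more than 6 sigma above the mean.
p = norm.sf(6.0)
print(f"P(Z > 6) = {p:.2e}, i.e. about 1 in {1 / p:,.0f}")  # ~1 in a billion

# On the conventional IQ scale (mean 100, sd 15), 6 sigma lands at:
print(f"6-sigma IQ = {100 + 6 * 15}")  # 190
```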

Ok, now I see that Nick claims that 10 generations of 1 in 10 selection could give us that. So perhaps this should be a "part you find most surprising" rather than "part you find least persuasive". I certainly would have spent a little bit more time on this possibility!
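Here is a rough sketch of where numbers like that could come from. The model is my own simplification, not the book's: I treat 1-in-10 selection as truncation selection on the top decile of a normal distribution, and the predictor accuracy r is an illustrative guess, not a published estimate.

```python
from scipy.stats import norm

SD = 15.0   # conventional IQ standard deviation
p = 0.10    # keep the top 1 in 10 embryos
r = 0.45    # assumed correlation of embryo score with adult IQ (illustrative)

z = norm.ppf(1 - p)    # truncation point for the top decile
i = norm.pdf(z) / p    # selection intensity, ~1.755 sd

gain = i * r * SD      # expected shift in the offspring mean
print(f"per generation: {gain:.1f} IQ points")                  # ~11.8
print(f"after 10 generations (naive linear): {10 * gain:.0f}")  # ~118
```

On these illustrative numbers the per-generation gain comes out close to the figure the book tabulates for 1-in-10 selection, and ten rounds would clear the 90-point bar - though real gains would diminish as selection depletes the additive variance, which the naive linear total ignores.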

Comment author: KatjaGrace 30 September 2014 01:09:30AM 2 points [-]

Did you change your mind about anything as a result of this week's reading?

Comment author: Larks 01 October 2014 02:13:55AM 4 points [-]

I changed my mind on the impact of embryo selection on my personal reproductive plans. Bostrom gives

"five to ten years to gather the information needed for significantly effective selection among a set of IVF embryos."

That's soon enough that it's plausible I could have some of my later children using it. I wonder what the impact would be on family dynamics, especially if anything like the 20 IQ points could be realized.

Comment author: KatjaGrace 23 September 2014 01:14:15AM 4 points [-]

How would you like this reading group to be different in future weeks?

Comment author: Larks 23 September 2014 01:27:50AM 9 points [-]

I think this is pretty good at the moment. Thanks very much for organizing this, Katja - it looks like a lot of effort went into this, and I think it will significantly increase the amount the book gets read, and dramatically increase the extent to which people really interact with the ideas.

I eagerly await chapter II, which I think is a major step up in terms of new material for LW readers.

Comment author: KatjaGrace 23 September 2014 01:02:15AM 1 point [-]

Given all this inaccuracy, and potential for bias, what should we make of the predictions of AI experts? Should we take them at face value? Try to correct them for biases we think they might have, then listen to them? Treat them as completely uninformative?

Comment author: Larks 23 September 2014 01:25:07AM 2 points [-]

I guess we risk double-correcting: presumably Eliezer has already thought about the impact of biases on his forecast and adjusted it accordingly. (Ok, there is really no need to presume, as he has written at length about doing so.)

Comment author: KatjaGrace 23 September 2014 01:06:57AM 6 points [-]

If a person wanted to make their prediction of human-level AI entirely based on what was best for them, without regard to truth, when would be the best time? Is twenty years really the sweetest spot?

I think this kind of exercise is helpful for judging the extent to which people's predictions really are influenced by other motives - I fear it's tempting to look at whatever people predict and see a story about the incentives that would drive them there, and take their predictions as evidence that they are driven by ulterior motives.

Comment author: Larks 23 September 2014 01:23:02AM 4 points [-]

Stock market analysts are in a somewhat similar boat. Strategists and macroeconomists make long-term predictions, though their performance is rarely tracked. Short-term forecasts (one month or one quarter out) are centrally compiled, and statistics are assembled on which economists are good forecasters, but this is not done for long-term predictions. As such, long-term predictions seem to fall into much the same camp as AI predictions here. And many bank analysts explicitly think about the non-epistemic incentives they personally face: wanting to make an impact, wanting a defensible position, and so on.

However, economists work with much shorter forecast horizons. A long-term forecast would be 5 years; many are shorter, and I have never seen an explicit forecast more than 10 years out. Perhaps this is because they don't think people would assign any credibility to such forecasts; perhaps the remaining career duration of a macroeconomist is shorter than that of an AI researcher. In any case, making several incorrect predictions is rarely very damaging to their careers. Indeed, because they can selectively emphasize their ex post correct predictions, they're incentivized to make many short-term predictions.

Prima facie much of this applies to AI commentators as well.

Comment author: Larks 19 September 2014 01:01:31AM 2 points [-]

In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has been strange apparently.

(Forgive me for playing with anthropics, a tool I do not understand) - maybe something happened to all the observers in worlds that didn't see stagnation set in during the '70s. I guess this is similar to the common joke 'explanation' for the bursting of the tech bubble.
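(The quoted extrapolation can be made concrete with a toy model - a sketch of my own, not the source's: if output grows hyperbolically, dx/dt = k·x², the solution reaches infinity at a finite date. The constants below are purely illustrative, chosen to land the singularity in the late 20th century, not fitted to historical data.)

```python
# Hyperbolic growth: dx/dt = k * x^2 solves to
#   x(t) = x0 / (1 - k * x0 * (t - t0)),
# which diverges at t* = t0 + 1 / (k * x0).
x0, t0 = 1.0, 1900.0  # output (arbitrary units) at an arbitrary base year
k = 1.0 / 75.0        # illustrative constant, not fitted to data

t_star = t0 + 1.0 / (k * x0)
print(f"extrapolated blow-up year: {t_star:.0f}")  # 1975
```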

Comment author: Kaj_Sotala 16 September 2014 07:27:04PM 5 points [-]

For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.

Agreed. It's not Civilization, but Starcraft is also partially observable and non-deterministic, and a team of students managed to bring their Starcraft AI to the level of being able to defeat a "top 16 in Europe"-level human player after only a "few months" of work.

The game AIs for popular strategy games are often bad because the developers don't actually have the time and resources to make a really good one, and it's not a high priority anyway - most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.

Comment author: Larks 19 September 2014 12:58:06AM 3 points [-]

most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.

The popular AI mods for Civ actually tend to make the AIs less thematic - they're less likely to be nice to you just because of a thousand-year harmonious and profitable peace, for example, and more likely to build unattractive but efficient Stacks of Doom. Of course there are selection effects on who installs such mods.

Comment author: KatjaGrace 16 September 2014 04:11:40AM 3 points [-]

Did you change your mind about anything as a result of this week's reading?

Comment author: Larks 19 September 2014 12:55:26AM 4 points [-]

This is an excellent question, and it is a shame (perhaps slightly damning) that no-one has answered it. On the other hand, much of this chapter will have been old material for many LW members. I am ashamed that I couldn't think of anything either, so I went back again looking for things I had actually changed my opinion about, even a little, and not merely because I hadn't previously thought about it.

  • p6 I hadn't realised how important combinatorial explosion was for early AI approaches.
  • p8 I hadn't realised, though I should have been able to work it out, that the difficulty of coming up with a language which matched the structure of the domain was a large part of the problem with evolutionary algorithms. Once you have done that, you're halfway to solving it by conventional means.
  • p17 I hadn't realised how high volume could have this sort of reflexive effect.

Comment author: spxtr 16 September 2014 04:46:44PM 34 points [-]

[Please read the OP before voting. Special voting rules apply.]

Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.

Comment author: Larks 19 September 2014 12:36:57AM 3 points [-]

According to the 2013 LW survey, when asked their opinion of feminism on a scale from 1 (low) to 5 (high), respondents gave a mean of 3.8, and social justice got a 3.6. So it seems that "feminism is a good thing" is actually not a contrarian view.

If I might speculate for a moment, it might be that LW is less feminist than most places, while still having an overall pro-feminist bias.
