Comment author: turchin 03 June 2015 12:42:09PM *  3 points [-]

If we have 200-300 years before a well-proven catastrophe, this technique may work. But on a 10-50 year timescale, it is better to search for good, clever students and pay them to work on x-risks.

Comment author: Gondolinian 05 June 2015 04:47:04PM *  0 points [-]

If we have 200-300 years before a well-proven catastrophe, this technique may work.

If you're talking about significant population-wide changes in IQ, then I agree: it would take a while to make that happen with only reproduction incentives. However, I was thinking more along the lines of producing a few thousand, or tens of thousands, more >145-IQ people than we would otherwise have, and that could be achieved in as little as one or two generations (<50 years) if the program were successful enough.
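As a rough back-of-the-envelope sketch of that claim (every number here is my own assumption, not from the comment: narrow-sense heritability ~0.6, child IQ SD ~12 around the expected value, midparent IQ 150), one can estimate the fraction of such children expected to exceed 145 and how many births that implies per ten thousand >145-IQ people:

```python
from math import erfc, sqrt

def normal_tail(z):
    # P(Z > z) for a standard normal variable
    return 0.5 * erfc(z / sqrt(2))

H2 = 0.6          # assumed narrow-sense heritability of IQ
CHILD_SD = 12     # assumed SD of child IQ around its expected value
MIDPARENT = 150   # assumed average of the two parents' IQs
THRESHOLD = 145

# Regression toward the mean: children of exceptional parents are
# expected to land only partway between the population mean and the
# midparent value.
expected_child = 100 + H2 * (MIDPARENT - 100)
p_above = normal_tail((THRESHOLD - expected_child) / CHILD_SD)

print(round(expected_child))        # 130
print(round(p_above, 3))            # roughly one child in ten
print(round(10_000 / p_above))      # births needed per 10,000 successes
```

Under these assumptions, on the order of a hundred thousand births would be needed for ten thousand >145-IQ children, which is why the one-to-two-generation timescale only works if the program operates at scale.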

Now for a slightly crazier idea. (Again, I'm just thinking out loud.) You take the children and send them to be unschooled by middle-class foster families, both to save money and to ensure they cannot get all the intellectual stimulation they need from their environment alone, as they might if you sent them to upper-class private schools, for example. But you make sure they have Internet access, and you gradually introduce them to appropriately challenging MOOCs on math and philosophy specially made for them, designed to teach them a) the ethics of why they should want to save the world (think some of Nate's posts) and b) the skills they would need to do it (e.g., they should be up to speed on what MIRI recommends for aspiring AI researchers before they graduate high school).

The point of separating them from other smart people is that smart people tend to be mostly interested in money, power, status, etc., and those interests could rub off on them if they are immersed in that culture. If their focus growing up is simply to find intellectual stimulation, then they would be essentially blank slates*, and when they're introduced to problems that are very challenging and stimulating, that have other smart people working on them, and that are really, really important, they might be more likely to take them seriously.

*Please see my clarification below.

Comment author: Drahflow 03 June 2015 09:08:14AM 2 points [-]
  • Install a smoke detector

  • Do martial arts training until you get the falling more or less right. While this might be helpful against muggers, the main benefit is the reduced probability of injury in various unfortunate situations.

Comment author: Gondolinian 05 June 2015 01:46:06AM 0 points [-]

Do martial arts training until you get the falling more or less right. While this might be helpful against muggers, the main benefit is the reduced probability of injury in various unfortunate situations.

As someone with ~3 years of aikido experience, I second this.

Comment author: Gondolinian 05 June 2015 01:05:34AM 0 points [-]

What's the easiest way to put a poll in a top-level article?

Comment author: Gram_Stone 04 June 2015 05:25:27AM 1 point [-]

To elaborate on existing comments, a fourth alternative to FAI theory, Earning To Give, and popularization is strategy research. (That could include research on other risks besides AI.) I find that the fruit in this area is not merely low-hanging but rotting on the ground. I've read in old comment threads that Eliezer and Carl Shulman in particular have done a lot of thinking about strategy but very little of it has been written down, and they are very busy people. Circumstances may well dictate retracing a lot of their steps.

You've said elsewhere that you have a low estimate of your innate mathematical ability, which would preclude FAI research, but presumably strategy research would require less mathematical aptitude. Things like statistics would be invaluable, but strategy research would also involve a lot of comparatively less technical work: historical and philosophical analysis, experiments and surveys, literature reviews, lots and lots of reading, etc. Also, you've already done a bit of strategizing; if you are fulfilled by thinking about those things and you think your abilities meet the task, then it might be a good alternative.

Some strategy research resources:

Comment author: Gondolinian 04 June 2015 04:04:04PM 1 point [-]

Thanks for taking the time to put all that together! I'll keep it in mind.

Comment author: [deleted] 03 June 2015 02:26:01PM 2 points [-]

Talk about inferential distances... I was thinking from the title that this must be about the Oracle Financials software package, or at least the database.

In response to comment by [deleted] on An Oracle standard trick
Comment author: Gondolinian 03 June 2015 03:55:34PM *  2 points [-]

In the interest of helping to bridge the inferential distance of others reading this, here's a link to the wiki page for Oracle AI.

Comment author: Sly 25 May 2015 04:55:51PM *  7 points [-]

Watch Ex Machina. This is pretty close to what you are talking about, and I thought it was well done.

Comment author: Gondolinian 03 June 2015 02:29:44PM 0 points [-]

Thanks; I've put a library request in for it, though it'll probably be a few months until I get it.

Comment author: Vladimir_Nesov 05 May 2015 10:36:54AM 4 points [-]

I would recommend trying these books (at high school level or earlier, depending on when it becomes possible to follow them):

  • H. Rademacher & O. Toeplitz (1967). The Enjoyment of Math.
  • J. R. Weeks (2001). The Shape of Space.
  • R. Courant & H. Robbins (1996). What Is Mathematics?
Comment author: Gondolinian 03 June 2015 02:22:49PM *  1 point [-]

What Is Mathematics? was the only one I was able to find from a local library. I've put a request in for it and I should be getting it soon. Thanks for the recommendation; if it helps me to not hate math then I might be able to do something actually useful for existential risk reduction.

Comment author: Gondolinian 03 June 2015 12:34:30PM *  4 points [-]

(I know there are almost certainly problems with what I'm about to suggest, but I just thought I'd put it out there. I welcome corrections and constructive criticisms.)

You mention gene therapy to produce high-IQ people, but if that turns out not to be practical, or if we want to get started before we have the technology, couldn't we achieve the same thing through reproduction incentives? For example, we could pay and encourage male geniuses to donate lots of sperm, and pay and encourage lots of gifted-level or higher women to donate eggs. (Men can donate sperm more frequently than women can donate eggs, so the highest tier of women alone would not be enough to match the highest tier of men, and you'd have to bring in the next-highest tier.) The children of the two groups would then be born from surrogates, whose IQ, AFAIK, should not have any effect on the child's, and who can therefore be selected based on how cheaply they can be hired.

Comment author: Gondolinian 02 June 2015 02:13:07AM 2 points [-]

I shouldn't have phrased that so confidently; I was essentially just thinking out loud. Would anyone who knows more about decision theory mind explaining where I went wrong?

Comment author: Epictetus 02 June 2015 01:04:39AM 2 points [-]

Deterrent effects would fall under "things present and to come". If you expect some kind of future benefit from a retaliatory act, that's one thing. On the other hand, if you seek vengeance because you're outraged that someone would dare wrong you, then you're mentally living in the past.

Comment author: Gondolinian 02 June 2015 01:58:27AM -1 points [-]

Deterrent effects would fall under "things present and to come".

Fair enough, but there's also a sense in which deterrence is acausal. In order to make a truly credible threat of retaliation for defection, you have to be completely willing to follow through with the retaliation if they defect, even if, after the defection, following through does not seem to have any future benefits.
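The commitment logic here can be made concrete with a toy payoff model (all numbers are invented for illustration, not from the comment): a purely forward-looking defender never retaliates once a defection has happened, so an attacker who anticipates this defects; a precommitted defender retaliates regardless, which makes defection unprofitable in advance.

```python
# Toy model of credible deterrence. Payoffs are illustrative assumptions.
DEFECT_GAIN = 3        # attacker's gain from an unpunished defection
RETALIATION_COST = 1   # extra cost the defender pays to retaliate
RETALIATION_HARM = 5   # loss retaliation inflicts on the attacker

def attacker_payoff(defected, retaliates):
    gain = DEFECT_GAIN if defected else 0
    return gain - (RETALIATION_HARM if (defected and retaliates) else 0)

def causal_defender(defected):
    # After a defection, retaliating only adds RETALIATION_COST with no
    # future benefit, so a purely forward-looking defender declines.
    return False

def committed_defender(defected):
    # A defender who has precommitted retaliates whenever defected against,
    # even though doing so is costly after the fact.
    return defected

def attacker_defects(defender_policy):
    # The attacker defects iff defecting beats cooperating,
    # given the defender's (known) retaliation policy.
    payoff_if_defect = attacker_payoff(True, defender_policy(True))
    payoff_if_cooperate = attacker_payoff(False, defender_policy(False))
    return payoff_if_defect > payoff_if_cooperate

print(attacker_defects(causal_defender))     # True: defection pays
print(attacker_defects(committed_defender))  # False: deterred
```

The point of the sketch is that the deterrent only works if the retaliation policy is unconditional, which is exactly the "even if following through has no future benefit" condition above.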
