If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Question: Say someone dramatically increased the rate at which humans can learn mathematics (say, over the Internet). Assume also that an intelligence explosion is likely to occur in the next century, that it will result in a singleton, and that the way it is constructed will determine the future of earth-originating life. Does the increase in math-learning ability make that intelligence explosion more or less likely to be friendly?
Responses I've heard to questions of the form, "Does solving problem X help or hinder safe AGI vs. unsafe AGI?":
Improvements in rationality help safe AI, because sufficiently rational humans are unlikely to create unsafe AI. Most other improvements are a wash, because they help safe and unsafe AI equally.
Almost any improvement in productivity slightly helps safe AI, because more productive humans have more unconstrained time (i.e., time not spent paying the bills). Humans tend to do more good things, and to move toward rationality, in their less constrained time, so increasing that time is a net win.
Not sure how I feel about these responses. But neither of them directly answers the question about math.
One answer would be that improving higher-math education would be a net win, because safe AI will definitely require hard math, whereas improving math education across the board would be a net loss because, like Moore's Law, it would increase cognitive resources generally and so move the timeline closer. Note that if we ignore network effects (researchers talking to researchers and convincing them not to work on unsafe AI), the question becomes: is the effect of improving X more like shifting the timeline forward by Y years, as with increasing computing power, or more like stretching the timeline by some linear factor, as with increasing human productivity? Thoughts?
I would think that FAI requires mathematics a lot more than UFAI does, since UFAI can be created through trial and error.