I don't understand this question. The best time for the emergence of a great optimizer would be shortly after you were born (earlier if your existence were assured somehow).
If an AI is a friendly optimizer, then you want it as soon as possible. If it is randomly friendly or unfriendly, then you don't want it at all (the quandary we all face). Seems like asking "when" is a lot less relevant than asking "what". "What" I want is a friendly AI. "When" I get it is of little relevance, so long as it is long enough before my death to grant me "immortality" while maximally (or sufficiently) fulfilling my values.
The question isn't asking when the best time for the AI to be created is. It's asking what the best time is to predict the AI will be created. E.g., what prediction sounds close enough to be exciting and to get me that book deal, yet far enough away not to be obviously wrong, so that people will have forgotten about my prediction by the time it hasn't actually come true? This is an attempt to determine how much the predictions may be influenced by self-interest bias, etc.
With respect to "Growth of Growth", wouldn't the chart ALWAYS look like that, with the near end trailing downwards? The sampling interval is decreasing logarithmically, so unless you are sitting right on top of the singularity/production revolution, it should always look like that.
I just got to thinking about what happened around 1950 specifically and couldn't find any real reason for it to drop off right there. WWII was well over, and the gold exchange standard remained in place for another 21 years; those are the two primary framing events for that timeframe, so far as I can tell.
The more we ignore the cognitive barriers we have placed upon the world, the greater our ability to problem-solve. My own lab group (chemists and materials scientists) recently collaborated with microbiologists, civil engineers, and industrial engineers to produce a feed spacer that blocks biofouling in RO membranes, using technology originally developed for medical devices and based on a catalytic process discovered by a nutritionist. It reduces biofouling by 3-5 logs, which is unprecedented, using a process that engineers never could have come up with ...
So I recently found LessWrong after seeing a link to the Harry Potter fanfiction, and I have been enthralled with the concept of rationalism since. The concepts are not foreign to me, as I am a chemist by training, but the systematization and focus on psychology keep me interested. I am working my way through the sequences now.
As for my biography, I am a 29 year old laboratory manager trained as a chemist. My lab develops and tests antimicrobial materials and drugs based on selenium's oxygen radical producing catalysis. It is rewarding work if you can g...
Who is the audience and what are the desired outcomes from playing the game?
I can guess, but it is better to outline these things explicitly at the start, lest you wind up with something that no one wants to play.
As for genre, I immediately think of something like a role-playing game. I think back to FFVIII, which let you increase your salary by taking in-game tests. More recent RPGs carry huge volumes of information in in-game encyclopedias (games like KOTOR 1 and 2, Mass Effect, Xenosaga, etc.).
I would consider using a plot similar t...
Perhaps it is because they actually married them when they were young, but at any given time won't your average "trophy wife" be in her 30s?
This is a complex problem. I'm not sure the answer can be found without some in-depth statistical analysis.
Also, Rubix seems to be looking at things from the wrong perspective. It's not that women won't get with older men; it is likely that the older men and the men of status are all taken, while the younger men are not. Looking at it from the older men's point of view: what is the likelihood of an older, successful, single man getting together with a woman of a given age? I would guess it is much higher for younger women.
It seems to me that emergence is the opposite of rigorous structure. Take human brain function (similar to your intelligence comment in the article). Claiming that brain function is emergent rather than rigidly ordered lets you make a prediction: a child who has a portion of their brain removed will either retain all or a large portion of the functionality of the removed portion, or they will not. If function were rigidly localized, a child with half of their brain missing would be expected to be extraordinarily impaired. A simple search of the literature should prove it one way ...
Sometime after the Singularity. We already have AI that surpasses humans in several areas of human endeavor, such as chess and trivia. What do you define as "human level"? The AIs we have now are like extremely autistic savants, exceptional in some areas where most people are deficient, but deficient to the point of not even trying in the thousands of others. Eventually, there will (in theory) be AIs that are like that with most aspects of human existence, yet remain far inferior in others, and perhaps shortly after that point is reached, AIs...