High-functioning autism might in part be caused by an "overclocking" of the brain.
My evidence:
(1) Autistic children have on average larger brains than neurotypical children do. (2) High-IQ parents are more likely than average to have autistic children. (3) An extremely disproportionate number of mathematical geniuses have been autistic. (4) Some children learn to read before they are 2.5 years old. From what I know, all of these early readers turn out to be autistic.
Eliezer-
“What justifies the right of your past self to exert coercive control over your future self? There may be overlap of interests, which is one of the typical de facto criteria for coercive intervention; but can your past self have an epistemic vantage point over your future self?”
In general I agree. But werewolf contracts protect against temporary lapses in rationality. My level of rationality varies. Even assuming that I remain in good health for eternity, there will almost certainly exist some hour in the future in which my rationality is much lower than it is now.
ShardPhoenix wrote "Doesn't the choice of a perfect external regulator amount to the same thing as directly imposing restrictions on yourself, thereby going back to the original problem?"
No, because if there are many possible future states of the world it wouldn't be practical for you to specify in advance what restrictions should apply in every possible future state. It's much more practical to appoint a guardian who will make decisions after it has observed which state of the world has come to pass. Also, you might pick a regulator who...
You are forgetting about "Werewolf Contracts" in the Golden Age. Under these contracts you can appoint someone who can "use force, if necessary, to keep the subscribing party away from addictions, bad nanomachines, bad dreams or other self-imposed mental alterations."
If you sign such a contract then, contrary to what you wrote, it's not true that "one moment of weakness is enough to betray you."
I have signed up with Alcor. When I suggest to other people that they should sign up, the common response has been that they wouldn't want to be brought back to life after they died.
I don't understand this response. I'm almost certain that if most of these people found out they had cancer and would die unless they got a treatment, and (1) with the treatment they would have only a 20% chance of survival, (2) the treatment would be very painful, (3) the treatment would be very expensive, and (4) if the treatment worked they would be unhealthy for the rest of their lives, then most of them would still choose the treatment.
You and Robin seem to be focused on different time periods. Robin is claiming that after ems are created one group probably won't get a dominant position. You are saying that post-singularity (or at least post one day before the singularity) there will be either one dominant group or a high likelihood of total war. You are not in conflict if there is a large time gap between when we first have ems and when there is a singularity.
I wrote in this post that such a gap is likely: http://www.overcomingbias.com/2008/11/billion-dollar.html
Have you ever had a job where your boss yelled at you if you weren't continually working? If not, consider getting a part-time job at a fast food restaurant where you work maybe one day a week for eight hours at a time. Fast food restaurant managers are quite skilled at motivating (and please forgive this word) "lazy" youths.
Think of willpower as a muscle. And think of the fast food manager as your personal trainer.
My guess is your problem arises from never having had to stay up all night doing homework that you found boring, pointless, tedious, and very difficult.
"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."
If you believe this you should favor slowing down AI research and speeding up work on enhancing human intelligence. The smarter we are, the more likely we are to figure out Friendly AI before we have true AI.
Also, if you really believe this shouldn't you want the CIA to start assassinating AI programmers?
Economists do look at innovation. See my working paper "Teaching Innovation in Principles of Microeconomics Classes."
The Real Ultimate Power: Reproduction.
Two compatible users of this ability can create new life forms which possess many of the traits of the two users. And many of these new life forms will themselves be able to reproduce, leading to a potential exponential spreading of the users' traits. Through reproduction users can obtain a kind of immortality.
An unusual case of this power allows one person with access to enormous computing power to form it into a person. This results in a very alien entity, which may have its own powers. Its resulting moral system can't be predicted, but it can be controlled to some extent. This power takes decades to activate, almost inevitably leads to failure, and has the potential to fail catastrophically, but it can also succeed amazingly, and is considered worth the risk.
We should take into account the costs to a scientist of being wrong. Assume that the first scientist would pay a high price if the second ten data points didn't support his theory. In this case he would only propose the theory if he was confident it was correct. This confidence might come from his intuitive understanding of the theory and so wouldn't be captured by us if we just observed the 20 data points.
In contrast, if there will be no more data the second scientist knows his theory will never be proved wrong.
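To make the signal concrete, here is a minimal sketch with made-up reward and penalty numbers, showing the credence threshold a costly proposal implies:

```python
# A minimal sketch with assumed numbers: a penalty for being wrong makes
# the first scientist's proposal informative, because he proposes only if
# his private credence p clears the break-even threshold.

reward = 1.0    # assumed payoff if the next ten data points confirm the theory
penalty = 9.0   # assumed cost if they refute it

# Propose iff expected value is positive: p*reward - (1 - p)*penalty > 0.
threshold = penalty / (reward + penalty)
print(f"Proposes only if private credence p > {threshold:.2f}")  # p > 0.90

# The second scientist faces no future data, hence no penalty, so his
# announcement carries no comparable signal.
```

The steeper the penalty, the higher the private credence his proposal reveals.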
Carl Shulman,
Under either your (1) or (2), passable programmers contribute to advancement, so Eliezer's master's-in-chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field.
The best way to judge productivity differences is to look at salaries. Would Google be willing to pay Eliezer 50 times more than what it pays its average engineer? I know that managers are often paid more than 50 times what average employees are, but do pure engineers ever get 50 times more? I really don't know.
The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.
Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Goo...
If the probability that the LHC's design is flawed (and because of this flaw the LHC will never work) is much, much greater than the probability that the LHC would destroy us if it functioned properly, then regardless of how many times the LHC failed we should never give significant weight to the anthropic explanation.
Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high, then we should also ignore the anthropic explanation.
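A toy Bayesian calculation shows why (the priors and failure rate below are illustrative assumptions, not figures from the post): the design-flaw hypothesis and the anthropic-doom hypothesis predict the observed failures equally well, so repeated failures never shift their relative odds away from the prior.

```python
# Three hypotheses about why the LHC keeps failing:
#   flaw: a design flaw means the LHC can never run (failure is certain)
#   doom: the LHC would destroy us if it ran; anthropically, surviving
#         observers only ever see failures (failure is certain, given survival)
#   ok:   the LHC works and is safe; each failure is ordinary bad luck
priors = {"flaw": 1e-2, "doom": 1e-9, "ok": 1 - 1e-2 - 1e-9}  # assumed
p_fail_if_ok = 0.5  # assumed per-attempt failure rate for a working, safe LHC

def posterior_after_failures(n):
    # Likelihood of n consecutive failures under each hypothesis.
    likelihood = {"flaw": 1.0, "doom": 1.0, "ok": p_fail_if_ok ** n}
    unnormalized = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: unnormalized[h] / total for h in unnormalized}

for n in (1, 10, 30):
    print(n, posterior_after_failures(n))
# The flaw:doom odds stay at their prior ratio (here 10^7 to 1) no matter
# how many failures we observe, so the anthropic explanation never gains
# weight relative to the mundane one.
```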
"What takes real courage is braving the outright incomprehension of the people around you,"
I suspect that autistics are far more willing than neurotypicals to be true iconoclasts because many neurotypicals find autistics incomprehensible regardless of what the autistics believe. So the price of being an intellectual iconoclast is lower for autistics than for most other people.
Yes -- I was going to reply to "There are certain people who have no fear of departing the pack" with "there are some people who can't stay with the pack!".
These (not just the autistics, but also other neurodiverse folks) are the true "natural outsiders". As demonstrated by the OP's comments, their presence in a group (or contrariwise their exclusion) has nontrivial effects on how a group acts, and especially how it deals with challenges.
The new Soviet "man" that Stalin wanted to create was a half-ape, half-man super-warrior.
See http://news.scotsman.com/ViewArticle.aspx?articleid=2688011
I think that militarily President Bush under-reacted to 9/11. The U.S. faces a tremendous future threat of being attacked with weapons of mass destruction. Unfortunately, before 9/11 it was politically difficult for the President to use the military preemptively to reduce such threats. 9/11 gave President Bush more political freedom, and he did use it to some extent. But I fear he has not done enough. I would have preferred, for example, that the U.S., Russia, China, UK, Israel, and perhaps France announce that in one year they will declare war on any ot...
Eliezer, you wrote:
"Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?"
Won't our descendants who have genes or code that causes them to maximize their genetic fitness come to dominate the billions of galaxies? How can there be any other stable long-term equilibrium in a universe in which many lifeforms have the ability to choose their own utility functions?
Torture,
Consider three possibilities:
(a) A dust speck hits you with probability one, (b) you face an additional probability 1/(3^^^3) of being tortured for 50 years, (c) you must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.
Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, by transitivity, most people should prefer (b) to (a).
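In symbols (a sketch, where ε is the assumed increase in torture probability from one extra blink):

```latex
\[
\varepsilon \;=\; P(\text{torture} \mid \text{one extra blink}) - P(\text{torture}),
\qquad \varepsilon \;>\; \tfrac{1}{3\uparrow\uparrow\uparrow 3}.
\]
\[
(b) \succ (c) \ \text{since its torture risk } \tfrac{1}{3\uparrow\uparrow\uparrow 3} < \varepsilon,
\qquad (c) \succ (a) \ \text{for most people},
\qquad \therefore\ (b) \succ (a) \ \text{by transitivity}.
\]
```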
The main purpose of medical tort law is to enrich trial lawyers. Doctors and trial lawyers play legal games in which the doctors try to minimize their liability with disclosures and the trial lawyers argue that the disclosures don't offer legal protection for the doctors. There is little political incentive for anyone to care about the value of disclosure to patients. This is especially true since patients who care about such information will do their own research.
I find certain types of video games far more addictive than the average human does. This, however, reduces my demand for these video games. I have never bought or played World of Warcraft because I strongly suspect that I would become addicted to the game. If enough potential addicts are like me then games that become too addictive will suffer in the marketplace.
The safest investment is Treasury Inflation-Protected Securities (TIPS). Ordinary investors should avoid investing in derivative securities such as options. If you are rationally pessimistic, go with TIPS.
Also, you would never get the 1/100 odds because, in a sense, money is more valuable in the state in which the economy is doing poorly. Say there are two bonds, each of which in 30 years has a 99% chance of paying $0 and a 1% chance of paying $1,000. The first bond pays off in a state in which the economy has done very poorly, the second in a state in which the economy has done very well.
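Here is a minimal sketch, with made-up probabilities and marginal utilities, of why the bad-economy bond costs more even though both bonds have identical expected payoffs: price each bond as its expected payoff weighted by the marginal utility of money in each state (ignoring pure time discounting).

```python
# Assumed states of the world in 30 years and the marginal utility of a
# dollar in each; all numbers are illustrative, not from the comment.
states = {
    "bad_economy":  {"prob": 0.01, "marginal_utility": 3.0},
    "normal":       {"prob": 0.98, "marginal_utility": 1.0},
    "good_economy": {"prob": 0.01, "marginal_utility": 0.5},
}

# Bond A pays $1,000 only if the economy has done very poorly;
# bond B pays $1,000 only if the economy has done very well.
payoffs = {
    "bond_A": {"bad_economy": 1000.0, "normal": 0.0, "good_economy": 0.0},
    "bond_B": {"bad_economy": 0.0,    "normal": 0.0, "good_economy": 1000.0},
}

def price(bond):
    # E[marginal_utility * payoff]: a dollar is worth more when times are bad.
    return sum(s["prob"] * s["marginal_utility"] * payoffs[bond][name]
               for name, s in states.items())

for bond in payoffs:
    expected = sum(s["prob"] * payoffs[bond][name] for name, s in states.items())
    print(bond, "expected payoff:", expected, "price:", price(bond))
# Both bonds have the same $10 expected payoff, yet bond A prices at $30
# and bond B at $5, so the market never offers raw 1/100 odds.
```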