All of James_D._Miller's Comments + Replies

The safest investment is Treasury Inflation-Protected Securities (TIPS). Ordinary investors should avoid investing in derivative securities such as options. If you are rationally pessimistic, go with TIPS.

Also, you would never get the 1/100 odds because, in a sense, money is more valuable in the state in which the economy is doing poorly. So say there are two bonds, each of which in 30 years has a 99% chance of paying $0 and a 1% chance of paying $1,000. The first bond pays off in a state in which the economy has done very poorly, the second in a state in which th... (read more)
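To make the state-pricing intuition concrete, here is a minimal sketch (my own illustration, with made-up consumption numbers and log utility, not anything from the original comment) of why the bad-state bond commands a higher price despite identical payoff odds:

```python
# Sketch: value two bonds with identical 1% payoff odds, where one pays off
# in a bad state of the economy and the other in a good state. Assumes a
# representative investor with log utility; all numbers are hypothetical.

p_payout = 0.01      # each bond pays $1,000 with 1% probability
payout = 1000.0

# Hypothetical consumption in the state where each bond pays off.
consumption_bad_state = 20_000.0    # economy did very poorly
consumption_good_state = 100_000.0  # economy did very well

def marginal_utility(c):
    """For log utility u(c) = ln(c), marginal utility is u'(c) = 1/c."""
    return 1.0 / c

# A bond's value is proportional to (probability of paying) times
# (marginal utility of a dollar in the state where it pays).
value_bad_state_bond = p_payout * marginal_utility(consumption_bad_state) * payout
value_good_state_bond = p_payout * marginal_utility(consumption_good_state) * payout

print(value_bad_state_bond / value_good_state_bond)  # 5.0
# Same odds, but the bad-state bond is worth 5x more, because a dollar is
# more valuable when the economy is doing poorly.
```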

Doug S.

I'm interested in learning more about extremely early readers. I would be grateful if you contacted me at

EconomicProf@Yahoo.com

High functioning autism might in part be caused by an "overclocking" of the brain.

My evidence:

(1) Autistic children have on average larger brains than neurotypical children do.
(2) High-IQ parents are more likely than average to have autistic children.
(3) An extremely disproportionate number of mathematical geniuses have been autistic.
(4) Some children learn to read before they are 2.5 years old. From what I know, all of these early readers turn out to be autistic.

Eliezer-

“What justifies the right of your past self to exert coercive control over your future self? There may be overlap of interests, which is one of the typical de facto criteria for coercive intervention; but can your past self have an epistemic vantage point over your future self?”

In general I agree. But werewolf contracts protect against temporary lapses in rationality. My level of rationality varies. Even assuming that I remain in good health for eternity, there will almost certainly exist some hour in the future in which my rationality is much lo... (read more)

ShardPhoenix wrote "Doesn't the choice of a perfect external regulator amount to the same thing as directly imposing restrictions on yourself, thereby going back to the original problem?"

No, because if there are many possible future states of the world, it wouldn't be practical for you to specify in advance what restrictions you will have in every possible future state. It's much more practical for you to appoint a guardian who will make decisions after it has observed what state of the world has come to pass. Also, you might pick a regulator who... (read more)

You are forgetting about "Werewolf Contracts" in the Golden Age. Under these contracts you can appoint someone who can "use force, if necessary, to keep the subscribing party away from addictions, bad nanomachines, bad dreams or other self-imposed mental alterations."

If you sign such a contract then, unlike what you wrote, it's not true that "one moment of weakness is enough to betray you."

1Felix C.
I think the general point he's making still stands. You can always choose to remove the Werewolf Contract of your own volition, then force any sort of fever dream or nightmare onto yourself. Moreover, The Golden Age also makes a point about the dangers of remaining unchanged. Orpheus, the wealthiest man in history, has modified his brain so that his values and worldview will never shift. This puts him in sharp contrast to Phaethon as the protagonist, whose whole arc is about shifting the strict moral equilibrium of the public to make important change happen. Orpheus, trapped in his morals, is as out of touch in the era of Phaethon as a Catholic crusader would be in modern Rome.

Non-lawyers often believe that lawyers and judges believe that laws and contracts should be interpreted literally.

"Eliezer, I'd advise no sudden moves; think very carefully before doing anything."

But about 100 people die every minute!

1Uni
100 people is practically nothing compared to the gazillions of future people whose lives are at stake. I agree with Robin Hanson: think carefully for very long. Sacrifice the 100 people per minute for some years if you need to.

But you wouldn't need to. With unlimited power, it should be possible to freeze the world (except yourself, and your computer and the power supply and food you need, et cetera) to absolute zero temperature for an indefinite time, to get enough time to think about what to do with the world.

Or rather: with unlimited power, you would know immediately what to do, if unlimited power implies unlimited intelligence and unlimited knowledge by definition. If it doesn't, I find the concept "unlimited power" poorly defined. How can you have unlimited power without unlimited intelligence and unlimited knowledge?

So, just as Robin Hanson says, we shouldn't spend time on this problem. We will solve it in the best possible way with our unlimited power as soon as we have unlimited power. We can be sure the solution will be wonderful and perfect.

I have signed up with Alcor. When I suggest to other people that they should sign up, the common response has been that they wouldn't want to be brought back to life after they died.

I don't understand this response. I'm almost certain that if most of these people found out they had cancer and would die unless they got a treatment and (1) with the treatment they would have only a 20% chance of survival, (2) the treatment would be very painful, (3) the treatment would be very expensive, and (4) if the treatment worked they would be unhealthy for the rest of... (read more)

0Swimmer963 (Miranda Dixon-Luinenburg)
I actually had a nightmare recently where I was diagnosed with an aggressive cancer and would have preferred not to go through treatment, but felt pressured by other, more aggressively anti-death members of the rationality community. Was afraid people would think I didn't care about them if I didn't try to stay alive longer to be with them, etc. (I'm an ICU nurse; I have a pretty good S1 handle on how horrific a lot of life-saving treatments are, and how much quality of life it's possible to lose.) I've thought about cryonics, but haven't made a decision either way; right now, my feeling is that I don't have anything against the principle, but that it doesn't seem likely enough to work for the cost-benefit analysis to come out positive.
3[anonymous]
It's painful, expensive, leaves you in ill health the rest of your (shortened) life, and you've only got a 20% chance? Why would someone take that deal?
1Princess_Stargirl
This is more than slightly odd. I am considering cryonics but I would never take that cancer treatment. It seems like a horrible deal.

You and Robin seem to be focused on different time periods. Robin is claiming that after ems are created one group probably won't get a dominant position. You are saying that post-singularity (or at least post one day before the singularity) there will be either one dominant group or a high likelihood of total war. You are not in conflict if there is a large time gap between when we first have ems and when there is a singularity.

I wrote in this post that such a gap is likely: http://www.overcomingbias.com/2008/11/billion-dollar.html

-7timtyler

Have you ever had a job where your boss yelled at you if you weren't continually working? If not, consider getting a part-time job at a fast food restaurant where you work maybe one day a week for eight hours at a time. Fast food restaurant managers are quite skilled at motivating (and please forgive this word) "lazy" youths.

Think of willpower as a muscle. And think of the fast food manager as your personal trainer.

My guess is your problem arises from never having had to stay up all night doing homework that you found boring, pointless, tedious, and very difficult.

"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

If you believe this, you should favor slowing down AI research and speeding up work on enhancing human intelligence. The smarter we are, the more likely we are to figure out friendly AI before we have true AI.

Also, if you really believe this shouldn't you want the CIA to start assassinating AI programmers?

Economists do look at innovation. See my working paper "Teaching Innovation in Principles of Microeconomics Classes."

http://sophia.smith.edu/~jdmiller/teachinginnovation.pdf

The Real Ultimate Power: Reproduction.

Two compatible users of this ability can create new life forms which possess many of the traits of the two users. And many of these new life forms will themselves be able to reproduce, leading to a potential exponential spreading of the users' traits. Through reproduction users can obtain a kind of immortality.

An unusual case of this power allows one person with access to enormous computing power to form it into a person. This results in a very alien entity, which may have its own powers. Its resulting moral system can't be predicted, but it can be controlled to some extent. This power takes decades to activate, almost inevitably leads to failure, and has the potential to fail catastrophically, but it also can succeed amazingly, and is considered worth the risk.

Sorry, I misread the question. Ignore my last answer.

We should take into account the costs to a scientist of being wrong. Assume that the first scientist would pay a high price if the second ten data points didn't support his theory. In this case he would only propose the theory if he was confident it was correct. This confidence might come from his intuitive understanding of the theory and so wouldn't be captured by us if we just observed the 20 data points.

In contrast, if there will be no more data, the second scientist knows his theory will never be proved wrong.
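One hedged way to formalize this (my own toy model, not the commenter's math): treat the first scientist's willingness to risk the penalty as revealing private confidence, and the ten later data points as out-of-sample confirmation:

```python
# Toy Bayesian sketch; all numbers are hypothetical. The first scientist
# risked a penalty, so assume he only publishes at private confidence >= 0.9,
# and his theory then survived 10 out-of-sample data points. The second
# scientist fit all 20 points after the fact, risking nothing.

def posterior(private_confidence, n_out_of_sample,
              p_match_if_true=0.95, p_match_if_false=0.5):
    prior_odds = private_confidence / (1 - private_confidence)
    likelihood_ratio = (p_match_if_true / p_match_if_false) ** n_out_of_sample
    odds = prior_odds * likelihood_ratio
    return odds / (1 + odds)

print(posterior(0.9, 10))  # ~0.9998: advance prediction plus confirmation
print(posterior(0.5, 0))   # 0.5: a post-hoc fit earns no extra credence
```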

Carl Shulman,

Under either your (1) or (2), passable programmers contribute to advancement, so Eliezer's master's-in-chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field.

The best way to judge productivity differences is to look at salaries. Would Google be willing to pay Eliezer 50 times more than what it pays its average engineer? I know that managers are often paid more than 50 times what average employees are, but do pure engineers ever get 50 times more? I really don't know.

4taryneast
No, but not because they're not worth it; it's because of market forces. Engineers are often willing to work for only a few times the average salary, even if they are worth ten times more. A classic article on this phenomenon, and on the difference between "lots of average programmers" vs. "one or two awesome programmers," is Joel Spolsky's "Hitting the High Notes."

The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.

Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Goo... (read more)

0[anonymous]
I certainly hope Google does not Foom... Especially since their idea seems orthogonal to AGI.

"Maybe someday, the names of people who decide not to start nuclear wars will be as well known as the name of Britney Spears." should read:

"Maybe someday, the names of people who prevent wars from occurring will be as well known as the names of people who win wars."

0wrongish
I think it's safe to say that virtually all major wars are caused by forces too powerful for one single person to make a difference.

If (the probability that the LHC's design is flawed and, because of this flaw, the LHC will never work) is much, much greater than (the probability that the LHC would destroy us if it were to function properly), then regardless of how many times the LHC failed, it would never be the case that we should give any significant weight to the anthropic explanation.

Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high, then we should also ignore the anthropic explanation.
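A quick sketch of that comparison (with illustrative priors of my own choosing, not the commenter's): because both a design flaw and the anthropic story predict failure with probability one, repeated failures can never push the anthropic hypothesis above its prior ratio to the flaw hypothesis:

```python
# Bayesian update over why the LHC keeps failing; all priors are hypothetical.
def posteriors(n_failures, p_flaw=1e-2, p_anthropic=1e-12, p_mishap=0.1):
    p_fine = 1.0 - p_flaw - p_anthropic
    likelihoods = {
        "design flaw": 1.0,                     # a broken machine always fails
        "anthropic":   1.0,                     # survivors only ever observe failures
        "fine":        p_mishap ** n_failures,  # independent ordinary mishaps
    }
    unnormalized = {
        "design flaw": p_flaw * likelihoods["design flaw"],
        "anthropic":   p_anthropic * likelihoods["anthropic"],
        "fine":        p_fine * likelihoods["fine"],
    }
    total = sum(unnormalized.values())
    return {h: w / total for h, w in unnormalized.items()}

print(posteriors(10))
# The anthropic posterior never exceeds ~p_anthropic / p_flaw = 1e-10,
# no matter how many failures pile up.
```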

If we assume that Omega almost never makes a mistake, and we allow the chooser to use true randomization (perhaps by using quantum physics) in making his choice, then Omega must be making his decision in part by seeing into the future. In this case the chooser should obviously pick just B.
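For concreteness, a sketch of the expected-value arithmetic (my framing, assuming the usual $1,000,000 / $1,000 Newcomb payoffs):

```python
# Expected value of one-boxing vs. two-boxing when Omega predicts your
# choice with accuracy a; payoffs assume the standard Newcomb setup.
def ev_one_box(a, big=1_000_000):
    return a * big                 # box B is full iff Omega foresaw one-boxing

def ev_two_box(a, big=1_000_000, small=1_000):
    return small + (1 - a) * big   # two-boxers get B's million only if Omega erred

for a in (0.5, 0.9, 0.999):
    print(a, ev_one_box(a), ev_two_box(a))
# One-boxing wins for any accuracy above ~0.5005, and a truly random choice
# can only be predicted that well by seeing the future.
```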

"What takes real courage is braving the outright incomprehension of the people around you,"

I suspect that autistics are far more willing than neurotypicals to be true iconoclasts because many neurotypicals find autistics incomprehensible regardless of what the autistics believe. So the price of being an intellectual iconoclast is lower for autistics than for most other people.

Yes -- I was going to reply to "There are certain people who have no fear of departing the pack" with "there are some people who can't stay with the pack!".

These (not just the autistics, but also other neurodiverse folks) are the true "natural outsiders". As demonstrated by the OP's comments, their presence in a group (or contrariwise their exclusion) has nontrivial effects on how a group acts, and especially how it deals with challenges.

Carl,

Are you sure the dilution of Hellworlds would work if, given that you do something today that causes you to be damned, all future copies you make of yourself will spend eternity in Hell?

You neglected the non-zero probability that whoever is running this simulation is sufficiently amused by the story to grant Eliezer an equally large reward.

The new Soviet "man" that Stalin wanted to create was a half-ape, half-man super-warrior.

See http://news.scotsman.com/ViewArticle.aspx?articleid=2688011

I no longer trust the validity of this article.

I think that militarily President Bush under-reacted to 9/11. The U.S. faces a tremendous future threat of being attacked by weapons of mass destruction. Unfortunately, before 9/11 it was politically difficult for the President to preemptively use the military to reduce such threats. 9/11 gave President Bush more political freedom and he did use it to some extent. But I fear he has not done enough. I would have preferred, for example, that the U.S., Russia, China, UK, Israel and perhaps France announced that in one year they will declare war on any ot... (read more)

Perhaps firms should conduct "blind" interviews of potential employees in which the potential employee is interviewed while behind a screen.

TGGP,

I have not read the Myth of the Rule of Law.

In the first year of law school, students learn that for every clear legal rule there always exist situations for which either the rule doesn't apply or for which the rule gives a bad outcome. This is why we always need to give judges some discretion when administering the law.

Eliezer, you wrote:

"Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?"

Won't our descendants who do have genes or code that causes them to maximize their genetic fitness come to dominate the billions of galaxies? How can there be any other stable long-term equilibrium in a universe in which many lifeforms have the ability to choose their own utility functions?

5PhilGoetz
Genetic fitness refers to reproduction of individuals. The future will not have a firm concept of individuals. What is relevant is control of resources; this is independent of reproduction. Furthermore, what we think of today as individuality will correspond to information in the future. Reproduction will correspond to high mutual information. And high mutual information in your algorithms leads to inefficient use of resources. Therefore, evolution and competition will, at least in this way, go against the future correlate of "genetic fitness".

Eliezer,

Your posts on evolution are fantastic. I hope there will be many more of them.

Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one,
(b) You face an additional probability 1/(3^^^3) of being tortured for 50 years,
(c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).
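Formalizing that chain (my notation, not the commenter's; let T < 0 be the disutility of 50 years of torture and ε the extra torture risk created by one additional blink):

```latex
% (b) beats (c) because the blink's added torture risk exceeds 1/(3^^^3):
\[
EU(b) = \frac{T}{3\uparrow\uparrow\uparrow 3}, \qquad
EU(c) = \varepsilon T, \qquad
\varepsilon > \frac{1}{3\uparrow\uparrow\uparrow 3}
\;\Longrightarrow\; EU(b) > EU(c).
\]
% Combined with the common preference of (c) over (a), transitivity
% gives (b) over (a).
```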

8timujin
You know, that actually persuaded me to override my intuitions and pick torture over dust specks.

This is a very general problem. If the government decides to give away $X to someone, people are willing to spend up to $X to get it. If people intensively compete for the money then you would expect people to collectively spend $X trying to get the $X.
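A standard way to see the full-dissipation result (a textbook Tullock-contest sketch, not from the comment itself): with n symmetric risk-neutral contestants, equilibrium spending per player is X(n-1)/n², so collective spending approaches X as competition intensifies.

```python
# Symmetric Tullock rent-seeking contest with prize X and n risk-neutral
# players: equilibrium effort per player is X*(n-1)/n**2 (textbook result).
def total_spending(prize, n):
    individual = prize * (n - 1) / n**2
    return n * individual          # = prize * (n - 1) / n

for n in (2, 5, 100):
    print(n, total_spending(1_000_000, n))
# 2 -> 500000.0, 5 -> 800000.0, 100 -> 990000.0: as the number of
# competitors grows, collective spending approaches the full $X.
```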

The main purpose of medical tort law is to enrich trial lawyers. Doctors and trial lawyers play legal games in which the doctors try to minimize their liability with disclosures and the trial lawyers argue that the disclosures don't offer legal protection for the doctors. There is little political incentive for anyone to care about the value of disclosure to patients. This is especially true since patients who care about such information will do their own research.

I find certain types of video games far more addictive than the average human does. This, however, reduces my demand for these video games. I have never bought or played World of Warcraft because I strongly suspect that I would become addicted to the game. If enough potential addicts are like me, then games that become too addictive will suffer in the marketplace.