In response to Chaotic Inversion
Comment author: James_D._Miller 29 November 2008 05:19:09PM 0 points

Have you ever had a job where your boss yelled at you if you weren't continually working? If not, consider getting a part-time job at a fast-food restaurant where you work maybe one day a week for eight hours at a time. Fast-food restaurant managers are quite skilled at motivating (and please forgive this word) "lazy" youths.

Think of willpower as a muscle. And think of the fast food manager as your personal trainer.

My guess is that your problem arises from never having had to stay up all night doing homework you found boring, pointless, tedious, and very difficult.

Comment author: James_D._Miller 28 November 2008 02:40:13AM 0 points

"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."

If you believe this, you should favor slowing down AI research and speeding up work on enhancing human intelligence. The smarter we are, the more likely we are to figure out Friendly AI before we have true AI.

Also, if you really believe this, shouldn't you want the CIA to start assassinating AI programmers?

Comment author: James_D._Miller 24 November 2008 03:23:11PM 1 point

Economists do look at innovation. See my working paper "Teaching Innovation in Principles of Microeconomics Classes."

http://sophia.smith.edu/~jdmiller/teachinginnovation.pdf

In response to Mundane Magic
Comment author: James_D._Miller 31 October 2008 05:54:22PM 4 points

The Real Ultimate Power: Reproduction.

Two compatible users of this ability can create new life forms that possess many of the traits of the two users. Many of these new life forms will themselves be able to reproduce, leading to a potentially exponential spread of the users' traits. Through reproduction, users can obtain a kind of immortality.

Comment author: James_D._Miller 29 September 2008 09:46:51PM 0 points

Sorry, I misread the question. Ignore my last answer.

Comment author: James_D._Miller 29 September 2008 09:24:45PM 0 points

We should take into account the costs to a scientist of being wrong. Assume that the first scientist would pay a high price if the second ten data points didn't support his theory. In that case, he would propose the theory only if he were confident it was correct. This confidence might come from his intuitive understanding of the theory, and so wouldn't be captured by us if we just observed the 20 data points.

In contrast, if there will be no more data, the second scientist knows his theory can never be proved wrong.
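To make the incentive point concrete, here is a minimal sketch (a toy model of my own; the function name and payoff numbers are illustrative assumptions, not from the original discussion). A scientist proposes a theory only when his expected payoff is positive, so a high cost of being wrong implies high private confidence:

    def min_confidence_to_propose(benefit, cost):
        # Propose only if q * benefit - (1 - q) * cost > 0,
        # which rearranges to q > cost / (benefit + cost).
        return cost / (benefit + cost)

    # Hypothetical numbers: if being publicly wrong costs ten times what
    # being right pays, proposing at all signals over 90% private confidence.
    print(min_confidence_to_propose(benefit=1.0, cost=10.0))  # ~0.909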

Comment author: James_D._Miller 28 September 2008 06:18:08PM -3 points

Carl Shulman,

Under either your (1) or your (2), passable programmers contribute to advancement, so Eliezer's master's-in-chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field.

The best way to judge productivity differences is to look at salaries. Would Google be willing to pay Eliezer 50 times what it pays its average engineer? I know that managers are often paid more than 50 times what average employees earn, but do pure engineers ever get 50 times more? I really don't know.

Comment author: James_D._Miller 28 September 2008 03:50:59PM 3 points

The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.

Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Google search, for example, might get better and better at understanding human requests and slowly acquire the ability to pass a Turing test. And Google doesn't need a "precise theory to permit stable self-improvement" to continually improve its search engine.

In response to 9/26 is Petrov Day
Comment author: James_D._Miller 26 September 2008 07:31:27PM 3 points

"Maybe someday, the names of people who decide not to start nuclear wars will be as well known as the name of Britney Spears." should read:

"Maybe someday, the names of people who prevent wars from occurring will be as well known as the names of people who win wars."

Comment author: James_D._Miller 21 September 2008 03:18:51PM 2 points

If the probability that the LHC's design is flawed in a way that prevents it from ever working is much, much greater than the probability that the LHC would destroy us if it functioned properly, then no matter how many times the LHC failed, we should never give significant weight to the anthropic explanation.

Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high, we should also ignore the anthropic explanation.
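To see why, here is a minimal Bayesian sketch (a toy model of my own; the priors and the per-run failure probability are illustrative assumptions, not numbers from the original discussion). A design flaw predicts seeing nothing but failures just as well as the world-destroying hypothesis does, so repeated failures never lift the anthropic explanation above its tiny prior share:

    def posterior_danger(n_failures, p_flaw=1e-3, p_danger=1e-12, f=0.1):
        # Three hypotheses for n observed LHC failures (illustrative priors):
        #   flaw:   a design flaw means the LHC can never run
        #   danger: the LHC would destroy us if it ran; surviving observers
        #           therefore always see failures (the anthropic story)
        #   ok:     the LHC is fine; each failure is an ordinary accident
        #           with per-run probability f
        p_ok = 1.0 - p_flaw - p_danger
        like_flaw, like_danger, like_ok = 1.0, 1.0, f ** n_failures
        evidence = p_flaw * like_flaw + p_danger * like_danger + p_ok * like_ok
        return p_danger * like_danger / evidence

    for n in (1, 5, 20):
        # The posterior plateaus near p_danger / p_flaw = 1e-9:
        # never significant, no matter how many failures we observe.
        print(n, posterior_danger(n))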
