Comment author: Phil_Goetz5 14 October 2008 12:32:21AM 1 point [-]

I'm unclear whether you're saying that we perceive those in power to be corrupt, or that they actually are corrupt. The beginning focuses on the former; the second half, on the latter.

The idea that we have evolved to perceive those in power over us as corrupt faces the objection that "power corrupts" is usually asserted after surveying all of known history, not just the present.

Comment author: Phil_Goetz5 09 October 2008 08:22:46PM 0 points [-]

Has Eliezer explained somewhere (hopefully on a web page) why he doesn't want to post a transcript of a successful AI-box experiment?

Have the successes relied on a meta-approach, such as saying, "If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don't, you may doom us all"?

Comment author: Phil_Goetz5 07 October 2008 04:36:54PM 0 points [-]

David - Yes, a human-level AI could be very useful. Politics and economics alone would benefit greatly from the simulations you could run.

(Of course, all of us but manual laborers would soon be out of a job.)

Comment author: Phil_Goetz5 06 October 2008 11:02:40PM 0 points [-]

Could you elaborate on the psychology of mythical creatures? That some creatures are "spiritual" sounds to me like a plausible distinction. I count vampires, but not unicorns. To me, a unicorn is just another chimera. Why do you think they're more special than mermaids? Magic powers? How much of a consensus do you think exists?

Sorry I missed this!

I think it may have to do with how heavy a load of symbolism the creature carries. Unicorns were used a lot to symbolize purity, and acquired magical and non-magical properties appropriate to that symbolism. Dragons, vampires, and werewolves are also used symbolically. Mermaids, basilisks, not so much. Centaurs have lost their symbolism (a Greek Apollo/Dionysus dual-nature-of-man thing, I think), and CS Lewis did much to destroy the symbolism associated with fauns by making them nice chaps who like tea and dancing.

Now that I think about it, Lewis and Tolkien both wrote fantasy that was very literal-minded, and replaced symbolism with allegory.

Comment author: Phil_Goetz5 06 October 2008 09:31:06PM 9 points [-]

Thousands of years ago, philosophers began working on "impossible" problems. Science began when some of them gave up on the "impossible" problems and decided to work on problems they had some chance of solving. It turned out that this approach eventually led to the solution of most of the "impossible" problems.

Comment author: Phil_Goetz5 06 October 2008 09:06:11PM 0 points [-]

Eliezer,

If you tried to approximate The Rules because they were too computationally expensive to use directly, then, no matter how necessary that compromise might be, you would still end up doing less than optimal.

You say that like it's a bad thing. Your statement implies that something that is "necessary" is not necessary.

Just this morning I gave a presentation on the use of Bayesian methods for automatically predicting the functions of newly sequenced genes. The authors of the method I presented used the approximation

P(A, B, C) ~ P(A) x P(B|A) x P(C|A)

because it would have been difficult to compute P(C | B, A), and they didn't think B and C were correlated given A; in effect, they treated C as conditionally independent of B. Your statement condemns them as "less than optimal". But a sub-optimal answer you can compute is better than an optimal answer that you can't.
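To make that concrete, here is a toy numerical version of the approximation (the probability values are invented for illustration; the real model estimates them from gene data):

```python
# Toy version of the approximation P(A, B, C) ~ P(A) * P(B|A) * P(C|A).
# The exact chain rule needs P(C | B, A); the approximation replaces it
# with P(C | A), i.e. it treats C as conditionally independent of B
# given A. All probabilities below are invented for illustration.

p_a = 0.3            # P(A)
p_b_given_a = 0.6    # P(B | A)
p_c_given_ba = 0.5   # P(C | B, A): the term that is hard to estimate
p_c_given_a = 0.55   # P(C | A): the cheaper substitute

exact = p_a * p_b_given_a * p_c_given_ba    # chain rule, no approximation
approx = p_a * p_b_given_a * p_c_given_a    # the authors' approximation

print(f"exact  = {exact:.4f}")   # 0.0900
print(f"approx = {approx:.4f}")  # 0.0990: close when the assumption holds
```

The two results differ only to the extent that knowing B changes the probability of C once A is known, which is exactly the dependence the authors judged negligible.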

Do only that which you must do, and which you cannot do in any other way.

I am willing to entertain the notion that this is not utter foolishness, if you can provide us with some examples - say, ten or twenty - of scientists who had success using this approach. I would be surprised if the ratio of important non-mathematical discoveries made by following this maxim, to those made by violating it, was greater than 0.05. Even mathematicians often have many possible ways of approaching their problems.

David,

Building an AGI and setting it at "human level" would be of limited value. Setting it at "human level" plus epsilon could be dangerous. Humans on their own are intelligent enough to develop dangerous technologies with existential risk. (Which prompts the question: Are we safer with AI, or without AI?)

Comment author: Phil_Goetz5 30 September 2008 08:53:23PM -2 points [-]

If the probability of existential risk from AI (or grey goo, or some other exotic source) were low enough (neglecting the creation of hell-worlds with negative utility), then you could neglect it in favor of those other risks.

Asteroids don't lead to a scenario in which a paper-clipping AI takes over the entire light-cone and turns it into paper clips, preventing any interesting life from ever arising anywhere, so they aren't quite comparable.

Still, your point only makes me wonder how we can justify not devoting 10% of GDP to deflecting asteroids. You say that we don't need to put all resources into preventing unfriendly AI, because we have other things to prevent. But why do anything productive? How do you compare the utility of preventing possible annihilation to the utility of improvements in life? Why put any effort into any of the mundane things that we put almost all of our efforts into? (Particularly if happiness is based on the derivative of, rather than absolute, quality of life. You can't really get happier, on average; but action can lead to destruction. Happiness is problematic as a value for transhumans.)

This sounds like a straw man, but it might not be. We might just not have reached (or acclimatized ourselves to) the complexity level at which the odds of self-annihilation should begin to dominate our actions. I suspect that the probability of self-annihilation increases with complexity. Rather like how the probability of an individual going mad may increase with their intelligence. (I don't think that frogs go insane as easily as humans do, though it would be hard to be sure.) Depending how this scales, it could mean that life is inherently doomed. But that would result in a universe where we were unlikely to encounter other intelligent life... uh...

It doesn't even need to scale that badly; if extinction events have a power law (they do), there are parameters for which a system can survive indefinitely, and very similar parameters for which it has a finite expected lifespan. Would be nice to know where we stand. The creation of AI is just one more point on this road of increasing complexity, which may lead inevitably to instability and destruction.
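A toy calculation (my own, with invented constants) shows how sharp that boundary can be: suppose the chance of an annihilating event in period t falls off as c/t^beta. Surviving forever then has positive probability exactly when the sum of those chances converges, i.e. when beta > 1.

```python
# Toy model (invented numbers, not fit to any data): the chance of an
# annihilating event in period t is p_t = c / t**beta. The survival
# probability prod(1 - p_t) stays bounded away from zero iff sum(p_t)
# converges, i.e. iff beta > 1; note how close the two regimes are.

def survival_probability(c, beta, periods):
    """Probability of surviving every period from 1 to `periods`."""
    prob = 1.0
    for t in range(1, periods + 1):
        prob *= 1.0 - c / t**beta
    return prob

for beta in (1.0, 1.2):
    for periods in (10**3, 10**6):
        p = survival_probability(0.1, beta, periods)
        print(f"beta={beta}, first {periods} periods: survival = {p:.3f}")

# With beta = 1.2 the probability settles near a positive limit; with
# beta = 1.0 it keeps drifting toward zero, so extinction is certain
# in the long run.
```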

I suppose the only answer is to say that destruction is acceptable (and possibly inevitable); total area under the utility curve is what counts. Wanting an interesting world may be like deciding to smoke and drink and die young - and it may be the right decision. The AIs of the future may decide that dooming all life in the long run is worth it.

In short, the answer to "Eliezer's wager" may be that we have an irrational bias against destroying the universe.

But then, deciding what are acceptable risk levels in the next century depends on knowing more about cosmology, the end of the universe, and the total amount of computation that the universe is capable of.

I think that solving aging would change people's utility calculations in a way that would discount the future less, bringing them more in line with the "correct" utility computations.

Re. AI hell-worlds: SIAI should put "I Have No Mouth, and I Must Scream" by Harlan Ellison on its list of required reading.

Comment author: Phil_Goetz5 30 September 2008 05:09:22PM 0 points [-]

We are entering into a Pascal's Wager situation.

"Pascal's wager" is the argument that you should be Christian, because if you compute the expected value of being a Christian vs. of being an atheist, then for any finite positive probability that Christianity is correct, that finite probability multiplied by (infinite +utility minus infinite -utility) outweights the other side of the equation.

The similar Yudkowsky wager is the argument that you should be an FAIer, because the negative utility of destroying the universe outweighs the other side of the equation, whatever the probabilities are. It is not exactly analogous, because the negative utility isn't actually infinite, unless you believe that the universe (if it isn't destroyed) can support infinite computation.
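Spelling out the arithmetic: if p is the probability that Christianity is true, then

E[believe] = p x (+infinity) + (1 - p) x (finite)
E[disbelieve] = p x (-infinity) + (1 - p) x (finite)

so the difference is infinite for any p > 0, however small, and the probabilities drop out of the decision. With a finite (if astronomical) disutility U, the corresponding term is p x U, which other terms can in principle outweigh, so the probabilities do matter.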

I feel that Pascal's wager is not a valid argument, but have a hard time articulating a response.

Comment author: Phil_Goetz5 29 September 2008 04:59:46PM 0 points [-]

I've seen too many cases of overfitting data to trust the second theory. Trust the validated one more.

The question would be more interesting if we said that the original theory accounted for only some of the new data.

If you know a lot about the space of possible theories and "possible" experimental outcomes, you could try to compute which theory to trust, using (surprise) Bayes' law. If it were the case that the first theory applied to only 9 of the 10 new cases, you might find parameters such that you should trust the new theory more.

In the given case, I don't think there is any way to deduce that you should trust the second theory more, unless you have some a priori measure of a theory's plausibility, such as its complexity.
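As a sketch of what that computation might look like (every number below is an assumption invented to show the mechanics, including the MDL-style complexity prior):

```python
# Sketch of comparing two theories with Bayes' law. All likelihoods,
# priors, and description lengths below are invented for illustration;
# the hard part in practice is justifying numbers like these.

# Suppose theory T1 (the validated one) predicted 9 of the 10 new results
# and theory T2 (fit after the fact) predicts all 10, and each theory
# assigns probability 0.9 to a result it gets right and 0.1 otherwise:
likelihood_t1 = 0.9**9 * 0.1   # T1: 9 hits, 1 miss
likelihood_t2 = 0.9**10        # T2: 10 hits

# A complexity-based prior: weight each theory by 2**(-description_length),
# an MDL-style choice. Assume T2 needs 20 extra bits to state.
bits_t1, bits_t2 = 100, 120
prior_odds = 2.0**(bits_t2 - bits_t1)   # prior odds T1:T2 = 2**20

posterior_odds = (likelihood_t1 / likelihood_t2) * prior_odds
print(f"posterior odds T1:T2 = {posterior_odds:.3g}")
# The simpler theory wins easily here; shrink the complexity gap or add
# more data that only T2 explains, and the balance tips the other way.
```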

In response to Competent Elites
Comment author: Phil_Goetz5 28 September 2008 01:21:31AM 3 points [-]

It's true that we don't like to think people better-off than us might be better than us. But two caveats:

1. Just because the cream is concentrated at the top doesn't mean that most of the cream (or the best cream) is at the top.

2. Causation probably runs both ways on this one. There is a lot of evidence that richer and more-respected people are happier and healthier. Various explanations have been offered, including that health causes career success. That explanation turned out to have serious problems, although I can't now remember what they are, other than that I heard them summarized in a talk at a SAGE (anti-aging) conference circa 2004. I can no longer find any information on it via Google, because a different organization called SAGE, which holds conferences on LGBT aging, now dominates the search results.

I think that, if we could measure the degree to which a culture is able to promote based on merit, it would turn out to be a powerful economic indicator - particularly for knowledge-based economies.
