Comments

Matthew C -

You are advocating nonreductionism and psi at the same time.

Supposing that you are right requires us to suppose that there is both a powerful argument against reductionism, and a powerful argument in favor of psi.

Supposing that you are a crank requires only one argument, and one with a much higher prior.

In other words, if you were advocating one outrageous theory, someone might listen. The fact that you are advocating two simultaneously makes dismissing all of your claims, without reading the book you recommend, the logical response. We thus don't have to read it to have a rational basis to dismiss it.

Religion is the classic example of a delusion that might be good for you. There is some evidence that being religious increases human happiness or social cohesion. Its universality in human culture suggests that it has adaptive value.

See last week's Science, Oct. 3, 2008, pp. 58-62: "The origin and evolution of religious prosociality". One chart shows that, in any particular year, secular communes are four times as likely to dissolve as religious communes.

I guess I am questioning whether making a great effort to shake yourself free of a bias is a good or a bad thing, on average. Making a great effort doesn't necessarily get you out of biased thinking. It may just be like speeding up when you suspect you're going in the wrong direction.

If someone else chose a belief of yours for you to investigate, or if it were chosen for you at random, then this effort might be a good thing. However, I have observed many cases where someone chose a belief of theirs to investigate thoroughly, precisely because it was an untenable belief that they had a strong emotional attachment to, or a strong inclination toward, and wished to justify. If you read a lot of religious conversion stories, as I have, you see this pattern frequently. A non-religious person has some emotional discontent, and so spends years studying religions until they are finally able to overcome their cognitive dissonance and make themselves believe in one of them.

After enough time, the very fact that you have spent time investigating a premise without rejecting it becomes, for most people, their main evidence for it.

I don't think that, from the inside, you can know for certain whether you are trying to test, or trying to justify, a premise.

I think Einstein is a good example of both bending with the wind (when he came up with relativity)

I'm not sure what you mean by bending with the wind. I thought it was the evidence that provided the air pressure, but there was no evidence to support Einstein's theory over the theories of the day. He took an idea and ran with it to its logical conclusions. Then the evidence came; he was running ahead of the evidential wind.

You do know roughly what I mean, which is that strenuous effort is only part of the solution; not clinging to ideas is the other part. Focusing on the strenuous-effort part can lead people to make strenuous efforts to justify bad ideas. Who makes the most strenuous effort on the question of evolution? Creationists.

Einstein had evidence; it just wasn't experimental evidence. The discovery that your beliefs contain a logical inconsistency is a type of evidence.

@Phil_Goetz: Have the successes relied on a meta-approach, such as saying, "If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don't, you may doom us all"?

That was basically what I suggested in the previous topic, but at least one participant denied that Eliezer_Yudkowsky did that, saying it's a cheap trick, while some non-participants said it meets the spirit and letter of the rules.


It would be nice if Eliezer himself would say whether he used meta-arguments. "Yes" or "no" would suffice. Eliezer?

Optimization is done best by an architecture that performs trials, inspects the results, makes modifications and iterates. No sentient agents typically need to be harmed during such a process - nor do you need multiple intelligent agents to perform it.
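Concretely, the architecture being described is just a generate-evaluate-modify loop. Here is a minimal sketch (the function name, the toy objective, and the step size are my own inventions for illustration, not anything from the quote):

```python
import random

def iterate_optimize(objective, x0, steps=1000, step_size=0.1):
    """Trial-inspect-modify loop: propose a tweak, keep it if it helps, repeat."""
    x, best = x0, objective(x0)
    for _ in range(steps):
        trial = x + random.gauss(0, step_size)   # perform a trial
        score = objective(trial)                 # inspect the result
        if score > best:                         # keep modifications that help
            x, best = trial, score
    return x, best

# toy objective: a single smooth peak at x = 3
print(iterate_optimize(lambda x: -(x - 3) ** 2, x0=0.0))
```

Nothing in this loop is an agent, which is the quoted point; the question below is what happens when each "trial" has to be as complex as a corporation or a nation.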

Some of your problems will be so complicated that each trial will be undertaken by an organization as complex as a corporation or an entire nation.

If these nations are non-intelligent and non-conscious, or even just unemotional, and incorporate no such intelligences within themselves, then you have a dead world devoid of consciousness.

If they do incorporate such agents, then for them not to be "harmed", they must not feel bad when their trials fail. What would it mean to build agents that weren't disappointed when they failed to find a good optimum? It would mean stripping out emotions, and probably consciousness, as intermediaries between goals and actions. See "dead world" above.

Besides being a great horror that is the one thing we must avoid above all else, building a superintelligence devoid of emotions ignores the purpose of emotions.

First, emotions are heuristics. When the search space is too spiky for you to know what to do, you reach into your gut and pull out the good/bad result of a blended multilevel model of similar situations.
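One way to read "a blended multilevel model of similar situations" is as a similarity-weighted average over the remembered outcomes of past situations. A toy sketch, with the names, feature encoding, and similarity measure all invented for illustration:

```python
import math

def gut_feeling(situation, memories, temperature=1.0):
    """Blend the good/bad outcomes of remembered situations, weighted by
    how similar each memory is to the current situation (toy model)."""
    def similarity(a, b):
        # situations as feature vectors; closer vectors count for more
        return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / temperature)

    weights = [similarity(situation, features) for features, _ in memories]
    total = sum(weights) or 1.0
    return sum(w * valence for w, (_, valence) in zip(weights, memories)) / total

# each memory: (feature vector, how well it turned out, from -1 bad to +1 good)
memories = [((1.0, 0.0), +0.8), ((0.0, 1.0), -0.6), ((0.9, 0.1), +0.5)]
print(gut_feeling((0.95, 0.05), memories))   # pulled toward the good memories
```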

Second, emotions let an organism be autonomous. The fact that they have drives that make them take care of their own interests makes it easier to build a complicated network of these agents that doesn't need totalitarian, top-down, Stalinist control. See economic theory.

Third, emotions introduce necessary biases into otherwise overly-rational agents. Suppose you're doing a Monte Carlo simulation with 1000 random starts. One of these starts is doing really well. Rationally, the other random starts should all copy it, because they want to do well. But you don't want that to happen. So it's better if they're emotionally attached to their particular starting parameters.
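Here is a toy version of that trade-off, with a made-up spiky objective (every name and number below is invented for illustration). When every start imitates the current leader, the population collapses into one basin; when each start stays attached to its own region, many basins keep being explored in parallel:

```python
import math, random

def spiky(x):
    # toy rugged landscape: a broad peak near 0 plus fine-grained ripples
    return -abs(x) / 10 + math.sin(7 * x)

def search(copy_the_best, n_starts=1000, steps=200):
    xs = [random.uniform(-10, 10) for _ in range(n_starts)]
    for _ in range(steps):
        # each start does a little local hill-climbing from where it is
        trials = [x + random.gauss(0, 0.05) for x in xs]
        xs = [t if spiky(t) > spiky(x) else x for x, t in zip(xs, trials)]
        if copy_the_best:
            xs = [max(xs, key=spiky)] * n_starts   # everyone imitates the leader
    basins_still_explored = len({round(x, 1) for x in xs})
    return round(max(spiky(x) for x in xs), 3), basins_still_explored

print(search(copy_the_best=True))    # exploration collapses to a single basin
print(search(copy_the_best=False))   # many basins still being probed in parallel
```

The best score found isn't necessarily worse in the first case, but the exploration is gone; nothing is left probing the other basins.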

It would be interesting if the free market didn't actually reach an optimal equilibrium with purely rational agents, because such agents would copy the more successful agents so faithfully that risks would not be taken. There is some evidence of this in the monotony of the movies and videogames that large companies produce.

The evidence for the advantages of competition is best interpreted as a lack of our ability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that this so obviously involves. Companies developing competing products to fill a niche in ignorance of each other's efforts is often just the stupid waste of time that it seems. In the future, our management skills will improve.

This is the argument for communism. Why should we resurrect it? What conditions will change so that this now-unworkable approach will work in the future? I don't think there are any such conditions that don't require stripping your superintelligence of most of the possible niches where smaller consciousnesses could reside inside it.

Tim - I'm asking whether competition, and its concomitant unpleasantness (losing, conflict, and the undermining of CEV's viability), can be eliminated from the world. Under a wide variety of assumptions, we can characterize all activities, or at least all mental activities, as computational. We also hope that these computations will be done in a way such that consciousness is still present.

My argument is that optimization is done best by an architecture that uses competition. The computations engaged in this competition are the major possible loci for consciousness. You can't escape this by saying that you will simulate the competition, because this simulation is itself a computation. Either it is also part of a possible locus of consciousness, or you have eliminated most of the possible loci of consciousness, and produced an active but largely "dead" (unconscious) universe.

In theory, competition looks very bad. Fighting with each other can't possibly be efficient. Almost always, battles should be done under simulation - so the winner can be determined early - without the damage and waste of a real fight. There's a huge drive towards cooperation - as explained by Robert Wright.

We're talking about competition between optimization processes. What would it mean to be a simulation of a computation? I don't think there is any such distinction. Subjectivity belongs to these processes; and they are the things which must compete. If the winner could be determined by a simpler computation, you would be running that computation instead; and the hypothetical consciousness that we were talking about would be that computation instead.

Tim -

What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. It is a machine analogy for life and intelligence. A machine is a collection of parts, all working together under one common control to one common end. Living systems, by contrast, and particularly large evolving systems such as ecosystems or economies, work best, in our experience, if they do not have centralized control, but have a variety of competing agents, and some randomness.

There are a variety of proposals floating about for ways to get the benefits of competition without actually having competition. The problem with competition is that it opens the doors to many moral problems. Eliezer may believe that correct Bayesian reasoners won’t have these problems, because they will agree about everything. This ignores the fact that it is not computationally efficient, physically possible, or even semantically possible (the statement is incoherent without a definition of “agent”) for all agents to have all available information. It also ignores the fact that randomness, and using a multitude of random starts (in competition with each other), are very useful in exploring search spaces.

I don't think we can eliminate competition; and I don't think we should, because most of our positive emotions were selected for by evolution only because we were in competition. Removing competition would unground our emotional preferences (e.g., loving our mates and children, enjoying accomplishment), perhaps making their continued presence in our minds evolutionarily unstable, or simply superfluous (and thus necessarily to be disposed of, because the moral imperative I am most confident a Singleton would follow is to use energy efficiently).

The concept of a singleton is misleading, because it makes people focus on the subjectivity (or consciousness; I use these terms as synonyms) of the top level in the hierarchy. Thus, just using the word Singleton causes people to gloss over the most important moral questions to ask about a large hierarchical system. For starters, where are the loci of consciousness in the system? Saying "just at the top" is probably wrong.

Imagining a future that isn't ethically repugnant requires some preliminary answers to questions about consciousness, or whatever concept we use to determine which agents need to be included in our moral calculations. One line of thought is to impose information-theoretic requirements on consciousness, such as that a conscious entity has exactly one possible symbol grounding connecting its thoughts to the outside world. You can derive lower bounds for consciousness from this supposition. Another would be to posit that the degree of consciousness is proportional to the degree of freedom, and state this with an entropy measurement relating a process's inputs to its possible outputs.
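For concreteness, one candidate for that entropy measurement is the conditional entropy of a process's outputs given its inputs: zero bits for a process whose behavior is fully dictated from outside, more bits the more internal slack it has. A sketch, with the data and names invented for illustration:

```python
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    """H(output | input) in bits, estimated from observed (input, output) pairs.
    A crude stand-in for 'degrees of freedom': how unconstrained the outputs
    are once the inputs are known."""
    n = len(pairs)
    joint = Counter(pairs)
    inputs = Counter(i for i, _ in pairs)
    return -sum((c / n) * log2((c / n) / (inputs[i] / n)) for (i, _), c in joint.items())

# a thermostat-like process, output fully determined by input: 0.0 bits
print(conditional_entropy([("hot", "off"), ("cold", "on")] * 50))
# a process with internal slack, same input but varied outputs: 1.0 bit
print(conditional_entropy([("hot", "off"), ("hot", "on"), ("cold", "off"), ("cold", "on")] * 25))
```

By that measure a thermostat gets essentially no moral weight, and a process whose outputs are underdetermined by its inputs gets more; whether that tracks consciousness at all is exactly the open question.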

Having constraints such as these would allow us to begin to identify the agents in a large, interconnected system; and to evaluate our proposals.

I'd be interested in whether Eliezer thinks CEV requires a singleton. It seems to me that it does. I am more in favor of an ecosystem or balance-of-power approach that uses competition, than a totalitarian machine that excludes it.

Re: The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely [...]

Er... ;-) Many futurists seem to have it in for death. Bostrom, Kurzweil, and Drexler spring to mind. To me, the main problem seems to be uncopyable minds. If we could change our bodies like a suit of clothes, the associated problems would mostly go away. We will have copyable minds once they are digital.


"Death" as we know it is a concept that makes sense only because we have clearly-defined locuses of subjectivity.

If we imagine a world where

- you can share (or sell) your memories with other people, and borrow (or rent) their memories

- most of "your" memories are of things that happened to other people

- most of the time, when someone is remembering something from your past, it isn't you

- you have sold some of the things that "you" experienced to other people, so that legally they are now THEIR experiences and you may be required to pay a fee to access them, or to erase them from your mind

- you make, destroy, augment, or trim copies of yourself on a daily basis; or loan out subcomponents of yourself to other people while borrowing some of their components, according to the problem at hand, possibly by some democratic (or economic) arbitration among "your" copies

- and you have sold shares in yourself to other processes, giving them the right to have a say in these arbitrations about what to do with yourself

- "you" subcontract some of your processes - say, your computation of emotional responses - out to a company in India that specializes in such things

- which is advantageous from a lag perspective, because most of the bandwidth-intensive computation for your consciousness usually ends up being distributed to a server farm in Singapore anyway

- and some of these processes that you contract out are actually more computationally intensive than the parts of "you" that you own/control (you've pooled your resources with many other people to jointly purchase a really good emotional response system)

- and large parts of "you" are being rented from someone else; and you have a "job" which means that your employer, for a time, owns your thoughts - not indirectly, like today, but is actually given write permission into your brain and control of execution flow while you're on the clock

- but you don't have just one employer; you rent out parts of you from second to second, as determined by your eBay agent

- and some parts of you consider themselves conscious, and are renting out THEIR parts, possibly without notifying you

- or perhaps some process higher than you in the hierarchy is also conscious, and you mainly work for it, so that it considers you just a part of itself, and can make alterations to your mind without your approval (it's part of the standard employment agreement)

- and there are actually circular dependencies in the graph of who works for whom, so that you may be performing a computation that is, unknown to you, in the service of the company in India calculating your emotional responses

- and these circles are not simple circles; they branch and reconverge, so that the computation you are doing for the company in India will be used to help compute the emotions of trillions of "people" around the world


In such a world, how would anybody know if "you" had died?
