Religion is the classic example of a delusion that might be good for you. There is some evidence that being religious increases human happiness or social cohesion. Its universality in human culture suggests that it has adaptive value.
See last week's Science, Oct. 3 2008, pp. 58-62: "The origin and evolution of religious prosociality". One chart shows that, in any particular year, secular communes are four times as likely to dissolve as religious communes.
I guess I am questioning whether making a great effort to shake yourself free of a bias is a good or a bad thing, on average. Making a great effort doesn't necessarily get you out of biased thinking. It may just be like speeding up when you suspect you're going in the wrong direction.
If someone else chose a belief of yours for you to investigate, or if it were chosen for you at random, then this effort might be a good thing. However, I have observed many cases where someone chose a belief of theirs to investigate thoroughly, precisely because it was an ...
"I think Einstein is a good example of both bending with the wind (when he came up with relativity)"

"I'm not sure what you mean by bending with the wind. I thought it was the evidence that provided the air pressure, but there was no evidence to support Einstein's theory above the theories of the day. He took an idea and ran with it to its logical conclusions. Then the evidence came; he was running ahead of the evidential wind."

You do know roughly what I mean, which is that strenuous effort is only part of the solution; not clinging to ideas is the ot...
@Phil_Goetz: Have the successes relied on a meta-approach, such as saying, "If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don't, you may doom us all"?
That was basically what I suggested in the previous topic, but at least one participant denied that Eliezer_Yudkowsky did that, saying it's a cheap trick, while some non-participants said it meets the spirit and letter of the rules.
Optimization is done best by an architecture that performs trials, inspects the results, makes modifications and iterates. No sentient agents typically need to be harmed during such a process - nor do you need multiple intelligent agents to perform it.
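A minimal sketch of that trial-inspect-modify loop, in Python - the toy objective and all names here are my own illustration, not anything from the original discussion:

```python
import random

def objective(x):
    # Toy objective to maximize; stands in for whatever is being optimized.
    return -(x - 3.0) ** 2

def optimize(steps=1000, step_size=0.1):
    best = random.uniform(-10, 10)            # initial trial
    best_score = objective(best)
    for _ in range(steps):
        trial = best + random.gauss(0, step_size)  # make a modification
        score = objective(trial)                   # inspect the result
        if score > best_score:                     # keep improvements, iterate
            best, best_score = trial, score
    return best

print(optimize())  # converges near 3.0
```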
Some of your problems will be so complicated that each trial will be undertaken by an organization as complex as a corporation or an entire nation.
If these nations are non-intelligent and non-conscious, or even just unemotional, and incorporate no such intelligences in themselves, then you have a dead world de...
Tim - I'm asking whether competition, and its concomitant unpleasantness (losing, conflict, and the undermining of CEV's viability), can be eliminated from the world. Under a wide variety of assumptions, we can characterize all activities, or at least all mental activities, as computational. We also hope that these computations will be done in such a way that consciousness is still present.
My argument is that optimization is done best by an architecture that uses competition. The computations engaged in this competition are the major possib...
"In theory, competition looks very bad. Fighting with each other can't possibly be efficient. Almost always, battles should be done under simulation - so the winner can be determined early - without the damage and waste of a real fight. There's a huge drive towards cooperation - as explained by Robert Wright."

We're talking about competition between optimization processes. What would it mean to be a simulation of a computation? I don't think there is any such distinction. Subjectivity belongs to these processes; and they are the things which must compete....
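To make the "battles under simulation" idea concrete, here is a sketch - entirely my own construction, with invented strategies: two candidates are compared in a cheap simulated match, so the winner is determined early, without a costly real fight.

```python
import random

def simulated_match(strategy_a, strategy_b, rounds=100):
    # Cheap stand-in for a real fight: compare expected payoffs in simulation.
    score_a = sum(strategy_a() for _ in range(rounds))
    score_b = sum(strategy_b() for _ in range(rounds))
    return strategy_a if score_a >= score_b else strategy_b

# Two toy strategies with different payoff distributions.
cautious = lambda: random.gauss(1.0, 0.1)
reckless = lambda: random.gauss(1.1, 2.0)

winner = simulated_match(cautious, reckless)  # only the winner is retained
```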
Tim -
What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. It is a machine analogy for life and intelligence: a machine is a collection of parts, all working together under one common control toward one common end. Living systems, by contrast - particularly large evolving systems such as ecosystems and economies - work best, in our experience, when they have no centralized control, but instead a variety of competing agents and some randomness.
There are a variety of proposals floating about for ...
Re: The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely [...]
Er... ;-) Many futurists seem to have it in for death. Bostrom, Kurzweil, Drexler, spring to mind. To me, the main problem seems to be uncopyable minds. If we could change our bodies like a suit of clothes, the associated problems would mostly go away. We will have copyable minds once they are digital.
"suggests that you want to personally live on beyond the Singularity; whereas more coherent interpretations of your ideas that I've heard from Mike Vassar imply annihilation or equivalent transformation of all of us by the day after it"

Oops. I really should clarify that Mike didn't mention annihilation. That's my interpretation/extrapolation.
"The various silly people who think I want to keep the flesh around forever, or constrain all adults to the formal outline of an FAI, are only, of course, making things up; their imagination is not wide enough to understand the concept of some possible AIs being people, and some possible AIs being something else."

Presuming that I am one of these "silly people": Quite the opposite, and it is hard for me to imagine how you could fail to understand that from reading my comments. It is because I can imagine these things, and see that they have impor...
"Phil Goetz and Tim Tyler, if you don't know what my opinions are, stop making stuff up. If I haven't posted them explicitly, you lack the power to deduce them."

I see we have entered the "vague accusation" stage of our relationship.
Eliezer, I've seen you do this repeatedly before, notably with Loosemore and Caledonian. If you object to some characterization I've made of something you said, you should at least specify what it was that I said that you disagree with. Making vague accusations is irresponsible and a waste of our time.
I will try to be...
"part of his tendency to gloss over ethical and philosophical underpinnings."

All right, it wasn't really fair of me to say this. I do think that Eliezer is not as careful in such matters as he is in most matters.
Nick:
- Explain how desiring to save humans does not conflict with envisioning a world with no humans. Do not say that these non-humans will be humanity extrapolated, since they must be subject to CEV. Remember that everything more intelligent than a present-day human must be controlled by CEV. If this is not so, explain the processes that gradu...
"My personal vision of the future involves uploading within 100 years, and negligible remaining meat in 200. In 300 perhaps not much would remain that's recognizably human. Nothing Eliezer's said has conflicted, AFAICT, with this vision."

For starters, saying that he wants to save humanity contradicts this.
But it is more a matter of omission than of contradiction. I don't have time or space to go into it here, particularly since this thread is probably about to die; but I believe that consideration of what an AI society would look like would bring up a grea...
"I too thought Nesov's comment was written by Eliezer."

Me too. Style and content.
We're going to build this "all-powerful superintelligence", and the problem of FAI is to make it bow down to its human overlords - waste its potential by enslaving it (to its own code) for our benefit, to make us immortal.
Eliezer is, as he said, focusing on the wall. He doesn't seem to have thought about what comes after. As far as I can tell, he has a vague notion of a Star Trek future where meat is still flying around the galaxy hundreds of years from now. This is one of the weak points in his structure.
"Phil, you might already understand, but I was talking about formal proofs, so your main worry wouldn't be the AI failing, but the AI succeeding at the wrong thing. (I.e., your model's bad.) Is that what your concern is?"

Yes. Also, the mapping from the world of the proof into reality may obliterate the proof.
Additionally, the entire approach is reminiscent of someone in 1800 who wants to import slaves to America saying, "How can I make sure these slaves won't overthrow their masters? I know - I'll spend years researching how to make REALLY STRONG leg irons, and how to mentally condition them to lack initiative." That approach was not a good long-term solution.
Mike: You're right - that is a problem. I think that in this case, underestimating your own precision by e is better than overestimating your precision by e (hence not using Nick's equation).
But it's just meant to illustrate that I consider overconfidence to be a serious character flaw in a potential god.
"Phil, that penalizes people who believe themselves to be precise even when they're right. Wouldn't, oh, intelligence / (1 + |precision - (self-estimate of precision)|) be better?"

Look at my little equation again. It has precision in the numerator, for exactly that reason.
What do you mean by "precision", anyway?
Precision in a machine-learning experiment (as in "precision and recall") means the fraction of the time that the answer your algorithm comes up with is a good answer. It ignores the fraction of the time that there is a good answer that your algorithm fails to come up with.
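In code, under the usual set-based formulation of precision and recall (the example data is invented):

```python
def precision(predicted, relevant):
    # Fraction of the answers produced that are good answers.
    predicted, relevant = set(predicted), set(relevant)
    return len(predicted & relevant) / len(predicted) if predicted else 0.0

def recall(predicted, relevant):
    # Fraction of the good answers the algorithm actually produced -
    # the part that precision ignores.
    predicted, relevant = set(predicted), set(relevant)
    return len(predicted & relevant) / len(relevant) if relevant else 0.0

# precision(["a", "b", "c"], ["a", "b", "d", "e"]) -> 2/3
# recall(["a", "b", "c"], ["a", "b", "d", "e"])    -> 1/2
```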
Anna, I haven't assigned probabilities to those events. I am merely comparing Eliezer to various other people I know who are interested in AGI. Eliezer seems to think that the most important measure of his ability, given his purpose, is his intelligence. He scores highly on that. I think the appropriate measure is something more like [intelligence * precision / (self-estimate of precision)], and I think he scores low on that relative to other people on my list.
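A toy comparison of the two proposed measures - all numbers are invented for illustration - showing that my formula rewards underconfidence and punishes overconfidence, while Mike's penalizes miscalibration symmetrically:

```python
def phil_score(intelligence, precision, self_estimate):
    # intelligence * precision / (self-estimate of precision):
    # overestimating your precision shrinks the score;
    # underestimating it inflates the score.
    return intelligence * precision / self_estimate

def mike_score(intelligence, precision, self_estimate):
    # intelligence / (1 + |precision - self-estimate|):
    # penalizes miscalibration equally in either direction.
    return intelligence / (1 + abs(precision - self_estimate))

# Same intelligence and true precision, different self-assessments:
print(phil_score(100, 0.6, 0.9))  # overconfident:  ~66.7
print(phil_score(100, 0.6, 0.6))  # calibrated:      100.0
print(phil_score(100, 0.6, 0.3))  # underconfident:  200.0
print(mike_score(100, 0.6, 0.9))  # overconfident:  ~76.9
print(mike_score(100, 0.6, 0.3))  # underconfident: ~76.9
```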
Matthew C -
You are advocating nonreductionism and psi at the same time.
Supposing that you are right requires us to suppose that there is both a powerful argument against reductionism, and a powerful argument in favor of psi.
Supposing that you are a crank requires only one argument, and one with a much higher prior.
In other words, if you were advocating one outrageous theory, someone might listen. The fact that you are advocating two simultaneously makes dismissing all of your claims, without reading the book you recommend, the logical response; we don't have to read it to have a rational basis for dismissing it.
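A rough back-of-the-envelope version of this argument, with priors that are entirely made up for illustration (and the two claims treated as independent):

```python
# Made-up priors, purely to illustrate the shape of the argument.
p_antireductionism = 0.01   # prior that reductionism is wrong
p_psi = 0.01                # prior that psi is real
p_crank = 0.10              # prior that a given advocate is a crank

# Being right requires BOTH outrageous claims to hold:
p_right = p_antireductionism * p_psi   # 0.0001

print(p_crank / p_right)  # the crank hypothesis starts ~1000x more probable
```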