Meta: LW Policy: When to prohibit Alice from replying to Bob's arguments?

-3 SilasBarta 12 September 2012 03:29AM

In light of recent (and potential) events, I wanted to start a discussion here about a certain method of handling conflicts on this site's discussion threads, and hopefully form a consensus on when to use the measure described in the title.  Even if the discussion has no impact on site policy ("executive veto"), I hope administrators will at least clarify when such a measure will be used, and for what reason.

I also don't want to taint or "anchor" the discussion by offering hypothetical situations or arguments for one position or another. Rather, I simply want to ask: Under what conditions should a specific poster, "Alice", be prohibited from replying directly to the arguments in a post/comment made by another poster, "Bob"? (Note: this refers specifically to replies to ideas and arguments Bob has advanced, not general comments about Bob the person, which should probably come under much closer scrutiny because of the risk of incivility.)

Please offer your ideas and thoughts here on when this measure should be used.

Friendly AI and the limits of computational epistemology

18 Mitchell_Porter 08 August 2012 01:16PM

Very soon, Eliezer is supposed to start posting a new sequence, on "Open Problems in Friendly AI". After several years in which its activities were dominated by the topic of human rationality, this ought to mark the beginning of a new phase for the Singularity Institute, one in which it is visibly working on artificial intelligence once again. If everything comes together, then it will now be a straight line from here to the end.

I foresee that, once the new sequence gets going, it won't be that easy to question the framework in terms of which the problems are posed. So I consider this my last opportunity for some time to set out an alternative big picture. It's a framework in which all those rigorous mathematical and computational issues still need to be investigated, so a lot of "orthodox" ideas about Friendly AI should carry across. But the context is different, and it makes a difference.

Begin with the really big picture. What would it take to produce a friendly singularity? You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").
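
As a concrete rendering of that framing, here is a minimal sketch (mine, not SI's; all names and numbers are illustrative) of an expected utility maximizer, where the probability model stands in for the "true ontology", the utility function for the "true morality", and the argmax for the part that has to be computed better than the competition:

```python
# A toy expected-utility maximizer: `prob` is the agent's world model
# ("true ontology"), `utility` its value function ("true morality"),
# and the argmax is what must be computed better than rivals.
def expected_utility(action, outcomes, prob, utility):
    # Expected utility of an action: sum over outcomes of P(o|a) * U(o).
    return sum(prob(o, action) * utility(o) for o in outcomes)

def choose(actions, outcomes, prob, utility):
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, prob, utility))

# Illustrative numbers: carrying an umbrella trades comfort for rain protection.
outcomes = ["wet", "dry", "dry_encumbered"]

def prob(outcome, action):
    p_rain = 0.3
    if action == "umbrella":
        return 1.0 if outcome == "dry_encumbered" else 0.0
    return {"wet": p_rain, "dry": 1.0 - p_rain}.get(outcome, 0.0)

utility = {"wet": -1.0, "dry": 1.0, "dry_encumbered": 0.5}.__getitem__

print(choose(["umbrella", "no_umbrella"], outcomes, prob, utility))  # -> "umbrella"
```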

Now let's consider how SI will approach these goals.

The evidence says that the working ontological hypothesis of SI-associated researchers will be timeless many-worlds quantum mechanics, possibly embedded in a "Tegmark Level IV multiverse", with the auxiliary hypothesis that algorithms can "feel like something from inside" and that this is what conscious experience is.

The true morality is to be found by understanding the true decision procedure employed by human beings, and idealizing it according to criteria implicit in that procedure. That is, one would seek to understand conceptually the physical and cognitive causation at work in concrete human choices, both conscious and unconscious, with the expectation that there will be a crisp, complex, and specific answer to the question "why and how do humans make the choices that they do?" Undoubtedly there would be some biological variation, and there would also be significant elements of the "human decision procedure",  as instantiated in any specific individual, which are set by experience and by culture, rather than by genetics. Nonetheless one expects that there is something like a specific algorithm or algorithm-template here, which is part of the standard Homo sapiens cognitive package and biological design; just another anatomical feature, particular to our species.

Having reconstructed this algorithm via scientific analysis of the human genome, brain, and behavior, one would then idealize it using its own criteria. This algorithm defines the de-facto value system that human beings employ, but that is not necessarily the value system they would wish to employ; nonetheless, human self-dissatisfaction also arises from the use of this algorithm to judge ourselves. So it contains the seeds of its own improvement. The value system of a Friendly AI is to be obtained from the recursive self-improvement of the natural human decision procedure.

Finally, this is all for naught if seriously unfriendly AI appears first. It isn't good enough just to have the right goals; you must be able to carry them out. In the global race towards artificial general intelligence, SI might hope to "win" either by being the first to achieve AGI, or by having its prescriptions adopted by those who do first achieve AGI. They have some in-house competence regarding models of universal AI like AIXI, and they have many contacts in the world of AGI research, so they're at least engaged with this aspect of the problem.

Upon examining this tentative reconstruction of SI's game-plan, I find I have two major reservations. The big one, and the one most difficult to convey, concerns the ontological assumptions. In second place is what I see as an undue emphasis on the idea of outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers. This is supposed to be a way to finesse philosophical difficulties like "what is consciousness anyway"; you just simulate some humans until they agree that they have solved the problem. The reasoning goes that if the simulation is good enough, it will be just as good as if ordinary non-simulated humans solved it.

I also used to have a third major criticism, that the big SI focus on rationality outreach was a mistake; but it brought in a lot of new people, and in any case that phase is ending, with the creation of CFAR, a separate organization. So we are down to two basic criticisms.

First, "ontology". I do not think that SI intends simply to program its AI with an apriori belief in the Everett multiverse, for two reasons. For one thing, like anyone else, their ventures into AI will surely begin with programs that work within very limited and more down-to-earth ontological domains. For another, at least some of the AI's world-model ought to be obtained rationally. Scientific theories are supposed to be rationally justified, e.g. by their capacity to make successful predictions, and one would prefer that the AI's ontology result from the employment of its epistemology, rather than just being an axiom; not least because we want it to be able to question that ontology, should the evidence begin to count against it.

For this reason, although I have campaigned against many-worlds dogmatism on this site for several years, I'm not especially concerned about the possibility of SI producing an AI that is "dogmatic" in this way. For an AI to independently assess the merits of rival physical theories, the theories would need to be expressed with much more precision than they have been in LW's debates, and the disagreements about which theory is rationally favored would be replaced with objectively resolvable choices among exactly specified models.

The real problem, which is not just SI's problem, but a chronic and worsening problem of intellectual culture in the era of mathematically formalized science, is a dwindling of the ontological options to materialism, platonism, or an unstable combination of the two, and a similar restriction of epistemology to computation.

Any assertion that we need an ontology beyond materialism (or physicalism or naturalism) is liable to be immediately rejected by this audience, so I shall immediately explain what I mean. It's just the usual problem of "qualia". There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality. The problematic "belief in materialism" is actually the belief in the completeness of current materialist ontology, a belief which prevents people from seeing any need to consider radical or exotic solutions to the qualia problem. There is every reason to think that the world-picture arising from a correct solution to that problem will still be one in which you have "things with states" causally interacting with other "things with states", and a sensible materialist shouldn't find that objectionable.

What I mean by platonism, is an ontology which reifies mathematical or computational abstractions, and says that they are the stuff of reality. Thus assertions that reality is a computer program, or a Hilbert space. Once again, the qualia are absent; but in this case, instead of the deficient ontology being based on supposing that there is nothing but particles, it's based on supposing that there is nothing but the intellectual constructs used to model the world.

Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are. And thus computation has been the way in which materialism has tried to restore the mind to a place in its ontology. This is the unstable combination of materialism and platonism to which I referred. It's unstable because it's not a real solution, though it can live unexamined for a long time in a person's belief system.

An ontology which genuinely contains qualia will nonetheless still contain "things with states" undergoing state transitions, so there will be state machines, and consequently, computational concepts will still be valid; they will still have a place in the description of reality. But the computational description is an abstraction; the ontological essence of the state plays no part in this description; only its causal role in the network of possible states matters for computation. The attempt to make computation the foundation of an ontology of mind is therefore proceeding in the wrong direction.

But here we run up against the hazards of computational epistemology, which is playing such a central role in artificial intelligence. Computational epistemology is good at identifying the minimal state machine which could have produced the data. But it cannot by itself tell you what those states are "like". It can only say that X was probably caused by a Y that was itself caused by Z.
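
To make that limitation concrete, here is a deliberately toy sketch (invented names, not any real library) of computational epistemology at work: a brute-force search for the smallest state machine that could have produced an observed data stream. Everything it returns is causal structure; nothing in the output says what being in a given state is "like":

```python
# A toy version of "computational epistemology": brute-force search for the
# smallest deterministic state machine whose output stream reproduces the
# observations. All names here are illustrative.
from itertools import product

def smallest_machine(observations, max_states=4):
    """Return (k, transitions, outputs) for the smallest k-state autonomous
    Moore machine, started in state 0, that emits `observations`."""
    symbols = sorted(set(observations))
    for k in range(1, max_states + 1):
        for trans in product(range(k), repeat=k):      # next_state[s]
            for outs in product(symbols, repeat=k):    # output[s]
                state, ok = 0, True
                for obs in observations:
                    if outs[state] != obs:
                        ok = False
                        break
                    state = trans[state]
                if ok:
                    return k, trans, outs
    return None

# The machine found is pure causal structure: X -> Y -> X -> ...
# Nothing in it says what being in state 0 or 1 is "like".
print(smallest_machine("ababab"))  # -> (2, (1, 0), ('a', 'b'))
```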

Among the properties of human consciousness are knowledge that something exists, knowledge that consciousness exists, and a long string of other facts about the nature of what we experience. Even if an AI scientist employing a computational epistemology managed to produce a model of the world which correctly identified the causal relations between consciousness, its knowledge, and the objects of its knowledge, the AI scientist would not know that its X, Y, and Z refer to, say, "knowledge of existence", "experience of existence", and "existence". The same might be said of any successful analysis of qualia, knowledge of qualia, and how they fit into neurophysical causality.

It would be up to human beings - for example, the AI's programmers and handlers - to ensure that entities in the AI's causal model were given appropriate significance. And here we approach the second big problem, the enthusiasm for outsourcing the solution of hard problems of FAI design to the AI and/or to simulated human beings. The latter is a somewhat impractical idea anyway, but here I want to highlight the risk that the AI's designers will have false ontological beliefs about the nature of mind, which are then implemented apriori in the AI. That strikes me as far more likely than implanting a wrong apriori about physics; computational epistemology can discriminate usefully between different mathematical models of physics, because it can judge one state machine model as better than another, and current physical ontology is essentially one of interacting state machines. But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

In a phrase: to use computational epistemology is to commit to state-machine materialism as your apriori ontology. And the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can. Something about the ontological constitution of consciousness makes it possible for us to experience existence, to have the concept of existence, to know that we are experiencing existence, and similarly for the experience of color, time, and all those other aspects of being that fit so uncomfortably into our scientific ontology.

It must be that the true epistemology, for a conscious being, is something more than computational epistemology. And maybe an AI can't bootstrap its way to knowing this expanded epistemology - because an AI doesn't really know or experience anything, only a consciousness, whether natural or artificial, does those things - but maybe a human being can.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology. But transcendental phenomenology is very unfashionable now, precisely because of apriori materialism. People don't see what "categorial intuition" or "adumbrations of givenness" or any of the other weird phenomenological concepts could possibly mean for an evolved Bayesian neural network; and they're right, there is no connection. But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data that we really ought to look for a more sophisticated refinement of the idea.

Fortunately, 21st-century physics, if not yet neurobiology, can provide alternative hypotheses in which complexity of state originates from something other than concatenation of parts - for example, from entanglement, or from topological structures in a field. In such ideas I believe we see a glimpse of the true ontology of mind, one which from the inside resembles the ontology of transcendental phenomenology; which in its mathematical, formal representation may involve structures like iterated Clifford algebras; and which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.

Of course this is why I've talked about "monads" in the past, but my objective here is not to promote neo-monadology, that's something I need to take up with neuroscientists and biophysicists and quantum foundations people. What I wish to do here is to argue against the completeness of computational epistemology, and to caution against the rejection of phenomenological data just because it conflicts with state-machine materialism or computational epistemology. This is an argument and a warning that should be meaningful for anyone trying to make sense of their existence in the scientific cosmos, but it has a special significance for this arcane and idealistic enterprise called "friendly AI". My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story. A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads. You need to do the impossible one more time, and make your plans bearing in mind that the true ontology is something more than your current intellectual tools allow you to represent.

Self-skepticism: the first principle of rationality

36 aaronsw 06 August 2012 12:51AM

When Richard Feynman started investigating irrationality in the 1970s, he quickly began to realize the problem wasn't limited to the obvious irrationalists.

Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.

What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty--a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.

As Feynman wrote, "The first principle is that you must not fool yourself -- and you are the easiest person to fool." Our beliefs always seem correct to us -- after all, that's why they're our beliefs -- so we have to work extra-hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.

When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong -- they attack their beliefs with everything they can find and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.

And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.

Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, they found this was the result of a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they picked suffer from selection bias? Well, they searched for that -- along with three other kinds of systematic bias. And so on. But even after all this careful work, they are still only confident enough to conclude "the results…support a modest but positive effect…with caution … these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".
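
For readers unfamiliar with the mechanics, the pooling step of such a meta-analysis is simple arithmetic: weight each study's effect estimate by its precision, then average. Here is a minimal fixed-effect sketch, with invented numbers rather than Cochrane's actual data:

```python
# Fixed-effect (inverse-variance) meta-analysis, the arithmetic core of
# pooling study results. The effect sizes and variances below are made up
# for illustration; they are not the Cochrane review's data.
import math

def pool(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical log risk ratios from three smoking-cessation trials:
effects = [0.18, 0.05, 0.12]
variances = [0.01, 0.04, 0.02]
est, se = pool(effects, variances)
print(f"pooled effect {est:.3f}, 95% CI ({est - 1.96*se:.3f}, {est + 1.96*se:.3f})")
# -> a modest positive effect whose interval dips just below zero: "with caution"
```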

Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions ... All these predictions were dead wrong. ... this perspective completely ignores the history of Medicare" Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!

Now, of course, the Cochrane authors might be secretly quite confident and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief, but instead accurate portrayals of how these two different groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?

A cynical explanation for why rationalists worry about FAI

25 aaronsw 04 August 2012 12:27PM

My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.

Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.

What are you counting?

-14 OrphanWilde 18 July 2012 03:22PM

Eliezer's post How To Convince Me That 2 + 2 = 3 raises an interesting consideration: if putting two sheep in a field, and then putting two more sheep in the field, resulted in three sheep being in the field, would arithmetic hold that two plus two equals three?

I want to introduce another question.  What exactly are you counting?

Imagine one sheep in one field, and another sheep in another.  Now put them together.  Do you now have two sheep?

"Of course!"

Ah, but is that -all- you have?

"What?"

Two sheep are more than twice as complex as a single sheep.  It takes more than twice as many bits to describe two sheep as it takes to describe a single sheep, because, in addition to those two sheep, you now also have to describe their relationship to one another.

Or, to phrase it slightly differently, does 1+1=2?

Well, the answer is, it depends on what you're counting.

If you're counting the number of discrete sheep, 1+1=2.  However, why is the number of discrete sheep meaningful?

If you're a hunter counting prey rather than herded sheep, two sheep are, roughly, twice as much meat as one sheep.  1+1=2.  If you're a herder, however, two sheep could be a lot more valuable than one - two sheep can turn into three sheep, if one is female and one is male.  The value of two sheep can be more than twice the value of a single sheep.  And if you're a hypercomputer running Solomonoff Induction to try to describe sheep positional vectors, two sheep will have a different complexity than twice the complexity of a single sheep.
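
The hunter/herder contrast can be made mechanical: the same two sheep come out to a different "sum" depending on the valuation rule you count under. A toy sketch, with invented numbers:

```python
# Two valuation rules over the same flock. All numbers are invented;
# the point is only that the aggregation rule, not the sheep, decides
# whether "1 + 1" behaves additively.
def hunter_value(sheep):
    # Meat is additive: n sheep are worth n units.
    return 1.0 * len(sheep)

def herder_value(sheep):
    # A breeding pair is worth more than two lone sheep: add a bonus
    # for the potential lambs.
    base = 1.0 * len(sheep)
    sexes = {s["sex"] for s in sheep}
    bonus = 0.5 if {"M", "F"} <= sexes else 0.0
    return base + bonus

flock = [{"sex": "M"}, {"sex": "F"}]
print(hunter_value(flock))  # 2.0 -- for the hunter, 1 + 1 = 2
print(herder_value(flock))  # 2.5 -- for the herder, "1 + 1" exceeds 2
```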

Which is not to say that one plus one does not equal two.  It is, however, to say that one plus one may not be meaningful as a concept outside a very limited domain.

Would an alien intelligence have arrived at arithmetic?  Depends on what it counts.  Is arithmetic correct?

Well, does a set of two sheep contain only two sheep, or does it also contain their interactions?  Depends on your problem domain; 1+1 might just equal 2+i.

Poly marriage?

-9 h-H 06 June 2012 07:57PM

A thought occurred to me today as I skimmed an article in a rationality forum where the subject of gay marriage cropped up: seeing as the issue has been hotly contested in various public fora, and especially in the courts, what about poly? After all, many if not all of the arguments for gay marriage apply to poly marriage as well.

Questions for LWers who are currently in such a relationship, or have an opinion to share:

Do polies want to marry each other, or do such relationships not lend themselves to permanence above a certain number of partners? Should polies campaign for the right to a civil union anyway? What are the upsides and downsides of this? Etc.

"Progress"

1 PhilGoetz 04 June 2012 03:51AM

I often hear people speak of democracy as the next, or the final, inevitable stage of human social development.  Its inevitability is usually justified not by describing power relations that make democracy a stable attractor, but in moral terms - democracy is more "enlightened".  I don't see any inevitability to it - China and the Soviet Union manage(d) to maintain large, technologically-advanced nations for a long time without it - but suppose, for the sake of argument, that democracy is the inevitable next stage of human progress.

The May 18 2012 issue of Science has an article on p. 844, "Ancestral hierarchy and conflict", by Christopher Boehm, which, among other things, describes the changes over time in equality among male hominids.  If we add its timeline to recent human history, then here is the history of democracy over time in the evolutionary line leading to humans:

  1. Pre-human male hominids, we infer from observing bonobos and chimpanzees, were dominated by one alpha male per group, who got the best food and most of the females.
  2. Then, in the human lineage, hunter-gatherers developed larger social groups, and the ability to form stronger coalitions against the alpha; and they became more egalitarian.
  3. Then, human social groups became even larger, and it became possible for a central alpha-male chieftain to control a large area; and the groups became less egalitarian.
  4. Then, they became even larger, so that they were too large for a central authority to administer efficiently; and decentralized market-based methods of production led to democracy.  (Or so goes one story.)

There are two points to observe in this data:

  • There is no linear relationship between social complexity and equality.  Steadily increasing social complexity led to more equality, then less, then more.
  • Enlightenment has nothing to do with it - if any theory makes sense, it is that social equality tunes itself to the level that provides maximal social competitive fitness.  Even if we agree that democracy is the most-enlightened political system, this realization says nothing about what the future holds.

I do believe "progress" is a meaningful term.  But there isn't some cosmic niceness built into the universe that makes everything improve monotonically along every dimension at once.

This post is for sacrificing my credibility!

-29 Will_Newsome 02 June 2012 12:08AM

Thank you for your cooperation and understanding. Don't worry, there won't be future posts like this, so you don't have to delete my LessWrong account, and anyway I could make another, and another.

But since you've dared to read this far:

Credibility. Should you maximize it, or minimize it? Have I made an error?

Discuss.

Don't be shallow, don't just consider the obvious points. Consider that I've thought about this for many, many hours, and that you don't have any privileged information. Whence our disagreement, if one exists?

One possible issue with radically increased lifespan

10 Spectral_Dragon 30 May 2012 10:24PM

I might need a better title (it has now been updated), but here goes anyway:

I've been considering this for a while now. Suppose we reach a point where we can live for centuries, maybe even millennia; how do we then keep the population in balance? Even assuming we're as efficient as possible, there's a limit to how many resources we can have, meaning an artificial limit on the number of people that could exist at any given moment, even if we explore what we can of the galaxy and use every available resource. There would have to be roughly the same rate of births and deaths in a stable population.
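
The constraint being gestured at is easy to quantify: in a population capped at N people with an average lifespan of L years, stability allows only about N/L births (and deaths) per year, so longer lives mean proportionally fewer births. A back-of-the-envelope sketch, with illustrative round numbers:

```python
# Steady-state demography in one line: if the population is capped at N and
# people live L years on average, roughly N / L people die (and so may be
# born) each year. The figures below are illustrative, not projections.
def sustainable_births_per_year(population_cap, avg_lifespan_years):
    return population_cap / avg_lifespan_years

print(sustainable_births_per_year(10e9, 80))     # ~125 million/year at current lifespans
print(sustainable_births_per_year(10e9, 1000))   # ~10 million/year with millennial lifespans
```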

How would this be achieved? By somehow limiting lifespans, or births, assuming the technology is available to a majority? Or would this lead to a gene-spliced, technologically augmented, and essentially immortal elite that the poor, unaugmented ones would have no chance of measuring up to? I'm sorry if this has already been considered; I'm very uneducated on the topic. If it has, could someone maybe link an analysis of the topic of lifespans and the like?

Analogy to the Heisenberg Uncertainty Principle for Powerful AI?

-15 demented 28 May 2012 01:07PM

What do you think? There might be a theoretical limitation on how much data an AI could collect without influencing the data itself and thereby making its predictions unreliable. Would this negate the idea of a 'God' AI and cause it to make suboptimal choices even with near-limitless processing power?
