'Life exists beyond 50'

-11 hankx7787 14 January 2013 02:28PM

[Link] 12 Myths about Hunger

-10 Curiouskid 23 November 2012 10:25PM

Copied and pasted from here. For those interested, there's a book that expands on the article.
EDIT: People seem to like the first few myths, but they get rather political as the article goes on.
I couldn't find much on hunger on GiveWell's site other than these three articles.
The Advanced Civilization Wiki's page on Food probably covers the good aspects of this article and expands on them.


Why so much hunger?

What can we do about it?

To answer these questions we must unlearn much of what we have been taught.

Only by freeing ourselves from the grip of widely held myths can we grasp the roots of hunger and see what we can do to end it.

Myth 1

Not Enough Food to Go Around

Reality: Abundance, not scarcity, best describes the world's food supply. Enough wheat, rice and other grains are produced to provide every human being with 3,500 calories a day. That doesn't even count many other commonly eaten foods - vegetables, beans, nuts, root crops, fruits, grass-fed meats, and fish. Enough food is available to provide at least 4.3 pounds of food per person a day worldwide: two and a half pounds of grain, beans and nuts, about a pound of fruits and vegetables, and nearly another pound of meat, milk and eggs: enough to make most people fat! The problem is that many people are too poor to buy readily available food. Even most "hungry countries" have enough food for all their people right now. Many are net exporters of food and other agricultural products.

Myth 2

Nature's to Blame for Famine

Reality: It's too easy to blame nature. Human-made forces are making people increasingly vulnerable to nature's vagaries. Food is always available for those who can afford it—starvation during hard times hits only the poorest. Millions live on the brink of disaster in south Asia, Africa and elsewhere, because they are deprived of land by a powerful few, trapped in the unremitting grip of debt, or miserably paid. Natural events rarely explain deaths; they are simply the final push over the brink. Human institutions and policies determine who eats and who starves during hard times. Likewise, in America many homeless die from the cold every winter, yet ultimate responsibility doesn't lie with the weather. The real culprits are an economy that fails to offer everyone opportunities, and a society that places economic efficiency over compassion.

Myth 3

Too Many People

Reality: Birth rates are falling rapidly worldwide as remaining regions of the Third World begin the demographic transition—when birth rates drop in response to an earlier decline in death rates. Although rapid population growth remains a serious concern in many countries, nowhere does population density explain hunger. For every Bangladesh, a densely populated and hungry country, we find a Nigeria, Brazil or Bolivia, where abundant food resources coexist with hunger. Costa Rica, with only half of Honduras' cropped acres per person, boasts a life expectancy—one indicator of nutrition—11 years longer than that of Honduras and close to that of developed countries. Rapid population growth is not the root cause of hunger. Like hunger itself, it results from underlying inequities that deprive people, especially poor women, of economic opportunity and security. Rapid population growth and hunger are endemic to societies where land ownership, jobs, education, health care, and old age security are beyond the reach of most people. Those Third World societies with dramatically successful early and rapid reductions of population growth rates (China, Sri Lanka, Colombia, Cuba and the Indian state of Kerala) prove that the lives of the poor, especially poor women, must improve before they can choose to have fewer children.

Myth 4

The Environment vs. More Food?

Reality: We should be alarmed that an environmental crisis is undercutting our food-production resources, but a tradeoff between our environment and the world's need for food is not inevitable. Efforts to feed the hungry are not causing the environmental crisis. Large corporations are mainly responsible for deforestation, creating and profiting from developed-country consumer demand for tropical hardwoods and exotic or out-of-season food items. Most pesticides used in the Third World are applied to export crops, playing little role in feeding the hungry, while in the U.S. they are used to give a blemish-free cosmetic appearance to produce, with no improvement in nutritional value.

Alternatives exist now and many more are possible. The success of organic farmers in the U.S. gives a glimpse of the possibilities. Cuba's recent success in overcoming a food crisis through self-reliance and sustainable, virtually pesticide-free agriculture is another good example. Indeed, environmentally sound agricultural alternatives can be more productive than environmentally destructive ones.

Myth 5

The Green Revolution is the Answer

Reality: The production advances of the Green Revolution are no myth. Thanks to the new seeds, millions of tons more grain a year are being harvested. But focusing narrowly on increasing production cannot alleviate hunger because it fails to alter the tightly concentrated distribution of economic power that determines who can buy the additional food. That's why in several of the biggest Green Revolution successes—India, Mexico, and the Philippines—grain production, and in some cases exports, have climbed, while hunger has persisted and the long-term productive capacity of the soil has been degraded. Now we must fight the prospect of a 'New Green Revolution' based on biotechnology, which threatens to further accentuate inequality.

Myth 6

We Need Large Farms

Reality: Large landowners who control most of the best land often leave much of it idle. Unjust farming systems leave farmland in the hands of the most inefficient producers. By contrast, small farmers typically achieve at least four to five times greater output per acre, in part because they work their land more intensively and use integrated, and often more sustainable, production systems. Without secure tenure, the many millions of tenant farmers in the Third World have little incentive to invest in land improvements, to rotate crops, or to leave land fallow for the sake of long-term soil fertility. Future food production is undermined. On the other hand, redistribution of land can favor production. Comprehensive land reform has markedly increased production in countries as diverse as Japan, Zimbabwe, and Taiwan. A World Bank study of northeast Brazil estimates that redistributing farmland into smaller holdings would raise output an astonishing 80 percent.

Myth 7

The Free Market Can End Hunger

Reality: Unfortunately, such a "market-is-good, government-is-bad" formula can never help address the causes of hunger. Such a dogmatic stance misleads us into thinking that a society can opt for one or the other, when in fact every economy on earth combines the market and government in allocating resources and distributing goods. The market's marvelous efficiencies can only work to eliminate hunger, however, when purchasing power is widely dispersed.

So all those who believe in the usefulness of the market and the necessity of ending hunger must concentrate on promoting not the market, but the consumers! In this task, government has a vital role to play in countering the tendency toward economic concentration, through genuine tax, credit, and land reforms to disperse buying power toward the poor. Recent trends toward privatization and de-regulation are most definitely not the answer.

Myth 8

Free Trade is the Answer

Reality: The trade promotion formula has proven an abject failure at alleviating hunger. In most Third World countries exports have boomed while hunger has continued unabated or actually worsened. While soybean exports boomed in Brazil (to feed Japanese and European livestock), hunger spread from one-third to two-thirds of the population. Where the majority of people have been made too poor to buy the food grown on their own country's soil, those who control productive resources will, not surprisingly, orient their production to more lucrative markets abroad. Export crop production squeezes out basic food production. Pro-trade policies like NAFTA and GATT pit working people in different countries against each other in a 'race to the bottom,' where the basis of competition is who will work for less, without adequate health coverage or minimum environmental standards. Mexico and the U.S. are a case in point: since NAFTA we have had a net loss of 250,000 jobs here, while Mexico has lost 2 million, and hunger is on the rise in both countries.

Myth 9

Too Hungry to Fight for Their Rights

Reality: Bombarded with images of poor people as weak and hungry, we lose sight of the obvious: for those with few resources, mere survival requires tremendous effort. If the poor were truly passive, few of them could even survive. Around the world, from the Zapatistas in Chiapas, Mexico, to the farmers' movement in India, wherever people are suffering needlessly, movements for change are underway. People will feed themselves, if allowed to do so. It's not our job to 'set things right' for others. Our responsibility is to remove the obstacles in their paths, obstacles often created by large corporations and U.S. government, World Bank and IMF policies.

Myth 10

More U.S. Aid Will Help the Hungry

Reality: Most U.S. aid works directly against the hungry. Foreign aid can only reinforce, not change, the status quo. Where governments answer only to elites, our aid not only fails to reach hungry people, it shores up the very forces working against them. Our aid is used to impose free trade and free market policies, to promote exports at the expense of food production, and to provide the armaments that repressive governments use to stay in power. Even emergency, or humanitarian, aid, which makes up only five percent of the total, often ends up enriching American grain companies while failing to reach the hungry, and it can dangerously undercut local food production in the recipient country. It would be better to use our foreign aid budget for unconditional debt relief, as it is the foreign debt burden that forces most Third World countries to cut back on basic health, education and anti-poverty programs.

Myth 11

We Benefit From Their Poverty

Reality: The biggest threat to the well-being of the vast majority of Americans is not the advancement but the continued deprivation of the hungry. Low wages, both abroad and in inner cities at home, may mean cheaper bananas, shirts, computers and fast food for most Americans, but in other ways we pay heavily for hunger and poverty. Enforced poverty in the Third World jeopardizes U.S. jobs, wages and working conditions as corporations seek cheaper labor abroad. In a global economy, what American workers have achieved in employment, wage levels, and working conditions can be protected only when working people in every country are freed from economic desperation.

Here at home, policies like welfare reform throw more people into the job market than can be absorbed (at below-minimum-wage levels in the case of 'workfare'), which puts downward pressure on the wages of those on higher rungs of the employment ladder. The growing numbers of 'working poor' are those who have part- or full-time low-wage jobs yet cannot afford adequate nutrition or housing for their families. Educating ourselves about the common interests most Americans share with the poor in the Third World and at home allows us to be compassionate without sliding into pity. In working to clear the way for the poor to free themselves from economic oppression, we free ourselves as well.

Myth 12

Curtail Freedom to End Hunger?

Reality: There is no theoretical or practical reason why freedom, taken to mean civil liberties, should be incompatible with ending hunger. Surveying the globe, we see no correlation between hunger and civil liberties. However, one narrow definition of freedom (the right to unlimited accumulation of wealth-producing property and the right to use that property however one sees fit) is in fundamental conflict with ending hunger. By contrast, a definition of freedom more consistent with our nation's dominant founding vision holds that economic security for all is the guarantor of our liberty. Such an understanding of freedom is essential to ending hunger.


12 Myths About Hunger based on World Hunger: 12 Myths, 2nd Edition, by Frances Moore Lappé, Joseph Collins and Peter Rosset, with Luis Esparza (fully revised and updated, Grove/Atlantic and Food First Books, Oct. 1998)

Institute for Food and Development Policy Backgrounder

 

Eliezer apparently wrong about Higgs boson

9 duckduckMOO 17 July 2012 07:44PM

So someone told me that Eliezer Yudkowsky had predicted there would be no 5-sigma evidence of the Higgs boson, and that 6-sigma evidence had since been found. A quick search found the post referred to, and a slightly longer but not particularly thorough search did not find anything discussing it.

So:

http://lesswrong.com/lw/1dt/open_thread_november_2009/17xb (02 November 2009)

I'll go ahead and predict here that the Higgs boson will not be showing up. As best I can put the reason into words: I don't think the modern field of physics has its act sufficiently together to predict that a hitherto undetected quantum field is responsible for mass. They are welcome to prove me wrong.

(I'll also predict that the LHC will never actually run, but that prediction is (almost entirely) a joke, whereas the first prediction is not.)

Anyone challenging me to bet on the above is welcome to offer odds.

In the post below, rolfandreassen sets the condition of 5-sigma evidence before 2014 and offers a bet of $25. In the post below that, Eliezer accepts.
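For context, the 5-sigma convention is the one-sided tail probability of a normal distribution at five standard deviations, roughly a one-in-3.5-million chance of the observed signal arising from background fluctuation alone. A minimal check (standard statistics, not anything from the bet itself):

    from math import erfc, sqrt

    def sigma_to_pvalue(n_sigma):
        # One-sided tail probability of a standard normal at n_sigma.
        return 0.5 * erfc(n_sigma / sqrt(2))

    print(sigma_to_pvalue(5))  # ~2.9e-7, the particle-physics discovery threshold
    print(sigma_to_pvalue(6))  # ~1.0e-9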

Discuss.

Risks from AI and Charitable Giving

2 XiXiDu 13 March 2012 01:54PM

If you’re interested in being on the right side of disputes, you will refute your opponents' arguments. But if you're interested in producing truth, you will fix your opponents' arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse.

-- Black Belt Bayesian

This is an informal post meant as a reply to a post by user:utilitymonster, 'What is the best compact formalization of the argument for AI risk from fast takeoff?'

I hope to find the mental strength to put more effort into it in the future to improve it. But since nobody else seems to be willing to take a critical look at the overall topic, I feel that doing what I can is better than doing nothing.

Please review the categories 'Further Reading' and 'Notes and References'.


Abstract

In this post I just want to take a look at a few premises (P#) that need to be true simultaneously to make the SIAI a worthwhile charity from the point of view of someone trying to do as much good as possible by contributing money. I am going to show that the case of risks from AI is strongly conjunctive, that without a concrete and grounded understanding of AGI an abstract analysis of the issues is going to be very shaky, and that therefore SIAI is likely to be a bad choice as a charity. In other words, what speaks in favor of SIAI consists mainly of highly specific, conjunctive, non-evidence-backed speculations on possible bad outcomes.

Requirements for an Intelligence Explosion

P1 Fast, and therefore dangerous, recursive self-improvement is logically possible.

It took almost four hundred years to prove Fermat’s Last Theorem. The final proof is over a hundred pages long. Over a hundred pages! And we are not talking about something like an artificial general intelligence that can magically make itself smart enough to prove such theorems and many more that no human being would be capable of proving. Fermat’s Last Theorem simply states “no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two.”

Even artificial intelligence researchers admit that "there could be non-linear complexity constraints meaning that even theoretically optimal algorithms experience strongly diminishing intelligence returns for additional compute power." [1] We just don't know.

Other possible problems include the impossibility of a stable utility function and a reflective decision theory, the intractability of real world expected utility maximization or that expected utility maximizers stumble over Pascal's mugging, among other things [2].

For an AI to be capable of recursive self-improvement it also has to guarantee that its goals will be preserved when it improves itself. It is still questionable whether it is possible to conclusively prove that improvements to an agent's intelligence or decision procedures maximize expected utility. If this isn't possible, it won't be rational, or possible, to undergo explosive self-improvement.

P1.b The fast computation of a simple algorithm is sufficient to outsmart and overpower humanity.

Imagine a group of 100 world-renowned scientists and military strategists.

  • The group is analogous to the initial resources of an AI.
  • The knowledge that the group has is analogous to what an AI could come up with by simply "thinking" about it given its current resources.

Could such a group easily wipe out the Roman empire if beamed back in time?

  • The Roman empire is analogous to our society today.

Even if you gave all of them a machine gun, the Romans would quickly adapt and the people from the future would run out of ammunition.

  • Machine guns are analogous to the supercomputer it runs on.

Consider that it takes a whole technological civilization to produce a modern smartphone.

You can't just say "with more processing power you can do more different things"; that would be analogous to saying that "100 people" from today could just build more "machine guns". But they can't! They can't use all their knowledge and magic from the future to defeat the Roman empire.

A lot of assumptions have to turn out to be correct to make humans discover simple algorithms overnight that can then be improved to self-improve explosively.

You can also compare this to the idea of a Babylonian mathematician discovering modern science and physics given that he would be uploaded into a supercomputer (a possibility that is in and of itself already highly speculative). It assumes that he could brute-force conceptual revolutions.

Even if he was given a detailed explanation of how his mind works and the resources to understand it, self-improving to achieve superhuman intelligence assumes that throwing resources at the problem of intelligence will magically allow him to pull improved algorithms from solution space as if they were signposted.

But unknown unknowns are not signposted. It's rather like finding a needle in a haystack. Evolution is great at doing that and assuming that one could speed up evolution considerably is another assumption about technological feasibility and real-world resources.

That conceptual revolutions are just a matter of computational resources is pure speculation.

If one were to speed up the whole Babylonian world and accelerate cultural evolution, obviously one would arrive at some insights more quickly. But how much more quickly? How much do many insights depend on experiments, to yield empirical evidence, that can't be sped up considerably? And what is the return? Is the payoff proportional to the resources that are necessary?

If you were going to speed up a chimp brain a million times, would it quickly reach human-level intelligence? If not, why then would it be different for a human-level intelligence trying to reach transhuman intelligence? It seems like a nice idea when formulated in English, but would it work?

Being able to state that an AI could use some magic to take over the earth does not make it a serious possibility.

Magic has to be discovered, adapted and manufactured first. It doesn't just emerge out of nowhere from the computation of certain algorithms. It emerges from a society of agents with various different goals and heuristics like "Treating Rare Diseases in Cute Kittens". It is an evolutionary process that relies on massive amounts of real-world feedback and empirical experimentation. Assuming that all that can happen because some simple algorithm is being computed is like believing it will emerge 'out of nowhere'; it is magical thinking.

Unknown unknowns are not signposted. [3]

If people like Benoît B. Mandelbrot had never decided to research fractals, many modern movies wouldn't be possible, as they rely on fractal landscape algorithms. Yet at the time Benoît B. Mandelbrot conducted his research, it was not foreseeable that his work would have any real-world applications.

Important discoveries are made because many routes with low or no expected utility are explored at the same time [4]. And to do so efficiently it takes random mutation, a whole society of minds, a lot of feedback and empirical experimentation.

"Treating rare diseases in cute kittens" might or might not provide genuine insights and open up new avenues for further research. As long as you don't try it you won't know.

The idea that a rigid consequentialist with simple values can think up insights and conceptual revolutions simply because it is instrumentally useful to do so is implausible.

Complex values are the cornerstone of diversity, which in turn enables creativity and drives the exploration of various conflicting routes. A singleton with a stable utility-function lacks the feedback provided by a society of minds and its cultural evolution.

You need to have various different agents with different utility-functions around to get the necessary diversity that can give rise to enough selection pressure. A "singleton" won't be able to predict the actions of new and improved versions of itself by just running sandboxed simulations. Not just because of logical uncertainty but also because it is computationally intractable to predict the real-world payoff of changes to its decision procedures.

You need complex values to give rise to the necessary drives to function in a complex world. You can't just tell an AI to protect itself. What would that even mean? What changes are illegitimate? What constitutes "self"? Those are all unsolved problems that are just assumed to be solvable when talking about risks from AI.

An AI with simple values will simply lack the creativity, due to a lack of drives, to pursue the huge spectrum of research that a society of humans does pursue. It will be able to solve some well-defined narrow problems, but it will be unable to make use of the broad range of synergetic effects of cultural evolution. Cultural evolution is a result of the interaction of a wide range of utility-functions.

Yet even if we assume that there is one complete theory of general intelligence, once discovered, one just has to throw more resources at it. It might be able to incorporate all human knowledge, adapt it and find new patterns. But would it really be vastly superior to human society and their expert systems?

Can intelligence itself be improved apart from solving well-defined problems and making more accurate predictions on well-defined classes of problems? The discovery of unknown unknowns does not seem to be subject to any heuristic other than natural selection. Without well-defined goals, terms like "optimization" have no meaning.

P2 Fast, and therefore dangerous, recursive self-improvement is physically possible.

Even if it could be proven that explosive recursive self-improvement is logically possible, e.g. that there are no complexity constraints, the question remains if it is physically possible.

Our best theories about intelligence are highly abstract and their relation to real world human-level general intelligence is often wildly speculative [5][6].

P3 Fast, and therefore dangerous, recursive self-improvement is economically feasible.

To exemplify the problem, take the science-fictional idea of using antimatter as an explosive for weapons. It is physically possible to produce antimatter and use it for large-scale destruction. An equivalent of the Hiroshima atomic bomb would take only about half a gram of antimatter, but it would take roughly 2 billion years to produce that amount [7].
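A back-of-the-envelope check of the half-gram figure (standard physics, not taken from the source): annihilating half a gram of antimatter with half a gram of ordinary matter converts one gram of mass entirely to energy.

    # E = m c^2 for one gram of annihilated mass.
    c = 2.998e8                              # speed of light, m/s
    m = 1.0e-3                               # 0.5 g antimatter + 0.5 g matter, in kg
    energy_joules = m * c**2                 # ~9.0e13 J
    kilotons_tnt = energy_joules / 4.184e12  # 1 kiloton TNT = 4.184e12 J
    print(f"{kilotons_tnt:.0f} kt")          # ~21 kt; Hiroshima was roughly 15 kt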

We simply don’t know if intelligence is instrumental or quickly hits diminishing returns [8].

P3.b AGI is able to create (or acquire) resources, empowering technologies or civilizational support [9].

We are already at a point where we have to build billion dollar chip manufacturing facilities to run our mobile phones. We need to build huge particle accelerators to obtain new insights into the nature of reality.

An AI would either have to rely on the help of a whole technological civilization or be in control of advanced nanotech assemblers.

And if an AI were to acquire the necessary resources on its own, its plan for world domination would have to go unnoticed. This would require the workings of the AI to be opaque to its creators yet comprehensible to itself.

But an AI capable of efficient recursive self improvement must be able to

  1. comprehend its own workings
  2. predict how improvements, respectively improved versions of itself, are going to act to ensure that its values are preserved

So if the AI can do that, why wouldn't humans be able to use the same algorithms to predict what the initial AI is going to do? And if the AI can't do that, how is it going to maximize expected utility if it is unable to predict what it is going to do?

Any AI capable of efficient self-modification must be able to grasp its own workings and make predictions about improvements to various algorithms and its overall decision procedure. If an AI can do that, why would the humans who built it be unable to notice any malicious intentions? Why would the humans who created it not be able to use the same algorithms that the AI uses to predict what it will do? If humans are unable to predict what the AI will do, how is the AI able to predict what improved versions of itself will do?

And even if an AI were somehow able to acquire large amounts of money, that money would not be easy to use. You can't "just" build huge companies with fake identities, or a straw man, to create revolutionary technologies easily. Running companies with real people takes a lot of real-world knowledge, interactions and feedback. But most importantly, it takes a lot of time. An AI could not simply create a new Intel or Apple over a few years without its creators noticing anything.

The goals of an AI will be under scrutiny at any time. It seems very implausible that scientists, a company or the military are going to create an AI and then just let it run without bothering about its plans. An artificial agent is not a black box, like humans are, where one is only able to guess its real intentions.

A plan for world domination seems like something that can't be concealed from its creators. Lying is no option if your algorithms are open to inspection.

P4 Dangerous recursive self-improvement is the default outcome of the creation of artificial general intelligence.

Complex goals need complex optimization parameters (the design specifications of the subject of the optimization process against which it will measure its success of self-improvement).

Even the creation of paperclips is a much more complex goal than telling an AI to compute as many decimal digits of Pi as possible.

For an AGI that was designed to design paperclips to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail, to define time, space and energy bounds as part of its optimization parameters. Therefore, given the large number of restrictions that are inevitably part of any advanced general intelligence (AGI), the nonhazardous subset of all possible outcomes might be much larger than the subset where the AGI works perfectly yet fails to halt before it could wreak havoc.

And even given a rational utility maximizer, it is possible to maximize paperclips in a lot of different ways. How it does so is fundamentally dependent on its utility-function and how precisely it was defined.

If there are no constraints in the form of design and goal parameters then it can maximize paperclips in all sorts of ways that don't demand recursive self-improvement.

"Utility" does only become well-defined if we precisely define what it means to maximize it. Just maximizing paperclips doesn't define how quickly and how economically it is supposed to happen.

The problem is that "utility" has to be defined. To maximize expected utility does not imply certain actions, efficiency and economic behavior, or the drive to protect yourself. You can also rationally maximize paperclips without protecting yourself if it is not part of your goal parameters.

You can also assign utility to maximizing paperclips for as long as nothing turns you off, without caring about being turned off. If an AI is not explicitly programmed to care about being turned off, then it won't.

Without well-defined goals in the form of a precise utility-function, it might be impossible to maximize expected "utility". Concepts like "efficient", "economic" or "self-protection" all have a meaning that is inseparable from an agent's terminal goals. If you just tell it to maximize paperclips, then this can be realized in an infinite number of ways that would all be rational given imprecise design and goal parameters. Undergoing explosive recursive self-improvement, taking over the universe and filling it with paperclips, is just one outcome. Why would an arbitrary mind pulled from mind-design space care to do that? Why not just wait for paperclips to arise due to random fluctuations out of a state of chaos? That wouldn't be irrational. To have an AI take over the universe as fast as possible you would have to explicitly design it to do so.

But for the sake of a thought experiment assume that the default case was recursive self-improvement. Now imagine that a company like Apple wanted to build an AI that could answer every question (an Oracle).

If Apple was going to build an Oracle it would anticipate that other people would also want to ask it questions. Therefore it can't just waste all resources on looking for an inconsistency arising from the Peano axioms when asked to solve 1+1. It would not devote additional resources on answering those questions that are already known to be correct with a high probability. It wouldn't be economically useful to take over the universe to answer simple questions.

Nor would it be rational to look for an inconsistency arising from the Peano axioms while solving 1+1. To answer questions an Oracle needs a good amount of general intelligence. And concluding that asking it to solve 1+1 implies looking for an inconsistency arising from the Peano axioms does not seem reasonable. It also does not seem reasonable to suspect that humans desire an answer to their questions to approach infinite certainty. Why would someone build such an Oracle in the first place?

A reasonable Oracle would quickly yield good solutions by trying to find answers within a reasonable time that are, with high probability, just 2–3% away from the optimal solution. I don't think anyone would build an answering machine that throws the whole universe at the first sub-problem it encounters.
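A minimal sketch of such a "reasonable Oracle", with every name hypothetical: an anytime search that returns the best answer found within a fixed time budget rather than throwing unbounded resources at the first sub-problem.

    import random
    import time

    def anytime_answer(score, candidates, budget_seconds=1.0):
        """Return the best candidate found before the deadline. `score` and
        `candidates` stand in for whatever question the Oracle is asked."""
        deadline = time.monotonic() + budget_seconds
        best, best_score = None, float("-inf")
        while time.monotonic() < deadline:
            candidate = random.choice(candidates)
            value = score(candidate)
            if value > best_score:
                best, best_score = candidate, value
        return best, best_score

    # Toy usage: settle for a near-optimal answer within 0.1 seconds.
    best, value = anytime_answer(lambda x: -abs(x - 42), list(range(1000)), 0.1)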

P5 The human development of artificial general intelligence will take place quickly.

What evidence do we have that there is some principle that, once discovered, allows us to grow superhuman intelligence overnight?

If the development of AGI takes place slowly, as a gradual and controllable development, we might be able to learn from small-scale mistakes, or have enough time to develop friendly AI, while still having to face other existential risks.

This might, for example, be the case if intelligence cannot be captured by a discrete algorithm, or is modular, and therefore never allows us to reach a point where we can suddenly build the smartest thing ever, one that just extends itself indefinitely.

Therefore the probability of an AI undergoing explosive recursive self-improvement, P(FOOM), is the probability of the conjunction of its premises:

P(FOOM) = P(P1∧P2∧P3∧P4∧P5)

Of course, there are many more premises that need to be true in order to enable an AI to go FOOM, e.g. that each level of intelligence can effectively handle its own complexity, or that most AGI designs can somehow self-modify their way up to massive superhuman intelligence. But I believe that the above points are enough to show that the case for a hard takeoff is not disjunctive, but rather strongly conjunctive.
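To make the conjunctive structure concrete, here is a minimal numeric sketch; the premise probabilities below are made up for illustration (and independence is assumed), not estimates from this post.

    # Illustrative only: a conjunction shrinks fast even with generous,
    # made-up premise probabilities (independence assumed).
    premises = {"P1": 0.8, "P2": 0.8, "P3": 0.7, "P4": 0.6, "P5": 0.6}

    p_foom = 1.0
    for name, p in premises.items():
        p_foom *= p

    print(f"P(FOOM) = {p_foom:.3f}")  # ~0.161, though every premise is >= 0.6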

Requirements for SIAI to constitute an optimal charity

In this section I will assume the truth of all premises in the previous section.

P6 SIAI can solve friendly AI.

Say you believe that unfriendly AI will wipe us out with a probability of 60%, and that there is another existential risk that will wipe us out with a probability of 10% even if unfriendly AI turns out to be no risk, or in all possible worlds where it comes later. Both risks have the same utility x (if we don't assume that an unfriendly AI could also wipe out aliens etc.). Thus .6x > .1x. But if the ratio of the probability of solving friendly AI (= A) to the probability of solving the second risk (= B) is A ≤ (1/6)B, then the expected utility of mitigating unfriendly AI is at best equal to that of mitigating the other existential risk, because .6Ax ≤ .6(1/6)Bx = .1Bx.

Consider that one order of magnitude more utility could easily be outweighed or trumped by an underestimation of the complexity of friendly AI.
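A worked version of the comparison above, with made-up values for A and B chosen to satisfy A ≤ (1/6)B:

    # Illustrative numbers only.
    x = 1.0        # utility at stake, assumed equal for both risks
    p_ufai = 0.6   # probability that unfriendly AI wipes us out
    p_other = 0.1  # probability of the other existential risk
    A = 0.02       # made-up chance of solving friendly AI
    B = 0.15       # made-up chance of solving the other risk

    eu_fai = p_ufai * A * x     # 0.012
    eu_other = p_other * B * x  # 0.015: the six-times-scarier risk still loses
    print(eu_fai, eu_other)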

So how hard is it to solve friendly AI?

Take, for example, Pascal's mugging: if you can't solve it, then you need to implement a hack that is largely based on human intuition. Therefore, in order to estimate the possibility of solving friendly AI, one needs to account for the difficulty of solving all of its sub-problems.

Consider that we don't even know "how one would start to research the problem of getting a hypothetical AGI to recognize humans as distinguished beings." [10]

P7 SIAI does not increase risks from AI.

By trying to solve friendly AI, SIAI has to think about a lot of issues related to AI in general and might have to solve problems that will make it easier to create artificial general intelligence.

It is far from clear that SIAI is able to protect its findings against intrusion, betrayal, or industrial espionage.

P8 SIAI does not increase negative utility.

There are several possibilities by which SIAI could actually cause a direct increase in negative utility.

1) Friendly AI is incredibly hard and complex. Complex systems can fail in complex ways. Agents that are an effect of evolution have complex values. To satisfy complex values you need to meet complex circumstances. Therefore any attempt at friendly AI, which is incredibly complex, is likely to fail in unforeseeable ways. A half-baked, not quite friendly, AI might create a living hell for the rest of time, increasing negative utility dramatically [11].

2) Humans are not provably friendly. Given the power to shape the universe, the SIAI might fail to act altruistically and deliberately implement an AI with selfish motives or horrible strategies [12].

P9 It makes sense to support SIAI at this time [13].

Therefore the probability of SIAI being a worthwhile charity, P(CHARITY), is the probability of the conjunction of its premises:

P(CHARITY) = P(P6∧P7∧P8∧P9)

As before, there are many more premises that need to be true in order for SIAI to be the best choice for someone who wants to maximize doing good by contributing money to a charity.

Further Reading

The following posts and resources elaborate on many of the above points and hint at a lot of additional problems.

Notes and References

[1] Q&A with Shane Legg on risks from AI

[2] http://lukeprog.com/SaveTheWorld.html

[3] "In many ways, this is a book about hindsight. Pythagoras could not have imagined the uses to which his equation would be put (if, indeed, he ever came up with the equation himself in the first place). The same applies to almost all of the equations in this book. They were studied/discovered/developed by mathematicians and mathematical physicists who were investigating subjects that fascinated them deeply, not because they imagined that two hundred years later the work would lead to electric light bulbs or GPS or the internet, but rather because they were genuinely curious."

17 Equations that changed the world

[4] Here is my list of "really stupid, frivolous academic pursuits" that have led to major scientific breakthroughs.

  • Studying monkey social behaviors and eating habits led to insights into HIV (Radiolab: Patient Zero)
  • Research into how algae move toward light paved the way for optogenetics: using light to control brain cells (Nature 2010 Method of the Year).
  • Black hole research gave us WiFi (ICRAR award)
  • Optometry informs architecture and saved lives on 9/11 (APA Monitor)
  • Certain groups HATE SETI, but SETI's development of cloud-computing service SETI@HOME paved the way for citizen science and recent breakthroughs in protein folding (Popular Science)
  • Astronomers provide insights into medical imaging (TEDxBoston: Michelle Borkin)
  • Basic physics experiments and the Fibonacci sequence help us understand plant growth and neuron development

http://blog.ketyov.com/2012/02/basic-science-is-about-creating.html

[5] "AIXI is often quoted as a proof of concept that it is possible for a simple algorithm to improve itself to such an extent that it could in principle reach superhuman intelligence. AIXI proves that there is a general theory of intelligence. But there is a minor problem, AIXI is as far from real world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn’t get you anywhere in terms of real-world general intelligence. Just as you won’t be able to upload yourself to a non-biological substrate because you showed that in some abstract sense you can simulate every physical process."

Alexander Kruel, Why an Intelligence Explosion might be a Low-Priority Global Risk

[6] "…please bear in mind that the relation of Solomonoff induction and “Universal AI” to real-world general intelligence of any kind is also rather wildly speculative… This stuff is beautiful math, but does it really have anything to do with real-world intelligence? These theories have little to say about human intelligence, and they’re not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on “scaling them down” to make them realistic; so far this only works for very simple toy problems, and it’s hard to see how to extend the approach broadly to yield anything near human-level AGI). And it’s not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts."

Ben Goertzel, 'Are Prediction and Reward Relevant to Superintelligences?'

[7] http://public.web.cern.ch/public/en/spotlight/SpotlightAandD-en.html

[8] "If any increase in intelligence is vastly outweighed by its computational cost and the expenditure of time needed to discover it then it might not be instrumental for a perfectly rational agent (such as an artificial general intelligence), as imagined by game theorists, to increase its intelligence as opposed to using its existing intelligence to pursue its terminal goals directly or to invest its given resources to acquire other means of self-improvement, e.g. more efficient sensors."

Alexander Kruel, Why an Intelligence Explosion might be a Low-Priority Global Risk

[9] Section 'Necessary resources for an intelligence explosion', Why an Intelligence Explosion might be a Low-Priority Global Risk, Alexander Kruel

[10] http://lesswrong.com/lw/3aa/friendly_ai_research_and_taskification/

[11] http://lesswrong.com/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/5ylx

[12] http://lesswrong.com/lw/8c3/qa_with_new_executive_director_of_singularity/5y77

[13] "I think that if you're aiming to develop knowledge that won't be useful until very very far in the future, you're probably wasting your time, if for no other reason than this: by the time your knowledge is relevant, someone will probably have developed a tool (such as a narrow AI) so much more efficient in generating this knowledge that it renders your work moot."

Holden Karnofsky in a conversation with Jaan Tallinn

What are YOU doing against risks from AI?

-5 XiXiDu 17 March 2012 11:56AM

This is directed at those who agree with SIAI but are not doing everything they can to support their mission.

Why are you not doing more?

Comments where people proclaim that they have contributed money to SIAI are upvoted 50 times and more. 180 people voted for 'unfriendly AI' to be the most fearsome risk.

If you are one of those people and are not fully committed to the cause, I am asking you, why are you not doing more?

Limits on self-optimisation

6 RolfAndreassen 20 January 2012 09:58PM

Disclaimer: I am a physicist, and in the field of computer science my scholarship is weak. It may be that what I suggest here is well known, or perhaps just wrong.

Abstract: Building a Turing machine capable of saying whether two arbitrary Turing machines have the same output for all inputs would be equivalent to solving the Halting Problem. To optimise a function it is necessary to prove that the optimised version always has the same output as the unoptimised version, which is impossible in general for Turing machines. However, real computers have finite input spaces.

 

Context: FOOM, Friendliness, optimisation processes.

Consider a computer program which modifies itself in an attempt to optimise for speed. A modification to some algorithm is *proper* if it results, for all inputs, in the same output; it is an optimisation if it results in a shorter running time on average for typical inputs, and a *strict* optimisation if it results in a shorter running time for all inputs.

A Friendly AI, optimising itself, must ensure that it remains Friendly after the modification; it follows that it can only make proper modifications. (When calculating a CEV it may make improper modifications, since the final answer for "How do we deal with X" may change in the course of extrapolating; but for plain optimisations the answer cannot change.)

For simplicity we may consider that the output of a function can be expressed as a single bit; the extension to many bits is obvious. However, in addition to '0' and '1' we must consider that the response to some input can be "does not terminate". The task is to prove that two functions, which we may consider as Turing machines, have the same output for all inputs.

Now, suppose you have a Turing machine that takes as input two arbitrary Turing machines and their respective tapes, and outputs "1" if the two input machines have the same output, and "0" otherwise. Then, by having one of the inputs be a Turing machine which is known not to terminate - one that executes an infinite loop - you can solve the Halting Problem. Therefore, such a machine cannot exist: You cannot build a Turing machine to prove, for arbitrary input machines, that they have the same output.
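A sketch of that reduction in Python-flavoured pseudocode; the function names are mine, and `same_output` is the hypothetical equivalence oracle whose impossibility is the point.

    def same_output(m1, m2):
        """Hypothetical oracle: True iff m1 and m2 produce the same output
        (or both fail to terminate) on every input. The argument above
        shows that no such oracle can exist."""
        raise NotImplementedError

    def loop_forever(_input):
        while True:  # known never to terminate, on any input
            pass

    def halts(machine, tape):
        """Decides the Halting Problem given the oracle, a contradiction."""
        def wrapper(_input):
            machine(tape)  # runs forever iff `machine` never halts on `tape`
            return 0
        # wrapper is equivalent to loop_forever exactly when machine(tape)
        # never halts, so the oracle would decide halting.
        return not same_output(wrapper, loop_forever)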

It seems to follow that you cannot build a fully general proper-optimisation detector.

However, "arbitrary Turing machines" is a strong claim, in fact stronger than we require. No physically realisable computer is a true Turing machine, because it cannot have infinite storage space, as the definition requires. The problem is actually the slightly easier (that is, not *provably* impossible) one of making a proper-optimisation detector for the space of possible inputs to an actual computer, which is finite though very large. In practice we may limit the input space still further by considering, say, optimisations to functions whose input is two 64-bit numbers, or something. Even so, the brute-force solution of running the functions on all possible inputs and comparing is already rather impractical.

Yahweh and the Methods of Rationality

-20 DanielLC 28 September 2011 03:06AM

To Speak Veripoop

-18 thre3e 18 July 2011 02:50AM

From the sociological point of view I cannot imagine a world without compulsory, god-declared laws for basic behaviors, such as sex-related, murder-related, and god-worship-related behaviors. My outlook comes from my certainty that some minds are susceptible to the seeking of such compulsions, and my certainty that some other minds are susceptible to a need to supply such compulsions, sometimes as the authority themselves, and sometimes as representatives of a higher authority. The latter group always seems to produce some very successful iterations, from Moses to Jim Jones... As it is said in commerce, if there is demand, there will always be folks who will make it a life quest to supply that demand.

If what I'm saying has a basis in fact, and if the atheistic challenge is to disenfranchise and dis-empower organized religion and other publicists of drivel, then how can mere logical, rational rhetoric be looked to in order to bring about this goal? It seems evident to me that such rhetoric does not have the needed determinants to effect the goal. Rationality cannot seem to supply the needed compulsions. Thus, rationality goes unheeded.

I have an idea for a possible solution. What if we successfully substituted a new word for truth? What if it became common to say VERIPOOP in place of VERITAS? From that small acorn might grow a reexamination of the human faculty for knowing, and claiming, truth. It should be obvious to all that we humans do not have a truth-knowing faculty. We can only know human-level truth, which is always temporary and finitely circumscribed. Grass was known to be green for a long time in history, but, as we all know, green is not a property of grass any more. Nature supplies color only to those who are not color blind. Greenness is a human thing, not a grass thing. Reflecting light at certain wavelengths is intrinsic to grass, but color is not. We humans can know only truth that is bound to change in time, but "real" truth cannot change. It is already truth. Where else could it go?

Yes, there are mathematical proofs that would present themselves as truth forever. But it's easy to overlook the fact that all scientific and mathematical pronouncements are abstracts of reality. They may be correct within the confines of the postulates that undergird them, but reality is greater than any finite number of postulates. Further, postulates are arbitrarily chosen. Parallel lines may never meet, or always meet, or meet just under specified conditions. Therefore, that which is correct is not necessarily truth. This is a fact about the human knowledge horizon, the human condition. The horizon, wherever one draws it, however far we might advance in knowledge, is inexorably there. Yet the wild-eyed compulsion addicts are willing to die for what? Why, it's their "truth," of course. So, I say that the very word needs to be expunged, because, amazingly, every time it is uttered, it presents a lie. It claims that someone has corralled truth.

VERIPOOP would put us in our place. A new appreciation may develop of the human knowability horizon. How can one be an extremist when one knows that what one proclaims with vehemence is VERIPOOP? It seems to have a calming effect. Scientific veripoops are wonderful. The fact that presently the scientific method doesn't allow truth to be considered truth forever, as it did when science was in the hands of the compulsive knowers of Europe (e.g., the Galileo problem), is also wonderful. But there is no other word available currently. Science must call its temporary findings truth, especially on true-or-false tests. Yet the facts show that they are a step down from truth. They are VERIPOOP!

 

Discussion: Counterintuitive ways of teaching knowledge

-6 Raw_Power 08 July 2011 09:02PM

If Miss Frizzle could do it, why couldn't we? Do we really have to be rational all the time in order to teach rationality? Breaking the rules of reality within the realm of a work of fiction and making the protagonists (or the audience, if it's a videogame) figure the new rules out for themselves... Actually, now that I think of it, videogamers are very used to adapting themselves to entirely new sets of physics on a weekly basis... but no-one has ever made them stop and think about it for a while, AFAIK.

I think *Methods of Rationality* does this job pretty well, but it'd be nice if there were more fiction in this style, and not just *fan*fiction at that. Alicorn's *Luminosity* also takes an interesting route: it applies rationality to interpersonal relationships, character exploration, group dynamics, and *applied* powers, and a lot of this it does better than *Methods* (which relies quite a lot on archetypes, to the detriment of realism but also for the great benefit of glorious awesomeness, and it's kind of a holdover from the source material). But *Luminosity* falls rather short of exploring the deeper theoretical implications of such a world.

Note how *none* of these books are for kids. Child psychology is noticeably different from that of a late teen or an adult. There are some concepts they can't even *grasp*. A series of works that would teach them the key rationalist virtues and some rational ways of looking at their environment and improving their lives would be great. I'm not talking about writing books intended for geeky kids (awesome though such a thing may be), but about teaching rationality in a way that'd be appealing to *all* kids.

In that sense, *The Magic School Bus* taught us a lot about valuing curiosity, not taking the first possible explanation, and generally having fun discovering the laws of reality... in incredibly unrealistic and science-breaking ways (which were dutifully pointed out in a special section after each episode, in which they generally managed both to make us understand that there was more to the stuff we saw than what they showed us, and that sometimes it was okay to take an Artistic License to get the point across... something people like Sheldon Cooper seem chronically unable to grasp, and I'm told there are people who share those opinions in Real Life...). *My Little Pony: Friendship Is Magic*, on the other hand, taught a lot about being rational in facing daily troubles, especially regarding friendship... but, well, here again, the situation is rather mutated by the fact that those are ponies living in a pony world with strange pony rules...

This might actually help carry the point across *better*. By making the stories take place in a fantastic setting, we avoid the kids superimposing their prejudices, preconceptions and heuristics onto the material presented: instead, their minds become more open to new possibilities, and this is a wonderful opening to plant some wonderful Aesops...

... Wait, is this an instance of using the Dark Arts to teach the Art then?

 

I'm not suggesting we be emotionally or creatively repressed, that has nothing to do with being rational. I just wonder how exactly one can allow themselves artistic license in a way that allows people to have fun learning stuff without having the fun bits detracting from the general message.

 

Ah, also, here is one example on how to do it wrong, from My Little Pony of all places:

In *My Little Pony: Friendship Is Magic*, research magician Twilight Sparkle disregards repeated observational evidence of Pinkie Pie's "Pinkie Sense" because it's not Sufficiently Analyzed Magic. Then, under the influence of severe repeated head trauma and a possible stress-induced brain aneurysm, she concludes that it "just makes sense," and that you just have to choose to believe in things you don't understand. In defense of the show, after the inevitable Internet Backlash, the creator of the show, Lauren Faust, apologized, saying that that wasn't meant to be the moral to take away from the episode.

Admittedly, I've seldom seen a Curiosity Stopper better than an Argumentum Ad Baculum where the proverbial Baculum is wielded by reality itself, but that's not addressed as Twilight's motive to stop worrying and love the Pie.

 

Future of Humanity?

-17 RickJS 24 May 2011 09:46PM

I first attempted to post this in 2009, but bounced off the karma wall.  Since then, MY forgetfulness and procrastination have been its nemesis.

I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".

Remember the Swamp!

http://en.wiktionary.org/wiki/when_you're_up_to_your_neck_in_alligators,_it's_easy_to_forget_that_the_initial_objective_was_to_drain_the_swamp

I looked over the tag cloud and didn't see:

  • Existential Risk
  • War
  • Aggression
  • Competitiveness
  • Territorialism
  • Nuclear arsenals

continue reading »
