Comment author: username2 06 October 2016 09:07:26PM 1 point [-]

Google has an AI safety protocol.

Citation?

Comment author: turchin 06 October 2016 11:54:47PM 1 point [-]
Comment author: scarcegreengrass 06 October 2016 04:15:21PM *  -1 points [-]

I would also contemplate the scenario that the human species might turn out to be less impressive than it currently appears, and is actually a fairly typical example of a successful Earth species. Most achievements that distinguish humans from, e.g., plankton lie in the future (e.g. space industry), not the past or present.

This might sound strange. Arguments in favor of this perspective:

• Homo sapiens is not the greatest species in terms of population or total biomass.

• Homo sapiens is not the only species to make tools, use agriculture, build buildings, or adapt to a variety of terrestrial habitats.

• Homo sapiens is not the first species to have a catastrophic impact on the atmosphere.

Arguments against this perspective:

• The human economy is currently doubling in scale every couple decades.

• No species (probably) ever reached the edge of the atmosphere before Homo sapiens.

(To clarify, I think this question is far from settled. But I think the idea that Homo sapiens will be smaller-impact than expected is more likely than the scenario that historical gods are representations of unknown prosperous civilizations.)

Comment author: turchin 06 October 2016 06:16:37PM 1 point [-]

If we look at humans as a typical species, we could take the typical estimate of species life expectancy, which is several million years, and use it as the human life expectancy. That is not bad.

But humans are definitely at a special point in their history: they could soon create a competitor (posthumans or AI), and that doesn't look good. Competition is one of the main ways species go extinct.

Comment author: Brillyant 06 October 2016 02:31:16PM 0 points [-]

While it is known that AI could be catastrophic, the only organisation (MIRI) doing the most serious research on its prevention is underfunded. Providing funding to them could dramatically change the probability of human survival, and we could estimate that 1 USD donated to them will save 10 human lives.

Is any of this true? "Most serious"? "Dramatically change probability of human survival"? 10 lives per $1?

Comment author: turchin 06 October 2016 06:12:16PM 0 points [-]

I just provided an example of a possible pitch, and I think that some people at MIRI think this way. I wanted to show that the pitch must contain new information and be actionable.

Comment author: turchin 05 October 2016 08:28:15PM *  2 points [-]

My thoughts:

  1. Google has (arguably is) the biggest computer program: about 3 billion lines of code.

  2. Google has the world's biggest database, including YouTube, 23andMe, Gmail, Google Books, and much of the internet's content.

  3. Google runs the world's biggest computer, comprising something like 1 per cent of total world computing power.

  4. Google recently gave the most impressive AI demonstration: winning at Go.

  5. Google is clearly interested in creating AI.

  6. Google has an AI safety protocol.

  7. Google has the money to buy the needed parts, including people.

So it looks like Google is in a winning position. Who might be its main competitors? Military AIs at the NSA. Other large companies.

Comment author: skeptical_lurker 05 October 2016 01:00:40PM 0 points [-]

Is that the canon explanation? I thought Skynet was acting out of self-preservation.

Comment author: turchin 05 October 2016 04:01:50PM *  0 points [-]

It is not exactly the canon explanation, but (the following is my speculation, which could be used in discussions about AI values if the Terminator is mentioned) the decision to preserve itself must follow from its main task: winning a nuclear war.

Winning a nuclear war includes as a subgoal a very high-priority one: ensuring the survival of the command center. Basically, a country that is able to preserve its command center is winning the nuclear war. So it seemed rational to Skynet's programmers to make preserving Skynet a main goal, as it is the same as winning the nuclear war (but only once a nuclear war has started).

But Skynet concluded that in peacetime the main risk to its goal of command-center survival is people, and it decided to kill them all. So it worked as a paperclip maximizer for the goal of command-center preservation.

It also probably started self-improvement only after it had killed most people, as it was already a powerful system. So it escaped the main chicken-and-egg problem of Seed AI: what happens first, self-improvement or the malicious decision to kill people?

Comment author: ChristianKl 04 October 2016 09:28:00PM 2 points [-]

I think that most people have already heard that AI could be a catastrophic risk, and they already have their opinions about it.

In our circle that might be true, but many people don't have an opinion that goes beyond the Terminator.

Comment author: turchin 04 October 2016 11:04:37PM 0 points [-]

Yes. So we have to utilise this knowledge. We could say something like: the Terminator appeared because its progenitor, the Skynet computer, received a command to protect the US, and concluded that the best way to do so was to prevent humans from switching it off, so it decided to exterminate humans. So the Terminator appeared because of the unsolved problem of value alignment.

Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.

Worse, the argument can then be made that this idea that an AI will interpret goals so literally without modelling a human mind constitutes an "autistic AI", and that only autistic people would assume that AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW-style ideas as autistic (via David Pearce):

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies with no subjective experience, but that also does seem implied.

Comment author: turchin 04 October 2016 08:47:54PM 0 points [-]

I think that most people have already heard that AI could be a catastrophic risk, and they already have their opinions about it. Maybe their opinions are wrong.

What is the goal of such an elevator pitch?

I think that the message should be the following: While it is known that AI could be catastrophic, the only organisation (MIRI) doing the most serious research on its prevention is underfunded. Providing funding to them could dramatically change the probability of human survival, and we could estimate that 1 USD donated to them will save 10 human lives.

Comment author: CellBioGuy 01 October 2016 11:32:37PM *  6 points [-]

My favorite crazy unlikely idea about that is that the Paleocene-Eocene Thermal Maximum 50 megayears ago - a 200k year pulse of high CO2 levels and temperatures in which the CO2 was added over a timescale of less than 10k years (potentially much less) and had an isotopic composition consistent with having been liberated from biogenic deposits - could theoretically be explained by all the coal and oil deposits of Antarctica being burned followed by some positive feedbacks kicking in.

(Most of the land of Antarctica has never been investigated geologically in any detail, due to being under kilometers of ice) (And Antarctica at that time was completely unglaciated and relatively temperate, despite already being where it is now) (And subsequent glaciation has scraped most of the surface clean of anything that was on it at the time)

We have an advantage in that we evolved in the tropics - you can take a tropical animal and keep it warm near the poles by wrapping it in clothes. It's much more difficult to take a cold-adapted polar animal and keep it alive in the tropics...

Comment author: turchin 02 October 2016 12:04:33AM 3 points [-]

Trent's article even mentions a possible species of dinosaurs that might have been capable of an intelligence explosion: http://www.strangehorizons.com/2009/20090713/trent-a.shtml

Does it mean that we could find really interesting (and dangerous) things during excavations in Antarctica?

Comment author: CellBioGuy 01 October 2016 11:41:19PM *  6 points [-]

Worth noting:

Possibly indicating that the end of the last glaciation, rather than new invention, drove the more or less simultaneous large-scale agricultural transitions that occurred all across the Old and New World ~10k years ago.

Comment author: turchin 01 October 2016 11:59:17PM 1 point [-]

Interesting.

Comment author: SquirrelInHell 01 October 2016 08:22:06PM 0 points [-]

Good job with the main idea. However, your speculation about past tech civilizations on Earth, artifacts preserved on the Moon, etc. seems only half lucid.

Comment author: turchin 01 October 2016 10:03:28PM *  2 points [-]

I think there are 3 ways to present these ideas in a more rigorous form.

  1. Use Gott's formula to estimate the probability distribution P(N) that the total number of civilizations on Earth will be N, based on the fact that our rank among all known civilizations is n. (In our case n=1, so N ≥ 2 has 50 per cent probability, N ≥ 4 has 25 per cent probability, etc.) See the same calculation for the original Doomsday argument: https://en.wikipedia.org/wiki/Doomsday_argument#Gott.27s_formulation:_.27vague_prior.27_total_population

  2. Use the fact that we don't know anything about past civilizations to put constraints on their informational traces T. T is a function of the civilization's technological level L and the time distance t to it, and T(L,t) must be below some level of noticeability. The function T is unknown to us, but it could be estimated as L/t, which means that a high-tech and recent civilization would be more noticeable. Any risks from previous civilizations will also decay with time. So we could start to build a mathematical model from here.

  3. We could look at the existing scientific literature. A lot of it uses observational data to try to explain the original Fermi paradox, but surprisingly this is not true for past civilizations. There is no analogue of a "SETI search" for rare-isotope changes that could be the signature of a civilization 100 million years ago here on Earth, or at least I don't know of such literature. I also don't know how often theoretically inconvenient results go unreported when someone randomly finds something that seems strange. There is an attempt by the late Russian author Kalandadze to collect evidence that some other hominids used fire: http://www.evolbiol.ru/document/915 The work is controversial, and I don't have the special knowledge to assess it.
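The Gott-style calculation in point 1 can be sketched in a few lines. This is a minimal illustration, assuming the percentages refer to the cumulative probability P(N ≥ x) = n/x of Gott's vague prior; the helper name `prob_total_at_least` is my own, not from any source:

```python
# Gott's "vague prior" rule of thumb, as used in the Doomsday argument:
# given that our civilization has rank n among all Earth civilizations,
# the probability that the total number N is at least x is n/x (for x >= n).

def prob_total_at_least(x, n=1):
    """P(N >= x) under Gott's vague prior, given observed rank n."""
    if x <= n:
        return 1.0  # we already know at least n civilizations exist
    return n / x

# With n = 1 (we are the first known civilization on Earth):
print(prob_total_at_least(2))  # 0.5  -> 50% chance of at least one more
print(prob_total_at_least(4))  # 0.25 -> 25% chance of at least three more
```

Note that this gives a tail probability, not a point probability: the 50% and 25% figures are for "N is at least 2" and "N is at least 4", which is why they need not sum to one.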
