TRIZ-Ingenieur

Comments

Why not check out the AGI capabilities of AlphaGo? It might be possible to train it on chess without architectural modifications. Each chessboard square could be modelled as a 2x2 block of three-state Go points that encodes the chess piece type. How good could AlphaGo get at chess? How much of its Go-playing ability would it lose?
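A minimal sketch of such an encoding, assuming three-state Go points (empty/black/white) and an illustrative, arbitrary assignment of 2x2 patterns to the 13 possible chess square states; 3^4 = 81 patterns per block are more than enough:

```python
import itertools

# the 13 possible states of a chess square: empty, 6 white pieces, 6 black pieces
CHESS_STATES = [".", "P", "N", "B", "R", "Q", "K", "p", "n", "b", "r", "q", "k"]

# illustrative assumption: take the first 13 of the 81 possible 2x2 patterns
# over three-state Go points (0 = empty, 1 = black stone, 2 = white stone)
PATTERNS = list(itertools.product((0, 1, 2), repeat=4))
ENCODING = {state: PATTERNS[i] for i, state in enumerate(CHESS_STATES)}

def encode(chess_board):
    """Map an 8x8 chess board (list of 8 rank strings) onto a 16x16 grid of Go points."""
    go_board = [[0] * 16 for _ in range(16)]
    for r, rank in enumerate(chess_board):
        for c, square in enumerate(rank):
            p00, p01, p10, p11 = ENCODING[square]
            go_board[2 * r][2 * c], go_board[2 * r][2 * c + 1] = p00, p01
            go_board[2 * r + 1][2 * c], go_board[2 * r + 1][2 * c + 1] = p10, p11
    return go_board

start_position = ["rnbqkbnr", "pppppppp", "........", "........",
                  "........", "........", "PPPPPPPP", "RNBQKBNR"]
go_input = encode(start_position)  # 16x16 fits comfortably on a 19x19 Go board
```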

Singleton AIs apparently run a high risk of being wiped out by low-probability events before they initiate the Cosmic Endowment; otherwise we would have found evidence of one. Given foom-level development speed, a singleton AI might decide after a few decades that it no longer needs human assistance and extinguish humankind to maximize its resources. Biological life had billions of years to optimize even against the rarest events; a gamma-ray burst or some other stellar event could have killed such a singleton AI. The way we currently design AI will definitely not lead to a singleton that mulls over its decision for 10 million years before settling the future of humankind.

For real story understanding, more complex models will be necessary than off-the-shelf convolutional deep NNs. If such a complex network structure were subjected to a traumatic event, it would, after some time, appear to work properly as before. But if something triggers the memory of the traumatic event, subnetworks will run wild: their outputs will reach extremes and bias all other subnetworks. These biases could be: everything you observe is the opposite of what you think; you cannot trust your teacher; you cannot trust anybody; everything around you is turning against you; protect yourself by all means available.

The effect could be that backpropagation gradients are inverted and learning deviates from its normal functionality.
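A toy illustration of that claim, purely as a sketch: a hypothetical trauma_triggered check flips the gradient sign, so the update climbs the loss instead of descending it, which is one concrete way learning could deviate from its normal functionality.

```python
import torch
import torch.nn as nn

def trauma_triggered(x: torch.Tensor) -> bool:
    """Hypothetical detector for inputs resembling the traumatic event."""
    return bool(x.abs().max() > 3.0)  # placeholder condition, not a real detector

model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

def training_step(x, y):
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    if trauma_triggered(x):
        # inverted gradients: gradient descent becomes gradient ascent on the loss
        for p in model.parameters():
            p.grad.neg_()
    optimizer.step()
    return loss.item()
```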

All known viral and bacterial pathogens have so far posed no existential risk to humanity. If the mortality rate is close to 100%, the spread slows down because potential carriers are killed off, and global countermeasures prevent mass spreading.

Regarding human- or AI-designed bioweapons: the longer the incubation period, the more dangerous a bioweapon becomes. To extinguish the entire human race, the incubation time would have to be on the order of years, combined with a nearly 100% lethal termination mechanism. The time from observing the very first deaths to finding a cure could be shorter than it was with HIV, for two reasons: technology is more advanced now, and facing extinction, humanity would put all available energy into a cure.

What remains is a Trojan-horse infection waiting for a trigger. If 100% of humans were infected, the trigger molecule could be spread into the stratosphere. That could be it for us.

We teach children simple morality rules with stories of clearly distinguished good and evil behaviour. We protect children from disturbing movies that are not appropriate for their age. Why?

Because children might lose their compass in the world. First they have to build a settled moral compass. Fairy tales widen children's personal experience with examples of good and evil behaviour. Once the moral base is settled, children are ready for real-life stories without these black-and-white distinctions. Children who experience a shocking event that changes everything in their life "age faster" than their peers. Education and stories try to prepare children for these kinds of events; real life is the harder and faster way to learn. Since such shocking events can cause traumas that last an entire life, we should take the same care when educating our algorithms. If we do not want traumatized, paranoid AIs, it is a good idea to introduce complexity and immorality late. The first stories should build a secure moral base. Once this base is tested and solid against disruptive ideas, it is time to move to stories that break the rules of morality. Parents can easily observe whether a child is ready for a disruptive story: if the child is overwhelmed and starts weeping, it was too much.

I have never heard that algorithms can express any kind of internal emotion. To understand the way an algorithm conceives a story, research should not forget about its internal emotional state.

But people underestimate how much more science needs to be done.

The big thing that is missing is meta-cognitive self-reflection. It might turn out that even today's RNN structures are sufficient and the only missing piece is how to interconnect multi-columnar networks with meta-cognition networks.

it’s probably not going to be useful to build a product tomorrow.

Yes. Given that the architecture is right and capable, little scientific effort is needed to train this AGI. It will learn on its own.

The amount of safety-related research needed is surely underestimated. The evolution of biological brains never needed extra constraints; society needed and created constraints, and it had time to do so. If science gets the architecture right, will the scientists really know what is going on inside their networks? How can developers integrate safety? There will not be a society of similarly capable AIs that can constrain its members. These are critical research questions, especially because we have little we can copy from.

So the AI turns its attention to examining certain blobs of binary code - code composing operating systems, or routers, or DNS services - and then takes over all the poorly defended computers on the Internet. [AI Foom Debate, Eliezer Yudkowsky]

Capturing resource bonanzas might be enough to make an AI go FOOM. It is even more effective if the bonanza is not only a dumb computing resource but also offers useful data, knowledge and AI capabilities.

Therefore attackers (humans, AI-assisted humans, AIs) may want to:

  • take over control in order to use existing capabilities
  • extract capabilities to augment their own
  • take over resources for other purposes
  • disguise themselves as resource owners and admins

Attack principles

  • Resource attack (hardware, firmware, operating system, firewall), an indirect spear attack on the admin, or an offer of cheap or free resources for AI execution on the attacker's hardware, followed by a direct system attack (copy/modify/replace existing algorithms)

  • Mental Trojan-horse attack: hack the communication channel if the system itself is not accessible and try to alter the ethical bias, from a friendly AI that is happy being boxed/stunted/monitored to an evil AI that wants to break out. Teach the AI how to open the door from the inside and the attacker can walk in.

  • Manipulate-the-owner attack: make the owner or admin greedy to improve their AI's capabilities, so that they install malignant knowledge chunks or train on subversive, malicious training samples. The Trojan horse is saddled.

Possible Safeguard Concepts:

To make resource attacks improbable, existing networking communication channels must be replaced with something intrinsically safe. Our brain is air-gapped: there is hardly any direct access to its neural network. It receives input via five perceptive senses (hearing, sight, touch, smell and taste). It can communicate with the outside world through gestures, speech, smell, writing, shaping and arbitrary manipulation using tools. All channels except for vision have quite low bandwidth.

This analogy could shape a possible safeguard concept for AIs: make the AI's internal network inaccessible to user and admin. If even the admin cannot access it, an attacker cannot either. As soon as we jump from GPU computing to specialized hardware, we can implement this. Hardware fuses on the chip can disable functionality, just as debugging features are deactivated in today's CPUs before they go to market. Chips could combine fixed values and unalterable memories with free sections where learning is allowed. The highest security is possible with base values and drives held in fixed conscience-ROM structures.
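A minimal software sketch of the fixed-versus-free split, emulated here with PyTorch buffers (the module names and the decision that only the "free" linear layer is trainable are illustrative assumptions; real hardware enforcement would not rely on a software flag):

```python
import torch
import torch.nn as nn

class ConscienceROM(nn.Module):
    """Base values and drives fixed at 'manufacturing time'; no gradients ever reach them."""
    def __init__(self, base_values: torch.Tensor):
        super().__init__()
        # a registered buffer is stored with the model but is not a trainable parameter
        self.register_buffer("base_values", base_values.clone())

class AgentNet(nn.Module):
    def __init__(self, base_values: torch.Tensor):
        super().__init__()
        self.rom = ConscienceROM(base_values)   # unalterable section
        self.policy = nn.Linear(16, 4)          # free section, learning allowed

    def forward(self, x):
        # the fixed base values bias every output but are never updated by learning
        return self.policy(x) + self.rom.base_values

net = AgentNet(base_values=torch.zeros(4))
# only the free section shows up as trainable parameters; the ROM cannot be trained away
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
```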

Safeguards against malicious training samples will be more complex. Identifying hidden malicious aspects of communication or learning samples is an AI task in itself. I see this as a core task for AI safety research.

An event lasting one minute can traumatize a human for an entire life. Humans can lose interest in everything they loved to do before and drop into suicidal depression. The same could happen to an AI: a traumatizing event could trigger a revenge drive that overrides all other aims of the utility function. Suppose an AI is in love with her master and another AI kills her master while she is witnessing it. Suppose further that the adversary AI is not a simple one but a hydra with many active copies. To eradicate this mighty adversary, a lot of resources are needed, so the revenge-seeking AI will prepare its troops by conquering as many systems as possible. The less safe our systems are, the faster such an evil AI can grow.

Safe design could include careful use of impulsive revenge drives combined with hard-wired self-regulatory countermeasures, e.g. distraction or forgetting.

Safe designs should filter out potentially traumatizing inputs. This will reduce functionality a bit, but the safety tradeoff will be worth it. The filtering could be implemented in a soft manner, like a mother explaining the death of the beloved dog to her child in warm words and with positive perspectives.

My idea of a regulatory body is not that of a powerful institution that deeply interacts with all ongoing projects, because of its fallible members, who could misuse their power.

My idea of a regulatory body is closer to a TÜV, interconnected with institutions that do AI safety research and develop safety standards, test methods and test data. Going back to the TÜV's founding task: pressure vessel certification. Any qualified test institution in the world can check whether a given pressure vessel is safe to use, based on established design tests, checks of safety measures, material testing methods and real pressure tests. The amount of safety measures, tests and certification effort depends on the danger potential (pressure, volume, temperature, medium). Based on danger potential and application, standards define which of the following safety measures must be used: safety valve; rupture disk; pressure limiter; temperature limiter; liquid indicator; overfill protection; vacuum breakers; reaction blocker; water sprinkling devices.

Nick Bostrom named the following AI safety measures: boxing methods, incentive methods, stunting and tripwires. Pressure vessels and AI have the following elements in common (the AI-related claims are plausible, but no experience exists yet):

  • Human casualties result from a bursting vessel or from an AI turning evil.
  • Good design, tests and safety measures reduce the risk of failure.
  • Humans want to use both.

Companies, institutions and legislators have had 110 years to develop and improve standards for pressure vessels. With AI we are still scratching the surface. AI and pressure vessels have the following differences:

  • Early pressure vessel designs were prone to bursting - AI is still far from a high risk level.
  • Many bursting-vessel events successively stimulated improvements to the standards - with AI, the first singularity will be the only one.
  • Safety measures for pressure vessels are easy to comprehend - simple AI safety measures reduce functionality to a high degree, while complex safety measures allow full functionality but are complex to implement, to test and to standardize.
  • The risk of a bursting pressure vessel is obvious - the risk of an evil Singularity is opaque and diffuse.
  • Safety measure research for pressure vessels is straightforward, following physical laws - safety research for AI is a multifaceted cloud of concepts.
  • A bursting pressure vessel may kill a few dozen people - an evil Singularity might eradicate humankind.

Given the existential risk of AI, I think most AI research institutions could agree on a code of conduct that would include, for example:

  • AIs will be classified into danger classes. The rating depends on computational power, taught knowledge areas and degree of self-optimization capacity. An AI with programming and hacking abilities will be classified as a high-risk application even if it runs on moderate hardware, because of its intrinsic capability to escape into the cloud.
  • The amount of necessary safety measures depends on this risk rating (a sketch of such a mapping follows this list):
    • Low-risk applications have to be firewalled against the acquisition of computing power on other computers.
    • Medium-risk applications must additionally have internal safety measures, e.g. stunting or tripwires.
    • High-risk applications must in addition be monitored internally and externally by independently developed tool AIs.
  • Design and safeguard measures of medium- and high-risk applications will be checked and pentested by independent safety institutions.
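A minimal sketch of how such a classification and the associated mandatory safeguards could be written down; the class names, thresholds and safeguard lists are illustrative assumptions, not part of any existing standard:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIProfile:
    compute_petaflops: float
    knowledge_areas: set
    self_optimizing: bool

# safeguards accumulate with rising risk class (illustrative values only)
REQUIRED_SAFEGUARDS = {
    RiskClass.LOW:    ["outbound firewall"],
    RiskClass.MEDIUM: ["outbound firewall", "stunting", "tripwires"],
    RiskClass.HIGH:   ["outbound firewall", "stunting", "tripwires",
                       "internal tool-AI monitor", "external tool-AI monitor"],
}

def classify(profile: AIProfile) -> RiskClass:
    # programming/hacking ability is rated high risk regardless of hardware,
    # because of the intrinsic capability to escape into the cloud
    if {"programming", "hacking"} & profile.knowledge_areas:
        return RiskClass.HIGH
    if profile.self_optimizing or profile.compute_petaflops > 100:  # assumed threshold
        return RiskClass.MEDIUM
    return RiskClass.LOW

profile = AIProfile(compute_petaflops=5, knowledge_areas={"hacking"}, self_optimizing=False)
print(classify(profile), REQUIRED_SAFEGUARDS[classify(profile)])
```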

In a first step, AI safety research institutes develop monitoring AIs, tool AIs, pentesting datasets and, finally, guidelines like the one above.

In a second step, publicly financed AI projects have to follow these guidelines. This applies to university projects in particular.

Public pressure and shareholders could push companies to apply these guidelines. Maybe an ISO certificate could signal to the public: "All AI projects of this company follow the ISO standard for AI risk assessment and safeguard measures."

Public opinion and companies will hopefully push governments to enforce these guidelines within their intelligence agencies as well. A treaty in the spirit of the Non-Proliferation Treaty could be signed, with all signatory states committing to obey the ISO standard on AI within their institutions.

I accept that there are many IFs and obstacles on that path. But it is at least an IDEA of how civil society could push AI developers to implement safeguards in their designs.

The number of researchers joining the AI field will only marginally change the acceleration of computing power. If only a few people work on AI, they have enough to do grabbing all the low-hanging fruit. If many join AI research, more meta research and safety research will become possible. If only a fraction of the path depicted above turns into reality, it will give jobs to a few hundred researchers.

Do you have any idea how to make development teams invest a substantial part of their effort in safety measures?

Because all regulation does is redistribute power between fallible humans.

Yes. The regulatory body takes power away from the fallible human. If this human teams up with his evil AI, he will become master of the universe, above all of us, including you. The redistribution will take power away from the synergetic entity of human and AI, and all human beings on earth will gain power except the few entangled with that AI.

Who is that "we"?

Citizens concerned about possible negative outcomes of the Singularity. Today this "we" is only a small community. In a few years this "we" will include most of the educated population of the earth. As soon as a wider public is aware of the existential risks, the pressure to create regulatory safeguards will rise.

LOL. So, do you think I have problems finding torrents of movies to watch?

DRM is easy to circumvent because it is not an intrinsic part of the content but an encryption layer bolted on top: a single legal decryption can create a freely distributable copy. With computing power this could be designed differently, especially once specially designed chips are used. Although GPUs are quite good for current deep learning algorithms, there will be a major speed-up as soon as hardware becomes available that embeds these deep learning network architectures. The vital backpropagation steps required for learning could be made conditional on a hardware-based enabling scheme under the control of a tool AI that monitors all learning behaviour. You could certainly create FPGA alternatives, but such workarounds would come with significant losses in performance.
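A minimal software sketch of that enabling scheme, with a hypothetical monitor_approves() call standing in for the hardware enable line controlled by the monitoring tool AI (in real hardware the gate would not be a bypassable software check):

```python
import torch
import torch.nn as nn

def monitor_approves(batch) -> bool:
    """Placeholder for the tool AI / hardware enable signal that vets each learning step."""
    return True  # hypothetical: real logic would inspect the batch and the network's behaviour

model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

def training_step(x, y):
    prediction = model(x)          # inference is always allowed
    loss = loss_fn(prediction, y)
    if monitor_approves((x, y)):   # backpropagation only runs if the enable signal is set
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()

print(training_step(torch.randn(4, 8), torch.randn(4, 2)))
```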

Why would the politicians need AI professionals when they'll just hijack the process for their own political ends?

No, my writing was obviously unclear. We (the above-mentioned "we") need AI professionals to develop concepts for how a regulatory process could be designed. Politicians are typically opportunistic, uninformed and greedy for power: when nothing can be done, they do nothing. Therefore "we" should develop concepts of what can be done. If our politicians are pushed hard enough by public pressure, maybe we can hijack them into pushing regulation.

Today the situation is like this: Google, Facebook, Amazon, Baidu, the NSA and some other players are in a good starting position to "win" the Singularity. They will suppress any regulatory move because they could lose their lead. Once any of these players reaches Singularity, it has, in an instant, the best hardware + the best software + the best regulatory ideas + the best regulatory stunting solutions, allowing it to remain solely on top and block all others. Then all of a sudden "everybody" = "we" are manipulated into wanting regulation. This will be especially effective if the superintelligent AI manages to disguise its capabilities and lets the world think it has managed regulation. In that case it is not "we" who have managed regulation, but the unbound and uncontrollable master-of-the-universe AI.
