Simple friendliness

Friendly AI, as Hanson believes, is doomed to failure: if the friendliness system is too complicated, other AI projects generally will not apply it. Moreover, any system of friendliness may still fail, and the more unclear it is, the more likely it is to fail. By "fail" I mean that it will not be accepted by the most successful AI projects.

Thus, the friendliness system should be simple and clear, so it can be spread as widely as possible.


I have roughly outlined the principles that could form the basis of a simple friendliness:


0) Everyone should understand that AI can pose a global risk and that a friendliness system is needed. This basic understanding should be shared by the maximum number of AI groups. (I think this has already been achieved.)

1) The architecture of the AI should be such that it uses rules explicitly (i.e., no genetic algorithms or neural networks).
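
As an illustration of what "using rules explicitly" could mean in practice, here is a minimal sketch; the rule names and state keys are invented for this example, not part of any real system. The point is that every decision traces back to a human-readable rule that can be inspected and tested, rather than to opaque learned weights.

```python
# Illustrative sketch only: explicit, auditable rules instead of learned weights.
# The rule names and state keys are hypothetical.

RULES = [
    # (name, condition, action) - each rule is readable and individually testable
    ("no_harm",      lambda s: s.get("expected_harm", 0) > 0, "refuse"),
    ("obey_creator", lambda s: s.get("creator_command") is not None, "execute_command"),
]

def decide(state: dict) -> tuple:
    """Return (rule_name, action) of the first matching rule; defer otherwise."""
    for name, condition, action in RULES:
        if condition(state):
            return name, action
    return "default", "ask_creator"  # when no rule applies, defer to the creator

print(decide({"creator_command": "report status"}))  # ('obey_creator', 'execute_command')
```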

2) The AI should obey the commands of its creator, and clearly understand who the creator is and what the format of commands is.
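
One possible way to pin down "who the creator is and what the format of commands is" would be cryptographic authentication. Below is a minimal sketch using Python's standard hmac module; the shared key and the command strings are assumptions for illustration only.

```python
import hmac
import hashlib

# Hypothetical secret known only to the creator and the AI.
CREATOR_KEY = b"creator-shared-secret"

def is_creator_command(message: bytes, signature: str) -> bool:
    """Accept a command only if it carries a valid HMAC-SHA256 tag from the creator."""
    expected = hmac.new(CREATOR_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

command = b"PAUSE_SELF_OPTIMIZATION"
tag = hmac.new(CREATOR_KEY, command, hashlib.sha256).hexdigest()
print(is_creator_command(command, tag))    # True: command came from the creator
print(is_creator_command(b"forged", tag))  # False: reject anything else
```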

3) The AI must comply with all existing criminal and civil laws. These laws are the first attempt to create a friendly AI, in the form of the state: an attempt to describe a good, safe human life using a system of rules (or a system of precedents). The number of volumes of laws and their interpretations speaks to the complexity of this problem, but it has already been solved, and it is no sin to reuse the solution.

4) The AI should not have secrets from its creator. Moreover, it is obliged to inform the creator of all its thoughts. This prevents an AI rebellion.
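
As a toy illustration of this point (not a claim about how a real AI would report its reasoning), internal steps could be mirrored to an append-only channel that the creator reads; the event text here is invented.

```python
import logging

# Sketch: every internal step is mirrored to a channel the creator can read,
# so the system keeps no hidden state. The event text is hypothetical.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
creator_channel = logging.getLogger("creator_channel")

def report(thought: str) -> str:
    creator_channel.info("thought: %s", thought)
    return thought

report("considering self-optimization step #12")
report("no conflict with goal system detected")
```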

5) Each self-optimization of the AI should be dosed in small portions, under the control of the creator, and after each step a full scan of the system's goals and effectiveness must be run.
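
A rough sketch of what such a dosed, creator-supervised self-optimization loop might look like. Every function name here is a placeholder, and the hard part, actually verifying that the goal system is unchanged, is exactly what the placeholder glosses over.

```python
# Illustrative control loop: apply one small change at a time, re-scan the
# goal system after each step, and require explicit creator approval.
# All names are placeholders, not a real verification method.

def goals_unchanged(old: dict, new: dict) -> bool:
    """Placeholder 'full scan': here, just compare goal descriptions."""
    return new["goals"] == old["goals"]

def creator_approves(step_name: str) -> bool:
    """Stand-in for explicit sign-off by the creator."""
    return input(f"Apply step '{step_name}'? [y/N] ").strip().lower() == "y"

def dosed_self_optimization(system: dict, steps: list) -> dict:
    for step in steps:
        candidate = step["apply"](system)           # one small change
        if not goals_unchanged(system, candidate):  # full scan after each step
            break                                   # halt on any goal drift
        if not creator_approves(step["name"]):
            break
        system = candidate
    return system
```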

6) The AI should be tested in a virtual environment (such as Second Life) for safety and adequacy.

7) AI projects should be registered with a centralized oversight body and receive safety certification from it.


Such obvious steps do not create an absolutely safe AI (one can figure out how to bypass each of them), but they make it much safer. In addition, they look quite natural and reasonable, so any AI project could use them, with variations.


Most of these steps are fallible, but without them the situation would be even worse. If each step doubles safety, eight steps increase it 256 times, which is good. Simple friendliness is plan B in case mathematical FAI fails.
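
To spell out the arithmetic behind that claim: it assumes the eight measures fail independently, so their individual factors multiply.

```latex
\[
\underbrace{2 \times 2 \times \dots \times 2}_{8\ \text{independent measures}} = 2^{8} = 256
\]
```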


If you are going to make top-level posts in the discussion section could you please format them better and maybe proofread a little? The large black lettering is annoying. I count seven typos which would be found by using spellcheck which many modern browsers do automatically (although apparently this version of Firefox recognizes "spellchecker" but not "spellchecking" or "spellcheck".) In addition to these spelling issues, I see many grammatical mistakes. For example, your very first sentence starts off with "Friendly AI, as believes by Hanson". This sends off strong signals to people not to take your ideas seriously. Also, the ideas become much harder to read which makes people disinclined to try to understand them. I don't know if English is your native language, but if not, it might help to get a native speaker or someone else with a better command of the English language to proofread your longer remarks. You also have unnecessary capitalization of CRIMINAL and CIVIL. Finally, your numbering system from 0 to 7 while common among math people almost looks like an attempt to signal mathematical training. That's not helpful.

Moving on to your ideas:

Everyone should understand that AI can pose a global risk and that a friendliness system is needed. This basic understanding should be shared by the maximum number of AI groups. (I think this has already been achieved.)

Most AI researchers, as far as I can tell, don't think that Friendliness is a major concern.

The architecture of the AI should be such that it uses rules explicitly (i.e., no genetic algorithms or neural networks).

I'm not sure what you mean by this. Would, for example, using some sort of generalized support vector machine be ok? Would the AI be allowed to use neural networks for subtasks such as pattern recognition? Without being more strictly defined it isn't clear how to evaluate this idea. But note that if by rules you mean details explicitly programmed by humans, then this is likely close to impossible simply due to the sheer number of interacting components.

The AI should obey the commands of its creator, and clearly understand who the creator is and what the format of commands is.

Even if an AI obeys commands this doesn't necessarily help. Commands might be poorly thought out and have bad consequences. Also, how would this interact with your desire for the AI to use only rules? What happens if the creator tells the AI to use a genetic algorithm to do something?

The AI must comply with all existing criminal and civil laws. These laws are the first attempt to create a friendly AI, in the form of the state: an attempt to describe a good, safe human life using a system of rules (or a system of precedents). The number of volumes of laws and their interpretations speaks to the complexity of this problem, but it has already been solved, and it is no sin to reuse the solution.

AI isn't trying to solve the same set of problems as countries. Moreover, different countries have wildly different laws and even in the most developed countries there are frequent problems. Furthermore, many laws on the books are either impractical, immoral, simply unenforced, or hopelessly vague. Having an AI try to obey all these laws in any specific jurisdiction is a nightmare even before we get to issues of differing jurisdictions.

The AI should not have secrets from its creator. Moreover, it is obliged to inform the creator of all its thoughts. This prevents an AI rebellion.

All his thoughts? First, how are "thoughts" defined? There's going to be a lot of background processing. Second, how much detail is required? If this sort of process occurs the AI will be constantly barraging the creators with everything it is doing (I was surfing the internet and I learned the following 1034932 facts from Wikipedia in the last second... they are...)

The AI should be tested in a virtual environment (such as Second Life) for safety and adequacy.

This idea has been suggested in various forms before. Unfortunately, a marginally intelligent AI will realize that it is in a virtual environment and will react accordingly until it is let out, unless the environment is a very detailed simulation. And even then, the AI may find a clever way of hacking out of the system.

AI projects should be registered with a centralized oversight body and receive safety certification from it.

How much experience do you have working with bureaucracies?

I understand my mistakes with grammar and font size :(. English is not my native language, but I have the same problem in my native language too.

I agree that all the suggested ideas could fail. But I think it is better to have such laws applied to all AI projects than to have many "naked" AI projects without any friendliness.

There is a tradeoff between the complexity of a friendliness system and the time needed to install it in all existing AI projects. That is my main idea. Simple friendliness could be disseminated much more easily.

If English isn't your native language, then you should get people to proofread what you've written, as I suggested. It certainly doesn't help that in the reposted version you didn't even correct the grammatical and spelling mistakes which I explicitly pointed out.

And you don't seem to be getting the substantive points: in particular, that some of your proposed restrictions are very vague, and that many, such as the restriction to "rules", are just asking for trouble and at minimum are unlikely to help much at all.

I posted again before reading your answer, because I got a message from Nesov suggesting that I change the font, and because I didn't see my message in the main topic (it was downvoted). My browser does not show mistakes in the text; sorry for them.

The main point of my post is that in order to reach the maximum number of existing AI projects, the friendliness system must be simple and clear. Its parts must be mutually independent, so that different projects can implement whichever parts suit them best.

And it is better for all AI projects to have some kind of friendliness than to have a correct but complex friendliness theory that is not implemented by any AI project.

And that is why I call this plan B: plan A is to wait until SIAI creates a friendliness theory and then implements it in its AI project.

And it is better for all AI projects to have some kind of friendliness than to have a correct but complex friendliness theory that is not implemented by any AI project.

Unfortunately incorrect. It is the same.


As I said on SL4, where you also posted this, your point 3 is simply wrong:

Most states in human history, including most now existing, are pretty much the definition of unfriendly. That there has never yet been a case where people's rule-of-thumb attempts to "describe a good, safe human life using a system of rules" haven't led to the death, imprisonment, and in many cases torture of many, many people seems to me one of the stronger arguments against a rule-based system.

But the creation of state laws was, at least in part, an attempt to create a friendly state. And we should use the millions of human days of work spent on the same goal: creating a goal system for a nonhuman object.

Most states in human history, including most now existing, are pretty much the definition of unfriendly.

Friendliness is a characterization of goal systems, not of states of the world.


I know. My point was that, were the OP's statement true (that states are an attempt to create Friendly AI), which I don't think it is, it would be a blatantly, obviously failed attempt.

We should learn from this attempt

We should learn from this attempt

This sums up my thoughts:

CIA Superior: What did we learn, Palmer?
CIA Officer: I don't know, sir.
CIA Superior: I don't fuckin' know either. I guess we learned not to do it again.
CIA Officer: Yes, sir.

-- Burn After Reading


CIA Superior: What did we learn, Palmer?
CIA Officer: I don't know, sir.
CIA Superior: I don't fuckin' know either. I guess we learned not to do it again.
CIA Officer: Yes, sir.

-- Burn After Reading

Agreed. If we could define a friendly AI now, then by point 3 we would also already be able to define a perfectly functional and just state (even if putting it into practice hadn't happened yet).