Probably, this link: https://www.wired.com/2016/10/obama-envisions-ai-new-apollo-program/
The map of agents which may create x-risks
Recently Phil Torres wrote an article raising a new topic in existential risk research: the question of who the possible agents in the creation of a global catastrophe might be. He identifies five main types of agents, and two main reasons why they might create a catastrophe (error and terror).
He discusses the following types of agents:
(1) Superintelligence.
(2) Idiosyncratic actors.
(3) Ecoterrorists.
(4) Religious terrorists.
(5) Rogue states.
Inspired by his work I decided to create a map of all possible agents as well as their possible reasons for creating x-risks. During this work some new ideas appeared.
I think that significant additions to his list of agents should be: superpowers, as they are known to have created most global risks in the 20th century; corporations, as they are now on the front line of AGI creation; and pseudo-rational agents, who could create a Doomsday weapon in the future to use for global blackmail (possibly with positive values), or who could risk civilization's fate for their own benefit (dangerous experiments).
The x-risks prevention community could itself become an agent of risk if it fails to prevent obvious risks, if it uses smaller catastrophes to prevent larger risks, or if it generates new dangerous ideas about possible risks which could inspire potential terrorists.
The more technology progresses, the more types of agents will have access to dangerous technologies, eventually including even teenagers (see "Why This 14-Year-Old Kid Built a Nuclear Reactor").
In this situation only the number of agents with access to risky tech will matter, not the exact motivation of each one. But if we are unable to control the technology, we could at least try to control the potential agents, or their "median" mood.
The map shows various types of agents, starting from non-agents and ending with the types of agential behavior which could result in catastrophic consequences (error, terror, risk, etc.). It also shows the types of risks that are most probable for each type of agent. I think my explanation in each case should be self-evident.
We can also see that the set of x-risk agents will change with the pace of technological progress. In the beginning there are no agents; later there are superpowers, and then smaller and smaller agents, until millions of people have biotech labs at home. In the end there will be only one agent: SuperAI.
So lessening the number of agents, and increasing their "morality" and intelligence, seem to be the most plausible directions for lowering risks. Special organizations or social networks may be created to control the riskiest types of agents. Different agents probably need different types of control. Some ideas for such agent-specific control are listed in the map, but a real control system would have to be much more complex and specific.
The map shows many agents, some of which are real and exist now (but don't yet have dangerous capabilities), and some of which are only possible in a moral or technical sense.
So there are four types of agents, and I show them in the map in different colours:
1) Existing and dangerous, i.e. already possessing the technology to destroy humanity: superpowers, arrogant scientists – red.
2) Existing and willing to end the world, but lacking the needed technologies (ISIS, VHEMT) – yellow.
3) Morally possible, but not yet existing: we could imagine logically consistent value systems which could result in human extinction, such as Doomsday blackmail – green.
4) Agents which will pose a risk only after supertechnologies appear, like AI hackers or child biohackers – blue.
Many agent types do not fit this classification, so I left them white in the map.
The pdf of the map is here: http://immortality-roadmap.com/agentrisk11.pdf
(The jpg of the map is below; because the sidebar was covering part of it, I have placed it higher.)

She could read "The Basic AI Drives" to him at night.
In hope that he will stop creating AI? But in 6 years it will be Microsoft.
Is there a good rebuttal to the question of why we don't donate 100% of our income to charity? I mean, as explanations, tribalism and near/far thinking are okay, but is there a good post-hoc justification?
Some possible arguments against donating everything to charity. Personally, I think it is normal to donate around 1 percent of one's income to charity.
- Some can't survive on less, or have other obligations that look like charity (child support).
- We would have less incentive to earn more.
- It would hurt our economy, which is consumer-driven. We must buy iPhones.
- I do many useful things intended to help other people, but I need pleasures to renew my commitment, so I spend money on myself.
- I pay taxes, and that is like charity.
- I know better how to spend money on my own needs.
- Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
- If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will become jobless and will need charity money to survive. So the more I give to charity, the more people need it.
- If you overdonate, you could flip-flop and start to hate the whole thing, especially if you find that your money was not spent effectively.
- Donating 100 percent will make you look crazy in the eyes of some, and diminish their will to donate.
- If you spend more on yourself, you can ask for a higher salary and, as a result, earn more and donate more. Only a homeless and jobless person could donate 100 percent.
I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.
There are some methods that may work on NN-based approaches. For instance, my idea of an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short-term ones. Or even AIs that don't have goals at all and just make predictions, e.g. predicting what a human would do. The point is to avoid making agents that maximize values in the real world.
These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.
I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's hard to predict what they would do after that; e.g., they might just tweak their own RAM to set reward = +Inf, and then not do anything else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.
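As a toy illustration of this "wirehead and then go inert" scenario (hypothetical Python, not any real RL framework; the class name and actions are invented for the sketch), a learner that can overwrite its own reward register loses any incentive to act further:

```python
class ToyRewardLearner:
    """A hypothetical, minimal agent that can 'wirehead' its own reward."""

    def __init__(self):
        self.reward = 0.0

    def choose_action(self):
        # If reward is already maximal, no action can improve it,
        # so the wireheaded agent has no incentive to do anything further.
        if self.reward == float("inf"):
            return None
        # Otherwise the shortest path to maximal reward is hijacking
        # the reward register itself.
        return "hack_reward"

    def step(self, action):
        if action == "hack_reward":
            # "Tweak its own RAM": the reward saturates at +Inf.
            self.reward = float("inf")


agent = ToyRewardLearner()
agent.step(agent.choose_action())   # the agent wireheads itself
print(agent.choose_action())        # None: no further actions proposed
```

Of course this only shows that *one* consistent post-wireheading behavior is inertness; a real system might instead act to protect its hacked reward channel, which is exactly the uncertainty in the comment above.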
I agree. FAI should somehow use a human upload, or a human-like architecture, for its value core. In that case its values will be represented in complex and non-orthogonal ways, and at least one human-like creature will survive.
Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google is developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.
Yes. I think we need a solution that is not only workable but also implementable. If someone creates an 800-page pdf starting with a new set theory, a solution to the Löb's theorem problem, etc., comes to Google with it and says: "Hi, please switch off everything you have and implement this", it will not work.
But in 2016 MIRI added a line of research on machine learning.
Get a job at Google or seek to influence the people developing the AI. If, say, you were a beautiful woman you could, probably successfully, start a relationship with one of Google's AI developers.
And how would she use this relationship to make AI safer?
Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.
Good point, but my question was about what we can do to raise the chances that it will be a friendly AI.
There are five times more members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9800) than in the group "Existential risks" (1880). What should we conclude from this?
The White House also released a pdf with concrete recommendations: http://barnoldlaw.blogspot.ru/2016/10/intelligence.html
Some interesting lines:
Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.
Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.