You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions

4 ete 28 January 2015 03:29PM

From a paper by the Center for Technology and National Security Policy at the National Defense University:

"Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use.

National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The built infrastructure, for example, can be managed more effectively with embedded systems, saving energy and other resources. Increasingly sophisticated weak AI tools can offload much of the routine cognitive or decisionmaking tasks that currently require human operators. Assuming current systems move closer to strong AI capabilities, they could also play a larger and more significant role in problem solving, perhaps even for strategy development or operational planning. In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions."

So strong AI is on the American military's radar, and at least some of those involved have a basic understanding that it could be risky. The paper also contains brief overviews of many other potentially transformational technologies.

Why don't we have a proper strategy against high risk threats?

-16 [deleted] 08 February 2014 12:04PM

I recently finished reading the Sequences and they convinced me that the existential risk brought by intelligence explosion is worth considering. There was a one thing that bothered me though - the strategy adopted by this community seems to be notably one-sided.

Advancing our community’s interests will require a full spectrum of direct and indirect leadership approaches. MIRI and CEA are only one side of the coin. It would not be very wise to ignore high reward approaches that might even have a comparative advantage in comparison with our current strategy. We should be prepared to act as security guarantor – preferably with partners and allies, but alone if necessary – to deter and defeat acts of significantly high risk taking in the sector of artificial intelligence and related fields.

Why

Here's a quote by Yudkowsky:

This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us.  If you confront it full on, then you can't really justify trading off any part of intergalactic civilization for any intrinsic thing you could get nowadays.

The fate of vast amount of people depends on our actions. Yudkowsky observed that by using consequentialist reasoning it's morally right to torture someone for 50 years if by doing it you can save vast amount of people from pain, however slight.  The number of our descendants won't be 3^^^3 (unless the universe is spatially infinite and meets a few other conditions), but it will be big enough to outweight some of the intrinsic things we care about nowadays. And not only can we save those people from non-existence, or in the worst case scenario from eternal suffering in the hands of UFAI, but we can possibly give those vast amounts of people vast amounts of positive utility. It may sound unintuitive, but even torture and murder pale by comparison with the fate of all of us and our descendants. Of course, the cost-benefit ratio of torture or murder is not ideal except under extreme circumstances under imminent threat. It's true that some criminal organizations like Mexican drug cartels use torture in their operations, but that is mainly to instill fear which we don't really have a need to do.  But there is nothing that could raise the chance of FAI and lower the chance of UFAI that would not be morally right. Why haven't we taken full advantage of this fact?

tl;dr we should try to affect policy decisions related to AI safety by whatever means possible so that we can beat death, avoid eternal torture and live forever in a fun utopia. You already know this, so why haven't you suggested this before? Mods here have made certain policy decisions because they believe it will increase the chance of happy ending so why not go beyond that?

How

I suggest some kind of paramilitary and intelligence gathering organization alongside MIRI and CEA. In pursuing our objectives, this new organization would make critical contributions to AI safety beyond MIRI. CFAR could be transformed to partly support this organization - the boot camp style of rationality training might be useful in other contexts too.

You might ask, what can a few individuals concerned about existential risks do without huge financial support and government backing? The answer is: quite a lot. Let's not underestimate our power. Like gwern said in his article on the effectiveness on terrorism, it's actually quite easy to dismantle an organization if you're truly committed:

Suppose people angry at X were truly angry: so angry that they went beyond posturing and beyond acting against X's only if action were guaranteed to cost them nothing (like writing a blog post). If they ceased to care about whether legal proceedings might be filed against them; if they become obsessed with destroying X, if they devoted their lives to it and could ignore all bodily urges and creature comforts. If they could be, in a word, like Niven’s Protectors or Vinge’s Focused.

Could they do it? Could they destroy a 3 century old corporation with close to $1 trillion in assets, with sympathizers and former employees throughout the upper echelons of the United States Federal Government (itself the single most powerful entity in the world)?

Absolutely. It would be easy.

As I said, the destructive power of a human is great; let’s assume we have 100 fanatics - a vanishingly small fraction of those who have hated on X over the years - willing to engage even in assassination, a historically effective tactic33 and perhaps the single most effective tactic available to an individual or small group.

Julian Assange explains the basic theory of Wikileaks in a 2006 essay, “State and Terrorist Conspiracies” / “Conspiracy as Governance”: corporations and conspiracies form a graph network; the more efficiently communication flows, the more powerful a graph is; partition the graph, or impede communication (through leaks which cause self-inflicted wounds of secrecy & paranoia), and its power goes down. Carry this to its logical extreme…

"If all links between conspirators are cut then there is no conspiracy. This is usually hard to do, so we ask our first question: What is the minimum number of links that must be cut to separate the conspiracy into two groups of equal number? (divide and conquer). The answer depends on the structure of the conspiracy. Sometimes there are no alternative paths for conspiratorial information to flow between conspirators, other times there are many. This is a useful and interesting characteristic of a conspiracy. For instance, by assassinating one ‘bridge’ conspirator, it may be possible to split the conspiracy. But we want to say something about all conspiracies."

We don’t. We’re interested in shattering a specific conspiracy by the name of X. X has ~30,000 employees. Not all graphs are trees, but all trees are graphs, and corporations are usually structured as trees. If X’s hierarchy is similar to that of a binary tree, then to completely knock out the 8 top levels, one only needs to eliminate 256 nodes. The top 6 levels would require only 64 nodes.

If one knocked out the top 6 levels, then each of the remaining subtrees in level 7 has no priority over the rest. And there will be 27−26 or 64 such subtrees/nodes. It is safe to say that 64 sub-corporations, each potentially headed by someone who wants a battlefield promotion to heading the entire thing, would have trouble agreeing on how to reconstruct the hierarchy. The stockholders might be expected to step in at this point, but the Board of Directors would be included in the top of the hierarchy, and by definition, they represent the majority of stockholders.

One could launch the attack during a board meeting or similar gathering, and hope to have 1 fanatic take out 10 or 20 targets. But let’s be pessimistic and assume each fanatic can only account for 1 target - even if they spend months and years reconnoitering and preparing fanatically.

This leaves us 36 fanatics. X will be at a minimum impaired during the attack; financial companies almost uniquely operate on such tight schedules that one day’s disruption can open the door to predation. We’ll assign 1 fanatic the task of researching emails and telephone numbers and addresses of X rivals; after a few years of constant schmoozing and FOIA requests and dumpster-diving, he ought to be able to reach major traders at said rivals. (This can be done by hiring or becoming a hacker group - as has already penetrated X - or possibly simply by open-source intelligence and sources like a Bloomberg Terminal.) When the hammer goes down, he’ll fire off notifications and suggestions to his contacts34. (For bonus points, he will then go off on an additional suicide mission.)

X claims to have offices in all major financial hubs. Offhand, I would expect that to be no more than 10 or 20 offices worth attacking. We assign 20 of our remaining 35 fanatics the tasks of building Oklahoma City-sized truck bombs. (This will take a while because modern fertilizer is contaminated specifically to prevent this; our fanatics will have to research how to undo the contamination or acquire alternate explosives. The example of Anders Behring Breivikreminds us that simple guns may be better tools than bombs.) The 20 bombs may not eliminate the offices completely, but they should take care of demoralizing the 29,000 in the lower ranks and punch a number of holes in the surviving subtrees.

Let’s assume the 20 bomb-builders die during the bombing or remain to pick off survivors and obstruct rescue services as long as possible.

What shall we do with our remaining 15 agents? The offices lay in ruins. The corporate lords are dead. The lower ranks are running around in utter confusion, with long-oppressed subordinates waking to realize that becoming CEO is a live possibility. The rivals have been taking advantage of X’s disarray as much as possible (although likely the markets would be in the process of shutting down).

15 is almost enough to assign one per office. What else could one do besides attack the office and its contents? Data centers are a good choice, but hardware is very replaceable and attacking them might impede the rivals’ efforts. One would want to destroy the software X uses in trading, but to do that one would have to attack the source repositories; those are likely either in the offices already or difficult to trace. (You’ll notice that we haven’t assigned our fanatics anything particularly difficult or subtle so far. I do this to try to make it seem as feasible as possible; if I had fanatics becoming master hackers and infiltrating X’s networks to make disastrous trades that bankrupt the company, people might say ‘aw, they may be fanatically motivated, but they couldn’t really do that’.)

It’s not enough to simply damage X once. We must attack on the psychological plane: we must make it so that people fear to ever again work for anything related to X.

Let us postulate one of our 15 agents was assigned a research task. He was to get the addresses of all X employees. (We may have already needed this for our surgical strike.) He can do this by whatever mean: being hired by X’s HR department, infiltrating electronically, breaking in and stealing random hard drives, open source intelligence - whatever. Where there’s a will, there’s a way.

Divvy the addresses up into 14 areas centered around offices, and assign the remaining 14 agents to travel to each address in their area and kill anyone there. A man may be willing to risk his own life for fabulous gains in X - but will he risk his family? (And families are easy targets too. If the 14 agents begin before the main attacks, it will be a while before the X link becomes apparent. Shooting someone is easy; getting away with it is the hard part.)

I would be shocked if X could survive even half the agents.

The above description applies mainly to non-military organizations, but threats can also come from the direction of state actors more heavily backed up by military which requires more preparation. Security agencies find themselves faced with a complex spectrum of conflict and this might encourage them to continue to expand their capabilities and powers including automated systems, which poses risks. State-sponsored and non-state actors just complicate issues by extending their reach through advanced technologies that were once solely the domain of states. High-risk threats in the non-military non-state sector might be easier to neutralize, but we should not underprioritize possible state targets.

Of course, the scenario outlined above needs to be the absolute last resort when there is nothing else you can do. You need truly 9/11 scale effort to dismantle an organization capable of creating a fooming AGI. This means that after completing that kind of mission our community can't exist any more than al-Qaeda can in the US. Extreme solutions are bad partly because they are extreme solutions and only right under extreme circumstances. As you can see from this earlier thread on the effectiveness of terrorism "perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise." See also this thread and the first comment by Yvain: http://lesswrong.com/lw/2se/rational_terrorism_or_why_shouldnt_we_burn_down/

Instead, in support of our community's interests, our new unit would ideally take a strong role in efforts to safeguard access, sustain security, have undercover operatives in high-risk corporations and agencies, provide oversight and accountability, and promote responsible norms in AI safety. It would also facilitate cooperation in AI safety with transparent, routine, and predictable practices as part of our theater strategies. As technological capability and capacity increases in non-state technological actors, we will seek new ways to catalyze greater security cooperation.

When

It will probably take decades until there exists something resembling Strong AI. This is a reason to start creating our new unit and recruiting and training potential candidates soon so we're ready and we have all the necessary information when the time arrives. This may seem ambitious, but is it really more ambitious than MIRI's goal, which is in itself one of the hardest tasks in the world?

 


 

Can someone explain why this is a bad idea? Since I'm not sure if you can publicly endorse this post I suggest the following approach: to signal agreement that you're willing to take what I said seriously and proceed with this strategy, remove this post in the next 24 hours, just like we planned.

Military Rationalities and Irrationalities

21 pscheyer 09 September 2013 11:48PM

In response to the question

"Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general."

(Kaj_Sotala)

I posted that my military experience seems effectively designed to increase executive function. Some examples of this, from myself and metastable, are:

Uniforms - not having to think about your wardrobe, ever, saves a lot of time, mental effort, and money. Steve Jobs and President Obama are also known for wearing a self-imposed uniform for exactly this purpose.

PT - daily, routinized exercise, done in a way where very few people have to decide what comes next.

Maximum use of daylight hours.

Med Group and Force Support - minimized high-risk projects outside the workplace (paternalistic health care, insurance, and in many cases housing and continuing education).

 

After a moment's thought it occurred to me that there are some double-edged swords in Military Rationality as well, some of which lead to classic jokes like 'Military Intelligence is an oxymoron.'

 

Regulations- A select few 'experts' create policies which everyone else is required to follow at all times. Unfortunately these experts are never (never ever) encouraged to consider knock-on effects. Ugh.

 

Anybody else have insights on the military they want to share here? I feel a couple of good posts on increasing executive function might come out of a discussion on the rationalities and irrationalities of the armed forces.

 

Internet Research (with tangent on intelligence analysis and collapse)

11 [deleted] 31 July 2013 04:58AM

Want to save time? Skip down to "I'm looking to compile a thread on Internet Research"!

Opinionated Preamble:

There is a lot of high-level thinking on Less Wrong, which is great. It's done wonders to structure and optimize my own decisions. But I think the political and futurology-related issues that Less Wrong covers can sometimes get out of sync with the reality and injustices of events in the immediate world. There are comprehensive treatments of how medical science is failing, or how academia cannot give unbiased results, and this is the milieu of programmers and philosophers in the middle-to-upper class of the planet. I believe that this circle of awareness can be expanded, even if it means treading into mind-killing territory. If anything, I want to give people a near-mode sense of stakes other than x-risk: the x-risk scenarios I've seen Less Wrong fear the most kill humanity more or less instantly. A slower descent into violence and poverty is, to me, much more horrifying, because I might have to live in it and I don't know how. As a matter of fact, I have no idea how to predict it.

This is one reason why I'm drawn to the Intelligence Operations performed by militaries and crime units, among others. Intelligence product delivery is about raw and immediate *fact*, and there is a lot of it. The problems featured in IntelOps are among the few things rationality is genuinely good for - highly uncertain scenarios with one-off executions and messy or noisy feedback. Facts get lost in translation as messages are passed along, and of course feeding and receiving fake facts is part of the job - but nevertheless, knowing *everything* *everywhere* is in the job description, so some form of rationality becomes a necessity.

It gets ugly. The demand for these kinds of skills often lies in industries that are highly competitive, violent, and illegal. I believe that once you take a close look at how force and power are applied in practice, there is no more pretending that human evils are an accident.

Open Source Intelligence, or "OSINT", is the mining of data and facts from public information databases, news articles, codebases, and journals. Although the amount of classified data dwarfs the unclassified, the size and scope of the unclassified is responsible for the majority of intelligence reports - and thus is involved in the great majority of executive decisions made by government entities. It's worth giving some thought to how much of what we know, they know too. As illustrated in this exposé, the processing of OSINT is a great big chunk of what modern intelligence is about, aside from many other things. I think understanding how rationality as developed on Less Wrong can contribute to better IntelOps, and how IntelOps can feed the rationality community, would be awesome - but that's a post for another time.
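To make the "processing" step concrete: most OSINT work is filtering a flood of public text down to the few documents worth an analyst's time. As a minimal sketch (my own toy illustration, not from any intelligence doctrine - the function name and example documents are made up), a bare-bones TF-IDF ranker shows the idea:

```python
import math
import re
from collections import Counter

def rank_documents(query, documents):
    """Rank free-text documents against a query with a bare-bones
    TF-IDF score: term frequency in the document, weighted by how
    rare the term is across the whole collection."""
    def tokens(text):
        return re.findall(r"[a-z0-9]+", text.lower())

    doc_tokens = [tokens(d) for d in documents]
    n = len(documents)
    df = Counter()  # how many documents each term appears in
    for toks in doc_tokens:
        for term in set(toks):
            df[term] += 1

    def score(toks):
        tf = Counter(toks)
        return sum(tf[t] * math.log((1 + n) / (1 + df[t]))
                   for t in tokens(query))

    # indices of documents, most relevant first
    return sorted(range(n), key=lambda i: score(doc_tokens[i]), reverse=True)

docs = ["troop logistics and supply routes",
        "a recipe for apple pie",
        "open source intelligence and public logistics data"]
ranked = rank_documents("logistics intelligence", docs)  # → [2, 0, 1]
```

Real pipelines use far better models, but even this crude weighting captures the core move: common words count for little, rare discriminating words count for a lot.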

--

The Show

Through my investigations into IntelOps I've noticed the emphasis on search. Good search.

I'm looking to compile a thread on Internet Research. I'm wondering if there is any wisdom on Less Wrong that can be applied here to becoming more effective searchers. Here are some questions that could be answered specifically, but they are just guidelines - feel free to voice associated thoughts; we're exploring here.

  • Before actually going out and searching, what would be the most effective way of drafting and optimizing a collection plan? Are there any formal optimization models that could inform our distribution of time and attention? Exploration vs. exploitation comes to mind, but it would be worth formulating something specific. I've heard that the multi-armed bandit problem is essentially solved - is that right?
  • Do you have any links or resources regarding more effective search?
  • Do you have any experiences regarding internet research that you can share? Any patterns that you've noticed that have made you more effective at searching?
  • What are examples of closed-source information that are low-hanging fruit in terms of access (e.g. academic journals)? What are possible strategies for acquiring closed-source data (e.g. enrolling in small courses at universities, e-mailing researchers, compulsion via the law/Freedom of Information Act, social engineering, etc.)?
  • I would like to hear from SEOs and software developers on what their interpretation of semantic web technologies is, and how those technologies are going to affect end-users. I am somewhat unfamiliar with the semantic web, but from my understanding, information that could not previously be indexed is now indexed, and new ontologies will emerge as this information is mined. What should an end-user expect, and what opportunities will there be that don't exist in the current generation of search?
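On the exploration-vs.-exploitation question above: the standard formalization is the multi-armed bandit, and while optimal policies exist only for special cases (e.g. Gittins indices for discounted Bernoulli bandits), simple heuristics already do well in practice. Here is a toy epsilon-greedy sketch (my own illustration - the function name, arms, and parameters are all made up):

```python
import random

def epsilon_greedy_bandit(true_rates, pulls=10000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy solver for a Bernoulli multi-armed bandit.

    true_rates: the hidden payoff probability of each 'source' (arm).
    With probability epsilon we explore a random arm; otherwise we
    exploit the arm with the best success rate observed so far.
    """
    rng = random.Random(seed)
    counts = [0] * len(true_rates)  # pulls per arm
    wins = [0] * len(true_rates)    # successes per arm

    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))  # explore
        else:
            # exploit: best empirical rate (unpulled arms score 0)
            arm = max(range(len(true_rates)),
                      key=lambda a: wins[a] / counts[a] if counts[a] else 0.0)
        counts[arm] += 1
        if rng.random() < true_rates[arm]:
            wins[arm] += 1
    return counts, wins

# With enough pulls, most of the budget concentrates on the best arm.
counts, _ = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

Read each "arm" as a search source or strategy: the point is that a trivial rule, with no knowledge of the true rates, still ends up spending most of its time on the most productive source - which is roughly what a good collection plan should do.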

That should be enough to get started. Below are some links that I have found useful with respect to Internet Research.

--

Meta-Search Engines or Assisted Search:

Summarizers:

Bots/Collectors/Automatic Filters:

Compilations and Directories:

Guides:

Practice:

I don't really care how you use this information, but I hope I've jogged some thinking about why it could be important.