
[Link] Barack Obama's opinions on near-future AI [Fixed]

3 scarcegreengrass 12 October 2016 03:46PM

Why don't we have a proper strategy against high risk threats?

-16 [deleted] 08 February 2014 12:04PM

I recently finished reading the Sequences, and they convinced me that the existential risk posed by an intelligence explosion is worth considering. One thing bothered me, though: the strategy adopted by this community seems notably one-sided.

Advancing our community’s interests will require a full spectrum of direct and indirect leadership approaches. MIRI and CEA are only one side of the coin. It would not be wise to ignore high-reward approaches that might even have a comparative advantage over our current strategy. We should be prepared to act as a security guarantor – preferably with partners and allies, but alone if necessary – to deter and defeat acts of significantly high risk-taking in the sector of artificial intelligence and related fields.

Why

Here's a quote by Yudkowsky:

This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us.  If you confront it full on, then you can't really justify trading off any part of intergalactic civilization for any intrinsic thing you could get nowadays.

The fate of a vast number of people depends on our actions. Yudkowsky observed that, by consequentialist reasoning, it is morally right to torture someone for 50 years if doing so saves a vast number of people from pain, however slight. The number of our descendants won't be 3^^^3 (unless the universe is spatially infinite and meets a few other conditions), but it will be big enough to outweigh some of the intrinsic things we care about nowadays. And not only can we save those people from non-existence, or in the worst-case scenario from eternal suffering at the hands of a UFAI, but we can possibly give those vast numbers of people vast amounts of positive utility. It may sound unintuitive, but even torture and murder pale in comparison with the fate of all of us and our descendants. Of course, the cost-benefit ratio of torture or murder is not favorable except in extreme circumstances under imminent threat. It's true that some criminal organizations like Mexican drug cartels use torture in their operations, but that is mainly to instill fear, which we don't really need to do. But there is nothing that could raise the chance of FAI and lower the chance of UFAI that would not be morally right. Why haven't we taken full advantage of this fact?

tl;dr: we should try to affect policy decisions related to AI safety by whatever means possible, so that we can beat death, avoid eternal torture, and live forever in a fun utopia. You already know this, so why haven't you suggested this before? Mods here have made certain policy decisions because they believe it will increase the chance of a happy ending, so why not go beyond that?

How

I suggest some kind of paramilitary and intelligence-gathering organization alongside MIRI and CEA. In pursuing our objectives, this new organization would make critical contributions to AI safety beyond MIRI's. CFAR could be transformed to partly support this organization - its boot-camp style of rationality training might be useful in other contexts too.

You might ask: what can a few individuals concerned about existential risks do without huge financial support and government backing? The answer is: quite a lot. Let's not underestimate our power. As gwern said in his article on the effectiveness of terrorism, it's actually quite easy to dismantle an organization if you're truly committed:

Suppose people angry at X were truly angry: so angry that they went beyond posturing and beyond acting against X's only if action were guaranteed to cost them nothing (like writing a blog post). If they ceased to care about whether legal proceedings might be filed against them; if they become obsessed with destroying X, if they devoted their lives to it and could ignore all bodily urges and creature comforts. If they could be, in a word, like Niven’s Protectors or Vinge’s Focused.

Could they do it? Could they destroy a 3 century old corporation with close to $1 trillion in assets, with sympathizers and former employees throughout the upper echelons of the United States Federal Government (itself the single most powerful entity in the world)?

Absolutely. It would be easy.

As I said, the destructive power of a human is great; let’s assume we have 100 fanatics - a vanishingly small fraction of those who have hated on X over the years - willing to engage even in assassination, a historically effective tactic and perhaps the single most effective tactic available to an individual or small group.

Julian Assange explains the basic theory of Wikileaks in a 2006 essay, “State and Terrorist Conspiracies” / “Conspiracy as Governance”: corporations and conspiracies form a graph network; the more efficiently communication flows, the more powerful a graph is; partition the graph, or impede communication (through leaks which cause self-inflicted wounds of secrecy & paranoia), and its power goes down. Carry this to its logical extreme…

"If all links between conspirators are cut then there is no conspiracy. This is usually hard to do, so we ask our first question: What is the minimum number of links that must be cut to separate the conspiracy into two groups of equal number? (divide and conquer). The answer depends on the structure of the conspiracy. Sometimes there are no alternative paths for conspiratorial information to flow between conspirators, other times there are many. This is a useful and interesting characteristic of a conspiracy. For instance, by assassinating one ‘bridge’ conspirator, it may be possible to split the conspiracy. But we want to say something about all conspiracies."

We don’t. We’re interested in shattering a specific conspiracy by the name of X. X has ~30,000 employees. Not all graphs are trees, but all trees are graphs, and corporations are usually structured as trees. If X’s hierarchy is similar to that of a binary tree, then to completely knock out the 8 top levels, one only needs to eliminate 256 nodes. The top 6 levels would require only 64 nodes.

If one knocked out the top 6 levels, then each of the remaining subtrees in level 7 has no priority over the rest. And there will be 2^7 − 2^6, or 64, such subtrees/nodes. It is safe to say that 64 sub-corporations, each potentially headed by someone who wants a battlefield promotion to heading the entire thing, would have trouble agreeing on how to reconstruct the hierarchy. The stockholders might be expected to step in at this point, but the Board of Directors would be included in the top of the hierarchy, and by definition, they represent the majority of stockholders.

One could launch the attack during a board meeting or similar gathering, and hope to have 1 fanatic take out 10 or 20 targets. But let’s be pessimistic and assume each fanatic can only account for 1 target - even if they spend months and years reconnoitering and preparing fanatically.

This leaves us 36 fanatics. X will be at a minimum impaired during the attack; financial companies almost uniquely operate on such tight schedules that one day’s disruption can open the door to predation. We’ll assign 1 fanatic the task of researching emails and telephone numbers and addresses of X rivals; after a few years of constant schmoozing and FOIA requests and dumpster-diving, he ought to be able to reach major traders at said rivals. (This can be done by hiring or becoming a hacker group - as has already penetrated X - or possibly simply by open-source intelligence and sources like a Bloomberg Terminal.) When the hammer goes down, he’ll fire off notifications and suggestions to his contacts. (For bonus points, he will then go off on an additional suicide mission.)

X claims to have offices in all major financial hubs. Offhand, I would expect that to be no more than 10 or 20 offices worth attacking. We assign 20 of our remaining 35 fanatics the tasks of building Oklahoma City-sized truck bombs. (This will take a while because modern fertilizer is contaminated specifically to prevent this; our fanatics will have to research how to undo the contamination or acquire alternate explosives. The example of Anders Behring Breivik reminds us that simple guns may be better tools than bombs.) The 20 bombs may not eliminate the offices completely, but they should take care of demoralizing the 29,000 in the lower ranks and punch a number of holes in the surviving subtrees.

Let’s assume the 20 bomb-builders die during the bombing or remain to pick off survivors and obstruct rescue services as long as possible.

What shall we do with our remaining 15 agents? The offices lay in ruins. The corporate lords are dead. The lower ranks are running around in utter confusion, with long-oppressed subordinates waking to realize that becoming CEO is a live possibility. The rivals have been taking advantage of X’s disarray as much as possible (although likely the markets would be in the process of shutting down).

15 is almost enough to assign one per office. What else could one do besides attack the office and its contents? Data centers are a good choice, but hardware is very replaceable and attacking them might impede the rivals’ efforts. One would want to destroy the software X uses in trading, but to do that one would have to attack the source repositories; those are likely either in the offices already or difficult to trace. (You’ll notice that we haven’t assigned our fanatics anything particularly difficult or subtle so far. I do this to try to make it seem as feasible as possible; if I had fanatics becoming master hackers and infiltrating X’s networks to make disastrous trades that bankrupt the company, people might say ‘aw, they may be fanatically motivated, but they couldn’t really do that’.)

It’s not enough to simply damage X once. We must attack on the psychological plane: we must make it so that people fear to ever again work for anything related to X.

Let us postulate one of our 15 agents was assigned a research task. He was to get the addresses of all X employees. (We may have already needed this for our surgical strike.) He can do this by whatever means: being hired by X’s HR department, infiltrating electronically, breaking in and stealing random hard drives, open source intelligence - whatever. Where there’s a will, there’s a way.

Divvy the addresses up into 14 areas centered around offices, and assign the remaining 14 agents to travel to each address in their area and kill anyone there. A man may be willing to risk his own life for fabulous gains in X - but will he risk his family? (And families are easy targets too. If the 14 agents begin before the main attacks, it will be a while before the X link becomes apparent. Shooting someone is easy; getting away with it is the hard part.)

I would be shocked if X could survive even half the agents.

The above description applies mainly to non-military organizations, but threats can also come from state actors backed by militaries, which requires more preparation. Security agencies find themselves faced with a complex spectrum of conflict, and this might encourage them to continue expanding their capabilities and powers, including automated systems, which poses risks. State-sponsored and non-state actors further complicate the issue by extending their reach through advanced technologies that were once solely the domain of states. High-risk threats in the non-military, non-state sector might be easier to neutralize, but we should not underprioritize possible state targets.

Of course, the scenario outlined above needs to be the absolute last resort, when there is nothing else you can do. You need a truly 9/11-scale effort to dismantle an organization capable of creating a fooming AGI. This means that after completing that kind of mission, our community couldn't exist any more than al-Qaeda can in the US. Extreme solutions are bad partly because they are extreme, and they are only right under extreme circumstances. As you can see from this earlier thread on the effectiveness of terrorism, "perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise." See also this thread and the first comment by Yvain: http://lesswrong.com/lw/2se/rational_terrorism_or_why_shouldnt_we_burn_down/

Instead, in support of our community's interests, our new unit would ideally take a strong role in efforts to safeguard access, sustain security, place undercover operatives in high-risk corporations and agencies, provide oversight and accountability, and promote responsible norms in AI safety. It would also facilitate cooperation in AI safety with transparent, routine, and predictable practices as part of our theater strategies. As technological capability and capacity increase among non-state actors, we will seek new ways to catalyze greater security cooperation.

When

It will probably take decades until something resembling Strong AI exists. That is a reason to start creating our new unit, and recruiting and training potential candidates, soon, so that we're ready and have all the necessary information when the time arrives. This may seem ambitious, but is it really more ambitious than MIRI's goal, which is itself one of the hardest tasks in the world?

Can someone explain why this is a bad idea? Since I'm not sure you can publicly endorse this post, I suggest the following approach: to signal that you're willing to take what I said seriously and proceed with this strategy, remove this post in the next 24 hours, just like we planned.

[LINK] Climate change and food security

-1 [deleted] 14 April 2013 11:11AM

A Guardian article on the impact of climate change on food security. This is worrying (albeit perhaps not a global catastrophic (or existential) risk). It has the potential to wipe out the gains made against extreme poverty in the last few decades.

Should we be so pessimistic? Climate change might be averted through government action or a technological fix; or the poorest might get rich enough to be protected from this insecurity; or we could see a second 'Green Revolution' with GM, etc. I've also seen some discussion that climate change could in fact increase food cultivation - in Russia and Canada for example.

How do people feel about this - optimistic or pessimistic?

Optimal User-End Internet Security (Or, Rational Internet Browsing)

1 [deleted] 09 September 2011 06:23PM

Hacking and Cracking, Internet security, Cypherpunk. I find these topics fascinating as well as completely over my head.

Yet, there are still some things that can be said to a layman, especially by the ever-poignant Randall Munroe:

Password Strength

Password Reuse

I'm guilty on both charges (reusing poorly formulated passwords, not stealing them).
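
To make the comics' argument concrete, here is a minimal sketch of the underlying entropy arithmetic in Python. The 2048-word list and the resulting 44-bit figure match the "Password Strength" comic as I recall it; the character-pool size and password length below are assumptions for illustration only:

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for a secret of `length` symbols drawn uniformly
    and independently from a pool of `pool_size` possibilities."""
    return length * math.log2(pool_size)

# Four words picked uniformly at random from a 2048-word list:
# 4 * log2(2048) = 44 bits, the passphrase figure from the comic.
passphrase = entropy_bits(2048, 4)

# A naive upper bound for an 11-character password over a ~72-symbol
# pool (pool size is an assumption). The comic's point is that the
# *real* entropy of a human-chosen "mangled word" is far lower,
# because the substitutions and suffixes are predictable.
mangled_word_bound = entropy_bits(72, 11)

print(f"four random common words: {passphrase:.0f} bits")
print(f"naive bound for 11 mixed characters: {mangled_word_bound:.0f} bits")
```

The uniform random choice is what makes the passphrase figure honest; any human-memorable pattern in either scheme lowers the effective entropy below these numbers.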

These arguments may just be the tip of the iceberg of a much larger problem that needs optimizing: social engineering, or mainly how it can be used against our interests (to quote Person 2, "It doesn't matter how much security you put on the box. Humans are not secure."). I get the feeling that I'm not managing my risks on the Internet as well as I should.

So the questions I ask are: In what ways do our cognitive biases come into play when we surf the Internet and interact with others? Which of these biases can we actively protect against, and how? I've enforced HTTPS where available and kept my Internet use iconoclastic rather than typical, but I doubt that's a comprehensive list.

I don't know how usefully I can contribute, but I hope that many on Less Wrong can.

Computer Programs Rig Elections

-2 magfrump 23 August 2011 02:03AM

I don't know how interested this community would be in this topic; I don't mean to talk politics so much as technology and decision mechanisms.

According to this programmer's testimony, voting machine companies requested that their programmers make it possible for the companies to rig elections, while in communication with elected officials.

http://www.youtube.com/watch?v=1thcO_olHas&sns=fb

If there is a discussion of how worthwhile taking the time to vote is, this may be worth knowing.

This is something that I expected to be true beforehand, but I am wondering: How reliable is this testimony?  What are other LWers' prior and posterior probabilities of elections being rigged in this way?  Is it worth trying to do something about this, and if so what?

Schneier talks about The Dishonest Minority [Link]

6 Nic_Smith 10 May 2011 05:27AM

Evolution. Morality. Strategy. Security/Cryptography. This hits so many topics of interest that I can't imagine it not being discussed here. Bruce Schneier blogs about his book-in-progress, The Dishonest Minority:

Humans evolved along this path. The basic mechanism can be modeled simply. It is in our collective group interest for everyone to cooperate. It is in any given individual's short-term self interest not to cooperate: to defect, in game theory terms. But if everyone defects, society falls apart. To ensure widespread cooperation and minimal defection, we collectively implement a variety of societal security systems.
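
The "modeled simply" mechanism in the quote is essentially the standard public-goods game. Here is a minimal sketch of it as my own illustration, not anything from Schneier's draft; the player count, multiplier, and payoffs are assumptions chosen only to show that defection dominates individually while universal defection is collectively worse:

```python
# Toy public-goods game: each of n players either contributes 1 unit
# (cooperates) or keeps it (defects); contributions are multiplied by
# r (with 1 < r < n) and the pot is split evenly among all players.

def payoff(contributes: bool, num_cooperators: int, n: int = 10, r: float = 3.0) -> float:
    """One player's payoff given the total number of contributors."""
    share_of_pot = r * num_cooperators / n
    return share_of_pot - (1.0 if contributes else 0.0)

n = 10
print(payoff(True, n))       # everyone cooperates: each nets 2.0
print(payoff(False, n - 1))  # a lone defector free-rides: 2.7
print(payoff(False, 0))      # everyone defects: 0.0 - society "falls apart"
```

Because each contributed unit returns only r/n = 0.3 to the contributor, defecting is always individually better regardless of what others do; the "societal security systems" in the quote amount to changing these payoffs, via punishment or reputation costs, so that cooperation wins.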

The above somewhat reminds me of Robin Hanson's Homo Hypocritus writings, although it is not the same. Schneier says that the book is basically a first draft at this point and might still change quite a bit. Some of the comments focus on whether "dishonest" is actually the best term for defecting from social norms.