
[link] Reality Show 'Utopia'

-8 MathieuRoy 06 September 2014 08:39PM

The TV series 'Utopia' just started.

"The series follows a cast of 15 men and women who are placed in isolation and filmed twenty-four hours a day for one year. The cast must create their own society and figure out how to survive. The series will be shown twice a week, but there will be online streaming 24/7 with 129 hidden and unhidden cameras all over the Utopia compound. The live streams will begin on August 29, the day when the 15 pioneers will enter Utopia. Over 5,000 people auditioned for the series. Every month three pioneers will be nominated and could be sent back to their everyday lives. The live streamers will decide which new pioneers get their chance to become Utopian." (source: http://en.wikipedia.org/wiki/Utopia_(U.S._reality_TV_series))

Since every month new 'pioneers' will be introduced, you can still audition for the series; here's how: http://www.utopiatvcasting.com/how-to-audition. I would love to see a well-trained rationalist teaching "the world" some applied rationality principles, and I think this TV show would be an excellent medium to reach the "average person". It would also be nice to see someone explaining what Utopia means to a transhumanist. Let us know if you apply.

Why don't we have a proper strategy against high risk threats?

-16 [deleted] 08 February 2014 12:04PM

I recently finished reading the Sequences, and they convinced me that the existential risk posed by an intelligence explosion is worth considering. There was one thing that bothered me, though - the strategy adopted by this community seems notably one-sided.

Advancing our community's interests will require a full spectrum of direct and indirect leadership approaches. MIRI and CEA are only one side of the coin. It would not be wise to ignore high-reward approaches that might even hold a comparative advantage over our current strategy. We should be prepared to act as a security guarantor – preferably with partners and allies, but alone if necessary – to deter and defeat dangerously high-risk activity in artificial intelligence and related fields.

Why

Here's a quote by Yudkowsky:

This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us.  If you confront it full on, then you can't really justify trading off any part of intergalactic civilization for any intrinsic thing you could get nowadays.

The fate of vast numbers of people depends on our actions. Yudkowsky observed that, by consequentialist reasoning, it is morally right to torture someone for 50 years if doing so spares a vast number of people from pain, however slight. The number of our descendants won't be 3^^^3 (unless the universe is spatially infinite and meets a few other conditions), but it will be big enough to outweigh some of the intrinsic things we care about nowadays. And not only can we save those people from non-existence, or in the worst-case scenario from eternal suffering at the hands of a UFAI, but we can possibly give those vast numbers of people vast amounts of positive utility. It may sound counterintuitive, but even torture and murder pale in comparison with the fate of all of us and our descendants. Of course, the cost-benefit ratio of torture or murder is not ideal except under extreme circumstances and imminent threat. It's true that some criminal organizations like Mexican drug cartels use torture in their operations, but that is mainly to instill fear, which we have no real need to do. But there is nothing that could raise the chance of FAI and lower the chance of UFAI that would not be morally right. Why haven't we taken full advantage of this fact?

tl;dr we should try to affect policy decisions related to AI safety by whatever means possible, so that we can beat death, avoid eternal torture, and live forever in a fun utopia. You already know this, so why haven't you suggested it before? The mods here have made certain policy decisions because they believe doing so will increase the chance of a happy ending, so why not go beyond that?

How

I suggest some kind of paramilitary and intelligence-gathering organization alongside MIRI and CEA. In pursuing our objectives, this new organization would make critical contributions to AI safety beyond MIRI. CFAR could be transformed to partly support this organization - its boot-camp style of rationality training might be useful in other contexts too.

You might ask: what can a few individuals concerned about existential risks do without huge financial support and government backing? The answer is: quite a lot. Let's not underestimate our power. As gwern said in his article on the effectiveness of terrorism, it's actually quite easy to dismantle an organization if you're truly committed:

Suppose people angry at X were truly angry: so angry that they went beyond posturing and beyond acting against X's only if action were guaranteed to cost them nothing (like writing a blog post). If they ceased to care about whether legal proceedings might be filed against them; if they become obsessed with destroying X, if they devoted their lives to it and could ignore all bodily urges and creature comforts. If they could be, in a word, like Niven’s Protectors or Vinge’s Focused.

Could they do it? Could they destroy a 3 century old corporation with close to $1 trillion in assets, with sympathizers and former employees throughout the upper echelons of the United States Federal Government (itself the single most powerful entity in the world)?

Absolutely. It would be easy.

As I said, the destructive power of a human is great; let’s assume we have 100 fanatics - a vanishingly small fraction of those who have hated on X over the years - willing to engage even in assassination, a historically effective tactic and perhaps the single most effective tactic available to an individual or small group.

Julian Assange explains the basic theory of Wikileaks in a 2006 essay, “State and Terrorist Conspiracies” / “Conspiracy as Governance”: corporations and conspiracies form a graph network; the more efficiently communication flows, the more powerful a graph is; partition the graph, or impede communication (through leaks which cause self-inflicted wounds of secrecy & paranoia), and its power goes down. Carry this to its logical extreme…

"If all links between conspirators are cut then there is no conspiracy. This is usually hard to do, so we ask our first question: What is the minimum number of links that must be cut to separate the conspiracy into two groups of equal number? (divide and conquer). The answer depends on the structure of the conspiracy. Sometimes there are no alternative paths for conspiratorial information to flow between conspirators, other times there are many. This is a useful and interesting characteristic of a conspiracy. For instance, by assassinating one ‘bridge’ conspirator, it may be possible to split the conspiracy. But we want to say something about all conspiracies."

We don’t. We’re interested in shattering a specific conspiracy by the name of X. X has ~30,000 employees. Not all graphs are trees, but all trees are graphs, and corporations are usually structured as trees. If X’s hierarchy is similar to that of a binary tree, then to completely knock out the 8 top levels, one only needs to eliminate 256 nodes. The top 6 levels would require only 64 nodes.

If one knocked out the top 6 levels, then each of the remaining subtrees in level 7 has no priority over the rest. And there will be 2^7 − 2^6, or 64, such subtrees/nodes. It is safe to say that 64 sub-corporations, each potentially headed by someone who wants a battlefield promotion to heading the entire thing, would have trouble agreeing on how to reconstruct the hierarchy. The stockholders might be expected to step in at this point, but the Board of Directors would be included in the top of the hierarchy, and by definition, they represent the majority of stockholders.

One could launch the attack during a board meeting or similar gathering, and hope to have 1 fanatic take out 10 or 20 targets. But let’s be pessimistic and assume each fanatic can only account for 1 target - even if they spend months and years reconnoitering and preparing fanatically.

This leaves us 36 fanatics. X will be at a minimum impaired during the attack; financial companies almost uniquely operate on such tight schedules that one day’s disruption can open the door to predation. We’ll assign 1 fanatic the task of researching emails and telephone numbers and addresses of X rivals; after a few years of constant schmoozing and FOIA requests and dumpster-diving, he ought to be able to reach major traders at said rivals. (This can be done by hiring or becoming a hacker group - as has already penetrated X - or possibly simply by open-source intelligence and sources like a Bloomberg Terminal.) When the hammer goes down, he’ll fire off notifications and suggestions to his contacts. (For bonus points, he will then go off on an additional suicide mission.)

X claims to have offices in all major financial hubs. Offhand, I would expect that to be no more than 10 or 20 offices worth attacking. We assign 20 of our remaining 35 fanatics the tasks of building Oklahoma City-sized truck bombs. (This will take a while because modern fertilizer is contaminated specifically to prevent this; our fanatics will have to research how to undo the contamination or acquire alternate explosives. The example of Anders Behring Breivik reminds us that simple guns may be better tools than bombs.) The 20 bombs may not eliminate the offices completely, but they should take care of demoralizing the 29,000 in the lower ranks and punch a number of holes in the surviving subtrees.

Let’s assume the 20 bomb-builders die during the bombing or remain to pick off survivors and obstruct rescue services as long as possible.

What shall we do with our remaining 15 agents? The offices lay in ruins. The corporate lords are dead. The lower ranks are running around in utter confusion, with long-oppressed subordinates waking to realize that becoming CEO is a live possibility. The rivals have been taking advantage of X’s disarray as much as possible (although likely the markets would be in the process of shutting down).

15 is almost enough to assign one per office. What else could one do besides attack the office and its contents? Data centers are a good choice, but hardware is very replaceable and attacking them might impede the rivals’ efforts. One would want to destroy the software X uses in trading, but to do that one would have to attack the source repositories; those are likely either in the offices already or difficult to trace. (You’ll notice that we haven’t assigned our fanatics anything particularly difficult or subtle so far. I do this to try to make it seem as feasible as possible; if I had fanatics becoming master hackers and infiltrating X’s networks to make disastrous trades that bankrupt the company, people might say ‘aw, they may be fanatically motivated, but they couldn’t really do that’.)

It’s not enough to simply damage X once. We must attack on the psychological plane: we must make it so that people fear to ever again work for anything related to X.

Let us postulate one of our 15 agents was assigned a research task. He was to get the addresses of all X employees. (We may have already needed this for our surgical strike.) He can do this by whatever mean: being hired by X’s HR department, infiltrating electronically, breaking in and stealing random hard drives, open source intelligence - whatever. Where there’s a will, there’s a way.

Divvy the addresses up into 14 areas centered around offices, and assign the remaining 14 agents to travel to each address in their area and kill anyone there. A man may be willing to risk his own life for fabulous gains in X - but will he risk his family? (And families are easy targets too. If the 14 agents begin before the main attacks, it will be a while before the X link becomes apparent. Shooting someone is easy; getting away with it is the hard part.)

I would be shocked if X could survive even half the agents.

The above description applies mainly to non-military organizations, but threats can also come from state actors with heavier military backing, which requires more preparation. Security agencies find themselves faced with a complex spectrum of conflict, and this might encourage them to keep expanding their capabilities and powers, including automated systems, which poses its own risks. State-sponsored and non-state actors further complicate matters by extending their reach through advanced technologies that were once solely the domain of states. High-risk threats in the non-military, non-state sector might be easier to neutralize, but we should not underprioritize possible state targets.

Of course, the scenario outlined above has to be the absolute last resort, when there is nothing else you can do. You need a truly 9/11-scale effort to dismantle an organization capable of creating a fooming AGI. This means that after completing that kind of mission, our community could no more continue to exist than al-Qaeda can in the US. Extreme solutions are bad partly because they are extreme solutions, and they are only right under extreme circumstances. As you can see from this earlier thread on the effectiveness of terrorism, "perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise." See also this thread and the first comment by Yvain: http://lesswrong.com/lw/2se/rational_terrorism_or_why_shouldnt_we_burn_down/

Instead, in support of our community's interests, our new unit would ideally take a strong role in efforts to safeguard access, sustain security, place undercover operatives in high-risk corporations and agencies, provide oversight and accountability, and promote responsible norms in AI safety. It would also facilitate cooperation in AI safety through transparent, routine, and predictable practices as part of our theater strategies. As technological capability and capacity increase among non-state actors, we will seek new ways to catalyze greater security cooperation.

When

It will probably take decades before something resembling Strong AI exists. That is a reason to start creating our new unit, and recruiting and training potential candidates, soon, so that we are ready and have all the necessary information when the time arrives. This may seem ambitious, but is it really more ambitious than MIRI's goal, which is itself one of the hardest tasks in the world?

 


 

Can someone explain why this is a bad idea? Since I'm not sure you can publicly endorse this post, I suggest the following approach: to signal that you're willing to take what I said seriously and proceed with this strategy, remove this post within the next 24 hours, just like we planned.

Absent Transhumanism and Transformative Technologies, Which Utopia is Left?

3 diegocaleiro 05 December 2012 12:54PM

Assume for the time being that it will forever remain beyond the scope of science to change Human Nature. AGI is also impossible, as are Nanotech, BioImmortality, and the like.

Douglas Adams's mice have finished their human experiment, giving you, personally, the job of redesigning Earth, and especially human society, according to your wildest utopian dreams - but you can't change the unchangeables above.

You can play with architecture, engineering, gender ratio, clothing, money, science grants, governments, feeding rituals, family constitution, the constitution itself, education, etc. Just don't forget that if you slide something too far from what our evolved brains were designed to accept, things may slide back, or instability and catastrophe may ensue.

Finally, if you are not the kind of utilitarian who assigns exactly the same importance to your own desires as to those of others, I want you to create this Utopia for yourself and your values, not for everyone.

The point of this exercise is this: the vast majority of people I know who are unconnected to this community, when asked about an ideal world, will not change human nature, or animal suffering, or things like that; they'll think about changing whatever the newspaper editors have been writing about over the last few weeks. I am wondering whether there is a symmetry here, and whether folks from this community spend comparatively little time thinking about the kinds of change that don't rely on transformative technologies. It is just an intuition pump, a gedankenexperiment if you will. Force your brain to face this counterfactual reality, and make the best world you can given those constraints. Maybe, if sufficiently many people post here, the results might clarify something about CEV, or the sociology of LessWrongers...

 

Writing about Singularity: needing help with references and bibliography

4 [deleted] 05 March 2012 01:27AM

 

It was Yudkowsky's Fun Theory sequence that inspired me to undertake the work of writing a novel on a singularitarian society... however, there are gaps I need to fill, and I need all the help I can get. It's mostly book recommendations that I'm asking for.

 

One of the things I'd like to tackle in it is the interaction between the modern, geeky Singularitarianisms and Marxism, which I hold to be somewhat prototypical in that sense, as well as other utopianisms - and to contrast them with more down-to-earth ideologies and attitudes by examining the seriously dangerous bumps at the technological point of transition between "baseline" and "singularity". But I need to do a lot of research before I'm able to write anything good: if I'm not going to have any original ideas, at least I'd like to serve my readers with a collection of well-researched, solid ones.

 

So I'd like to have everything that is worth reading about the Singularity, specifically the Revolution it entails (in one way or another) and the social aftermath. I'm particularly interested in the consequences of the lag in the spread of the technology from the wealthy to the baselines, and the potential for oppression of baselines and other continuations of current social imbalances, as well as suboptimal distribution of wealth. After all, according to many authors, we've had the means to end war, poverty, famine, and most infectious diseases since the sixties, and it's just our irrational methods of wealth distribution that have kept us from doing so. That is, supposing the commonly alleged ideal of maximizing total lifespan and material welfare for all humanity is what actually drives the way things are done. But even with other, different premises and axioms, there's much that can be improved and isn't, thanks to basic human irrationality, which is what we combat here.

 

Also, yes, this post makes my political leanings fairly clear, but I'm open to alternative viewpoints and actively seek them out. I also don't intend to write any propaganda as such - just to examine ideas and scenarios for the sake of writing a compelling story with wide audience appeal. The idea is to raise awareness of the Singularity as something rather imminent ("Summer's Coming"), and to prompt (or at least help prepare) normal people to question its wonders and dangers rationally.

 

It's a frighteningly ambitious, long-term challenge; I am terribly aware of that. And the first thing I'll need to read is a style book, to correct my horrendous grasp of standard acceptable writing (and not seem arrogant by doing anything else), so please feel free to recommend as many books, blog articles, and other materials as you like. I'll take my time going through it all.

 

Utopia in Manna

9 Konkvistador 25 February 2012 09:53PM

Manna is the title of a science fiction story that describes a near-future transition to an automated society where human labor is uneconomical. In its later chapters it describes a post-scarcity society in some detail. There are several problems with it, however; the greatest by far is that the author seems to have assumed that "want" and "envy" are primarily tied to material needs. This is simply not true.

I would love to live in a society with material equality at a sufficiently high standard; I would, however, hate to live in a society with enforced social equality, simply because that would override my preferences and my freedom to interact, or not interact, with whomever I wish.

Also, since things like the willpower to work out (even to stay in top athletic condition!) or the lack of resources to carry out even basic plans are made irrelevant, things like genetic inequality, how comfortable you are messing with your own hardware to upgrade your capabilities, and how much time you dedicate to self-improvement would matter more than ever.

I predict social inequality would be pretty high in this society, and mostly involuntary. Even with a decision you could presumably change later, such as how much of your time you devote to self-improvement, there wouldn't be a good way to catch up with anyone (think opportunity cost and compound interest), unless technological progress hit diminishing returns and slowed down. Social inequality would, however, be more limited than pure financial inequality, I would guess, because of things like Dunbar's number. There would still be tragedy (that may be a feature rather than a bug of utopia). I guess people would be comfortable with gods above and beasts below them who don't really register in the "my social status compared to others" part of the brain, but even within the narrow band where you do care, inequality would grow rapidly. Eventually you might find yourself alone in your specific spot.

To get back to my previous point about probable (to me) unacceptable limitations on freedom: it may seem silly that a society with material equality would legislate intrusive and micromanaging rules to force social equality and prevent this, but the hunter-gatherer instincts in us are strong. We demand equality. We enjoy bringing about "equality". We look good demanding equality. Once material needs are met, this powerful urge will still be there and will bring about signalling races - and ever new ways to avoid the edicts produced by such races (because also strong in us is the desire to be personally unequal, superior to someone, to distinguish and discriminate in our personal lives). This would play out in interesting and potentially dystopian ways.

I'm pretty sure the vast majority of people in the Australia Project would end up wireheading. Why bother to go to the Moon when you can have a perfect virtual-reality replica of it; why bother with the status of building a real fusion reactor when you can just play a gamified, simplified version and simulate the same social reward; why bother with a real relationship, etc.? Dedicating resources to something like a real-life space elevator simply wouldn't cross their minds. People, I think, systematically overestimate how much something being "real" matters to them. Better and better also means better and better virtual super-stimuli. Among the tiny remaining faction of "peas" (those choosing to spend most of their time in physical existence), very few would choose to have children, but they would dominate the future. Also, I see no reason why the US couldn't buy technology from the Australia Project for its own welfare-dependent citizens: instead of the cheap mega-shelters, just hook them up to virtual reality, with no choice in the matter - which would make a tiny fraction of them deeply unhappy (if they knew about it).

I maintain that the human brain's default response to unlimited control over its own sensory input, plus reasonable security of continued existence, is solipsism. And the default for a society of human brains with such technology is first social fragmentation, then value fragmentation, and eventually a return to living under the yoke of an essentially Darwinian process. Speaking of which, the society of the US as described in the story would probably outpace Australia, since it would have machines doing its research and development.

It would take some time for the value this creates to run out, though. Much as Robin Hanson finds glorious a future with a dream time of utopia followed by trillions of slaves, I still find a few subjective millennia of a golden age followed by non-human and inhuman minds to be worth it.

It is not as though we have to choose between infinity and something finite; the universe seems to have an expiration date as it is. A few thousand or million years doesn't seem like something fleas on an insignificant speck should sneer at.

Eutopia is Scary - for the author

10 Stuart_Armstrong 28 December 2011 09:42AM

As Eliezer points out, real utopias will be scary - certainly scarier than my latest attempt. Mainly they will be scary because they'll be different, and humans don't like different; it's vital that authors realise this if they want to create a realistic scenario. It's necessary to craft a world where we would be out of place.

But it's important to remember that utopias will not be scary for the people living there - the aspects that we find scary at the beginning of the 21st century are not what the locals will be afraid of (put your hand up if you are currently terrified that the majority of women can vote in modern democracies). Scary is in the observer, not the territory.

This is a special challenge when writing a fictional utopia. Dystopias and flawed utopias are much easier to write than utopias; when you can drop an anvil on your protagonist whenever you feel like it, the tension and interest are much easier to sustain. And the scary parts of utopia are a cheap and easy way of dropping anvils: the reader thrills to this frightening and interesting concept and starts objecting/agreeing/thinking about and with it. But it's all OK, you think: it's not a dystopia, it's just a scary utopia; you can get your thrills without going astray.

But all that detracts from your real mission, which is to write a utopia that is genuinely good for the people in it, and would be genuinely interesting to read about even if it weren't scary. I found this particularly hard, and I'd recommend that those who write utopias do a first draft or summary without any scary bits in it - if this doesn't feel interesting on its own, then you've failed.

Then when you do add the scary bits, make sure they don't suck all the energy out of your story, and make sure you emphasise that the protagonists find these aspects commonplace rather than frightening. There is a length issue - if your story is long, you can afford to put more scary bits in, and even make the reader start seeing them just as the locals do, without the main point being swallowed up. If your story's short, however, I'd cut down on the scary radically: if "rape is legal" and you only have a few pages, then that's what most people are going to remember about your story. The scariness is a flavouring, not the main dish.