[Link] Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority

6 ignoranceprior 14 October 2016 07:58PM

Meetup : Baltimore Area / UMBC Weekly Meetup

0 iarwain1 14 October 2016 04:39PM

Discussion article for the meetup : Baltimore Area / UMBC Weekly Meetup

WHEN: 16 October 2016 08:00:00PM (-0400)

WHERE: Performing Arts and Humanities Bldg Room 456, 1000 Hilltop Cir, Baltimore, MD 21250

The meeting is on the 4th floor of the Performing Arts and Humanities Building. Permit parking designations do not apply on weekends, so park pretty much wherever you want.

Discussion article for the meetup : Baltimore Area / UMBC Weekly Meetup

Weekly LW Meetups

0 FrankAdamek 14 October 2016 03:56PM

This summary was posted to LW Main on October 14th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


[Link] GiveWell: A case study in effective altruism, part 1

0 philh 14 October 2016 10:46AM

[Link] Wikipedia book based on betterhumans' article on cognitive biases

1 MathieuRoy 14 October 2016 01:03AM

The map of agents which may create x-risks

2 turchin 13 October 2016 11:17AM

Phil Torres recently wrote an article that raises a new topic in existential risk research: the question of who the possible agents in the creation of a global catastrophe could be. He identifies five main types of agents, and two main reasons why they might create a catastrophe (error and terror).  

He discusses the following types of agents: 

 

(1) Superintelligence. 

(2) Idiosyncratic actors.  

(3) Ecoterrorists.  

(4) Religious terrorists.  

(5) Rogue states.  

 

Inspired by his work, I decided to create a map of all possible agents, as well as their possible reasons for creating x-risks. Some new ideas appeared during this work.  

I think that significant additions to the list of agents should be: superpowers, as they are known to have created most global risks in the 20th century; corporations, as they are now on the front line of AGI creation; and pseudo-rational agents, who could create a Doomsday weapon in the future to use for global blackmail (perhaps even with positive values), or who could risk civilization's fate for their own benefit (dangerous experiments). 

The x-risk prevention community could itself become an agent of risk: if it fails to prevent obvious risks, if it uses smaller catastrophes to prevent larger ones, or if it generates new dangerous ideas about possible risks that could inspire potential terrorists.  

As technology progresses, more and more types of agents will gain access to dangerous technologies, even teenagers (see, for example, "Why This 14-Year-Old Kid Built a Nuclear Reactor"). 

In this situation, only the number of agents with risky tech will matter, not the exact motivations of each one. But if we are unable to control the technology, we could at least try to control potential agents, or their average disposition. 
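As a minimal sketch of why the number of agents dominates (the independence assumption and the uniform per-agent probability are simplifications for illustration, not part of the map): if each of N agents could independently trigger a catastrophe with probability p, the total risk is 1 − (1 − p)^N, which for small p is driven almost entirely by N.

```python
# Toy independence model (illustrative assumption, not from the map):
# each of n agents independently triggers a catastrophe with
# probability p per period.

def total_risk(n_agents: int, p_per_agent: float) -> float:
    """Probability that at least one of n_agents triggers a catastrophe."""
    return 1 - (1 - p_per_agent) ** n_agents

# With p = 0.0001 per agent: ~0.01% risk for one agent,
# ~63% for 10,000 agents, and ~100% for a million agents.
for n in (1, 10_000, 1_000_000):
    print(f"{n:>9} agents -> total risk {total_risk(n, 1e-4):.4f}")
```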

The map shows various types of agents, starting from non-agents and ending with types of agential behavior that could result in catastrophic consequences (error, terror, risk, etc.). It also shows the types of risks that are most probable for each type of agent. I think my explanation in each case should be self-evident. 

We can also show that the set of x-risk agents will change as technology progresses. In the beginning there are no agents; later there are superpowers, and then smaller and smaller agents, until there are millions of people with biotech labs at home. In the end there will be only one agent: a SuperAI.  

So reducing the number of agents, and increasing their "morality" and intelligence, seem to be the most plausible directions for lowering risks. Special organizations or social networks could be created to control the riskiest types of agents. Different agents probably need different types of control. Some ideas for this agent-specific control are listed in the map, but a real control system would have to be much more complex and specific.

The map shows many agents: some are real and exist now (but don't yet have dangerous capabilities), and some are only possible in a moral or technical sense.

 

So there are four types of agents, which I show in the map in different colours:

 

1) Existing and dangerous, i.e. already possessing the technology to destroy humanity (superpowers, arrogant scientists) – Red

2) Existing and willing to end the world, but lacking the needed technologies (ISIS, VHEMT) – Yellow

3) Morally possible, but not yet existing: we can imagine logically consistent value systems that could result in human extinction, such as Doomsday blackmail – Green

4) Agents that will pose a risk only after supertechnologies appear (AI hackers, child biohackers) – Blue

 

Many agent types do not fit this classification, so I leave them white in the map. 
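A minimal sketch of this colour scheme as a decision rule (the boolean predicates below are my own hypothetical simplification of the map's criteria, not part of the map itself):

```python
from enum import Enum

class Colour(Enum):
    RED = "exists now and already has world-ending technology"
    YELLOW = "exists now, willing to end the world, lacks the technology"
    GREEN = "morally possible value system, no such agent exists yet"
    BLUE = "becomes dangerous only once supertechnologies appear"
    WHITE = "does not fit the classification"

def classify(exists_now: bool, capable_now: bool,
             wants_extinction: bool, needs_supertech: bool) -> Colour:
    """Hypothetical decision rule approximating the map's four colours."""
    if exists_now and capable_now:
        return Colour.RED          # e.g. superpowers, arrogant scientists
    if exists_now and wants_extinction:
        return Colour.YELLOW       # e.g. ISIS, VHEMT
    if not exists_now and wants_extinction:
        return Colour.GREEN        # e.g. a Doomsday blackmailer
    if needs_supertech:
        return Colour.BLUE         # e.g. AI hackers, child biohackers
    return Colour.WHITE
```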

 

The pdf of the map is here: http://immortality-roadmap.com/agentrisk11.pdf

 

 

 

 

(A JPG of the map is embedded below; it was repositioned because the sidebar was covering part of it.)

Meetup : San Francisco Meetup: Rationality Diary

0 rocurley 13 October 2016 05:18AM

Discussion article for the meetup : San Francisco Meetup: Rationality Diary

WHEN: 17 October 2016 06:15:00PM (-0700)

WHERE: 1769 15th St, San Francisco, CA

We'll be meeting to tell stories about when we tried to solve a problem in our lives, and how it went.

For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): three zero one, three five six, five four two four.

Format:

We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic.

About these meetups:

The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome.

We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it.

Discussion article for the meetup : San Francisco Meetup: Rationality Diary

[Link] Barack Obama's opinions on near-future AI [Fixed]

3 scarcegreengrass 12 October 2016 03:46PM

[Link] An attempt in layman's language to explain the metaethics sequence in a single post.

1 Bound_up 12 October 2016 01:57PM

Meetup : Washington, D.C.: Fun & Games

0 RobinZ 12 October 2016 12:02AM

Discussion article for the meetup : Washington, D.C.: Fun & Games

WHEN: 16 October 2016 03:30:00PM (-0400)

WHERE: Donald W. Reynolds Center for American Art and Portraiture

We will be meeting in the courtyard to hang out, play games, and engage in fun conversation.

Upcoming meetups:

  • Oct. 23: Communication
  • Oct. 30: Halloween Party

Discussion article for the meetup : Washington, D.C.: Fun & Games
