AGI-12 and AGI-Impacts - late places available

3 Stuart_Armstrong 22 November 2012 12:53PM

There are still some places available in the Winter Intelligence Multi-Conference, a dual conference comprising AGI-12 (the Fifth Conference on Artificial General Intelligence) followed by the AGI Impacts conference. The impacts conference will be about the safety, risks and impacts of AGI, and how best to prepare now for these challenges. This is of great relevance to the people of Less Wrong. Plus it's in Oxford - Oxford is nice.

The AGI-12 conference is on the 8th-9th December (with morning workshops on the 10th-11th), while the AGI Impacts conference is on the 10th-11th. Reduced prices are available for students; details here.

Hope to see as many of you as we can! And if people want to stay on for a few days after the conference, people from the Future of Humanity Institute should be available to chat.

How can I reduce existential risk from AI?

46 lukeprog 13 November 2012 09:56PM

Suppose you think that reducing the risk of human extinction is the highest-value thing you can do. Or maybe you want to reduce "x-risk" because you're already a comfortable First-Worlder like me and so you might as well do something epic and cool, or because you like the community of people who are doing it already, or whatever.

Suppose also that you think AI is the most pressing x-risk, because (1) mitigating AI risk could mitigate all other existential risks, but not vice-versa, and because (2) AI is plausibly the first existential risk that will occur.

In that case, what should you do? How can you reduce AI x-risk?

It's complicated, but I get this question a lot, so let me try to provide some kind of answer.


Meta-work, strategy work, and direct work

When you're facing a problem and you don't know what to do about it, there are two things you can do:

1. Meta-work: Amass wealth and other resources. Build your community. Make yourself stronger. Meta-work of this sort will be useful regardless of which "direct work" interventions turn out to be useful for tackling the problem you face. Meta-work also empowers you to do strategic work.

2. Strategy work: Purchase a better strategic understanding of the problem you're facing, so you can see more clearly what should be done. Usually, this will consist of getting smart and self-critical people to honestly assess the strategic situation, build models, make predictions about the effects of different possible interventions, and so on. If done well, these analyses can shed light on which kinds of "direct work" will help you deal with the problem you're trying to solve.

When you have enough strategic insight to have discovered some interventions that you're confident will help you tackle the problem you're facing, then you can also engage in:

3. Direct work: Directly attack the problem you're facing, whether this involves technical research, political action, particular kinds of technological development, or something else.

Thinking with these categories can be useful even though the lines between them are fuzzy. For example, you might have to do some basic awareness-raising in order to amass funds for your cause, and then once you've spent those funds on strategy work, your strategy work might tell you that a specific form of awareness-raising is useful for political action that counts as "direct work." Also, some forms of strategy work can feel like direct work, depending on the type of problem you're tackling.


AGI-12 conference in Oxford in December

4 Stuart_Armstrong 31 July 2012 10:38AM

The AGI Impacts conference in Oxford in December of this year will happen alongside the AGI-12 conference on Artificial General Intelligence. They also have a call for papers, to which some readers here may be interested in submitting:

AGI-12 Paper Submission Deadline EXTENDED to August 15

Some good news for tardy AGI authors!

As you may recall, the Fifth Conference on Artificial General Intelligence (AGI-12) will be held Dec 8-11 at Oxford University in the UK. The AGI conferences are the only major conference series dedicated to research on the creation of thinking machines with general intelligence at the human level and ultimately beyond. The full AGI-12 Call for Papers may be found at:

http://agi-conf.org/2012/call-for-papers/

Our proceedings publisher for AGI-12, Springer Lecture Notes in AI (LNAI), has informed us that their deadline for receiving the proceedings manuscript from us is later than previously thought. So, we have been able to extend the paper submission deadline once more, until August 15, allowing us to round up a few more excellent papers from tardy authors.

We look forward to seeing you at Oxford in December!

AGI Impacts conference in Oxford in December, with Call for Papers

9 Stuart_Armstrong 14 June 2012 11:37AM

From the 8th to the 12th of December, the Future of Humanity Institute will be hosting the Winter Intelligence Multi-Conference, a dual conference comprising AGI-12 (the Fifth Conference on Artificial General Intelligence) followed by the AGI Impacts conference. Of great relevance to the people on Less Wrong, the impacts conference will be about the safety, risks and impacts of AGI, and how best to prepare now for these challenges.

The conference now has a Call for Papers, with an associated prize offered by the Singularity Institute. Please publicise it in any relevant places:


'IMPACTS AND RISKS OF ARTIFICIAL GENERAL INTELLIGENCE'

AGI Impacts, 10-11.12.2012, OXFORD

The first conference on the Impacts and Risks of Artificial General Intelligence will take place at the University of Oxford, St. Anne’s College, on December 10th and 11th, 2012 – immediately following the fifth annual conference on Artificial General Intelligence AGI-12. AGI-Impacts is organized by the “Future of Humanity Institute” (FHI) at Oxford University through its “Programme on the Impacts of Future Technology”. The two events form the Winter Intelligence Multi-Conference 2012, hosted by FHI.


I

51 PhilGoetz 08 January 2011 05:51PM

I wrote this story at Michigan State during Clarion 1997, and it was published in the Sept/Oct 1998 issue of Odyssey.  It has many faults and anachronisms that still bother me.  I'd like to say that this is because my understanding of artificial intelligence and the singularity has progressed so much since then; but it has not.  Many anachronisms and implausibilities are compromises between wanting to be accurate, and wanting to communicate.

At least I can claim the distinction of having published the story with the shortest title in the English language - measured horizontally.

I

I was the last person, and this is how he died.


30th Soar workshop

18 Johnicholas 23 May 2010 01:33PM

This is a report, from a LessWrong perspective, on the 30th Soar workshop. Soar is a cognitive architecture that has been in continuous development for nearly 30 years, and is in a direct line of descent from some of the earliest AI research (Newell and Simon's Logic Theorist and General Problem Solver). Soar is interesting to LessWrong readers for two reasons:

  1. Soar is a cognitive science theory, and has had some success at modeling human reasoning - this is relevant to the central theme of LessWrong, improving human rationality.
  2. Soar is an AGI research project - this is relevant to the AGI risks sub-theme of LessWrong.

