Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"

50 ciphergoth 22 January 2015 08:21PM

Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?

Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.

"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21

Slides online from "The Future of AI: Opportunities and Challenges"

13 ciphergoth 16 January 2015 11:17AM

In the first weekend of this year, the Future of Life Institute hosted a landmark conference in Puerto Rico: "The Future of AI: Opportunities and Challenges". The conference was unusual in that it was not made public until it was over, and the discussions were held under the Chatham House Rule. The slides from the conference are now available. The list of attendees includes a great many famous names as well as lots of names familiar to those of us on Less Wrong: Elon Musk, Sam Harris, Margaret Boden, Thomas Dietterich, all three DeepMind founders, and many more.

This is shaping up to be another extraordinary year for AI risk concerns going mainstream!

Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial

54 ciphergoth 15 January 2015 04:33PM

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. 

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity."

[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI researchers Stuart Russell and Francesca Rossi. [...]

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna.

[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories. 

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015

Robin Hanson's "Overcoming Bias" posts as an e-book.

21 ciphergoth 31 August 2014 01:26PM

At Luke Muehlhauser's request, I wrote a script to scrape all of Robin Hanson's posts to Overcoming Bias into an e-book; here's a first beta release. Please comment here with any problems—posts in the wrong order, broken links, bad formatting, missing posts. Thanks!
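The script itself isn't included in the post, but here is a minimal sketch of that kind of scraper, assuming WordPress-style archive pages; the CSS selectors and the conversion step are illustrative assumptions, not details of the actual script:

```python
# A minimal sketch of the scraping approach, not the actual script.
# The selectors below are hypothetical; overcomingbias.com's real
# markup may differ.
import requests
from bs4 import BeautifulSoup

def scrape_posts(archive_url):
    """Yield (title, html_body) for each post linked from an archive page."""
    archive = BeautifulSoup(requests.get(archive_url).text, "html.parser")
    for link in archive.select("h2.entry-title a"):      # hypothetical selector
        page = BeautifulSoup(requests.get(link["href"]).text, "html.parser")
        body = page.select_one("div.entry-content")      # hypothetical selector
        yield link.get_text(strip=True), str(body)

# The collected posts can then be concatenated in publication order into
# one HTML file and converted to EPUB with a tool such as Calibre's
# ebook-convert.
```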

Open thread for December 17-23, 2013

5 ciphergoth 17 December 2013 08:45PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

A diagram for a simple two-player game

22 ciphergoth 10 November 2013 08:59AM

(Copied from my blog)

I always have a hard time making sense of preference matrices in two-player games. Here are some diagrams I drew to make it easier. This is a two-player game:

[Diagram 1]

North wants to end up on the northernmost point, and East on the easternmost. North goes first, and chooses which of the two bars will be used; East then goes second and chooses which point on the bar will be used.

North knows that East will always choose the easternmost point on whichever bar is picked, so one of these two:

[Diagram 2]

North checks which of the two points is further north, and so chooses the leftmost bar, and they both end up on this point:

[Diagram 3]

Which is sad, because there’s a point north-east of this that they’d both prefer. Unfortunately, North knows that if they choose the rightmost bar, they’ll end up on the easternmost, southernmost point.

Unless East can somehow precommit to not choosing this point:

[Diagram 4]

Now East is going to end up choosing one of these two points:

[Diagram 5]

So North can choose the rightmost bar, and the two players end up here, a result both prefer:

[Diagram 6]

I won’t be surprised if this has been invented before, and it may even be superseded – please do comment if so :)

Here’s a game where East has to both promise and threaten to get a better outcome:

[Diagram 7: two bars, with points (0,1), (1,3) on the first and (2,2), (3,0) on the second]

[Diagram 8: the same game, with the points East commits to marked]
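For readers who prefer code to diagrams, here is a small Python sketch (mine, not from the original post) that computes both outcomes for this last game: the no-commitment result by backward induction, and the result when East can commit in advance to a point on each bar. Payoffs are read off the final diagram, written as (east, north) pairs:

```python
from itertools import product

# Each point is an (east, north) pair: East prefers a larger first
# coordinate (further east), North a larger second (further north).
game = [
    [(0, 1), (1, 3)],  # first bar
    [(2, 2), (3, 0)],  # second bar
]

def no_commitment(game):
    """North picks a bar, then East picks a point on it. East always
    takes the easternmost point, so North compares those points."""
    east_replies = [max(bar, key=lambda p: p[0]) for bar in game]
    return max(east_replies, key=lambda p: p[1])

def best_commitment(game):
    """East announces in advance which point it will take on each bar;
    North best-responds to the announcement; East keeps the
    announcement that leaves it furthest east."""
    best = None
    for policy in product(*game):                  # one point per bar
        outcome = max(policy, key=lambda p: p[1])  # North's best response
        if best is None or outcome[0] > best[0]:
            best = outcome
    return best

print(no_commitment(game))    # (1, 3): without commitment, East gets only 1
print(best_commitment(game))  # (2, 2): commitment gets East 2
```

The winning announcement pairs the threat (0,1) on the first bar with the promise (2,2) on the second: exactly the "both promise and threaten" combination described above.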

Meetup : London social

3 ciphergoth 07 October 2013 11:45AM

WHEN: 13 October 2013 02:00:00PM (+0100)

WHERE: Shakespeare's Head, 64-68 Kingsway, London WC2B 6BG

Come and hang out with the lovely people of London Less Wrong. No agenda, just whatever's interesting!

Meetup : London meetup: thought experiments

4 ciphergoth 19 September 2013 08:29PM

WHEN: 29 September 2013 02:00:00PM (+0100)

WHERE: LShift, Hoxton Point, 6 Rufus St, London, N1 6PE

NB: note the change of location!

We are so good at inventing reasons for our decisions after the fact that the real causes of those decisions are often not clear to us. At this meeting we'll discuss and practice techniques for getting past our rationalisations and learning more about the real reasons we find ourselves making the choices we do.

Meetup : London social meetup

2 ciphergoth 07 September 2013 03:22PM

WHEN: 15 September 2013 02:00:00PM (+0100)

WHERE: Shakespeare's Head, 64-68 Kingsway, London WC2B 6BG

The weather is predicted to be thoroughly rainy next Sunday, so let's retreat to the safety of the Shakespeare's Head, near Holborn station. With no more updates to HP:MoR until October at the earliest, what better way to get your fix of LessWrongy goodness?

See you there!

Facebook: https://www.facebook.com/events/194188274096175/

Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

6 ciphergoth 26 June 2013 01:17PM

Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

ABSTRACT: In slogan form, the thesis of this dissertation is that shaping the far future is overwhelmingly important. More precisely, I argue that:

Main Thesis: From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions of years or longer.

The first chapter introduces some key concepts, clarifies the main thesis, and outlines what follows in later chapters. Some of the key concepts include: existential risk, the world's development trajectory, proximate benefits and ripple effects, speeding up development, trajectory changes, and the distinction between broad and targeted attempts to shape the far future. The second chapter is a defense of some methodological assumptions for developing normative theories which makes my thesis more plausible. In the third chapter, I introduce and begin to defend some key empirical and normative assumptions which, if true, strongly support my main thesis. In the fourth and fifth chapters, I argue against two of the strongest objections to my arguments. These objections come from population ethics, and are based on Person-Affecting Views and views according to which additional lives have diminishing marginal value. I argue that these views face extreme difficulties and cannot plausibly be used to rebut my arguments. In the sixth and seventh chapters, I discuss a decision-theoretic paradox which is relevant to my arguments. The simplest plausible theoretical assumptions which support my main thesis imply a view I call fanaticism, according to which any non-zero probability of an infinitely good outcome, no matter how small, is better than any probability of a finitely good outcome. I argue that denying fanaticism is inconsistent with other normative principles that seem very obvious, so that we are faced with a paradox. I have no solution to the paradox; I instead argue that we should continue to use our inconsistent principles, but we should use them tastefully. We should do this because, currently, we know of no consistent set of principles which does better.
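To make the fanaticism claim concrete, the expected-value comparison it rests on looks roughly like this (a sketch in standard decision-theoretic notation, not Beckstead's own formalism):

```latex
% For any probability \epsilon > 0, however small, and any finite value V:
\mathbb{E}[\text{gamble}] = \epsilon \cdot \infty = \infty > V = \mathbb{E}[\text{sure finite outcome}]
```

No finite V can overturn this comparison, however small the probability, which is why denying fanaticism forces giving up some otherwise obvious principle about expected value.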

[If there's already been a discussion post about this, my apologies, I couldn't find it.]
