All of Phil Tanny's Comments + Replies

The problem with this policy is the unilateralist's curse, which says that a single optimistic actor could develop a technology. Technologies such as AI have substantial benefits and risks; the balance is uncertain, and the net benefit is perceived differently by different actors. For a technology not to be developed, all actors would have to agree not to develop it, which would require significant coordination.

 

Yes, agreed,  what you refer to is indeed a huge obstacle.  

From years of writing on this I've discovered another obstacle. When...

Tired: can humans solve artificial intelligence alignment?

Wired: can artificial intelligence solve human alignment?

 

Apologies that I haven't read the article (not an academic), but I just wanted to cast my one little vote that I enjoy this point and the clever way you put it.

Briefly, it's my sense that most of the self-inflicted problems which plague humanity (war, for example) arise out of the nature of thought, that which we are all made of psychologically. They're built in.

I can see how AI, like computing and the Internet, could have a...

Thanks much for your engagement Mitchell, appreciated.

Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response

Yes, to quibble just a bit: not just self-sustaining, but also accelerating. The way I often put it is that we need to adapt to the new environment created by the success of the knowledge explosion. I just put up an article on the forum which explains further:

https://www.lesswr...

6Mitchell_Porter
We believe AI is pivotal because we think it's going to surpass human intelligence soon. So it's not just another technology, it's our successor.  The original plan of MIRI, the AI research institute somewhat associated with this website, was to identify a value system and a software architecture for AI, that would still be human-friendly, even after it bootstrapped itself to a level completely beyond human control or understanding, becoming the metaphorical "operating system" in charge of all life on Earth.  More recently, given the rapidity of advances in the raw power of AI, they have decided that there just isn't time to solve these design problems, before some AI lab somewhere unwittingly hatches a superintelligent AI system that steamrolls the human race, not out of malice, but simply because it has goals that aren't sufficiently finetuned to respect human life, liberty, or happiness.  Instead, their current aim is to buy time for humanity, by using early superintelligent AI, to neutralize all other dangerous AI projects, and establish a temporary regime in which civilization can deliberate on what to do with the incredible promise and peril of AI and related technologies.  There is therefore some similarity with your own idea to slow things down, but in this scenario, it is to be done by force, and by using the dangerous technology of superintelligent AI, when it first appears. Continuing the operating system metaphor, this amounts to putting AI-augmented civilization into a "safe mode" before it can do anything too destructive.  This suggests a model of the future, in which there is a kind of temporary world government, equipped with a superintelligent AI that monitors everything everywhere, and which steps in to sabotage any unapproved technology that threatens to create unfriendly superintelligence. Ideally, this period lasts as long as it takes, for humanity's wise ones to figure out how to make fully autonomous superintelligence, something that we c

EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs.

 

Are you having any luck finding cooperation with Russian, Chinese, Iranian and North Korean labs?

6Algon
Upvoted because I think this comment is a reasonable question, and shouldn't be getting this many downvotes. Your latter comment in the thread wasn't thought-provoking, as it felt like a non-sequitur, though still not really something I'd downvote. I would encourage you to share your model for why a lack of co-operation with labs within three likely-inconsequential-to-AI states and one likely-consequential-to-AI state implies that well-intended intellectuals in the West aren't likely to have control over the future of AI. After all, a substantial chunk of the most capable AI companies take alignment risks fairly seriously (Deepmind, OpenAI sort-of), and I mostly think AGI will arrive in a decade or two. Given that Chinese companies don't seem interested in building AGI, still aren't producing as high-quality research as the West, and face slowing economic growth, I think it probable the West will play a large role in the creation of AGI.
lc113

Are you having any luck finding innovative Russian, Chinese, Iranian, or North Korean labs?

5kave
OP writes that there have been no big cooperation wins, so a fortiori, there have been no big cooperation wins with the countries you mention.

However, since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks, especially from advanced future technologies such as synthetic biology and molecular nanotechnology.

Here's a solution to all this.  I call this revolutionary new philosophy....

Acting Like Adults

Here's how it works.  We don't create a new technology which poses an existential risk until we've credibly figured out how to make the last one safe.  

So, in practice, it looks like this. End all funding for AI, synthetic biolog...

1Stephen McAleese
Also, I don't think that any of these conclusions or recommendations are simple or common sense, though some of them may seem simple in hindsight, just as a math problem seems simple after one has seen the solution. The reason why I wrote this post was that I was very confused about the subject. If I thought there was a simple answer, I wouldn't have written the post, or I would have written a much shorter post. Here is a quote from my research proposal: And a quote from the person reviewing my proposal: Not only was the project not simple, but the reviewer thought that it was almost impossible to make progress on, given the number of factors at play.
1Stephen McAleese
What you're suggesting sounds like differential technological development or the precautionary principle. The problem with this policy is the unilateralist's curse, which says that a single optimistic actor could develop a technology. Technologies such as AI have substantial benefits and risks; the balance is uncertain, and the net benefit is perceived differently by different actors. For a technology not to be developed, all actors would have to agree not to develop it, which would require significant coordination. In the post I describe several factors, such as war, that might affect the level of global coordination, and I suggest that it might be wise to slow down AI development by a few years or decades if coordination can't be achieved, since I think AI risk is higher than other risks.
1[comment deleted]
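
For readers unfamiliar with the unilateralist's curse invoked in the reply above, here is a rough illustrative sketch (my own, not part of the original exchange) of the standard toy setup: several actors independently estimate a technology's net value with some error, and the technology gets developed if even one actor's estimate comes out positive. All numbers below are invented for illustration.

```python
import random

# Toy illustration of the unilateralist's curse: a technology whose true net
# value is slightly negative still gets developed whenever at least one of
# several independent actors overestimates it. All parameters are made up.

def develop_probability(n_actors, true_value=-1.0, noise=2.0, trials=20_000):
    """Fraction of trials in which at least one actor's noisy estimate is positive."""
    developed = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise) for _ in range(n_actors))
        if any(e > 0 for e in estimates):
            developed += 1
    return developed / trials

for n in (1, 5, 20):
    print(f"{n:2d} independent actor(s) -> developed in ~{develop_probability(n):.0%} of runs")
```

Under these made-up numbers a single actor develops the net-harmful technology in roughly a third of runs, while with twenty independent actors it is developed almost every time, which is why the quoted reply says restraint requires agreement from all actors.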

Why was progress so slow in the past?

Knowledge development feeds back on itself. So when you have a little knowledge, you get a slow speed of further development, and when you have a lot of knowledge, you get a fast speed. The more knowledge we get, the faster we go.
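
To make the feedback point concrete, here is a minimal toy sketch (my own illustration, not from the original comment thread) in which each year's growth in knowledge is proportional to the knowledge already accumulated; the starting stock and growth rate are invented numbers.

```python
# Toy model only: knowledge grows each year in proportion to the existing stock.
# The starting stock K0 and feedback rate r are made-up illustrative values.

def simulate_knowledge(K0=1.0, r=0.05, years=100):
    """Return yearly knowledge levels under simple proportional (compounding) feedback."""
    history = [K0]
    for _ in range(years):
        history.append(history[-1] * (1 + r))  # more knowledge -> larger absolute gain
    return history

history = simulate_knowledge()
print(f"gain in year 1:   {history[1] - history[0]:.2f}")    # tiny early gain
print(f"gain in year 100: {history[-1] - history[-2]:.2f}")  # far larger late gain
```

Even this modest feedback rate produces the pattern described above: the early yearly gains look almost flat, while the later yearly gains dwarf the early ones.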

2jasoncrawford
Yes, but that's only one of many flywheels. If we lost all of our wealth, infrastructure, and institutions, but kept all of our knowledge, growth would slow way down. The only way to fully understand long-term growth is to understand all of these overlapping, interacting flywheels.

The first photo was incredible, amazing!  Thanks for sharing that.

So what do we make of these men, who risk so much for so little?

Macho madness. YouTube and Facebook are full of it these days, and it truly pains me to watch young people with so much ahead of them risk everything in exchange for a few minutes of social media fame.

But, you know, it's not just young people; it's close to everybody. Here's an experiment to demonstrate: the next time you're on the Interstate, count how many people tailgate you, NASCAR-drafting style, at 75 mph. Risking everything in exchange for nothing.

1GeneSmith
I think people posting on social media have much more of an incentive to act crazy for fame than mountain climbers. Very few of these climbers became famous.

On behalf of the Boomer generation I wish to offer my sincere apologies for how we totally ripped off our own children.  We feasted on the big jobs in higher education, and sent you the bill.

I paid my own way through the last two years of a four-year degree, ending in 1978. I graduated with $4,000 in debt. That could have been you too, but we Boomer administrators wanted the corner office.

I've spent my entire adult life living near, sometimes only blocks away from, the largest university in Florida. It used to be an institution of hi...

As a self-appointed great prophet, sage, and heretic, I am working to reveal that a focus on AI alignment is misplaced at this time. As a self-appointed great prophet, sage, and heretic, I expect to be rewarded for my contribution with my execution, which is part of the job that a good heretic expects in advance, is not surprised by, and accepts with generally good cheer. Just another day in the office. :-)

2lc
https://slatestarcodex.com/2013/05/18/against-bravery-debates/

A knowledge explosion itself -- to the extent that that is happening -- seems like it could be a great thing.

 

It's certainly true that many benefits will continue to flow from the knowledge explosion, no doubt about it.  

The 20th century is a good real-world example of the overall picture.

  • TONS of benefits from the knowledge explosion, and...
  • Now a single human being can destroy civilization in just minutes.

This pattern illustrates the challenge presented by the knowledge explosion. As the scale of the emerging powers grows, the room ...

Hi again Duncan, 

Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons!

Can AI destroy modern civilization in the next 30 minutes?   Can a single human being unilaterally decide to make that happen, right now, today?

I feel that nuclear weapons are a very useful tool for analysis because, unlike emerging technologies such as AI, genetic engineering, etc., they are very easily understood by almost the entire population. So if we're not talking about nukes, which we overwhelmingly are not...

7gilch
Doubt it, but it might depend on how much of an overhang we have. My timelines aren't that short, but if there were an overhang and we were just a few breakthroughs away from recursive self-improvement, would the world look any different than it does now? Oh, good point. Pilots have intentionally crashed planes full of passengers. Kids have shot up schools, not expecting to come out alive. Murder-suicide is a thing humans have been known to do. There have been a number of well-documented close calls in the Cold War. As nuclear powers proliferate, MAD becomes more complicated. It's still about #3 on my catastrophic risk list depending on how you count things. But the number of humans who could plausibly do this remains relatively small. How many human beings could plausibly bioengineer a pandemic? I think the number is greater, and increasing as biotech advances. Time is not the only factor in risk calculations. And likely neither of these results in human extinction, but the pandemic scares me more. No, nuclear war wouldn't do it. That would require salted bombs, which have been theorized but never deployed. Can't happen in the next 30 minutes. Fallout becomes survivable (if unhealthy) in a few days. Nobody is really interested in bombing New Zealand. They're too far away from everybody else to matter. Nuclear winter risk has been greatly exaggerated, and humans are more omnivorous than you'd think, especially with even simple technology helping to process food sources. Not to say that a nuclear war wouldn't be catastrophic, but there would be survivors. A lot of them. A communicable disease that's too deadly (like SARS-1) tends to burn itself out before spreading much, but an engineered (or natural!) pandemic could plausibly thread the needle and become something at least as bad as smallpox. A highly contagious disease that doesn't kill outright but causes brain damage or sterility might be similarly devastating to civilization, without being so self-limiting.

Hi Duncan, thanks for engaging.

I think that EA writers and culture are less "lost" than you think, on this axis.  I think that most EA/rationalist/ex-risk-focused people in this subculture would basically agree with you that the knowledge explosion/recursive acceleration of technological development is the core problem

Ok, where are their articles on the subject? What I see so far are a ton of articles about AI, and nothing about the knowledge explosion unless I wrote it. I spent almost all day every day for a couple weeks on the EA forum,...

7Duncan Sabien (Deactivated)
Note: despite the different username, I'm the author of the handbook and a former CFAR staff member. I disagree with this take as specifically outlined, even though I do think there's a kernel of truth to it. Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons! I suspect that Phil is unaware that the vast majority of both CFAR staff and prolific LWers have indeed 100% passed the real version of his test, which is writing and contributing to the subject of existential risk, especially that from artificial intelligence. Phil may disagree with the claim that nuclear weapons are something like third on the list, rather than the top item, but that doesn't mean he's right. And CFAR staff certainly clear the bar of "spending a lot of time focusing on what seems to them to be the actually most salient threat." I agree that if somebody seems to be willfully ignoring a salient threat, they have gaps in their rationality that should give you pause.

Also, I think it should be required that all EA followers wear Cyndi Lauper style hair so that followers can easily identify each other in public.  I could be kidding about this.

Here's a suggested theme song for the EA movement.


Would it be sensible to assume that all technologies with the potential for crashing civilization have already been invented?   

If the development of knowledge feeds back on itself...

And if this means the knowledge explosion will continue to accelerate...

And if there is no known end to such a process....

Then, while no one can predict exactly what new threats will emerge when, it seems safe to propose that they will.

I'm 70 and so don't worry too much about how as-yet-unknown future threats might affect me personally, as I don't have a lot of futur...

1alokja
A knowledge explosion itself -- to the extent that that is happening -- seems like it could be a great thing. So for what it's worth my guess would be that it does make sense to focus on mitigating the specific threats that it creates (insofar as it does) so that we get the benefits too.
6jefftk
Living in a society where people have adapted to the current situation is very different from living in one that has recently lost an important input. For example, at one point most cold countries heated with coal, and so weren't using any Russian oil and gas, but your house isn't set up to burn coal, you don't have a way to cheaply get coal, and it's probably not legal to burn anymore.

So long as we're talking about AI, we're not talking about the knowledge explosion which created AI, and all the other technology-based existential risks which are coming our way.

Endlessly talking about AI is like going around our house mopping up puddles one after another after another every time it rains. The more effective and rational approach is to get up on the roof and fix the hole where the water is coming in. The most effective approach is to deal with the problem at its source.

This year everybody is talking about AI. Next year ...

1Evan R. Murphy
I think AI misalignment is uniquely situated as one of these threats because it multiplies the knowledge explosion effect you're talking about to a large degree. It's one of the few catastrophic risks that is a plausible total human extinction risk too. Also if AI goes well, it could be used to address many of the other threats you mention as well as upcoming unforeseen ones.
1Shiroe
Can you give some examples?

The current 80,000 Hours list of the world's most pressing problems ranks AI safety as the number one cause in the highest priority area section.

 

AI safety is not the world's most pressing problem.  It is a symptom of the world's most pressing problem, our unwillingness and/or inability to learn how to manage the pace of the knowledge explosion.   

Our outdated relationship with knowledge is the problem. Nuclear weapons, AI, genetic engineering, and other technological risks are symptoms of that problem. EA writers...

4Duncan Sabien (Deactivated)
(I clicked through to see your other comments after disagreeing with one.  Generally, I like your comments!) I think that EA writers and culture are less "lost" than you think, on this axis.  I think that most EA/rationalist/ex-risk-focused people in this subculture would basically agree with you that the knowledge explosion/recursive acceleration of technological development is the core problem, and when they talk about "AI safety" and so forth, they're somewhat shorthanding this. Like, I think most of the people around here are, in fact, worried about some of the products rolling off the end of the assembly line, but would also pretty much immediately concur with you that the assembly line itself is the root problem, or at least equally important. I can't actually speak for everybody, of course, but I think you might be docking people more points than you should.

One way to plan for the future is to slow down the machinery taking us there to reduce the uncertainty about what is coming to some degree.

Another way to plan for the future is to do what I've done, which is to get old (70) so that you have far fewer chips on the table in the face of the uncertainty. Ok, sorry, not very helpful. But on the other hand, it's most likely going to happen whether you plan it or not, and some comfort might be taken from knowing that sooner or later we all earn a "get out of jail free" card.

For today, one of the things...

If we were to respond specifically to the title of the post....

What is the best critique of AI existential risk arguments?

I would cast my vote for the premise that AI risk arguments don't really matter so long as a knowledge explosion feeding back upon itself is generating ever more, ever larger powers at an ever-accelerating rate.

For example, let's assume for the moment that 1) AI is an existential risk, and 2) we solve that problem somehow so that AI becomes perfectly safe. Why would that matter if civilization is then crushed when we lose c...

3Mitchell_Porter
Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response - from the whole of civilization? just from the intelligentsia? It's unclear to me if you think you already have a solution.  You're also saying that focus on AI safety is a mistake, compared with focus on this larger recurring process, of dangerous new technologies emerging thanks to the process of discovery.  There are in fact good arguments that AI is now pivotal to the whole process and also to its resolution. However, I would first like to hear what your own recommendations are, before presenting an AI-centric perspective.