Tired: can humans solve artificial intelligence alignment?
Wired: can artificial intelligence solve human alignment?
Apologies that I haven't read the article (not an academic), but I just wanted to cast my one little vote that I enjoy this point, and the clever way you put it.
Briefly, it's my sense that most of the self-inflicted problems which plague humanity (war, for example) arise out of the nature of thought, that which we are all made of psychologically. They're built-in.
I can see how AI, like computing and the Internet, could have a...
Thanks much for your engagement, Mitchell; appreciated.
Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response.
Yes, to quibble just a bit: not just self-sustaining, but also accelerating. The way I often put it is that we need to adapt to the new environment created by the success of the knowledge explosion. I just put up an article on the forum which explains further:
https://www.lesswr...
However, since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks, especially from advanced future technologies such as synthetic biology and molecular nanotechnology.
Here's a solution to all this. I call this revolutionary new philosophy...
Acting Like Adults
Here's how it works. We don't create a new technology which poses an existential risk until we've credibly figured out how to make the last one safe.
So, in practice, it looks like this. End all funding for AI, synthetic biolog...
Knowledge development feeds back on itself. So when you have a little knowledge you get a slow pace of further development, and when you have a lot of knowledge you get a fast pace. The more knowledge we get, the faster we go.
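As a rough sketch (my own formalization, not something from the original comment), this feedback loop is just exponential growth: the rate at which new knowledge appears is proportional to the knowledge already accumulated.

$$\frac{dK}{dt} = rK \quad\Longrightarrow\quad K(t) = K_0 e^{rt}$$

Here $K$ is the accumulated stock of knowledge and $r > 0$ is the strength of the feedback; any process of this form accelerates without bound. The point doesn't depend on the exact functional form, only on the growth rate increasing with the stock.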
The first photo was incredible, amazing! Thanks for sharing that.
So what do we make of these men, who risk so much for so little?
Macho madness. YouTube and Facebook are full of it these days, and it truly pains me to watch young people with so much ahead of them risk everything in exchange for a few minutes of social media fame.
But, you know, it's not just young people, it's close to everybody. Here's an experiment to demonstrate. The next time you're on the Interstate, count how many people tailgate you, NASCAR-drafting style, at 75 mph. Risking everything in exchange for nothing.
On behalf of the Boomer generation I wish to offer my sincere apologies for how we totally ripped off our own children. We feasted on the big jobs in higher education, and sent you the bill.
I paid my own way through the last two years of a four year degree, ending in 1978. I graduated with $4,000 in debt. That could have been you too, but we Boomer administrators wanted the corner office.
I've spent my entire adult life living near, sometimes only blocks away, from the largest university in Florida. It used to be an institution of hi...
As a self-appointed great prophet, sage and heretic I am working to reveal that a focus on AI alignment is misplaced at this time. As a self-appointed great prophet, sage and heretic I expect to be rewarded for my contribution with my execution, which is part of the job that a good heretic expects in advance, is not surprised by, and accepts with generally good cheer. Just another day in the office. :-)
A knowledge explosion itself -- to the extent that that is happening -- seems like it could be a great thing.
It's certainly true that many benefits will continue to flow from the knowledge explosion, no doubt about it.
The 20th century is a good real-world example of the overall picture.
This pattern illustrates the challenge presented by the knowledge explosion. As the scale of the emerging powers grows, the room ...
Hi again Duncan,
Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons!
Can AI destroy modern civilization in the next 30 minutes? Can a single human being unilaterally decide to make that happen, right now, today?
I feel that nuclear weapons are a very useful tool for analysis because, unlike emerging technologies such as AI and genetic engineering, they are very easily understood by almost the entire population. So if we're not talking about nukes, which we overwhelmingly are not...
Hi Duncan, thanks for engaging.
I think that EA writers and culture are less "lost" than you think, on this axis. I think that most EA/rationalist/ex-risk-focused people in this subculture would basically agree with you that the knowledge explosion/recursive acceleration of technological development is the core problem.
Ok, where are their articles on the subject? What I see so far is a ton of articles about AI, and nothing about the knowledge explosion unless I wrote it. I spent almost all day every day for a couple of weeks on the EA forum,...
Would it be sensible to assume that all technologies with the potential for crashing civilization have already been invented?
If the development of knowledge feeds back on itself...
And if this means the knowledge explosion will continue to accelerate...
And if there is no known end to such a process...
Then, while no one can predict exactly what new threats will emerge when, it seems safe to propose that they will.
I'm 70 and so don't worry too much about how as yet unknown future threats might affect me personally, as I don't have a lot of futur...
So long as we're talking about AI, we're not talking about the knowledge explosion which created AI, and all the other technology-based existential risks which are coming our way.
Endlessly talking about AI is like going around our house mopping up puddles one after another after another every time it rains. The more effective and rational approach is to get up on the roof and fix the hole where the water is coming in. The most effective approach is to deal with the problem at its source.
This year everybody is talking about AI. Next year ...
The current 80,000 Hours list of the world's most pressing problems ranks AI safety as the number one cause in the highest priority area section.
AI safety is not the world's most pressing problem. It is a symptom of the world's most pressing problem: our unwillingness and/or inability to learn how to manage the pace of the knowledge explosion.
Our outdated relationship with knowledge is the problem. Nuclear weapons, AI, genetic engineering and other technological risks are symptoms of that problem. EA writers...
One way to plan for the future is to slow down the machinery taking us there, so as to reduce at least some of the uncertainty about what is coming.
Another way to plan for the future is to do what I've done, which is to get old (70) so that you have far fewer chips on the table in the face of the uncertainty. Ok, sorry, not very helpful. But on the other hand, it's most likely going to happen whether you plan it or not, and some comfort might be taken from knowing that sooner or later we all earn a "get out of jail free" card.
For today, one of the things...
If we were to respond specifically to the title of the post...
What is the best critique of AI existential risk arguments?
I would cast my vote for the premise that AI risk arguments don't really matter so long as a knowledge explosion, feeding back upon itself, is generating ever more, ever larger powers at an ever-accelerating rate.
For example, let's assume for the moment that 1) AI is an existential risk, and 2) we solve that problem somehow so that AI becomes perfectly safe. Why would that matter if civilization is then crushed when we lose c...
Yes, agreed, what you refer to is indeed a huge obstacle.
From years of writing on this I've discovered another obstacle. When...