All of kerry's Comments + Replies

kerry10

Yeah, that's cool to see, and a very similar attempt at categorization. I feel we often get caught up in the potential / theoretical capabilities of systems, but there are already plenty of systems that exhibit self-replicating, harmful, intelligent behavior. It's entirely a question of degree. That's why I think a visual ranking of all systems along these metrics is in order.

Defining what constitutes a 'system' would be the other big challenge. Is a hostile government a system? That's fairly intelligent and self-replicating, etc.
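Just to make "ranking by degree" concrete, here's a toy sketch of what I have in mind (the axis names, weights, and example scores are placeholders I'm inventing for illustration, not a real taxonomy):

```typescript
// Toy sketch: score any "system" (software, institution, organism) on the
// same axes and sort by an overall degree. Weights and scores are invented.
interface SystemProfile {
  name: string;
  selfReplication: number; // 0 (none) .. 1 (fully autonomous copying)
  harmfulness: number;     // 0 (benign) .. 1 (catastrophic)
  intelligence: number;    // 0 (static) .. 1 (general, goal-directed)
}

function overallDegree(s: SystemProfile): number {
  // Simple average; a serious ranking would need a defensible weighting.
  return (s.selfReplication + s.harmfulness + s.intelligence) / 3;
}

const systems: SystemProfile[] = [
  { name: "computer virus", selfReplication: 0.8, harmfulness: 0.3, intelligence: 0.1 },
  { name: "hostile government", selfReplication: 0.5, harmfulness: 0.7, intelligence: 0.6 },
];

systems
  .sort((a, b) => overallDegree(b) - overallDegree(a))
  .forEach((s) => console.log(s.name, overallDegree(s).toFixed(2)));
```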

kerry10

I think the title should be rephrased as "If alignment is hard, then so is self-replication".

Linear self-improvement seems a tenable proposition to me.

Your argument assumes (perhaps correctly) that a FOOM would require continual offloading of 'greatest agency' from one agent to another, as opposed to upgrading-in-place. 

kerry30

Yes to drawing and annotation. This has been an itch of mine ever since I got into web dev over a decade ago. The same way the mouse let us designate "this thing" to the PC without having to literally name it, we could potentially communicate the same way with each other on the web.

kerry40

This is nice to read, because lately Sam seems to be on the defensive in public more often and comes across as more "accel" than I'm comfortable with. In this video from six years ago, various figures like Hassabis and Bostrom (Sam is not there) propose on several occasions exactly what's happening now: a period of rapid development, perhaps to provoke people into action / regulation while the stakes are somewhat lower. That makes me think this may have been, in part, what Sam was thinking all along too.

https://www.youtube.com/watch?v=h0962biiZa4

kerry10

I didn't. I'm sure words toward articulating this have been spoken many times, but the trick is figuring out what forum / form it needs to exist in, more specifically, for it to be comprehensible and lasting. Maybe I'm wrong that it needs to be highly public; as with nukes, not many people are actually familiar with what is considered sufficient fissile material, and governments (try to) maintain that barrier by themselves. But at this stage, while it still seems a fuzzy concept, any input seems valid.

Consider the following combination of pr... (read more)

2Štěpán Los
I know I am super late to the party but this seems like something along the lines of what you’re looking for: https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios
kerry20

Maybe some type of oppositional game could help in this regard?

Along the same lines as the AI Box experiment: we have one group "trying to be the worst-case AI," starting right at this moment. Not a hypothetical "worst case," but one taken from this moment in time, as if you were an engineer trying to facilitate the worst AI possible.

The Worst Casers propose one "step" forward in engineering. Then we have some sort of Reality Checking team (maybe just a general crowd vote?) who rate or try to disprove the feasibility of the step... (read more)
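Very roughly, the loop I'm imagining could look something like this (the role names, types, and threshold are placeholders for illustration, not a worked-out design):

```typescript
// Sketch of the oppositional game loop: Worst Casers propose a step, the
// Reality Checking crowd rates its feasibility, and the scenario only
// advances if the step survives the vote.
interface ProposedStep {
  description: string;        // one concrete engineering step toward the worst-case AI
  feasibilityVotes: number[]; // crowd ratings, 0 (impossible) .. 1 (trivially feasible)
}

const FEASIBILITY_THRESHOLD = 0.5; // arbitrary cutoff for this sketch

function meanFeasibility(step: ProposedStep): number {
  const sum = step.feasibilityVotes.reduce((a, b) => a + b, 0);
  return step.feasibilityVotes.length ? sum / step.feasibilityVotes.length : 0;
}

function advanceScenario(scenario: ProposedStep[], candidate: ProposedStep): ProposedStep[] {
  // The step is appended only if the Reality Checkers rate it feasible enough.
  return meanFeasibility(candidate) >= FEASIBILITY_THRESHOLD
    ? [...scenario, candidate]
    : scenario;
}
```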

1TinkerBird
Based on a few of his recent tweets, I'm hoping for a serious way to turn Elon Musk back in the direction he used to be facing and get him to publicly go hard on the importance of the field of alignment. It'd be too much to hope for to get him to actually fund any researchers, though. Maybe someone else.
kerry11

There is not currently a stock solution for convincing someone of the -realistic- dangers of AI if they are not willfully engaged already. There are many distracting stories about AI, which are worse than nothing. But describing a bad AI ought to be a far easier problem than aligning AI. I believe we should be focused, paradoxically, perhaps dangerously, on finding and illustrating very clearly the shortest, most realistic, and most impactful path to disaster.

The most common misconception people make, I think, is to look at the past, and our h... (read more)

2TinkerBird
This is why I'm crossing my fingers for a 'survivable disaster' - an AI that merely kills a lot of people instead of everyone. Maybe then people would take it seriously.  Coming up with a solution for spreading awareness of the problem is a difficult and important problem that ordinary people can actually tackle, and that's what I want to try. 
kerry11

I've thought for a while here that the primary problem with alarmism (I'm an alarmist) is painting a concise, believable picture. It takes a willful effort and an open mind to build a "realistic" picture of this heretofore unknown mechanism/organism for oneself, and it is near impossible to do for someone who is skeptical of, or opposed to, the potential conclusion.

Being a web dev, I've brainstormed on occasion about ways to build short, crowd-ranked chains of causal steps which people could evaluate for themselves, with various doubts and supporting evidence given to ea... (read more)
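For what it's worth, the structure I keep coming back to for a single link in such a chain looks roughly like this (field names and the aggregation rule are invented for illustration):

```typescript
// Sketch of one link in a crowd-ranked causal chain: each step carries its
// supporting evidence and doubts, plus crowd plausibility ratings.
interface CausalStep {
  claim: string;                 // e.g. "the system obtains more compute"
  supportingEvidence: string[];
  doubts: string[];
  plausibilityRatings: number[]; // crowd votes, 0 .. 1
}

type CausalChain = CausalStep[];

function stepPlausibility(step: CausalStep): number {
  const sum = step.plausibilityRatings.reduce((a, b) => a + b, 0);
  return step.plausibilityRatings.length ? sum / step.plausibilityRatings.length : 0;
}

// Treat the chain like a conjunction: it is only as plausible as its weakest
// link (one simple choice among many possible aggregations).
function chainPlausibility(chain: CausalChain): number {
  return chain.length ? Math.min(...chain.map(stepPlausibility)) : 0;
}
```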

1TinkerBird
https://www.lesswrong.com/posts/CqmDWHLMwybSDTNFe/fighting-for-our-lives-what-ordinary-people-can-do   Any ideas you have for overcoming the problems of alarmism would be good. 
kerry10

I agree completely, and I'm currently looking for the most public and concise platform where these scenarios are mapped. Or, as I think of them, recipes. There is, I think, a finite set of ingredients that results in extremely volatile situations: software with unknown goal formation, widely distributed with no single kill switch, with the ability to create more computational power, etc. We have already basically created the first two, but we should be thinking about what it would take for the third ingredient to be added.

kerry20

We need a clear definition of bad AI before we can know what is -not- that, I think. These benchmarks seem to itemize AI as if it will have known, concrete components. But I think we need to first compose, in the abstract, a runaway self-sustaining AI, and work backwards to see which pieces are already in place for it.

I haven't kept up with this community for many years, so I have some catching up to do, but I am currently on the hunt for the clearest and most concise places where the various runaway scenarios are laid out. I know there is a wealth of literature (I have the Bostrom book from years ago as well), but I think simplicity is the key here. In other words, where is the AI redline?

1Peter Chatain
Curious if you ever found what you were looking for.
kerry10

I find the article well written, and it hits one nail on the head after another in regard to the potential scope of what's to come, but the overarching question of the black swan is a bit distracting. To greatly oversimplify, I would say a black swan is a category of massive event, on par with "catastrophe" and "miracle"; it just has overtones of financial investors having hedged their bets properly or not in preparation for it (that was the context of Taleb's book, IIRC).

Imho, the more profound point you started to address was our denial of these even... (read more)

1Stephen McAleese
Since black swans are difficult to predict, Taleb recommends being resilient to them rather than trying to predict them.  I don't think that strategy is effective in the context of AGI. Instead, I think we should imagine a wide range of scenarios to turn unknown unknowns into known unknowns.