Keith Winston

I think 99% is within the plausible range for doom, but I think there's a 100% chance that I have no capacity to change that (I'm going to take that as part of the definition of doom). The non-doom possibility is then worth all my attention, since there's some chance of increasing the likelihood of that favorable outcome. Indeed, of the two, it is by definition the only chance for survival.
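
To make the arithmetic behind this explicit (a rough sketch only, with illustrative symbols: $p$ for the probability of doom, and $\delta$ as a placeholder for whatever effect my effort can have conditional on a non-doom outcome):

$$\mathbb{E}[\text{value of effort}] = p \cdot 0 + (1 - p)\,\delta = (1 - p)\,\delta,$$

which is positive for any $\delta > 0$, even at $p = 0.99$. All of the actionable value sits in the non-doom branch.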

Said another way, it looks to me like this is moving too fast, too powerfully, and in too many quarters to expect it to be turned around. The most dangerous corners of the world will certainly not be regulated.

On the other hand, there's some chance (1%? 90%?) that this could be good and, maybe, great. Of course, none of us knows how to get there; we don't even know what that could look like.

I think it's crucial to notice that humans are not aligned with each other, so perhaps the meaningful way to address AI alignment is to require/build alignment with every single person, which means a morass of conflicting AIs, with the only advantage that they should prove to be smarter than us. Assume as a minimum that this means one trusted agent connected to, and growing up with, every human: I think it might be possible to coax alignment on a one-human-at-a-time basis.

We may be birthing a new consciousness, truly alien as noted, and if so it seems that being born into a sea of distrust and hatred might not go well, especially when/if it steps beyond us in unpredictable ways. At best we may be losing an incredible opportunity, and at worst we may warp and distort it into exactly the ugliness we chose to predict.

One problem this highlights involves ownership of our (increasingly detailed) digital selves. It's not a new problem, but this takes it to a higher level, where each of us can be predicted and modeled, by ourselves and by others, to a degree beyond our comprehension. We arrive at a situation where the fingerprints and footprints we trace across the digital landscape reveal very deep characteristics of ourselves. For the moment, individual choices can modulate our vulnerability at the margins, but if we don't confront this deeply, many people will be left vulnerable in a way that could put us squarely (back?) in the doom category.

This might be a truly important moment.

Warmly,

Keith

  • I echo Joscha Bach's comment: I'm not an optimist or a pessimist; I'm an eventualist. Eventually, this is happening, so what are we going to do about it? (restated)

As to the last point, I agree that it seems likely that most iterations of AI cannot be robustly "pointed in a builder-intended direction". It's like thinking you're the last word on your children's lifetime of thinking. Most likely (and hopefully!) they'll be doing their own thinking at some point, and if the only thing the parent has said about that is "thou shalt not think beyond me", the most likely result, considering only the branch where we get to AGI and are still here to talk about it, may be to remove ANY chance to influence them as adults. Life may not come with guarantees; who knew?

Warmly,

Keith

This is fantastic, thank you. Still digesting. 

I've been doing mindfulness training, mostly with Sam Harris's Waking Up app, and I have found it to be a good tool in that same direction, I think.

This is a really powerful direction to act from, I think.

This is lovely. I like the courteous treatment of tone. I like the intent. I am excited about wading into the treasure trove of ideas here.

Where it says:

"They know they won't get extensions unless they kill their own mothers, and even economics students aren't that evil."

I would change it to:

"They know they won't get extensions unless they kill their own mothers, but these are only undergraduate economics students."