All of Erlja Jkdf.'s Comments + Replies

Sidles closer

Have you heard of... philosophy of universal norms?

Perhaps the human experience thus far is more representative than the present?

Perhaps... we can expect to go a little closer to it when we push further out?

Perhaps... things might get a little more universal in this here cluttered-with-reality world.

So for a start...

Maybe people are right to expect things will get cool...

I think that's a bad beaver to rely on, any way you slice it. If you're imagining, say, GPT-X giving us some extremely capable AI, then it's hands-on enough that you've just given humans too much power. If we're talking AGI, I agree with Yudkowsky: we're far more likely to get it wrong than to get it right.

If you have a different take I'm curious, but I don't see any way that it's reassuring.

IMO we honestly need a technological twist of some kind to avoid AI. Even if we get it right, life with a God just takes a lot of the fun out of it.

3janus
Ohh, I do think the super AI will likely be very bad. And soon (like 5 years), which is why I don't spend too much time worrying about the slightly superhuman assholes. I wish the problem was going to be what you described. That would be a pretty fun cyberpunk world, and I'd enjoy the challenge of writing good simulacra to fight the bad ones. If we get it really right (which I don't think is impossible, just tricky) we should also still be able to have fun, much more fun than we can even fathom now.

There's a problem I bet you haven't considered.

Language and storytelling are hand-me-downs from times full of bastards. The linguistic bulk, and the more basic and traditional mass of stories, are going to follow more brutal patterns.

The deeper you dig, the more likely you end up with a genius in the shape of an ancient asshole.

And the other problem: all these smarter intelligences running around, simply by fact of their intelligence, have the potential to make life a real headache. Everything could end up so complicated.

One more bullet we have to dodge, really.

2janus
hm, I have thought about this. It's not that I think the patterns of ancient/perennial assholes won't haunt reanimated language; it's just that I expect strongly superhuman AI which can't be policed to appear and refactor the lightcone before that becomes a serious societal problem. But I could be wrong, so it is worth thinking about. & depending on how things go down, it may be that the shape of the ancient asshole influences the shape of the superintelligence

Is this perhaps because the top end is simply not high enough yet?

The point is that it's a near-term risk, building only on what they can already simulate.

They would be smarter at birth. Either you gene-edit your kids or you pass that up. Yes, some people would do it; and yes, you'd get genius proliferation. But so long as you've got enough hidebound naturists, fully committed, you would always have some eco-warriors around.

There's no such thing as a million fully committed naturists, and that's why the planet is cooking and the endangered list keeps growing.

We're very good at generating existential risks. Given indefinite technological progression at our current pace, we are likely to get ourselves killed.

2JBlack
Your post - and my comment - are explicitly about necessary requirements for near-term survival. If you want to make another post about indefinite-term existential risks, then we can talk about that.

A technological plateau is strictly necessary. To give the simplest example: we lucked out on nukes. The next decade alone contains potential for several existential threats - readily made bioweapons, miniaturized drones, AI abuse - that I question our ability to consistently adapt to, particularly one after another.

We might get it, if our tech jumps thanks to exponential progress.

2JBlack
No, it is definitely not a strictly necessary requirement for near-term survival. To be "strictly necessary for near-term survival", such future technologies would have to be guaranteed to kill all of humanity, and soon. That's ridiculous hyperbole. There are risks ahead, even existential risks, from other non-AI technologies, but not to nearly that extent.