I encountered ideas like this when I was a teenager. I decided that the highest-value thing a person could do was to dismantle civilization as quickly as possible to at least retard the scary things that technology could do to us. I put a lot of work into figuring out how to do that.
Later, I found LW and the Singleton/FAI solution. Much better solution, IMO, and easier as well. Still quite difficult, but I am converted.
I'm interested in why Kaj doesn't think FAI is a viable solution. Or maybe he just agrees with Luke that the mainline possibility is failure?
Human value is definitely the something to protect, and business as usual will destroy us. Excuse me, I need to go save the world.
I'm interested in why Kaj doesn't think FAI is a viable solution. Or maybe he just agrees with Luke that the mainline possibility is failure?
This might be clearer once the survey paper we're writing about proposed FAI approaches (as well as other approaches to limiting AI risk) becomes public, but suffice it to say, IMO nobody has so far managed to propose an FAI approach that wouldn't be riddled with serious problems. Almost none of them work if we have a hard takeoff, and a soft takeoff might not be any better, since it would allow lots of different AGIs to compete and lead to the kinds of evolutionary scenarios described in the post. If there's a hard takeoff, you need to devote a lot of time and effort to making the design safe and also be the first one to have your AGI undergo a hard takeoff, two mutually incompatible goals. And that's assuming you even have a clue of what kind of design would be safe. Something CEV-like could qualify as safe, but it currently remains so vaguely specified that it reads more like a list of applause lights than an actual design, and even getting to the point where we could call it a design seems to require solving numerous difficult problems, some of which have remained unsolved for thousands of years, while our remaining time might be counted in tens of years rather than thousands or even hundreds... and so on and so on.
Not saying that it's impossible, but there are far more failure scenarios than successful ones, and an amazing amount of things would all have to go right in order for us to succeed.
Scary.
What can be done to improve our chances? I assume more funding for SI is a good idea, but I don't know how much I can do beyond that (math, philosophy, and AI are not my areas of expertise).
Waterline stuff is important, too.
We'll have some suggestions of potentially promising research directions in our survey paper. But if you're asking about what you yourself can do, then I don't have anything very insightful to suggest besides the normal recommendations of raising the waterline, spreading the word about these issues, and seeing if there's any other volunteer work that you could do.
There is something I am missing here.
Get rid of enough constraints, and you’ll get the equivalent of a Spiegelman’s monster, no longer even remotely human.
And this is bad how?
Human value is definitely the something to protect, and business as usual will destroy us.
What do you mean by "destroy us"? Change 21st-century human animals into something better adapted to survive in the new Universe?
EDIT: I guess I should articulate my confusion better: what's wrong with gradually becoming an Egan's jewelhead (sounds like an equivalent of uploading to me) or growing an earring-based prosthetic neocortex?
I guess I should articulate my confusion better: what's wrong with gradually becoming an Egan's jewelhead (sounds like an equivalent of uploading to me) or growing an earring-based prosthetic neocortex?
I don't think those outcomes would be particularly bad: they're still keeping most constraints in place. If all that remained of humanity were replicators who only cared about making more copies of themselves and might not even be conscious, now that sounds much worse.
If all that remained of humanity were mere replicators who only cared about making more copies of themselves and might not even be conscious
Adopting a somewhat external view: would not an alien looking at the earthlings describe them exactly like that?
No, why do you think so? The alien might of course be simply mistaken about the consciousness, but unless you're going to assert that humans are not in fact conscious, an alien who did say that would actually be making a mistake. And it seems clear that humans care about a lot of things besides reproduction, or birth rates would not fall in wealthy countries.
The alien might of course be simply mistaken about the consciousness
What behavior would unambiguously tell an alien that humans are conscious?
birth rates would not fall in wealthy countries
This can simply be an instinctive reaction related to the saturation of some resource, or a chemical reaction due to the presence of some inhibitor (e.g. auto emissions).
What behavior would unambiguously tell an alien that humans are conscious?
I have no idea, but there needn't be one. The alien may be just out of luck. He'll still be mistaken. My point is that you cannot use an outside view that you know to be mistaken as an argument for anything in particular.
This can simply be an instinctive reaction related to the saturation of some resource, or a chemical reaction due to the presence of some inhibitor (e.g. auto emissions).
Well yes, it could; but are you genuinely asserting that this is in fact the case? If not, what's your point?
I don't understand what you're trying to argue here. You presumably do not actually believe that humans are non-conscious and care only about replication. So where are you going with the alien?
My point was that, were we to see the "future of humanity", what may look to us now like "replicators who only cared about making more copies of themselves and might not even be conscious" could be nothing of the sort, just as the current humanity that looks like "replicators" to an alien is nothing of the sort. We are the alien here, and we have no capability to judge the future.
Ok, but we are discussing hypothetical scenarios and can define the hypotheticals as we like; we are not directly observing the posthumans, so we are not liable to be misled by what we see. You cannot be mistaken about something you're making up! In short, you're just fighting the hypothetical. I suggest that this is not productive.
you're just fighting the hypothetical.
Am I? Fighting the hypothetical is unproductive when you challenge the premises of the hypothetical scenario. Kaj Sotala's hypothetical was "If all that remained of humanity were replicators who only cared about making more copies of themselves and might not even be conscious". I pointed out that we are in no position to judge the future replicators based on our current understanding of humanity and its goals, or of what "being conscious" might mean. Does this count as challenging the premises?
And this is bad how?
People are somewhat flexible. If they're highly optimized for a particular set of constraints, then the human race is more likely to get wiped out.
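A crude bet-hedging sketch of that point, with made-up fitness numbers (mine, purely for illustration): a population tuned tightly to the current constraints outgrows a flexible one for as long as those constraints hold, and then a single unforeseen shift finishes it off.

```python
# Bet-hedging toy model: the "specialist" and "generalist" growth factors
# are illustrative assumptions. The environment occasionally shifts for good.
import random

random.seed(1)

def survives(fitness_by_env, shift_prob=0.02, generations=500):
    """Return True if the population is still above one individual at the end."""
    env, pop = "A", 100.0
    for _ in range(generations):
        if env == "A" and random.random() < shift_prob:
            env = "B"                      # rare, permanent change in constraints
        pop *= fitness_by_env[env]
        if pop < 1:
            return False                   # extinct
    return True

specialist = {"A": 1.05, "B": 0.50}        # superb under current constraints, helpless after
generalist = {"A": 1.01, "B": 1.01}        # slower, but keeps its options open

trials = 1000
print("specialist survival rate:", sum(survives(specialist) for _ in range(trials)) / trials)
print("generalist survival rate:", sum(survives(generalist) for _ in range(trials)) / trials)
# The specialist almost never outlasts a shift; the generalist always does
# (under these made-up numbers).
```

Nothing rigorous, just the intuition that heavy optimization for the current constraints trades away the slack that survives a change in them.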
This seems like the least of our concerns here. I think a far-flung, spacefaring strain of highly efficient mindless replicators well-protected against all forms of existential risk is still a horrifying future for humanity.
I probably have a stronger belief in unknown unknowns than you do, but I agree that either outcome is undesirable.
Ah, I see. I did not read the original post or Yvain's examples as necessarily resulting in the loss of flexibility, but I can see how this can be a fatal side effect in some cases. I guess this would be akin to sacrificing far mode for near mode, though not as extreme as wireheading.
Second thought: Is there any conceivable way of increasing human flexibility, or would it get borked by Goodhart's Law?
Increase average human wealth significantly, such that a greater proportion of the total population has more ability to meaningfully try new things or respond to novel challenges in a stabilizing manner.
(The caveats pretty much write themselves.)
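To make the Goodhart worry in the parent comment concrete, here is a minimal sketch (my own toy numbers and an assumed additive-error model, nothing from the thread): any measurable proxy for "flexibility" is only correlated with the real thing, and selecting hard on the proxy also selects for the measurement error, so the gap between measure and target widens exactly where the optimization pressure is strongest.

```python
# Toy "regressional Goodhart" sketch: the proxy is the true value plus
# independent measurement error (an illustrative assumption).
import random

random.seed(0)
candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
scored = [(true + err, true) for true, err in candidates]  # proxy = true + error

top = sorted(scored, reverse=True)[:100]   # optimize hard on the proxy
avg_proxy = sum(p for p, _ in top) / len(top)
avg_true = sum(t for _, t in top) / len(top)
print(f"proxy score of selected candidates: {avg_proxy:.2f}")
print(f"true value of selected candidates:  {avg_true:.2f}")
# With equal variances, the true value of the winners is only about half
# their proxy score: the harder you select on the measure, the less of a
# good measure it remains.
```

Whether wealth escapes this failure mode better than an explicit flexibility metric would is, of course, one of the caveats that write themselves.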
I think we end up with a singleton no matter what, and it's only a question of whether we choose the singleton, or try to maintain an impossible balance of power and thereby fail to control who builds it. Once it's possible to replicate directly using only computational resources, murder becomes an instantaneous and massively scalable means of reproduction. And once that's true, it just isn't possible to have any semblance of a balance of power; either someone takes over the world, or someone destroys the world accidentally while trying to take it over.
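A back-of-the-envelope way to see why the balance is impossible, under the simplifying assumption (mine) that copies can be made in proportion to whatever computational resources an agent controls: any persistent edge in resource acquisition compounds exponentially, so the starting balance only survives if every edge is exactly zero.

```python
# Toy model: each agent's resources grow by its own factor per step, and
# resources convert directly into copies. The growth factors are
# illustrative assumptions, not estimates of anything real.
growth_rates = [1.00, 1.01, 1.02]    # a 0%, 1% and 2% per-step edge
resources = [1.0, 1.0, 1.0]          # start from an exact balance of power

for _ in range(1000):
    resources = [r * g for r, g in zip(resources, growth_rates)]

total = sum(resources)
for i, share in enumerate(r / total for r in resources):
    print(f"agent {i}: {share:.6f} of all resources")
# After 1000 steps the agent with the 2% edge holds essentially everything;
# the initial "balance" was never an equilibrium.
```

This only illustrates the compounding point, of course; it says nothing about whether the winner takes over deliberately or destroys everything by accident along the way.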
My perspective as a programmer (not specifically focused on security, but definitely not a layman) is that computer security won't ever be good enough for those circumstances, no matter how many resources are thrown at it, and that throwing resources at computer security is only a mixed blessing anyway.
It's not clear that a singleton is stable. It might suffer from (some type of) Denebola Collapse.
Fascinating topic, and a topic that's going to loom larger as we progress. I've just registered in order to join in with this discussion (and hopefully many more at this wonderful site). Hi everybody! :)
Surely an intelligent entity will understand the necessity for genetic/memetic variety in the face of the unforeseen? That's absolutely basic. The long-term, universal goal is always power (to realize whatever); power requires comprehensive understanding; comprehensive understanding requires sufficient "generate" for some "tests" to hit the mark.
The question then, I guess, is: can we sort of "drift" into being a mindless monoculture of replicators?
Articles like this, or s-f in general, or even just thought experiments in general (again, on the "generate" side of the universal process), show that we are unlikely to, since they already serve as a warning of potential dangers.
I don't think these will be problems.
We would still have a use:
This is relevant because if we have any use at all, there will be some evolutionary pressure to optimize that. Even if there's no other reason for it than that useful people are sexier.
To differentiate ourselves, I think we would specialize in feeling. That would be the main difference between us and the technology, and feeling would be the reason for all of it to exist - our happiness. If we no longer needed to trade money or time, we'd give everything a purpose instead. I think a lot of people would become artists - and I don't think technological advancements would make human art worthless to people. Unlike most other things, art is often valued BECAUSE it is hand-made or shows human imperfections. If we make machines that feel, then things get hairy. I predict that I would probably choose to get implants in that future, so that I could still do meaningful, goal-directed work. I think a lot of other people would, too.
Or it may be that the humans who reproduce the most are the ones with the kinds of genes to take the best implants. Who knows what those would be. High intelligence might be a factor, as more intelligent brains may be able to take more input and therefore handle more sophisticated implants.
We would not devolve by accident, and probably not even on purpose:
Also, I imagine we would have the ability to use eugenics or gene therapy to prevent the human gene pool from devolving into primordial ooze. On its own, it might do that, but you've got to remember, the offspring will be born to humans, who, so far, have shown they're very attached to the human form and who, in this scenario, won't sacrifice something like eyesight just to save a few calories a day or whatever. I think we're a heck of a lot more likely to evolve ourselves into an ever-increasing number of new species than into slime. After enough generations had passed that the taboo of being a new species was gone, I see us expressing our imaginations, adding wings, blue hair, sparkles or as-yet-unimagined alterations. But not slime. Even if someone chose to devolve their offspring (if that were even legal), their offspring may choose to get gene therapy later, and even if they forgo that, they may use eugenics on their children, or the children may choose gene therapy, and so on.
Consider sexual desirability, also. Are you more likely to mate with someone who is halfway between you and ooze, or someone nearer to your own abilities? Even if a few genetic lines fall through the cracks, I don't think the majority of humans would. And we may have some failsafe for that, like free gene therapy for anyone who is disabled to the extent that someone else has to have power of attorney over them because they are not able to make decisions for themselves anymore. We have that already for seriously disabled people, and those people can choose to give their wards medical treatments. Any devolved humans would probably just be given gene therapy.
Outsourcing everything would be boring, so we wouldn't:
According to the author of "Flow: The Psychology of Optimal Experience", doing something that gives you a challenge (that is not too hard or too easy) is an important pleasurable experience and an important key to happiness. (He explains this in his TED video.) If we outsourced all of our thinking tasks, we would immediately realize that we were bored. I suspect our solution to this will be to play games or do tasks that are still challenging for a human even if considered simple by the standards of the day.
Perhaps what will maximize fitness in the future will be nothing but non-stop high-intensity drudgery, work of a drab and repetitive nature
How could that possibly happen in a world where computers were so much more advanced?
Get rid of enough constraints, and you’ll get the equivalent of a Spiegelman’s monster, no longer even remotely human.
Flying, blue-haired, sparkling rock stars and super-intelligent, goal-directed altruists are not remotely like viruses. I agree, though, that if technology progresses enough, we'll evolve ourselves into a whole bunch of stuff. Maybe it will even become so easy to evolve that we'll all try out different forms. This virus interpretation is way off, I think. I think it's more likely that we'd become a race of shape-shifters than Spiegelman’s monsters.
The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents.
If we are able to make ourselves intelligent enough to correlate the contents of our minds, we'll be able to make ourselves intelligent enough to process the revelations, or at least to pace the correlations in such a way that prevents madness. This is similar to the problem posed by a trait called "low latent inhibition", which means you take in more information and have more ideas. If you can't process it all, you are likely to develop schizophrenia. But if your IQ is high enough (Harvard link), it results in creativity. So perhaps Lovecraft made more connections than he could process and was rightly terrified of making any more, but that doesn't mean the brain designers in the future will get the balance wrong.
I think the specific concerns in this blog post, although they broach a topic that's really interesting to think about, will ultimately be irrelevant.
Related to: Kaj Sotala's Posts, Blogs by LWers
By fellow LessWronger Kaj_Sotala on his blog.