Comment author: DeterminateJacobian 31 October 2014 04:32:10AM 2 points [-]

Heh, clever. In a sense, iron has the highest entropy (atomically speaking) of any element. So if you take the claim that an aspect of solving intergalactic optimization problems involves consuming as much negentropy as possible, and that the highest-entropy state of spacetime is low-density iron (see shminux's comment on black holes), then Clippy it is. It seems, though, like superintelligent anything-maximizers would end up finding even higher-entropy states that go beyond the merely atomic kind.

...Or even discover that the availability of negentropy is not an actual limit on the ability to do things. Does anyone know the state of that argument? Is it known to be true that the universe necessarily runs out of things for superintelligences to do because of thermodynamics?

Comment author: dougclow 31 October 2014 07:07:11AM 2 points [-]

Empirically we seem to be converging on the idea that the expansion of the universe continues forever (see Wikipedia for a summary of the possibilities), but it's not totally slam-dunk yet. If there is a Big Crunch, then that puts a hard limit on the time available.

If - as we currently believe - that doesn't happen, then the universe will cool over time, until it gets too cold (i.e. too short of negentropy) to sustain any given process. A superintelligence would obviously see this coming, and have plenty of time to prepare - we're talking hundreds of trillions of years before star formation ceases. It might be able to switch to lower-power processes to continue in attenuated form, but eventually it'll run out.
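The 'lower-power processes' point can be made quantitative with Landauer's principle: irreversibly erasing one bit of information costs at least kT ln 2 of energy, so computation gets cheaper as the universe cools, but a finite energy budget still buys only finitely many operations. A minimal sketch (the temperatures are illustrative, not from the comment):

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit_joules(temp_kelvin):
    """Minimum energy needed to erase one bit at a given temperature."""
    return k_B * temp_kelvin * log(2)

# The colder the universe gets, the cheaper each irreversible operation
# becomes -- but a fixed energy budget still runs out eventually.
for temp in (300.0, 2.7, 1e-6):
    print(f"T = {temp} K: {landauer_limit_joules(temp):.3e} J per bit erased")
```

Reversible computing can in principle dodge part of this cost, which is one reason the question raised above isn't fully settled.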

This is, of course, assuming our view of physics is basically right and there aren't any exotic possibilities like punching a hole through to a new, younger universe.

Comment author: shminux 30 October 2014 11:17:20PM 24 points [-]

What you are describing is an accidental Clippy, just like humans are accidental CO2 maximizers. Which is a fair point: if we meet what looks like an alien Clippy, we should not jump to the conclusion that paperclip maximizing is its terminal value.

Also, just to nitpick: if you have a lot of mass available, it would make sense to lump all this iron together and make a black hole, as you can extract a lot more energy from throwing stuff toward it than from nuclear fusion proper. Or you can use fusion first, then throw the leftover iron bricks into the accreting furnace.

So the accidental Clippy would likely present as a black hole maximizer.

Comment author: dougclow 31 October 2014 06:53:00AM 2 points [-]

Yes, good point that I hadn't thought of, thanks. It's very easy to imagine far-future technology in one respect and forget about it entirely in another.

To rescue my scenario a little: there'll be an energy cost in bringing the iron together, and the cheapest way is to move it very slowly. So maybe there'll be paperclips left for a period of time between the first pass of the harvesters and the matter ending up at the local black hole harvester.

Maybe you want to maximise paperclips too

43 dougclow 30 October 2014 09:40PM

As most LWers will know, Clippy the Paperclip Maximiser is a superintelligence who wants to tile the universe with paperclips. The LessWrong wiki entry for Paperclip Maximizer says that:

The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented

I think that a massively powerful star-faring entity - whether a Friendly AI, a far-future human civilisation, aliens, or whatever - might indeed end up essentially converting huge swathes of matter into paperclips. Whether a massively powerful star-faring entity is likely to arise is, of course, a separate question. But if it does arise, it could well want to tile the universe with paperclips.

Let me explain.


To travel across the stars and achieve whatever noble goals you might have (assuming they scale up), you are going to want energy. A lot of energy. Where do you get it? Well, at interstellar scales, your only options are nuclear fusion or maybe fission.

Iron has the highest binding energy per nucleon of any element. If you have elements lighter than iron, you can release energy through nuclear fusion - sticking nuclei together to make bigger ones. If you have elements heavier than iron, you can release energy through nuclear fission - splitting nuclei apart to make smaller ones. We can do this now for a handful of elements (mostly selected isotopes of uranium, plutonium and hydrogen) but we don't know how to do this for most of the others - yet. But it looks thermodynamically possible. So if you are a massively powerful and massively clever galaxy-hopping agent, you can extract maximum energy for your purposes by taking up all the non-ferrous matter you can find and turning it into iron, getting energy through fusion or fission as appropriate.
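The binding-energy curve behind this argument can be sketched with the semi-empirical mass formula (Bethe-Weizsäcker), using standard textbook coefficients; this is a crude approximation (it notably understates very light nuclei like He-4), but the peak near iron is robust:

```python
def binding_energy_per_nucleon(A, Z):
    """Approximate binding energy per nucleon (MeV) for a nucleus with
    mass number A and proton number Z, via the semi-empirical mass formula."""
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0  # MeV
    N = A - Z
    # Pairing term: even-even nuclei are bound a little more tightly,
    # odd-odd a little less; odd-A nuclei get no correction.
    if Z % 2 == 0 and N % 2 == 0:
        delta = a_p / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -a_p / A**0.5
    else:
        delta = 0.0
    be = (a_v * A                         # volume term
          - a_s * A**(2 / 3)              # surface term
          - a_c * Z * (Z - 1) / A**(1 / 3)  # Coulomb repulsion
          - a_a * (A - 2 * Z)**2 / A      # symmetry term
          + delta)
    return be / A

# Binding energy per nucleon peaks near iron: fusing light nuclei toward
# Fe-56 releases energy, and so does fissioning heavy nuclei toward it.
for name, A, Z in [("He-4", 4, 2), ("Fe-56", 56, 26), ("U-238", 238, 92)]:
    print(f"{name}: {binding_energy_per_nucleon(A, Z):.2f} MeV/nucleon")
```

Fe-56 comes out around 8.8 MeV per nucleon, above both the light and heavy ends of the curve, which is why a maximally thorough energy harvester ends up at iron. (Strictly, nickel-62 edges out iron-56, but iron is the practical endpoint of stellar fusion.)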

You leave behind you a cold, dark trail of iron.

That seems a little grim. If you have any aesthetic sense, you might want to make it prettier, to leave an enduring sign of values beyond mere energy acquisition. With careful engineering, it would take only a tiny, tiny amount of extra effort to leave the iron arranged into beautiful shapes. Curves are nice. What do you call a lump of iron arranged into an artfully-twisted shape? I think we could reasonably call it a paperclip.

Over time, the amount of space that you’ve visited and harvested for energy will increase, and the amount of space available for your noble goals - or for anyone else’s - will decrease. Gradually but steadily, you are converting the universe into artfully-twisted pieces of iron. To an onlooker who doesn’t see or understand your noble goals, you will look a lot like a paperclip maximiser. In Eliezer’s terms, your desire to do so is an instrumental value, not a terminal value. But - conditional on my wild speculations about energy sources here being correct - it’s what you’ll do.

Comment author: brazil84 25 October 2014 03:53:45PM 0 points [-]

Core temperature might vary between people by only a few degrees, but surface temperature varies much more widely.

That's an interesting point. Would you agree that, under your theory, one would expect a person with a higher metabolism to have a higher skin temperature?

Comment author: dougclow 28 October 2014 07:08:50PM 0 points [-]

That, and/or increased sweating, and/or a larger temperature gain between inspired and expired air, and/or wearing fewer/thinner clothes. There are lots of ways to dump heat.

I would definitely expect someone with a faster metabolism to put out more total net heat, which is measurable with difficulty, and also to consume oxygen faster (and produce carbon dioxide faster), which is also measurable, with rather less difficulty.

Comment author: brazil84 18 October 2014 07:28:05PM 4 points [-]

I used to think that overweight was caused by slow metabolism, i.e. that generally speaking fat people are people who have slow metabolisms and thin people are people who have fast metabolisms.

I believed this because (1) it is the conventional wisdom; (2) it is consistent with the observation that some people seem to be thin even though they stuff their faces; and (3) it makes sense from a thermodynamic perspective that someone with a slow metabolism would be prone to putting on weight and someone with a fast metabolism would be prone to staying thin.

Putting aside the fact that this belief was wrong, there does seem to be a certain degree of irrationality about it given the observation that people don't vary all that much in terms of body temperature. Therefore they must not vary all that much in terms of metabolism.

Comment author: dougclow 20 October 2014 08:14:33PM 1 point [-]

Therefore they must not vary all that much in terms of metabolism.

I don't think that follows, or at least not without a lot of other explanation, even if you grant that temperature doesn't vary in any significant way between people (which I'm not sure I do). The body has multiple mechanisms for maintaining temperature, of which metabolic rate is only one. It seems entirely plausible to me that people run their metabolisms at different rates and adjust their peripheral vasodilation and sweating rate to balance it all out near 37 C/98.6 F. Core temperature might vary between people by only a few degrees, but surface temperature varies much more widely.

Comment author: Adele_L 19 July 2014 09:03:29PM 14 points [-]

That number sounded suspicious to me when I first heard it, and it turns out that according to the International AIDS Society president, it was more like six. Still a terrible loss.

Comment author: dougclow 20 July 2014 06:56:53AM 6 points [-]

Also, they were not just AIDS researchers but AIDS activists and campaigners. The conference they were going to was expecting 12-15,000 delegates (depending on the report); it's the most prominent international conference in the area but far from the only one. As you say, a terrible loss, particularly for those close to the dead. The wider HIV/AIDS community will be sobered, but it will not be sunk. If nothing else, they coped with far higher annual death rates before effective therapies became widespread in the developed world.

The reporting of this story does helpfully remind us that the other 'facts' about this situation - which we know from the same media sources - may be similarly mistaken.

Comment author: CasioTheSane 01 May 2014 11:12:12AM *  0 points [-]

I have been attempting to do this with biology and medicine, seriously for about 5 years now. Not by actually repeating experiments, but in trying to understand the original evidence, and see if I agree that it was interpreted correctly. Of course this is nearly impossible as biology is too broad and complex for one person to understand all of the details.

It's a confusing mess, but I think I am still learning a lot. Even if I come to agree with most of the mainstream ideas, I'd like to think I'd then understand them more deeply, in a way that is more functionally useful.

For much of medicine, there really isn't any biological basis or evidence to review. Much of modern medicine involves covering up symptoms with drugs proven to do this, without understanding the underlying cause of the symptom.

Comment author: dougclow 01 May 2014 03:52:19PM *  7 points [-]

Much of modern medicine involves covering up symptoms with drugs proven to do this, without understanding the underlying cause of the symptom.

What, really? There certainly is a lot of that approach around, but it's not what I think of when I think of modern medicine, as opposed to more traditional forms. Can you give examples?

Most of the ones I can think of are things that have fallen to the modern turn to evidence-based practice. The poster-child one in my head is the story of H. pylori and how a better understanding of the causes of gastritis and gastric ulcers has led to better treatments than the old symptom-relieving approaches. (And I'll tell you what, although Zantac/Ranitidine is only a symptomatic reliever, it was designed to do that job based on a thorough understanding of how that symptom comes about, and it's bloody good at it, as anyone who's had it for bad heartburn or reflux can attest.)

When I think of modern medicine, I think of things like Rituximab, which is a monoclonal antibody designed with a very sophisticated understanding of how the body's immune system works - it targets B cells specifically, and has revolutionised drug treatment for diseases like non-Hodgkin's lymphomas where you want to get rid of B cells. So much so that for some of those lymphomas, we don't have very robust 5-year survival data, because the improvement over traditional chemotherapy alone is so large that the old survival data is no use (we know people will live much longer than that), and Rituximab hasn't been widely used for long enough to get new data.

In the last 25 years our understanding of cancer has gone from "it's mutations in the genes, probably these ones" to vast databases of which specific mutations at which specific locations on which specific genes are associated with which specific cancer symptoms, and how those are correlated with prognosis and treatment. And as a result cancer survival rates have improved markedly. We don't have "A Cure For Cancer", and we now know we never will, any more than we can have "A Cure For Infection", but we do have a good enough understanding of how it happens to get much better at reducing its impact.

Even modern medical disasters like Vioxx are hardly a result of a lack of understanding of the underlying cause, but rather of our learning more about other complexities of human biology. Admittedly we don't yet fully understand how pain works, but we do know enough to know that targeting COX-2 exclusively (rather than COX-1 as well, which looks after your gut lining) would be safer for your gut. This is understanding down at the molecular level. It turns out in large-scale studies that these drugs are indeed safer for your gut, but of course they're not very safe for your heart, so we've stopped using them. And actually doing the full-scale research on modern rationally-designed drugs like Vioxx suggests that similar old drugs (which we never bothered to test) have the same effect on hearts.

Comment author: dougclow 01 May 2014 02:13:49PM 1 point [-]

Interesting stuff, thanks; looking forward to the rest of the series.

As an aside, this makes the benefits of being able to rely on trust most of the time very apparent. Jack and Jill can coordinate very simply and quickly if they trust each other to honestly disclose their true value for the project. They don't even need to be able to trust 100%, just trust enough that on average they lose no more to dishonesty than the costs of more complex and sophisticated methods of bargaining. (Which require more calculating capacity than unaided humans have evolved to possess.)
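That trade-off can be sketched as a toy expected-cost comparison (the probabilities and dollar figures here are invented for illustration, not from the post):

```python
# Toy model: simple trust beats a sophisticated bargaining mechanism
# whenever the expected loss to dishonesty is smaller than the
# mechanism's fixed overhead.
def best_protocol(p_dishonest, loss_if_cheated, bargaining_cost):
    """Return which coordination protocol has the lower expected cost."""
    expected_loss_trust = p_dishonest * loss_if_cheated
    return "trust" if expected_loss_trust <= bargaining_cost else "bargain"

# With 95% honesty and a $100 stake, trusting loses $5 on average --
# cheaper than paying $20 for a formal bargaining process.
print(best_protocol(p_dishonest=0.05, loss_if_cheated=100, bargaining_cost=20))
# prints "trust"
```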

Comment author: Gunnar_Zarncke 30 April 2014 09:11:44AM 3 points [-]

This is strongly related to parenting, where attention to negative behavior reinforces it - partly, no doubt, by making it more the topic of thought. So every solution parenting has invented should be applicable to the question of how to 'fetch positive queries'.

This is the main topic of the following article: http://www.motherinc.com.au/magazine/kids/kids-education/476-negative-and-positive-commands

There is the following advice:

Parents frequently ask, “Is it ever useful to say what we don’t want?” Sometimes we can get an idea across better if we start by saying what we don’t want. This is particularly true if the child is already doing an undesired behaviour. After saying “Don’t do that,” to get her to stop, it’s very important to immediately tell our child what we do want: “That’s too loud, Amy. I’d like to have you talk a little softer, like this (demonstrating with your voice), OK?”

This contains the initial insight that you have to first notice (in this case helped by the parent) that you are following a negative query. And then go on to follow the positive opposite.

Children don’t learn by being told what not to do, they learn by seeing and hearing what to do.

This of course is the same as watching what other people do in that situation and doing likewise.

If a mother rushes to a child on a ledge and shrieks “Be careful! Be careful!” with terror in her voice, the negative outcome is expressed nonverbally.

This matches the observation that the words alone aren't the key to whether your brain is put into 'negative fetching' mode. Your emotional state is key to that too.

Putting it together this means that you should be able to train positive fetching by using standard positive reinforcement techniques. This should work best with a partner providing the feedback.

Comment author: dougclow 01 May 2014 12:27:16PM 2 points [-]

I find similar techniques help with my children.

It seems closely related to the technique where, to stop them doing something you don't want them to do, you encourage them to do something else that prevents them from doing the first thing. (There's a snappy name for this that I've forgotten.) So, for example, stopping them from bothering another child by getting them interested in an entirely different activity.

Comment author: XiXiDu 30 April 2014 08:48:55AM 1 point [-]

I was treating it as part of the premises of the discussion that the AI is at least indifferent to doing so: it needs only enough infrastructure left for it to continue to exist and be able to rebuild under its own total control.

How many humans does it take to keep the infrastructure running that is necessary to create new and better CPUs etc.? I am highly confident that it takes more than the random patches of civilization left over after deploying a bioweapon on a global scale.

Surely we can imagine a science fiction world in which the AI has access to nanoassemblers, or in which the world's infrastructure is maintained by robot drones. But then, what do we have? We have a completely artificial scenario designed to yield the desired conclusion. An AI with some set of vague abilities, and circumstances under which these abilities suffice to take over the world.

As I have written several times in the past: if your AI requires nanotechnology, bioweapons, or a fragile world, then superhuman AI is our least worry, because long before we create it, the tools necessary to create it will allow unfriendly humans to do the same.

Bioweapons: If an AI can use bioweapons to blackmail the world into submission, then some group of people will be able to do that before this AI is created (dispatch members in random places around the world).

Nanotechnology: It seems likely to me that narrow AI precursors will suffice in order for humans to create nanotechnology. Which makes it a distinct risk.

A fragile world: I suspect that a bunch of devastating cyber-attacks and wars will be fought before the first general AI capable of doing the same. Governments will realize that their most important counterstrike resources need to be offline. In other words, it seems very unlikely that an open confrontation with humans would be a viable strategy for a fragile high-tech product such as the first general AI. And taking over a bunch of refrigerators, mobile phones and cars is only a catastrophic risk, not an existential one.

Comment author: dougclow 01 May 2014 11:38:47AM 2 points [-]

I really don't think we have to posit nanoassemblers for this particular scenario to work. Robot drones are needed, but I think they fall out as a consequence of currently existing robots and the all-singing all-dancing AI we've imagined in the first place. There are shedloads of robots around at the moment - the OP mentioned the existence of Internet-connected robot-controlled cars, but there are plenty of others, including in most high-tech manufacturing. Sure, those robots aren't autonomous, but they don't need to be if we've assumed an all-singing all-dancing AI in the first place. I think that might be enough to keep the power and comms on in a few select areas with a bit of careful planning.

Rebuilding/restarting enough infrastructure to be able to make new and better CPUs (and new and better robot extensions of the AI) would take an awfully long time, granted, but the AI is free of human threat at that point.
