Comment author: Ghatanathoah 29 March 2013 01:08:06AM -1 points [-]

Sympathy is only relevant to social entities -- but why not create solitary ones as well?

A creature that loves solitude might not necessarily be bad to create. But it would still be good to give it capacity for sympathy for pragmatic reasons, to ensure that if it ever did meet another creature it would want to treat it kindly and avoid harming it.

As for boredom, what makes a population of entities that seek variety in their lives better than one of entities who each have highly specialized interests (all different from each other)? As a whole, wouldn't the latter display more variation than the former?

It's not about having a specialized interest and exploring it. A creature with no concept of boredom would, to paraphrase Eliezer, "play the same screen of the same level of the same fun videogame over and over again." They wouldn't be like an autistic savant who knows one subject inside and out. They'd be little better than a wirehead. Someone with narrow interests still explores every single aspect of that interest in great detail. A creature with no boredom would find one tiny aspect of that interest and do it forever.

Or how about creating psychopaths and putting them in controlled environments that they can destroy at will, or creating highly violent entities to throw in fighting pits? Isn't there a point where this is preferable to creating yet another conscious creature capable of sympathy and boredom?

Yes, I concede that if there is a sufficient quantity of creatures with humane values, it might be good to create other types of creatures for variety's sake. However, such creatures could be potentially dangerous, so we'd have to be very careful.

Comment author: Broolucks 29 March 2013 01:46:56AM *  0 points [-]

A creature that loves solitude might not necessarily be bad to create. But it would still be good to give it capacity for sympathy for pragmatic reasons, to ensure that if it ever did meet another creature it would want to treat it kindly and avoid harming it.

Fair enough, though at the level of omnipotence we're supposing, there would be no chance meetups. You might as well just isolate the creature and be done with it.

A creature with no concept of boredom would, to paraphrase Eliezer, "play the same screen of the same level of the same fun videogame over and over again."

Or it would do it once, and then die happy. Human-like entities might have a lifespan of centuries, and then you would have ephemeral beings living their own limited fantasy for thirty seconds. I mean, why not? We are all bound to repeat ourselves once our interests are exhausted -- perhaps entities could be made to embrace death when that happens.

Yes, I concede that if there is a sufficient quantity of creatures with humane values, it might be good to create other types of creatures for variety's sake. However, such creatures could be potentially dangerous, so we'd have to be very careful.

I agree, though an entity with the power to choose the kind of creatures that come to exist probably wouldn't have much difficulty doing it safely.

Comment author: Ghatanathoah 28 March 2013 04:48:10PM *  0 points [-]

That is to say, it would kill all humans, restructure the whole planet, and then repopulate the planet with human beings devoid of cultural biases, ensuring plentiful resources throughout. But the genetic makeup would stay the exact same.

That would be bad, but it would still be way better than replacing us with paperclippers or orgasmium.

The society we have now is the result of social progress that elders have fought tooth and nail against.

That's true, but if it's "progress" then it must be progress towards something. Will we eventually arrive at our destination, decide society is pretty much perfect, and then stop? Is progress somehow asymptotic so we'll keep progressing and never quite reach our destination?

The thing is, it seems to me that what we've been progressing towards is greater expression of our human natures. Greater ability to do what the most positive parts of our natures think we should. So I'm fine with future creatures that have something like human nature deciding some new society I'm kind of uncomfortable with is the best way to express their natures. What I'm not fine with is throwing human nature out and starting from scratch with something new, which is what I think a utilitarian AI would do.

Because we are humans and we want more of ourselves, so of course we will work towards that particular goal. You won't find any magical objective reason to do it. Sure, we are sentient, intelligent, complex, but if those were the criteria, then we would want to make more AI, not more humans.

I didn't literally mean humans, I meant "Creatures with the sorts of goals, values, and personalities that humans have." For instance, if given a choice between creating an AI with human-like values, and creating a human sociopath, I would pick the AI. And it wouldn't just be because there was a chance the sociopath would harm others. I simply consider the values of the AI more worthy of creation than the sociopath's.

Personally, I can't see the utility of plastering the whole universe with humans who will never see more than their own little sector, so I would taper off utility with the number of humans, so that eventually you just have to create other stuff. Basically, I would give high utility to variety. It's more interesting that way.

I don't necessarily disagree. If having a large population of creatures with humane values and high welfare was assured then it might be better to have a variety of creatures. But I still think maybe there should be some limits on the sort of creatures we should create, i.e. lawful creativity. Eliezer has suggested that consciousness, sympathy, and boredom are the essential characteristics any intelligent creature should have. I'd love for there to be a wide variety of creatures, but maybe it would be best if they all had those characteristics.

Comment author: Broolucks 28 March 2013 10:31:16PM *  1 point [-]

That's true, but if it's "progress" then it must be progress towards something. Will we eventually arrive at our destination, decide society is pretty much perfect, and then stop? Is progress somehow asymptotic so we'll keep progressing and never quite reach our destination?

It's quite hard to tell. "Progress" is always relative to the environment you grew up in and on which your ideas and aspirations are based. At the scale of a human life, our trajectory looks a lot like a straight line, but for all we know, it could be circular. At every point on the circle, we would aim to follow the tangent, and it would look like that's what we are doing. However, as we move along, the tangent would shift ever so subtly, and over the course of millennia we would end up coming full circle.

I am not saying that's precisely what we are doing, but there is some truth to it: human goals and values shift. Our environment and upbringing mold us very deeply, in a way that we cannot really abstract away. A big part of what we consider "ideal" is therefore a function of that imprint. However, we rarely ponder the fact that people born and raised in our "ideal world" would be molded differently and thus may have a fundamentally different outlook on life, including wishing for something else. That's a bit contrived, of course, but it would probably be possible to make a society which wants X when raised on Y, and Y when raised on X, so that it would constantly oscillate between X and Y. We would have enough foresight to figure out a simple oscillator, but if ethics were a kind of semi-random walk, I don't think it would be obvious. The idea that we are converging towards something might be a bit of an illusion due to underestimating how different future people will be from ourselves.
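
To make the oscillation scenario concrete, here is a toy sketch (purely illustrative; the update rule is invented, not a claim about actual societies):

```python
# Toy illustration of the X/Y oscillator above: each generation's ideal is a
# fixed reaction to the world it was raised in. With this rule, society flips
# between X and Y forever, yet every generation locally sees itself as making
# "progress" toward its own ideal.
reaction = {"X": "Y", "Y": "X"}   # raised on X -> wants Y, and vice versa

world = "X"
for generation in range(6):
    ideal = reaction[world]
    print(f"generation {generation}: raised on {world!r}, strives for {ideal!r}")
    world = ideal                 # the reformers eventually get their way
```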

The thing is, it seems to me that what we've been progressing towards is greater expression of our human natures. Greater ability to do what the most positive parts of our natures think we should.

I suspect the negative aspects of our natures occur primarily when access to resources is strained. If every human is sheltered, well fed, has access to plentiful energy, and so on, there wouldn't really be any problems to blame on anyone, so everything should work fine (for the most part, anyway). In a sense, this simplifies the task of the AI: you ask it to optimize supply to existing demand and the rest is smooth sailing.

I didn't literally mean humans, I meant "Creatures with the sorts of goals, values, and personalities that humans have."

Still, the criterion is explicitly based on human values. Even if not human specifically, you want "human-like" creatures.

Eliezer has suggested that consciousness, sympathy, and boredom are the essential characteristics any intelligent creature should have. I'd love for there to be a wide variety of creatures, but maybe it would be best if they all had those characteristics.

Still fairly anthropomorphic (not necessarily a bad thing, just an observation). In principle, extremely interesting entities could have no conception of self. Sympathy is only relevant to social entities -- but why not create solitary ones as well? As for boredom, what makes a population of entities that seek variety in their lives better than one of entities who each have highly specialized interests (all different from each other)? As a whole, wouldn't the latter display more variation than the former? I mean, when you think about it, in order to bond with each other, social entities must share a lot of preferences, the encoding of which is redundant. Solitary entities with fringe preferences could thus be a cheap and easy way to increase variety.

Or how about creating psychopaths and putting them in controlled environments that they can destroy at will, or creating highly violent entities to throw in fighting pits? Isn't there a point where this is preferable to creating yet another conscious creature capable of sympathy and boredom?

Comment author: DaFranker 28 March 2013 01:58:35PM 1 point [-]

Now that is a good argument that doesn't miss the point. My priors would say it's not even "a few centuries" - I'd expect less than one earth-century on average, with most of the variance due to the particular economic variations and social phenomena derived from the details of the species.

Comment author: Broolucks 28 March 2013 03:39:05PM *  2 points [-]

Without any other information, it is reasonable to place the average at whatever time it takes us (probably a bit over a century), but I wouldn't put a lot of confidence in a figure obtained from a single data point. Radio visibility could conceivably range from a mere decade (consider that computers could have been developed before radio -- had Babbage been more successful -- and expedited technological advances) to perhaps millennia (consider dim-witted beings that live for centuries and do everything we do ten times slower).

Several different organizational schemes might also be viable for life and lead to very different timetables: picture a whole ant colony as a sentient being, for instance (ants being akin to neurons). Such beings would be inherently less mobile than humans. That may skew their technological priorities in such a way that they develop short-range radio before they even expand out of their native island, in which case their radio visibility window would be nil, because by the time they have a use for long-range communication, they would already have the technology to do it optimally.

Furthermore, an "ant neuron" is possibly a lot more sophisticated than each neuron in our brain, but also much slower, so an "ant brain" might be the kind of slow, "dim-witted" intelligence that would go through the same technological steps orders of magnitude slower than we do while retaining very high resiliency and competitiveness.

Comment author: Ghatanathoah 28 March 2013 03:17:18AM *  -1 points [-]

1) 100A
2) 1H + 100A - 50A
3) 0
4) 1H - 50A

I wouldn't actually condone the move from 1 to 2. I would not condone inflicting huge harms on animals to create a moderately well-off human. But I would condone never creating some happy animals in the first place. Not creating is not the same as harming. The fact that TU treats not creating an animal with 50 utility as equivalent to inflicting 50 points of disutility on an animal that has 50 utility is a strike against it. If we create an animal we have a responsibility to make it happy; if we don't, we're free to make satisfied humans instead (to make things simpler I'm leaving the option of painlessly killing the animals off the table).

My argument is that existing animals and existing humans have equal moral significance (or maybe humans have somewhat more, if you're actually right about humans being capable of greater levels of welfare), but when deciding which to create, human creation is superior to animal (or paperclipper or wirehead) creation.

FWIW, I think the best fix is empirical: just take humans to be capable of much greater positive welfare states than animals, such that you have to make really big trades of humans to animals to be worth it. Although this leaves open the possibility we'd end up with little complex value, it does not seem a likely possibility.

I lack your faith in this fix. I consider it almost certain that if we were to create a utilitarian AI it would kill the entire human race and replace it with creatures whose preferences are easier to satisfy. And by "easier to satisfy" I mean "simpler and less ambitious," not that the creatures are more mentally and physically capable of satisfying humane desires.

In addition to this, total utilitarianism has several other problems that need fixing. The most obvious is that it considers it a morally good act to kill someone destined to live a good long life and replace them with a new person whose overall utility is slightly higher than the utility of the remaining years of that person's life would have been (I know in practice doing this is impossible without side-effects that create large disutilities, but that shouldn't matter: it's bad because it's bad, not because it has bad side-effects).

Quite frankly, I consider the Repugnant Conclusion to be far less horrible than the "Kill and Replace" conclusion. I'd probably accept total utilitarianism as a good moral system if the RC was all it implied. The fact that TU implies something strange like the RC in an extremely unrealistic toy scenario isn't that big a deal. But that's not all TU implies. The "Kill and Replace" conclusion isn't a toy scenario: there are tons of handicapped people who could be killed and replaced with able people right now. But we don't do that, because it's wrong.

I don't spend my days cursing the fact that various disutilitous side effects prevent me from killing handicapped people and creating able people to replace them. Individual people, once they are created, matter. Once someone has been brought into existence we have a greater duty to make sure they stay alive and happy than we do to create new people. There may be some vastly huge number of happy people that it's okay to kill one slightly-less-happy person in order to create, but that number should be way, way, way, way bigger than 1.

As a second recommendation (with pre-emptive apologies for offense: I can't find a better way of saying this), I'd recommend going back to the population ethics literature (and philosophy generally) rather than trying to reconstitute ethical theory yourself.

I've looked at some of the literature, and I've noticed there does not seem to be much interest in the main question I am interested in, which is, "Why make humans and not something else?" Peter Singer mentions it a few times in one essay I read, but didn't offer any answers; he just seemed to accept it as obvious. I thought it was a fallow field that I might be able to do some actual work in.

Peter Singer also has the decency to argue that human beings are not replaceable, that killing one person to replace them with a slightly happier one is bad. But I have trouble seeing how his arguments work against total utilitarianism.

Comment author: Broolucks 28 March 2013 05:30:50AM *  2 points [-]

I consider it almost certain that if we were to create a utilitarian AI it would kill the entire human race and replace it with creatures whose preferences are easier to satisfy. And by "easier to satisfy" I mean "simpler and less ambitious," not that the creatures are more mentally and physically capable of satisfying humane desires.

It would not necessarily kill off humanity to replace it by something else, though. Looking at the world right now, many countries run smoothly, and others horribly, even though they are all inhabited and governed by humans. Even if you made the AI "prefer" human beings, it could still evaluate that "fixing" humanity would be too slow and costly and that "rebooting" it is a much better option. That is to say, it would kill all humans, restructure the whole planet, and then repopulate the planet with human beings devoid of cultural biases, ensuring plentiful resources throughout. But the genetic makeup would stay the exact same.

Once someone has been brought into existence we have a greater duty to make sure they stay alive and happy than we do to create new people. There may be some vastly huge number of happy people that it's okay to kill one slightly-less-happy person in order to create, but that number should be way, way, way, way bigger than 1.

Sure. Just add the number of deaths to the utility function with an appropriate multiplier, so that world states obtained through killing get penalized. Of course, an AI who wishes to get rid of humanity in order to set up a better world unobstructed could attempt to circumvent the limitation: create an infertility epidemic to extinguish humanity within a few generations, fudge genetics to tame it (even if it is only temporary), and so forth.
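
As a rough illustration of what such a penalty term might look like (a minimal sketch under a simple additive welfare model; the function name and the multiplier value are illustrative, not a real proposal):

```python
# Minimal sketch of a welfare sum with a death-penalty term, assuming a
# simple additive model. `kill_penalty` stands in for the "appropriate
# multiplier" mentioned above; its value here is purely illustrative.
def penalized_utility(welfare_per_person, deaths, kill_penalty=1_000_000):
    return sum(welfare_per_person) - kill_penalty * deaths

# Killing one person to create a slightly happier replacement now scores
# worse than leaving things alone, as long as the penalty dwarfs the gain.
keep_alive = penalized_utility([50], deaths=0)   # 50
replace    = penalized_utility([55], deaths=1)   # 55 - 1,000,000
assert keep_alive > replace
```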

Ultimately, though, it seems that you just want the AI to do whatever you want it to do and nothing you don't want it to do. I very much doubt there is any formalization of what you, me, or any other human really wants. The society we have now is the result of social progress that elders have fought tooth and nail against. Given that in general humans can't get their own offspring to respect their taboos, what if your grandchildren come to embrace some options that you find repugnant or disagree with your idea of utopia? What if the AI tells itself "I can't kill humanity now, but if I do this and that, eventually, it will give me the mandate"? Society is an iceberg drifting along the current, only sensing the direction it's going at the moment, but with poor foresight as to what the direction is going to be after that.

I've noticed there does not seem to be much interest in the main question I am interested in, which is, "Why make humans and not something else?"

Because we are humans and we want more of ourselves, so of course we will work towards that particular goal. You won't find any magical objective reason to do it. Sure, we are sentient, intelligent, complex, but if those were the criteria, then we would want to make more AI, not more humans. Personally, I can't see the utility of plastering the whole universe with humans who will never see more than their own little sector, so I would taper off utility with the number of humans, so that eventually you just have to create other stuff. Basically, I would give high utility to variety. It's more interesting that way.

Comment author: shminux 27 March 2013 11:21:28PM *  2 points [-]

One of the key aspects of this theory is that it does not necessarily rate the welfare of creatures with simple values as unimportant. On the contrary, it considers it good for their welfare to be increased and bad for their welfare to be decreased. Because of this, it implies that we ought to avoid creating such creatures in the first place, so it is not necessary to divert resources from creatures with humane values in order to increase their welfare.

If you assign any positive utility at all, no matter how small, to creating happy low-complexity life, you end up having to create lots of happy viruses (they are happy when they can replicate). If you put a no-value threshold somewhere below humans, you run into other issues, like "it's OK to torture a cat".

Comment author: Broolucks 28 March 2013 04:12:32AM *  0 points [-]

You would only create these viruses if the total utility of the viruses you can create with the resources at your disposal exceeds the utility of the humans you could make with these same resources. For instance, if you give a utility of 1 to a steel paperclip weighing 1 gram, then assuming a simple additive model (which I wouldn't, but that's beside the point), making one metric ton of paperclips has a utility of 1,000,000. If you give a utility of 1,000,000,000 to a steel sculpture weighing a ton, it follows that you will never make any paperclips unless you have less than a ton of steel. You will always make the sculpture, because it gives 1,000 times the utility for the exact same resources.
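
To spell out the arithmetic, here is a toy sketch using the numbers above (still the simple additive model which, again, I wouldn't actually endorse; the function name is just for illustration):

```python
# Toy comparison of spending a fixed amount of steel on paperclips vs. on a
# sculpture, using the numbers from the paragraph above. Leftover steel is
# ignored for simplicity; the point is only the utility-per-resource gap.
PAPERCLIP_UTILITY = 1              # per 1 g paperclip
SCULPTURE_UTILITY = 1_000_000_000  # per 1,000,000 g (one metric ton) sculpture

def best_use(steel_grams):
    paperclip_total = steel_grams * PAPERCLIP_UTILITY            # 1 g each
    sculpture_total = (steel_grams // 1_000_000) * SCULPTURE_UTILITY
    return "sculpture" if sculpture_total > paperclip_total else "paperclips"

assert best_use(500_000)   == "paperclips"  # under a ton: no sculpture is possible
assert best_use(1_000_000) == "sculpture"   # a full ton: 1,000x the paperclip utility
```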

Comment author: DaFranker 27 March 2013 05:06:06PM *  3 points [-]

TL;DR: I think the main arguments in the article miss the point; radio is one of the few classic-physics-allowed means of communication, communication gives a survival advantage, it's highly likely that organisms will develop classic-physics communications and use radio long before they develop other means of communication we don't know about, and thus any species that develops environment-altering intelligence will probably gain an advantage and probably eventually have something like radios. So it's highly unlikely, IMO, that radio is a human "species-specific" trait. This comment is based on my first reading of the linked article.

From the article, the primary argument against the possibility of non-human intelligence-that-builds-radios seems divided in two parts: 1) There is no visible "trend towards" human-like intelligence anywhere on Earth. 2) Even if some species develop "intelligence" and grow that trait, the specific subsets of this that a) give sentience or b) build radios are species-specific traits of humans.

Now, 1) is rather odd and has already been deconstructed elsewhere, and doesn't support the point that strongly, if I understand any of this. A mutation needs to have a noticeable effect on survival or reproductive rates in order to become fixed -- and I suspect that the intersection of these and mutations that lead towards human-like intelligence is rather small, in the space of possible mutations the various earth organisms can retain. A cursory scan of Wikipedia's page on the evolution of human intelligence indicates that even this is rather poorly understood, and my first impression is that many models and theories indicate that our intelligence might have been a runaway feedback loop that would not be observable as a gradual process in other species (apart from observing the gradual development and growth of the requirements, e.g. hands with grip and brain sizes for some models).

As for 2), things here are a bit weird. AFAIK, there's a limited number of ways in which individual organisms can transmit information to other individual organisms. I'm willing to consider exotic things that don't resemble anything here on earth, but really, communication between organisms confers some major advantages at many levels, from telling someone else where the predators are hunting to coordinating teamwork to teaching stuff to later generations.

I mean, there aren't 2^700 different possible techniques for transmitting information across organisms, let alone to another planet. You've got to send radiation / matter / stuff from A to B. Unless you magically find some way to teleport information. I doubt any organism or intelligence or goal-optimizer or probability constrainer is going to invent teleportation before ever using any form of classical radiation-based information transmission. It's not out of the realm of the possible, but AFAICT the base-minimum configuration of matter required to even entangle two particles and then measure them is far, far more complex than the base-minimum configuration of matter required to transceive light or radio signals. And I hope any apparatus that can actually FTL/teleport/etc. information by means unknown to us yet is at least as complex as entangling and measuring qubits (otherwise, what the hell have we been doing? Shouldn't that search subspace have been completely covered already? Feel free to correct me on this if you have a better understanding, though).

So in the space of things you can do, you can have super-organisms that don't require communication to coordinate, or are just so damn smart and powerful that they don't even need to coordinate to overwhelm all pressures and enemies and predators, or some other communication-obsoleting unlikely scenario... or you have species that communicate, because the game-theoretic advantages they gain from communicating let them win more often, which translates into more of them surviving at the expense of other species or organisms.

And for that communication to happen, you've got to use one of the means that physics allows. Radio and light and other forms of radiation are pretty much hard to avoid in this field, AFAIK. There aren't 2^700 means to get data from A to B. I'm probably missing stuff, but overall, if I'm playing the Natural Selection™ game: having multiple reproduction-capable individuals that communicate with each other regularly using energy radiation / vibrations / rapid matter transfer is probably better, ceteris paribus, than not having those two advantages.

Therefore it stands to reason that in the long run, on average, planet-dominant species will have some form of communication, perhaps akin to language or perhaps more exotic, using known forms of communication at some point in their history, and there will be planet-dominant species whenever one species develops these advantages naturally. Conjecture, but one that I believe is reasonably argued and more likely than its complement set of possibilities.

Comment author: Broolucks 28 March 2013 02:30:09AM 4 points [-]

On the other hand, based on our own experience, broadcasting radio signals is a waste of energy and bandwidth, so it is likely an intelligent society would quickly move to low-power, focused transmissions (e.g. cellular networks or WiFi). Thus the radio "signature" they broadcast to the universe would peak for a few centuries at most before dying down as they figure out how to shut down the "leaks". That would explain why we observe nothing, if intelligent societies do exist in the vicinity. Of course, these societies might also evolve rapidly soon after, perhaps go through some kind of singularity, and might lose interest in "lower life forms" -- which would then explain why they might not look for our signals, or might leave them unanswered if they do listen for them.

Comment author: endoself 26 March 2013 07:32:03PM *  0 points [-]

When we talk about the complexity of an algorithm, we have to decide what resources we are going to measure. Time used by a multi-tape Turing machine is the most common measurement, since it's easy to define and generally matches up with physical time. If you change the model of computation, you can lower (or raise) this to pretty much anything by constructing your clock the right way.

Comment author: Broolucks 26 March 2013 09:48:28PM 1 point [-]

Ah, sorry, I might not have been clear. I was referring to what may be physically feasible, e.g. a 3D circuit in a box with inputs coming in from the top plane and outputs coming out of the bottom plane. If you have one output that depends on all N inputs and pack everything as tightly as possible, the signal would still take Ω(sqrt(N)) time to reach that output. Of all the physically doable models of computation, I think that's likely as good as it gets.
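
A back-of-the-envelope sketch of where the bound comes from (here a is the minimum area an input pad occupies on the top plane and v is the signal speed; both symbols are only for illustration):

```latex
% N inputs on the top plane, each occupying area >= a, force that plane to
% have side length L >= sqrt(aN). Some input then lies at distance at least
% L/2 from wherever the output sits, so at finite signal speed v the output
% cannot depend on every input before time t:
\[
  L \ge \sqrt{aN}
  \quad\Longrightarrow\quad
  d_{\max} \ge \frac{1}{2}\sqrt{aN}
  \quad\Longrightarrow\quad
  t \ge \frac{d_{\max}}{v} = \Omega\left(\sqrt{N}\right).
\]
```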

Comment author: loup-vaillant 26 March 2013 07:24:54AM 1 point [-]

I agree with your first point, though it gets worse for us as hardware gets cheaper and cheaper.

I like your second point even more: it's actionable. We could work on the security of personal computers.

That last one is incorrect, however. The AI only has to access its object code in order to copy itself. That's something even current computer viruses can do. And we're back to boxing it.

Comment author: Broolucks 26 March 2013 08:17:56AM *  2 points [-]

If the AI is a learning system such as a neural network, and I believe that's quite likely to be the case, there is no source/object dichotomy at all and the code may very well be unreadable outside of simple local update procedures that are completely out of the AI's control. In other words, it might be physically impossible for both the AI and ourselves to access the AI's object code -- it would be locked in a hardware box with no physical wires to probe its contents, basically.

I mean, think of a physical hardware circuit implementing a kind of neural network -- in order for the network to be "copiable", you need to be able to read the values of all neurons. However, that requires a global clock (to ensure synchronization, though an AI might tolerate being a bit out of phase) and a large number of extra wires connecting each component to busses going out of the system. Of course, all that extra fluff inflates the cost of the system, makes it bigger, slower, and probably less energy efficient. Since the first human-level AI won't just come out of nowhere, it will probably use off-the-shelf digital neural components, and for cost and speed reasons, these components might not actually offer any way to copy their contents.

This being said, even if the AI runs on conventional hardware, locking it out of its own object code isn't exactly rocket science. The specification of some programming languages already guarantee that this cannot happen, and type/proof theory is an active research field that may very well be able to prove the conformance of implementation to specification. If the AI is a neural network emulated on conventional hardware, the risks that it can read itself without permission are basically zilch.
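
As a loose illustration of the kind of interface I have in mind (a hypothetical sketch; in a real system the guarantee would come from the language specification or the hardware, not from a class definition like this):

```python
# Hypothetical sketch of an emulated network exposed only through a narrow
# interface: the process doing the "thinking" gets a stimulus/response channel
# and a local update hook, but no call that returns the weights themselves.
# (Python name-mangling is not a real security boundary; this only shows the
# shape of the interface, not an actual containment mechanism.)
class EmulatedNetwork:
    def __init__(self, weights):
        self.__weights = list(weights)   # never returned by any public method

    def forward(self, stimulus):
        # compute a response from the hidden weights
        return sum(w * s for w, s in zip(self.__weights, stimulus))

    def local_update(self, error_signal, rate=0.01):
        # built-in learning rule: the network can trigger it, not inspect it
        self.__weights = [w - rate * error_signal for w in self.__weights]
```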

In response to comment by Baughn on Why AI may not foom
Comment author: endoself 26 March 2013 03:40:27AM 0 points [-]

Actually, only the output; sometimes you only need the first few bits. Your equation holds if you know you need to read the end of the input.

Comment author: Broolucks 26 March 2013 07:01:04AM 0 points [-]

And technically you can lower that to sqrt(M) if you organize the inputs and outputs on a surface.

In response to comment by V_V on Why AI may not foom
Comment author: loup-vaillant 25 March 2013 02:00:09AM *  3 points [-]

I think you miss the part where the team of millions continues its self-copying until it eats up all available computing power. If there's any significant computing overhang, the AI could easily seize control of way more computing power than all the human brains put together.

Also, I think you underestimate the "highly coordinated" part. Any copy of the AI will likely share the exact same goals, and the exact same beliefs. Its instances will have common knowledge of this fact. This would create an unprecedented level of trust. (The only possible exception I can think of is twins. And even so…)

So, let's recap:

  • Thinks 100 times faster than a human, though no better.
  • Can copy itself over many times (the exact amount depends on computing power available).
  • The resulting team forms a nearly perfectly coordinated group.

Do you at least concede that this is potentially more dangerous than a whole country armed up with nukes? Would you rely on it being less dangerous than that?

Comment author: Broolucks 26 March 2013 06:49:51AM 1 point [-]

There are a lot of "ifs", though.

  • If that AI runs on expensive or specialized hardware, it can't necessarily expand much. For instance, if it runs on hardware worth millions of dollars, it can't exactly copy itself just anywhere yet. Assuming that the first AI of that level will be cutting edge research and won't be cheap, that gives a certain time window to study it safely.

  • The AI may be dangerous if it appeared now, but if it appears in, say, fifty years, then it will have to deal with the state of the art fifty years from now. Expanding without getting caught might be considerably more difficult then than it is now -- weak AI will be all over the place, for one.

  • Last, but not least, the AI must have access to its own source code in order to copy it. That's far from a given, especially if it's a neural architecture. A human-level AI would not know how it works any more than we know how we work, so if it has no read access to itself or no way to probe its own circuitry, it won't be able to copy itself at all. I doubt the first AI would actually have fine-grained access to its own inner workings, and I doubt it would have anywhere close to the amount of resources required to reverse engineer itself. Of course, that point is moot if some fool does give it access...
