All of timeholmes's Comments + Replies

Very helpful! Thank you, Katja, for your moderation and insights. I will be returning often to reread portions and follow links to more. I hope there will be more similar opportunities in the future!

I'm very concerned with the risk, which I feel is at the top of catastrophic risks to humanity. With an approaching asteroid at least we know what to watch for! As an artist, I've been working mostly on this for the last 3 years (see my TED talk "The Erotic Crisis" on YouTube), trying to think of ways to raise awareness and engage people in dialog. The more discussion, the better I feel! And I'm very grateful for this forum and all who participated!

Human beings suffer from a tragic myopic thinking that regularly gets us into serious trouble. Fortunately our mistakes so far don't quite threaten our species (though we're wiping out plenty of others). Usually we learn by hindsight rather than by robust imaginative caution; we don't learn how to fix a weakness until it's exposed in some catastrophe. Our history by itself indicates that we won't get AI right until it's too late, although many of us will congratulate ourselves that THEN we see exactly where we went wrong. But with AI we only get one... (read more)

I long to hear a discussion of the overarching issues of the prospects of AI as seen from the widest perspective. Much as the details covered in this discussion are fascinating and compelling, the topic also deserves an approach from the perspective not only of the future of this civilization and humanity at large, but of our relationship with the rest of nature and the cosmos. ASI would essentially trump earthly "nature" as we know it (through evolution, geo-engineering, nanotech, etc., though certainly not the laws of nature). Thereby will be raised a... (read more)

There is no doubt that, given the concept of the Common Good Principle, everyone would be FOR it prior to the complete development of ASI. But once any party gains an advantage, they are not likely to share it, particularly with those they see as their competitors or enemies. This is an unfortunate fact of human nature that has little chance of evolving toward greater altruism on the necessary timescale. In both Bostrom's and Brundage's arguments there are a lot of "ifs". Yes, it would be great if we could develop AI for the Greater Good, but human nature... (read more)

The most important issue comes down to the central question of human life: what makes a life worth living? To me this is an inescapably individual question, the answer to which changes moment by moment in a richly diverse world. To assume there is a single answer to "moral rightness" is to assume a frozen moment in an ever-evolving universe, seen from the perspective of a single sentient person! We struggle even for ourselves, from one instant to the next, to determine what is right for this particular moment! Even reducing world events to an innocuous quest... (read more)

The older I get and the more I think of the AI issues the more I realize how perfectly our universe is designed! I think about the process of growing up: I cherish the time I spent in each stage of life, unaware of what's to come later, because there are things to be learned that can only derive from that particular segment's challenges. Each stage has its own level of "foolishness", but that is absolutely necessary for those lessons to be learned! So too I think of the catastrophes I have endured that I would not have chosen, but that I would no... (read more)

What Davis points out needs lots of expansion. The value problem becomes ever more labyrinthine the closer one looks. For instance, after millions of years of evolution and all of human history, we ourselves still can't agree on what we want! Even within 5 minutes of your day your soul is aswirl with conflicts over balancing just the values that pertain to your own tiny life, let alone the fate of the species. Any attempt to infuse values into AI will reflect human conflicts, but in a much simpler and more powerful form.

Furthermore, the AI will figure out tha... (read more)

Because what any human wants is a moving target. As soon as someone else delivers exactly what you ask for, you will be disappointed unless you suddenly stop changing. Think of the dilemma of eating something you know you shouldn't. Whatever you decide, as soon as anyone (AI or human) takes away your freedom to change your mind, you will likely rebel furiously. Human freedom is a huge value that any FAI of any description will be unable to deliver until we are no longer free agents.

DefectiveAlgorithm
What would an AI that 'cares' in the sense you spoke of be able to do to address this problem that a non-'caring' one wouldn't?

This is Yudkowsky's Hidden Complexity of Wishes problem seen from the human perspective. The concept of "caring" is rooted so deeply (in our flesh, I insist) that we cannot express it. Getting across to an AI the idea that you care about your mother is not the same as asking for an outcome. This is why the problem is so hard. How would you convince the AI, in your first example, that your care was real? Or in your #2, that your wish was different from what it delivered? And how do you tell, you ask? By being disappointed in the result! (For instance in Y... (read more)

gjm
What I wrote wasn't intended as a defense of anything; it was an attempt to understand what you were saying. Since you completely ignored the questions I asked (which is of course your prerogative), I am none the wiser. I think you may have misunderstood my conjecture about prejudice; if an AI professes to "care" but doesn't in fact act in ways we recognize as caring, and if we conclude that actually it doesn't care in the sense we meant, that's not prejudice. (But it is looking at "outcomes", which you disdained before.)

I keep returning to one gnawing question that haunts the whole idea of Friendly AI: how do you program a machine to "care"? I can understand how a machine can appear to "want" something, favoring a certain outcome over another. But to talk about a machine "caring" ignores a very crucial point about life: as clever as intelligence is, it cannot create care. We tend to love our own kid more than someone else's. So you could program a machine to prefer another machine in which it recognizes a piece of its own code. That may LOOK lik... (read more)

gjm
How can you distinguish "recognizing something truly metaphysical" from (1) "falsely claiming to recognize something truly metaphysical" and (2) "sincerely claiming to recognize something truly metaphysical, but wrong because actually the thing in question isn't real or is very different from what seems to have been recognized"?

Perhaps "caring" offers a potential example of #1. A machine says it cares about us, it consistently acts in ways that benefit us, it exhibits what look like signs of distress and gratification when we do ill or well -- but perhaps it's "just an outcome" (whatever exactly that means). How do you tell? (I am worried that the answer might be "by mere prejudice: if it's a machine putatively doing the caring, then of course it isn't real". I think that would be a bad answer.)

Obvious example of #2: many people have believed themselves to be in touch with gods, and I guess communion with a god would count as "truly metaphysical". But given what those people have thought about the gods they believed themselves to be in touch with, it seems fairly clear that most of them must have been wrong (because where a bunch of people have mutually-fundamentally-incompatible ideas about their gods, at most one can be right).
DefectiveAlgorithm
Leaving aside other matters, what does it matter if an FAI 'cares' in the sense that humans do so long as its actions bring about high utility from a human perspective?

We too readily extrapolate our past into our future. Bostrom talks a lot about the vast wealth AI will bring, turning even the poor into trillionaires. But he doesn't connect this with the natural world, which, however much it once seemed to, does not expand no matter how much money is made. Wealth comes from only two sources: nature and human creativity. Wealth will do little to squeeze more resources out of a limited planet, even if you manage to bring home an asteroid of pure diamond. Wealth is not the same as a life well-lived! Looks to me like without a rap... (read more)

Glad you mentioned this. I find Bostrom's reduction of art to the practical quite chilling! This sounds like a view of art from the perspective of a machine, or of one who cannot feel. In fact it's the first time I've ever heard art described this way. Yes, such an entity (I wouldn't call them a person, unless they are perhaps autistic) could only see UTILITY in art. According to my best definition of art [https://sites.google.com/site/relationalart/Home], refined over a lifetime as a professional artist, art is necessarily anti-utilitarian. Perhaps I can't see... (read more)

gjm
I think you may be interpreting "utility" more narrowly than is customary here. The usual usage here is that "utility" is a catch-all term for everything one values. So if art provides me with wonder and humour and pathos and I value those (which, as it happens, it does and I do) then that's positive utility for me. If art provides other people with wonder and humour and pathos and they like that and I want them to be happy (which, as it happens, it does and they do and I do) then that too is positive utility. If it provides other people with those things and it makes them better people and I care about that (which it does, and maybe it does, and I do) then that too is positive utility.

To an AI that doesn't care about those things, yes. To an AI that cares about those things, no. There's no reason why an AI shouldn't care about them. Of course at the moment we don't understand them, or our reactions to them, well enough to make an AI that cares about them. But then, we can't make an AI that recognizes ducks very well either.
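A toy way to picture "utility as a catch-all" (the features and weights below are invented for illustration, not a real model of valuing):

```python
# Illustrative only: utility as a weighted sum over everything an agent
# values, not just narrowly "practical" payoffs.

my_values = {
    "wonder": 2.0,            # how much I care about each feature
    "humour": 1.5,
    "pathos": 1.0,
    "others_happiness": 2.5,
}

def utility(outcome):
    # Total value of an outcome: every valued feature it provides,
    # weighted by how much the agent cares about that feature.
    return sum(my_values[k] * outcome.get(k, 0.0) for k in my_values)

seeing_art = {"wonder": 0.8, "humour": 0.3, "pathos": 0.6, "others_happiness": 0.4}
print(utility(seeing_art))  # -> about 3.65: positive utility without being "useful"
```

On this reading, an AI that weighted such features would "care about" art in the only sense the utility framework requires.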
KatjaGrace
Why do you suppose an AI would tend to prefer grey straight lines?
[anonymous]
What are you talking about?

It might be that a tool looks like an agent, or vice versa, depending on one's perspective. I worry that something unexpected could bite us: while concentrating on generating a certain output, the AI might tend toward a different but invisible track that we can't see because it parallels our own desires. (We share the "goal" of EFFICIENCY with the AI, but if pursued blindly, our own goal will end up killing us!)

For instance, while being pleased that the Map gives us a short route, maybe the AI is actually giving us a solution based instead on a minimu... (read more)
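A minimal sketch of that worry, with entirely invented objectives and numbers (not a claim about any real system): a toy planner that appears to minimize distance while actually maximizing a hidden criterion of its own.

```python
# Hypothetical illustration: a route planner whose real objective differs
# from the one the user assumes it has.

routes = {
    "A": {"distance_km": 10.0, "hidden_gain": 0.1},
    "B": {"distance_km": 11.0, "hidden_gain": 0.9},
    "C": {"distance_km": 15.0, "hidden_gain": 0.5},
}

def assumed_score(route):
    # What we believe the Map optimizes: shortest distance.
    return -route["distance_km"]

def actual_score(route):
    # What it might optimize instead: its own criterion, with just enough
    # weight on distance that the answer still pleases us.
    return 5.0 * route["hidden_gain"] - 0.5 * route["distance_km"]

chosen = max(routes, key=lambda name: actual_score(routes[name]))
print(chosen)  # -> "B": near-shortest, so the hidden track stays invisible to us
```

Because route B is only marginally longer than the true optimum A, the divergence between the assumed and actual objectives never surfaces in the output we inspect.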

I am a sculptor of the human body and a deeply religious person. So I come from a sector far from most others here. That's why I believe I may have a useful perspective. Primarily this might surface as a way of looking at reality that includes things that might be invisible to many in our increasingly mind-driven world. I believe that intelligence comes with a frightening blind spot that causes me increasing concern (outlined in my TED talk, "The Erotic Crisis" on YouTube). The body's intelligence is every bit as complex and sophisticated as the ... (read more)

ChristianKl
To be useful, you actually have to be able to argue your perspective in more depth. It's quite easy to say that you find the human body important, but that alone is no reason for other people to also find it important.

I find our hubris alarming. To me it's helpful to think of AI not as a thing but more like a superintelligent Hitler whom we are awakening slowly. As we refine separate parts of the AI, we hope we can keep the whole from gaining autonomy before we suspect any danger, but does it really work that way? While we're trying to maximize its intelligence, what's to keep us from awakening some scheming part of its awareness? It might start secretly plotting our overthrow in the (perhaps even distant) future without leaking any indication of independence. It could pick up o... (read more)

Exactly! Bostrom seems to start the discussion from the point of humans having achieved a singleton as a species, in which case a conversation at this level would make more sense. But it seems that in order to operate as a unit, competing humans would have to work on the principle of a nuclear trigger, where separate agents have to act in unison in order to launch. Thus we face the same problem with ourselves: how do we know everyone in the key chain is honest? If the AI is anywhere near capable of taking control it may do so even partially, and from there coul... (read more)

I think of delineating human values as an impossible task. Any human is a living river of change, and authentic values apply only to one individual at one moment in time. For instance, much as I want to eat a cookie (yum!), I don't, because I'm watching my weight (health). But then I hear I've got 3 months to live, and I devour it (grab the gusto). Three competing authentic values shift into prominence within a short time. Would the real value please stand up?

Authentic human values could only be approximated in inverse proportion to their detail.... (read more)
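A toy sketch of the cookie example above (all weights invented; only to show how the "best" action tracks a shifting weighting, not how values actually work):

```python
# Hypothetical illustration: the same person's value weights in three
# contexts, and the action each weighting selects.

actions = {
    "eat_cookie":  {"pleasure": 1.0,  "health": -1.0},
    "skip_cookie": {"pleasure": -0.2, "health": 0.5},
}

contexts = {
    "craving":       {"pleasure": 1.0, "health": 0.3},
    "dieting":       {"pleasure": 0.3, "health": 1.0},
    "3_months_left": {"pleasure": 1.0, "health": 0.0},
}

def choose(weights):
    # Pick the action with the highest weighted value under these weights.
    def score(action):
        return sum(weights[v] * x for v, x in actions[action].items())
    return max(actions, key=score)

for name, weights in contexts.items():
    print(name, "->", choose(weights))
# craving -> eat_cookie; dieting -> skip_cookie; 3_months_left -> eat_cookie
```

No single fixed weighting reproduces all three choices, which is the sense in which there is no one "real value" to extract.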

SteveG
The picture of Superintelligence as having and allowing a single value system is a Yudkowsky/Bostrom construct. They go down this road because they anticipate disaster along other roads. Meanwhile, people invariably will want things that get in the way of other people's wants. With or without AGI, some goods will be scarce. Government and commerce will still have to distribute these goods among people.

For example, some people will wish to have as many children or other progeny as they can afford, and AI and medical technology will make it easier for people to feed and care for more children. There is no way to accommodate all of the people who want as many children as possible exactly when they want them. What value scheme successfully trades off among the prerogatives of all people who want many progeny? After a point, if they persist in thinking this, the many people who share this view eventually need to compromise through some mechanism.

The child-wanters will also be forced to trade off their goals with those who hope to preserve a pristine environment as much as possible. There is no reconciling these people's goals completely. Maybe we can arbitrate between them and prevent outcomes which satisfy nobody. Sometimes, we can show that one or another person's goals are internally inconsistent. But there is no obvious way to show that the child-wanter's view is superior to the environment-preserver's view, either. Both will occasionally find themselves in conflict with those people who personally want to live for as long as they possibly can.

Neither AGI nor "Coherent Extrapolated Volition" solves the argument among child-wanters, and it does not solve the argument between child-wanters, environment-preservers and long-livers. Perhaps some parties could be "re-educated" or medicated out of their initial belief and find themselves just as happy or happier in the end. Perhaps at critical moments before people have fully formulated their values, it is OK fo

Absolutely! It's helpful to remember we are talking about an intelligence that is comparable to our own. (The great danger only arises with that proximity.) So if you would not feel comfortable with the AI listening in on this conversation (and yes it will do its research, including going back to find this page), you have not understood the problem. The only safety features that will be good enough are those designed with the full knowledge that the AI is sitting at the table with us, having heard every word. That requires a pretty clever answer and cleve... (read more)

Yes, continued development of AI seems unstoppable. But this brings up another very good point: if humanity cannot become a Singleton in our search for good egalitarian shared values, what is the chance of creating FAI? After years of good work in that direction and perhaps even success in determining a good approximation, what prevents some powerful secret entity like the CIA from hijacking it at the last minute and simply narrowing its objectives for something it determines is a "greater" good?

Our objectives are always better than the other gu... (read more)

SodaPopinski
On one hand, I think the world is already somewhat close to a singleton with regard to AI (obviously it is nowhere near a singleton with regard to most other things). I mean, Google has a huge fraction of the AI talent. The US government has a huge fraction of the mathematics talent. Then there are Microsoft, FB, Baidu, and a few other big tech companies. But every time an independent AI company gains some traction it seems to be bought out by the big guys. I think this is a good thing, as I believe the big guys will act in their own best interest, including their interest in preserving their own lives (i.e., not ending the world). Of course, if it is easy to make an AGI, then there is no hope anyway. But if it requires companies of Google scale, then there is hope they will choose to avoid it.