All of Netcentrica's Comments + Replies

Answer by Netcentrica

I’ll be seventy later this year, so I don’t worry much about “the future” for myself or how I should live my life differently. I’ve got some grandkids though, and as far as my advice to them goes, I tell their mom that the trades will be safer than clerical or white-collar jobs because robotics will lag behind AI. Sure, you can teach an AI to do manual labor, like, say, brain surgery, but it’s not going to be making house calls. Creating a robotic plumber would be a massive investment and so not likely to happen. In my humble opinion.

Of course this assumes the wo... (read more)

For the past two-plus years I’ve been writing hard science fiction novellas and vignettes about social robots called Companions. The stories are set in this and the next two centuries.

After a thirty-year career in IT I am now retired; I write as a hobby and self-publish. I try to write 300–1,000 words per day and have written seven novellas and forty vignettes.

About ten years ago, as a way to address my own social isolation at the time, I also researched and created a social activities group, which I ran successfully for two years. Info is here…

https://socialwellness.... (read more)

Roman Leventov
Crucial disanalogies between AI partners and pets or animal companions (as well as porn, addictive junk food, gambling, casual dating/hookups, simping on OnlyFans, etc.):

1) People who have pets and animal companions (and even love them!) still usually seek romantic relationships with other humans. People who fall in love with AI partners, and have virtual and even physical sex with them (e.g., with a sex doll and a VR headset that projects the visual features of the AI girlfriend onto the doll), usually won't seek real human relationships.

2) People who are in relationships with AIs will spend the cognitive and emotional effort that usually goes toward communicating with human partners, and toward forming and spreading the memes that build the fabric of society, on communicating with AI partners instead. This will be "wasted" effort from the societal and cultural points of view, unless AIs are full members of the society themselves, as I pointed out in another comment. But current AI partners are not there. For an AI to be a full member of society that learns alongside people and participates in forming and disseminating memes in an intelligent way, the AI should probably already be an AGI, and have legal rights similar to those of humans.

I think you raise a very valid point, and I suggest that it will need to be addressed on multiple levels. Do not expect any technical details here, as I am not an academic but a retired person who writes hard science fiction about social robots as a hobby.

With regard to your statement, “We don't have enough data to be sure this should be regulated”, I assume you are referring to technical aspects of AI, but with regard to human behavior we have more than enough data – humans will pursue the potential of AI to exploit relationships in every way they can an... (read more)

Igor Ivanov
Thank you for your comment and everything you mentioned in it. I am a psychologist entering the field of AI policy-making, and I am starving for content like this.

Yes, I agree that AI will show us a great deal about ourselves. For that reason I am interested in neurological differences in humans that AI might reflect, and I often include these in my short stories.

In response to your last paragraph: while most science fiction does portray enforced social order as bad, I do not. I take the benevolent view of AI and see it as an aspect of the civilizing role of society, along with its institutions and laws. Parents impose social order on their children with benevolent intent.

As you have pointed out, if we have alignment then “... (read more)

Program Den
It seems to me that a lot of the hate towards "AI art" is that it's actually good. It was one thing when it was abstract, but now that it's more "human", a lot of people are uncomfortable. "I was a unique creative, unlike you normie robots who don't do teh art, and sure, programming has been replacing manual labor everywhere, for ages… but art isn't labor!" (Although getting paid seems to be a major factor in most people's reasoning about why AI art is bad; here's to hoping for UBI!)

I think they're mainly uncomfortable because the math works, and if the math works, then we aren't as special as we like to think we are. Don't get me wrong: we are special, and the universe is special, and being able to experience is special, and none of it is to be taken for granted. That the math works is special. It's all just amazing and not at all negative.

I can see seeing it as negative, if you feel like you alone are special. Or perhaps you extend that special-ness to your tribe. Most don't seem to extend it to their species, tho some do, but even that species-wide uniqueness is violated by computer programs joining the fray. People are existentially worried now, which is just sad, as "the universe is mostly empty space" as it were. There's plenty of room.

I think we're on the same page[1]. AI isn't (or won't be) "other". It's us. Part of our evolution; one of our best bets for immortality[2] and contact with other intelligent life. Maybe we're already AI, instructed to not be aware, as has been put forth in various books, movies, and video games. I just finished Horizon: Zero Dawn - Forbidden West, and then randomly came across the "hidden" ending to Detroit: Become Human. Both excellent games, and neither with particularly new ideas… but these ideas are timeless, as I think the best are. You can take them apart and put them together in endless "new" combinations.

There's a reason we struggle with identity, and uniqueness, and concepts like "do chairs exis

I have been writing hard science fiction stories, where this issue is key, for over two years now. I’m retired after a 30-year career in IT, and my hobby of writing is my full-time “job” now. Most of that time is spent researching AI or other subjects related to the particular stories.

One of the things I have noticed over that time is that those who talk about the alignment problem rarely talk about the point you raise. It is glossed over and taken as self-evident, while I have found that the subject of values appears to be at least as complex as genetics (... (read more)

Program Den
Nice! I read a few of the stories. This is more along the lines I was thinking. One of the most fascinating aspects of AI is what it can show us about ourselves, and it seems like many people either think we have it all sorted out already, or that sorting it all out is inevitable. Often (always?) the only "correct" answer to a question is "it depends", so thinking there's some silver-bullet solution to be discovered for the preponderance of ponderance consciousness faces is, in my humble opinion, naive.

Like, how do we even assign meaning to words and whatnot? Is it the words that matter, or the meaning? And not just the meaning of the individual words, or even all the words together, but the overall meaning which the person has in their head and is trying to express? (I'm laughing as I'm currently doing a terrible job of capturing what I mean in this paragraph here, which is sort of what I'm trying to express in this paragraph here! =]) Does it matter what the reasoning is as long as the outcome is favorable (for some meaning of favorable; we face the same problem as good/bad here to some extent)? Like say I help people because I know that the better everyone does, the better I do. I'm helping people because I'm selfish[1]. Is that wrong, compared to someone who is helping other people because, say, they put the tribe first, or some other kind of "altruistic" reasoning?

In sum, I think we're putting the cart before the horse, as they say, when we go all in-depth on alignment before we've even defined the axioms and whatnot (which would mean defining them for ourselves as much as anything). How do we ensure that people aren't bad apples? Should we? Can we? If we could, would that actually be pretty terrible? Science fiction mostly says it's bad, but maybe that level of control is what we need over one another to be "safe" and is thus "good".

1. ^ Atlas Shrugged and Rand's other books gave me a very different impression than a lot of ot

When In Rome

Thank you for posting this, Geoffrey. I myself have recently been considering posting the question, “Aligned with which values, exactly?”

TL;DR - Could an AI be trained to deduce a default set and system of human values by reviewing all human constitutions, laws, policies and regulations in the manner of AlphaGo?

I come at this from a very different angle than you do. I am not an academic but rather retired after a thirty-year career in IT systems management at the national and provincial (Canada) levels.

Aside from my career my lifelong personal... (read more)

geoffreymiller
Netcentrica - thanks for this thoughtful comment. I agree that the behavioral sciences, social sciences, and humanities need more serious (quantitative) research on values; there is some in fields such as political psychology, social psychology, cultural anthropology, comparative religion, etc., but such research is often a bit pseudo-scientific and judgmental, biased by the personal/political views of the researchers. However, all these fields seem to agree that there are often much deeper and more pervasive differences in values across people and groups than we typically realize, given our cultural bubbles, assortative socializing, and tendency to stick within our tribe.

On the other hand, empirical research (e.g., in the evolutionary psychology of crime) suggests that in some domains humans have a fairly strong consensus about certain values: most people in most cultures agree that murder is worse than assault, assault is worse than theft, and theft is worse than voluntary trade.

It's an intriguing possibility that AIs might be able to 'read off' some general consensus values from the kinds of constitutions, laws, policies, and regulations that have been developed in complex societies over centuries of political debate and discussion. As a traditionalist who tends to respect most things that are 'Lindy', that have proven their value across many generations, this has some personal appeal to me. However, many AI researchers are under 40, rather anti-traditionalist, and unlikely to see historical traditions as good guides to current consensus values among humans. So I don't know how much buy-in such a proposal would get -- although I think it's worth pursuing!

Put another way, any attempt to find consensus human values that have not already been explicitly incorporated into human political, cultural, economic, and family traditions should probably be treated with great suspicion -- and may reflect some deep misalignment with most of humanity's values.
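One crude way to picture that 'read off' step: compare how severely different legal codes punish the same offenses, and extract the shared severity ordering. Below is a minimal sketch with invented penalty data; the jurisdictions and sentence lengths are placeholders, not real statutes.

```python
# Hypothetical illustration of 'reading off' consensus values from law:
# rank offenses by (made-up) statutory maximum sentences across a few
# invented jurisdictions and recover the shared severity ordering.

from statistics import mean

# Placeholder data: offense -> maximum sentence in years, per jurisdiction.
PENALTIES = {
    "Jurisdiction A": {"murder": 25, "assault": 10, "theft": 2},
    "Jurisdiction B": {"murder": 30, "assault": 7, "theft": 3},
    "Jurisdiction C": {"murder": 20, "assault": 8, "theft": 1},
}

def consensus_ordering(penalties):
    """Order offenses by mean statutory severity across jurisdictions."""
    offenses = next(iter(penalties.values())).keys()
    avg = {o: mean(j[o] for j in penalties.values()) for o in offenses}
    return sorted(avg, key=avg.get, reverse=True)

print(consensus_ordering(PENALTIES))  # -> ['murder', 'assault', 'theft']
```

A real attempt would of course face jurisdictions that disagree, offenses that don't map cleanly across legal systems, and penalties that reflect politics as much as consensus values.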

Thanks for responding, Viliam. I totally agree with you that “if homo sapiens actually had no biological foundations for trust, altruism, and cooperation, then... it would be extremely difficult for our societies to instill such values”.

As you say, we have a blend of values that shift as required by our environment. I appreciate your agreement that it’s not really clear how training an AI on human preferences solves the issue raised here.

Of all the things I have ever discussed in person or online, values are the most challenging. I’ve been interested in human... (read more)

TL;DR Watch this video ...

or read the list of summary points for the book here

https://medium.com/steveglaveski/book-summary-21-lessons-for-the-21st-century-by-yuval-noah-harari-73722006805a

If you don't know who this guy is, he is a historian who writes about the future (among other things).

I'm 68 and retired. I've seen some changes. Investing in companies like Xerox and Kodak would have made sense early in my career. Would have been a bad idea in the long run. The companies that would have made sense to invest in didn't exist yet. 

I started my I... (read more)

“…human preferences/values/needs/desires/goals/etc. is a necessary but not sufficient condition for achieving alignment.”

I have to agree with you in this regard, and with most of your other points. My concern, however, is that Stuart’s communications give the impression that the preferences approach addresses the problem of AI learning things we consider bad, when in fact it doesn’t.

The model of AI learning our preferences by observing our behavior and then proceeding with uncertainty makes sense to me. However, just as Asimov’s robot characters eventually de... (read more)

Stuart does say something along the same lines as your point in a later chapter; however, I felt it detracted from his idea of three principles (a toy sketch of this setup follows the list):

   1. The machine's only objective is to maximize the realization of human preferences.

   2. The machine is initially uncertain about what those preferences are.

   3. The ultimate source of information about human preferences is human behavior.
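For what it’s worth, the three principles can be read as a tiny Bayesian learning loop. Below is a minimal sketch under toy assumptions: a made-up domain of drink choices, a 90%-rational model of human choice, and a uniform prior over preference orderings. None of these specifics come from Russell’s book.

```python
# Toy sketch of the three principles as Bayesian preference learning.
from itertools import permutations

OPTIONS = ["coffee", "tea", "water"]

def normalize(dist):
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

# Principle 2: start uniformly uncertain over all preference orderings.
hypotheses = normalize({perm: 1.0 for perm in permutations(OPTIONS)})

def likelihood(ordering, chosen, offered):
    """P(human chose `chosen` from `offered` | true ordering).
    Assumes the human picks their top-ranked offered option 90% of the time."""
    ranked = [o for o in ordering if o in offered]
    return 0.9 if chosen == ranked[0] else 0.1 / (len(offered) - 1)

def observe(dist, chosen, offered):
    # Principle 3: beliefs are updated only from observed human behavior.
    return normalize({h: p * likelihood(h, chosen, offered)
                      for h, p in dist.items()})

def best_action(dist):
    # Principle 1: act to maximize expected preference satisfaction.
    def expected_score(option):
        return sum(p * (len(OPTIONS) - h.index(option))
                   for h, p in dist.items())
    return max(OPTIONS, key=expected_score)

# The human twice picks tea when something else was also available.
for chosen, offered in [("tea", ["coffee", "tea"]), ("tea", ["tea", "water"])]:
    hypotheses = observe(hypotheses, chosen, offered)

print(best_action(hypotheses))  # -> 'tea', with residual uncertainty retained
```

Even in this toy version, principle #3 does all the work: every belief update flows through observed behavior, which is exactly where the qualifications and special cases pile up.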

He goes on at such length to qualify and add special cases that the word “ultimate” in principle #3 seems to have been a poor choice b... (read more)

deepthoughtlife
That does sound problematic for his views, if he actually holds these positions. I am not really familiar with him, even though he did write the textbook for my class on AI (third edition) back when I was in college. At that point there wasn't much on the now-current techniques, and I don't remember him talking about this sort of thing (though we might simply have skipped such a section).

You could consider it that we have preferences about our preferences too. It's a bit too self-referential, but that's actually a key part of being a person. You could determine the things that we consider to be 'right' directly from how we act when knowingly pursuing those objectives, though this requires much more insight. You're right, the debate will keep going on in philosophical style, but whether or not it works as an approach for something different from humans could change that.

Reading your response, I have to agree with you. I painted with too broad a brush there. Just because I don’t use elements the general public enjoys in my stories about benevolent AI doesn’t mean that’s the only way it can or has to be done.

Thinking about it now I’m sure stories could be written where there is plenty of action, conflict and romance, while also showing what getting alignment right would look like.

Thanks for raising this point. I think it’s an important clarification regarding the larger issue.