For the past two-plus years I’ve been writing hard science fiction novellas and vignettes about social robots called Companions. They are set in this century and the next two.
After a thirty-year career in IT I am now retired; I write as a hobby and self-publish. I try to write 300–1,000 words per day and have written seven novellas and forty vignettes.
About ten years ago, as a way to address my own social isolation at the time, I also researched and created a social activities group, which I ran successfully for two years. Info is here…
I think you raise a very valid point, and I would suggest that it will need to be addressed on multiple levels. Do not expect any technical details here, as I am not an academic but a retired person who writes hard science fiction about social robots as a hobby.
With regard to your statement, “We don't have enough data to be sure this should be regulated,” I assume you are referring to technical aspects of AI. But when it comes to human behavior we have more than enough data: humans will pursue the potential of AI to exploit relationships in every way they can an...
Yes, I agree that AI will show us a great deal about ourselves. For that reason I am interested in neurological differences in humans that AI might reflect, and I often include these in my short stories.
In response to your last paragraph: while most science fiction does portray enforced social order as bad, I do not. I take the benevolent view of AI and see it as an aspect of the civilizing role of society, along with its institutions and laws. Parents impose social order on their children with benevolent intent.
As you have pointed out, if we have alignment then “...
I have been writing hard science fiction stories where this issue is key for over two years now. I’m retired after a thirty-year career in IT, and my hobby of writing is my full-time “job” now. Most of that time is spent on research into AI or other subjects related to the particular stories.
One of the things I have noticed over that time is that those who talk about the alignment problem rarely talk about the point you raise. It is glossed over and taken as self-evident, while I have found that the subject of values appears to be at least as complex as genetics (...
When In Rome
Thank you for posting this, Geoffrey. I myself have recently been considering posting the question, “Aligned with which values, exactly?”
TL;DR: Could an AI be trained to deduce a default set and system of human values by reviewing all human constitutions, laws, policies, and regulations, in the manner of AlphaGo?
I come at this from a very different angle than you do. I am not an academic; rather, I am retired after a thirty-year career in IT systems management at the national and provincial (Canada) levels.
Aside from my career my lifelong personal...
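To make the TL;DR question above a bit more concrete: I’m not in a position to build such a system, but here is a toy sketch in Python of what one small first step might look like. It is entirely my own assumption, and nothing like AlphaGo’s actual self-play training: cluster value-laden clauses from legal texts to surface recurring themes that might seed a default value set. The clauses below are invented stand-ins, not quotations from any real constitution.

```python
# Toy sketch: surface candidate "value" themes by clustering clauses
# from constitutions/laws. The clauses here are invented stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

clauses = [
    "Everyone has the right to life, liberty and security of the person.",
    "No one shall be subjected to torture or to cruel or degrading treatment.",
    "Everyone has the right to freedom of thought, belief and expression.",
    "Every individual is equal before and under the law.",
    "Everyone has the right to privacy in their home and communications.",
    "No one shall be arbitrarily detained or imprisoned.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(clauses)

# Group the clauses into a handful of candidate value themes.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[::-1][:4]]
    print(f"theme {i}: {', '.join(top)}")
```

Clustering is obviously a long way from deducing a *system* of values; it only hints at how recurring themes might be pulled out of the corpus before anything AlphaGo-like could be attempted.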
Thanks for responding, Viliam. Totally agree with you that “if homo sapiens actually had no biological foundations for trust, altruism, and cooperation, then... it would be extremely difficult for our societies to instill such values”.
As you say, we have a blend of values that shift as required by our environment. I appreciate your agreement that it’s not really clear how training an AI on human preferences solves the issue raised here.
Of all the things I have ever discussed, in person or online, values are the most challenging. I’ve been interested in human...
TL;DR: Watch this video ...
or read the list of summary points for the book here
If you don't know who this guy is, he is a historian who writes about the future (among other things).
I'm 68 and retired. I've seen some changes. Investing in companies like Xerox and Kodak would have made sense early in my career; it would have been a bad idea in the long run. The companies that would have made sense to invest in didn't exist yet.
I started my I...
“…human preferences/values/needs/desires/goals/etc. is a necessary but not sufficient condition for achieving alignment.”
I have to agree with you in this regard, and with most of your other points. My concern, however, is that Stuart’s communications give the impression that the preferences approach addresses the problem of AI learning things we consider bad, when in fact it doesn’t.
The model of an AI learning our preferences by observing our behavior and then proceeding with uncertainty makes sense to me. However, just as Asimov’s robot characters eventually de...
Stuart does say something along the lines you point out in a later chapter; however, I felt it detracted from his idea of three principles:
1. The machine's only objective is to maximize the realization of human preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preferences is human behavior.
He goes on at such length to qualify and add special cases that the word “ultimate” in principle #3 seems to have been a poor choice b...
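To make my reading of the three principles concrete, here is a toy sketch in Python. It is my own simplification, not the CIRL formalism behind the book: the machine starts uncertain over a few candidate preference profiles (principle 2), updates its belief from observed human choices (principle 3), and then acts to maximize expected preference satisfaction (principle 1).

```python
# Toy sketch of the three principles as Bayesian preference learning.
# My own simplification, not the formalism from Human Compatible.
import random

options = ["tea", "coffee", "water"]

# Principle 2: start uncertain, with a uniform belief over candidate profiles.
candidates = [
    {"tea": 1.0, "coffee": 0.2, "water": 0.5},
    {"tea": 0.2, "coffee": 1.0, "water": 0.5},
    {"tea": 0.4, "coffee": 0.4, "water": 1.0},
]
belief = [1 / len(candidates)] * len(candidates)

def human_choice(prefs):
    # Principle 3: the human's (noisy) behavior is the information source.
    return random.choices(options, weights=[prefs[o] for o in options])[0]

true_prefs = candidates[1]  # hidden from the machine
for _ in range(50):
    obs = human_choice(true_prefs)
    # Bayesian update: how likely was this choice under each candidate?
    likelihood = [c[obs] / sum(c.values()) for c in candidates]
    posterior = [b * l for b, l in zip(belief, likelihood)]
    belief = [p / sum(posterior) for p in posterior]

# Principle 1: act to maximize expected human preference under the belief.
expected = {o: sum(b * c[o] for b, c in zip(belief, candidates))
            for o in options}
print("belief:", [round(b, 2) for b in belief])
print("machine serves:", max(expected, key=expected.get))
```

Even in this toy, “ultimate” does real work: if behavior is the only source of information, the machine can never learn more about our preferences than our behavior reveals.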
Reading your response, I have to agree with you. I painted with too broad a brush there. Just because I don’t use elements the general public enjoys in my stories about benevolent AI doesn’t mean that’s the only way it can or has to be done.
Thinking about it now, I’m sure stories could be written where there is plenty of action, conflict, and romance, while also showing what getting alignment right would look like.
Thanks for raising this point. I think it’s an important clarification regarding the larger issue.
I’ll be seventy later this year, so I don’t worry much about “the future” for myself or how I should live my life differently. I’ve got some grandkids, though, and as far as my advice to them goes, I tell their mom that the trades will be safer than clerical or white-collar jobs because robotics will lag behind AI. Sure, you can teach an AI to do manual labor, like, say, brain surgery, but it’s not going to be making house calls. Creating a robotic plumber would be a massive investment and so is not likely to happen. In my humble opinion.
Of course, this assumes the wo...