Joi Ito said several things that are unpleasant but probably believed by most people, so I am glad for the reminder.
JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.
Yes, you would expect non-white, older women who are less comfortable talking to computers to be better suited to dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!
ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.
I should probably get a daily reminder that most people would not, in fact, want their kid to be as smart, impactful, and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.
I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values", because I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One implication of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another implication is that my view approximates to "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.
Some people think that any value, if it is the only value, naturally tries to consume all available resources. Even if you explicitly make a satisficing, non-maximizing value (e.g. "make 1000 paperclips", not just "make paperclips"), a rational agent pursuing that value may consume infinite resources making more paperclips just in case it's somehow wrong about already having made 1000 of them, or in case some of the ones it has made are destroyed.
On this view, all values need to be able to trade off against one another (which implies a common quantitative utility measurement). Even if it seems obvious that the chance you're wrong about having made 1000 paperclips is very small, and that you shouldn't invest more resources in that instead of working on your next value, this needs to be explicit and quantified.
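A toy sketch of what "explicit and quantified" could look like. All the numbers here (the agent's credence, the utilities, the cost) are hypothetical illustrations, not anything from the original argument: the agent compares the expected utility of re-verifying its paperclip count against that of moving on to its next value.

```python
# Toy expected-utility comparison (all numbers are made-up illustrations):
# should a satisficing agent spend more resources confirming its goal,
# or move on to its next value?

P_DONE = 0.999       # agent's credence that the 1000 paperclips really exist
U_GOAL = 1000.0      # utility if the paperclip goal is in fact met
COST_CHECK = 5.0     # resource cost of re-verifying / making spares
U_NEXT_VALUE = 50.0  # utility of spending those resources on the next value

# Re-checking: the goal is secured either way, minus the checking cost.
eu_check = U_GOAL - COST_CHECK

# Moving on: the goal is met only with probability P_DONE,
# but the agent also gains the value of its next project.
eu_move_on = P_DONE * U_GOAL + U_NEXT_VALUE

best = "move on" if eu_move_on > eu_check else "keep checking"
print(best)
```

With these particular numbers the tradeoff favors moving on; shrink `U_NEXT_VALUE` or `P_DONE` enough and the same arithmetic tells the agent to keep checking forever, which is the failure mode the previous paragraph describes.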
In this case, since all values inherently conflict with one another, all decisions (between actions that would serve different values) are moral decisions in your terms. I think this is a good intuition pump for why some people think all actions and all decisions are necessarily moral.