[Policymakers & ML researchers]
Expecting AI to automatically care about humanity is like expecting a man to automatically care about a rock. Just as the man only cares about the rock insofar as it can help him achieve his goals, the AI only cares about humanity insofar as it can help it achieve its goals. If we want an AI to care about humanity, we must program it to do so. AI safety is about making sure we get this programming right. We may only get one chance.
[Policymakers & ML researchers]
AI safety is about developing an AI that understands not what we say, but what we mean. And it’s about doing so without relying on the things that we take for granted in inter-human communication: shared evolutionary history, shared experiences, and shared values. If we fail, a powerful AI could decide to maximize the number of people that see an ad by ensuring that ad is all that people see. AI could decide to reduce deaths by reducing births. AI could decide to end world hunger by ending the world.
(The first line is a slightly tweaked version of a different post by Linda Linsefors, so credit to her for that part.)
[Policymakers & ML researchers]
"There isn’t any spark of compassion that automatically imbues computers with respect for other sentients once they cross a certain capability threshold. If you want compassion, you have to program it in" (Nate Soares). Given that we can't agree on whether a straw has two holes or one...We should probably start thinking about how to program compassion into a computer.
[Policymakers & ML researchers]
Expecting AI to know what is best for humans is like expecting your microwave to know how to ride a bike.
[Insert call to action]
-
Expecting AI to want what is best for humans is like expecting your calculator to have a preference for jazz.
[Insert call to action]
(I could imagine a series riffing on this structure / theme)
[Policymakers & ML researchers]
A virus doesn't need to explain itself before it destroys us. Neither does AI.
A meteor doesn't need to warn us before it destroys us. Neither does AI.
An atomic bomb doesn't need to understand us in order to destroy us. Neither does AI.
A supervolcano doesn't need to think like us in order to destroy us. Neither does AI.
(I could imagine a series riffing on this structure / theme)
[Policymakers & ML researchers]
“If a distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong” (Arthur Clarke). In the case of AI, the distinguished scientists are saying not just that something is possible, but that it is probable. Let's listen to them.
[Insert call to action]
[Policymakers & ML researchers]
“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings” (Elon Musk).
[Insert call to action]
[Tech executives]
If you could not fund that initiative that could turn us all into paperclips...that'd be great.
[Insert call to action]
--
If you could not launch the project that could raise the AI kraken...that'd be great.
[Insert call to action]
--
If you could not build the bot that will treat us the way we treat ants...that'd be great.
[Insert call to action]
--
(I could imagine a series riffing on this structure / theme)
[ML researchers]
Given that we can't agree on whether a hotdog is a sandwich or not...We should probably start thinking about how to tell a computer what is right and wrong.
[Insert call to action on support / funding for AI governance / regulation etc.]
-
Given that we can't agree on whether a straw has two holes or one...We should probably start thinking about how to explain good and evil to a computer.
[Insert call to action on support / funding for AI governance / regulation etc.]
(I could imagine a series riffing on this structure / theme)
[Policymakers]
"If we imagine a space in which all possible minds can be represented, we must imagine all human minds as constituting a small and fairly tight cluster within that space. The personality differences between Hannah Arendt and Benny Hill might seem vast to us, but this is because the scale bar in our intuitive judgment is calibrated on the existing human distribution. In the wider space of all logical possibilities, these two personalities are close neighbors. In terms of neural architecture, at least, Ms. Arendt and Mr. Hill are nearly identical" (Nick Bostrom).