jcp29

[Policy makers & ML researchers]
Expecting AI to automatically care about humanity is like expecting a man to automatically care about a rock. Just as the man only cares about the rock insofar as it can help him achieve his goals, the AI only cares about humanity insofar as it can help it achieve its goals. If we want an AI to care about humanity, we must program it to do so. AI safety is about making sure we get this programming right. We may only get one chance.
[Policy makers & ML researchers]
Our goal is human flourishing. AI’s job is to stop at nothing to accomplish its understanding of our goal. AI safety is about making sure we’re really good at explaining ourselves.
[Policy makers & ML researchers]
AI safety is about developing an AI that understands not what we say, but what we mean. And it’s about doing so without relying on the things that we take for granted in inter-human communication: shared evolutionary history, shared experiences, and shared values. If we fail, a powerful AI could decide to maximize the number of people that see an ad by ensuring that ad is all that people see. AI could decide to reduce deaths by reducing births. AI could decide to end world hunger by ending the world.
(The first line is a slightly tweaked version of a different post by Linda Linsefors, so credit to her for that part.)
Thanks Trevor - appreciate the support! Right back at you.
[Policy makers & ML researchers]
"There isn’t any spark of compassion that automatically imbues computers with respect for other sentients once they cross a certain capability threshold. If you want compassion, you have to program it in" (Nate Soares). Given that we can't agree on whether a straw has two holes or one...We should probably start thinking about how program compassion into a computer.
[Policy makers & ML researchers]
Expecting AI to know what is best for humans is like expecting your microwave to know how to ride a bike.
[Insert call to action]
-
Expecting AI to want what is best for humans is like expecting your calculator to have a preference for jazz.
[Insert call to action]
(I could imagine a series riffing on this structure / theme.)
Good idea to check Tim Urban's article, Trevor. It seems like he has thought hard on how to make this stuff visual and intuitive and compelling.
Good idea! I could imagine doing something similar with images generated by DALL-E.
[Policy makers]
We don't let companies use toxic chemicals without oversight.
Why let companies use AI without oversight?
[Insert call to action on support / funding for AI governance or regulation]
[Policymakers]
"If we imagine a space in which all possible minds can be represented, we must imagine all human minds as constituting a small and fairly tight cluster within that space. The personality differences between Hannah Arendt and Benny Hill might seem vast to us, but this is because the scale bar in our intuitive judgment is calibrated on the existing human distribution. In the wider space of all logical possibilities, these two personalities are close neighbors. In terms of neural architecture, at least, Ms. Arendt and Mr. Hill are nearly identical. Imagine their brains laying side by side in quiet repose. The differences would appear minor and you would quite readily recognize... (read 364 more words →)