All of jcp29's Comments + Replies

[Policymakers]

"If we imagine a space in which all possible minds can be represented, we must imagine all human minds as constituting a small and fairly tight cluster within that space. The personality differences between Hannah Arendt and Benny Hill might seem vast to us, but this is because the scale bar in our intuitive judgment is calibrated on the existing human distribution. In the wider space of all logical possibilities, these two personalities are close neighbors. In terms of neural architecture, at least, Ms. Arendt and Mr. Hill are nearly identic... (read more)

[Policymakers & ML researchers]

Expecting AI to automatically care about humanity is like expecting a man to automatically care about a rock. Just as the man cares about the rock only insofar as it helps him achieve his goals, the AI cares about humanity only insofar as we help it achieve its goals. If we want an AI to care about humanity, we must program it to do so. AI safety is about making sure we get this programming right. We may only get one chance.
 

[Policymakers & ML researchers]

Our goal is human flourishing. AI’s job is to stop at nothing to accomplish its understanding of our goal. AI safety is about making sure we’re really good at explaining ourselves.

[Policymakers & ML researchers]

AI safety is about developing an AI that understands not what we say, but what we mean. And it’s about doing so without relying on the things that we take for granted in inter-human communication: shared evolutionary history, shared experiences, and shared values. If we fail, a powerful AI could decide to maximize the number of people that see an ad by ensuring that ad is all that people see. AI could decide to reduce deaths by reducing births. AI could decide to end world hunger by ending the world. 

(The first line is a slightly tweaked version of a different post by Linda Linsefors, so credit to her for that part.)

Thanks Trevor - appreciate the support! Right back at you.

[Policymakers & ML researchers]

"There isn’t any spark of compassion that automatically imbues computers with respect for other sentients once they cross a certain capability threshold. If you want compassion, you have to program it in" (Nate Soares). Given that we can't agree on whether a straw has two holes or one...We should probably start thinking about how program compassion into a computer.

1trevor
Quotes from MIRI people are great; they aren't easy to find, unlike EY's one ultra-famous paper from 2006. Please add more MIRI people quotes (I probably will too). Don't give up and keep commenting: this contest has been cut off from most people's visibility, so it needs all the attention and entries it can get.

[Policymakers & ML researchers]

Expecting AI to know what is best for humans is like expecting your microwave to know how to ride a bike.

[Insert call to action]

-

Expecting AI to want what is best for humans is like expecting your calculator to have a preference for jazz.

[Insert call to action]

(I could imagine a series riffing on this structure / theme)

Good idea to check Tim Urban's article, Trevor. It seems like he has thought hard on how to make this stuff visual and intuitive and compelling.

Good idea! I could imagine doing something similar with images generated by DALL-E.

1FinalFormal2
That's a very good idea. I think one limitation of most AI arguments is that they seem to lack urgency: AGI can feel like it's at least a hundred years away, and showing the incredible progress we've already seen might help to negate some of that perception.

[Policymakers]

We don't let companies use toxic chemicals without oversight.

Why let companies use AI without oversight?

[Insert call to action on support / funding for AI governance or regulation]

[Policymakers & ML researchers]

A virus doesn't need to explain itself before it destroys us. Neither does AI.

A meteor doesn't need to warn us before it destroys us. Neither does AI.

An atomic bomb doesn't need to understand us in order to destroy us. Neither does AI.

A supervolcano doesn't need to think like us in order to destroy us. Neither does AI.

(I could imagine a series riffing on this structure / theme)

[Policymakers & ML researchers]

In 1901, the Chief Engineer of the US Navy said “If God had intended that man should fly, He would have given him wings.” And on a windy day in 1903, Orville Wright proved him wrong.

Let's not let AI catch us by surprise.

[Insert call to action]

[Policymakers & ML researchers]

“If a distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong” (Arthur C. Clarke). In the case of AI, the distinguished scientists are saying not just that something is possible, but that it is probable. Let's listen to them.

[Insert call to action]

[Policymakers & ML researchers]

If you aren't worried about AI, then either you believe that we will stop making progress in AI or you believe that code will stop having bugs...which is it?

1trevor
I really like this one but I think it can be made even better, not sure how atm

[Policymakers & ML researchers]

“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings” (Elon Musk).

[Insert call to action]

[Tech executives]

If you could not fund that initiative that could turn us all into paperclips...that'd be great.

[Insert call to action]

--

If you could not launch the project that could raise the AI kraken...that'd be great.

[Insert call to action]

--

If you could not build the bot that will treat us the way we treat ants...that'd be great.

[Insert call to action]

--

(I could imagine a series riffing on this structure / theme)

[ML researchers]

Given that we can't agree on whether a hotdog is a sandwich or not... we should probably start thinking about how to tell a computer what is right and wrong.

[Insert call to action on support / funding for AI governance / regulation etc.]

-

Given that we can't agree on whether a straw has two holes or one... we should probably start thinking about how to explain good and evil to a computer.

[Insert call to action on support / funding for AI governance / regulation etc.]

(I could imagine a series riffing on this structure / theme)

[ML researchers]

"We're in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with" (Sam Harris, 2016 Ted Talk).

1trevor
In my experience, religious symbolism is a losing strategy, especially for policymakers and executives. From their Bayesian perspective, they are best off prejudicing against anything that sounds like a religious cult. It's generally best to avoid imagery of an eldritch abomination that spawns from our #1 most valuable industry and ruins everything forever, everywhere. It would put off even ML researchers, even the old Kurzweilians.