We are in a local optimum of ideas about AI alignment right now. There are a lot of ideas for how to save humanity outside the field of direct AI alignment research. Here are just a few ideas from a 5-minute brainstorm:

  1. Create a YouTube channel. If we spread the ideas of alignment widely enough, we will get regulation of AI development, so we win. We are rationalists, so we can be more convincing than others.
  2. Create a political party. Or change an existing US party from the inside through a series of individual debates to make them want AGI alignment too.
  3. Change the opinions of AI developers through a series of individual debates.
  4. ...hack the US nuclear arsenal and blow up Taiwan, which produces 85% of chips?.. Okay, I'm joking. But if I really believed it were the only way to stop dangerous AI research, I would think about it.
  5. Develop rationality in the hope of finding a way to take over the world in 2 days.
  6. Create an organisation. We can be more effective if we coordinate our actions. It would be nice to create several coordination platforms and distribute responsibilities. We could assign 100 people to try to convince others of the need for AGI alignment, especially those who were initially against it. These people could share their experiences on a dedicated platform. Once they have enough experience, they could start online debates.
  7. [Do you have other ideas about what we can do? It's important.]

Most of my ideas are about spreading information, because we are rationalists and truth is on our side. That alone should be enough, but in addition we have YouTube, so we could win with one good video.

These are just a few example ideas, and not even the best ones. I can't put the best ones in the public domain.

But why does everyone look so demotivated? If they are so sure that AGI will kill us, why don't they consider even the strangest ideas that come to mind? The idea of spreading information about the dangers of unaligned AGI is not so counterintuitive. Why does everyone write articles like MIRI announces new "Death With Dignity" strategy or Raising children on the eve of AI?

Rationalists have founded a lot of startups, so we are talented. We have an advantage over other people because of rationality and an enormous amount of knowledge across different fields. We are a community of some of the smartest people in the world, and we have around 10 years.

So get up and fight!

Edit: another idea/project you can join: Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible


Create a YouTube channel.

Some people are already working on this, e.g. Robert Miles, and some of the videos by Rational Animations.

Create a political party.

I am not saying this is literally impossible, but please consider how many people are already trying to do this, and how many of them actually succeed. Many political causes wish they had their own party in parliament. Many people who desire wealth and power wish they had the ability to control the country. Some of these have thousands of devoted followers. Some of these have tons of money to spend on experts and marketing. And still, most of them fail.

Change the opinions of AI developers through a series of individual debates.

I suppose people are already doing this?

hack the US nuclear arsenal and blow up Taiwan

If someone is working on this, they are probably not going to reply here. But, ignoring the difficulty of the task, it is not clear whether doing so would actually improve our chances. On one hand, yeah, humanity could get a few extra years to figure out alignment. On the other hand, I am afraid that the debate around alignment would be utterly poisoned; for most people, the word "alignment" would start to mean "a dangerous terrorist". So during those extra years there probably wouldn't be a lot of alignment research done.

Develop rationality in the hope of finding a way to take over the world in 2 days.

That was kinda the original plan of Less Wrong, which in hindsight probably seems too optimistic. (Even Putin expected three days to take over Ukraine.)

Create an organisation

Something like MIRI?

truth is on our side

It seems like in the real world this is not very important. (Perhaps if we had sufficiently popular and legal prediction markets...)

Rationalists have founded a lot of startups, so we are talented.

Different tasks require different levels of talent. Compared to saving the world, creating a successful startup is trivial.

Basically, the answer to "why aren't people trying harder?" is that many are already trying harder, for years, some of them for decades, and... well, the predictions are not very optimistic.

If you want to create a popular YouTube channel about alignment, definitely go ahead, and I hope you succeed. But this meta work of yelling at people to work harder is not really useful.

  1. I mostly agree with you.
  2. Thanks for the information about rationalist YouTube channels and the rest. I have updated.
  3. In fact, even if someone is already doing this, I wrote this article to say "it is too early to capitulate". Even if we have only a small chance of surviving, we should do something, not write articles like MIRI announces new "Death With Dignity" strategy and accept defeat. Because if you accept defeat, you will do nothing afterwards and you won't increase our chances of survival (and you would if you didn't accept).

Basically, the answer to "why aren't people trying harder?" is that many are already trying harder, for years, some of them for decades, and... well, the predictions are not very optimistic.

I see, it's an interesting point of view that I hadn't thought about. But it's a bias. Even if fighting makes little sense, considering the importance of space colonisation, everything else makes even less sense. How can you think about "death with dignity" if your actions can increase the probability of a human Milky Way by 0.001%?

If someone is working on this, they are probably not going to reply here. But, ignoring the difficulty of the task, it is not clear whether doing so would actually improve our chances. On one hand, yeah, humanity could get a few extra years to figure out alignment. On the other hand, I am afraid that the debate around alignment would be utterly poisoned; for most people, the word "alignment" would start to mean "a dangerous terrorist". So during those extra years there probably wouldn't be a lot of alignment research done.


OK, that was too radical. But what about "coordinate the actions of 200 people who obtain Taiwanese visas and start working as guards at all the chip fabs and research laboratories. And, if all our other plans fail and humanity is about to go extinct, these 200 people synthesize a lot of nitroglycerin in a garage..."

That's what I mean by the organisation we need. If MIRI did this, I wouldn't know about it, but my intuition says MIRI did not.

If humanity is already about to go extinct, the chips have already been produced, and you need to (also) blow up the data centers.

Thanks, I'll note that.

My factual disagreement:

I suppose people are already doing this?

So do it more, instead of writing articles like "How To Spend The Last 5 Years Of Life".

That was kinda the original plan of Less Wrong, which in hindsight probably seems too optimistic. (Even Putin expected three days to take over Ukraine.)

Continuing this plan is better than nothing (better than accepting defeat). And... good joke.

Something like MIRI?

MIRI is working only on direct alignment, isn't it?

Different tasks require different levels of talent. Compared to saving the world, creating a successful startup is trivial.

Taboo "saving the world". I don't want someone to "save the world", I just want someone to "create the best YouTube channel ever using a boring topic"... OK, maybe that's impossible. But maybe it's possible, who knows.

To save humanity from AI you need to do more than just convince people that saving humanity from AI is important. You actually need a plan to solve the problem of AI alignment.

If you focus too much on convincing other people instead of solving the actual problem, you are unlikely to solve the actual problem.

If videos convince random people, then they will convince a certain number of politicians and AI developers.

If enough people are convinced of the need for AGI alignment, politicians will start promoting AGI alignment in order to get votes.

If we make the videos well, regulation of AI development will be introduced. If we make the videos really well, the government may directly allocate money for alignment research.

Spreading this idea will increase our resources (more people will work on it).

All of those things can happen and the result is still that AI kills humanity.

While, all else being equal, more resources and people are nice, that alone does not solve alignment. Reality does not grade based on the amount of effort you put in.

So even if politicians pass the regulation we need and increase the number of AI alignment researchers, it doesn't increase our chances much?

Why?

The problem is that we don't know what regulations we need to actually achieve the goal. 

There are many ways to increase the number of AI alignment researchers that then lead to them focusing on questions like algorithmic gender and race bias without actually making progress on the key problem.

The problem is that we don't know what regulations we need to actually achieve the goal. 

Would it work to ban all research that increases AI capabilities, except research that brings us closer to alignment? And also ban the creation of AI systems with capability greater than X, with X gradually decreasing.

There are many ways to increase the number of AI alignment researchers that then lead to them focusing on questions like algorithmic gender and race bias without actually making progress on the key problem.

The idea is to create videos that fully describe the goals of AGI alignment, so that viewers would understand the context.

"Will it work?" is a question where we don't really know the answer.

As far as "ban all research that increases AI capabilities, except research that brings us closer to alignment" goes, that's not something you can write into a law. A law needs a mechanism. It needs definitions of what research is allowed and what isn't.

Also, laws by their nature only affect one country.

I agree that there are pitfalls, and it will take several attempts for the laws to start working.

If the US government allocates a significant amount of money for (good) AI alignment research in combination with the ban, then our chances will increase from 0% to 25% in a scenario without black swans.

The problem is not whether a law works but whether it does what's needed. If you look at the laws that exist in our society, they usually do something, but at the same time they don't solve problems completely.

Politicians are quite quick to pass a law to "do something", but that does not mean the problem is solved effectively. The more political the debate, the less likely it often is that the law actually does what it is intended to do.

To summarize our discussion:
There may be a way to get the right government action and greatly improve our chances of alignment. But it requires a number of actions, some of which may never have been done by our society before. They may be impossible.
These actions include: 1: learning how to effectively change people's minds with videos (maybe something bordering on dark epistemology); 2: convincing tens of percent of the population of the right memes about alignment via social media (primarily YouTube); 3: changing the minds of interlocutors in political debates (explaining epistemological principles in the introduction to the debate??); 4: using the resulting broad public support to lobby for adequate laws that help alignment.
So we need to allocate a few people to think this option through and see whether we can accomplish each step. If we can, then we should communicate this plan to as many rationalists as possible, so that as many talented video makers as possible can try to implement it.

It's not at all clear that if you convince someone on a superficial level that they should care about AI alignment, that will result in the right actions. On the other hand, thinking on that level can be quite corrosive for your own understanding. The soldier mindset is not useful for thinking about efficient mechanisms. 

Our discussion looks like:

Me: we can do X, which means doing X1, X2 and X3.

You: we could fail at X2 because of Y.

Do you mean "we should think about Y before carrying out plan X" or "plan X will definitely fail because of Y"?


A question to better understand your opinion: if the whole alignment community tried to carry out the Political Plan with all the effort they currently put into aligning AI directly, what do you think the probability of successful alignment would be?

Basically, you are saying "we can do X and I hope it will do A, B and C" without any regard for the real-world consequences.

A question to better understand your opinion: if the whole alignment community tried to carry out the Political Plan with all the effort they currently put into aligning AI directly, what do you think the probability of successful alignment would be?

It will likely go down, as engaging in politics is mind-killing and it's important to think clearly to achieve AI alignment.