Crazy philosopher

16 years old, I'm interested in AI alignment, rationality & philosophy, economics, and politics.

Comments

Eliezer Yudkowsky is trying to prevent the creation of recursively self-improved AGI because he doesn't want competitors.

So if one day you decided that P(X) ≈ 1, you would remember it as "it's true, but I'm not sure" after one year?

Coral should try to be a white-hat hacker for Mr. Topaz's company. Mr. Topaz would agree, because Coral can say that if she doesn't succeed she won't take any money, so he loses nothing. After a few times when Coral hacks all the drones' software within an hour of the presentation of their new version, Mr. Topaz would understand that security is important.

Can you tell us what exactly led to the "something" explosion? Did something change in your life beforehand?

Our discussion looks like:

Me: we can do X, which means doing X1, X2, and X3.

You: we could fail at X2 in way Y.

Do you mean "we should think about Y before carrying out plan X" or "plan X will definitely fail because of Y"?

 

A question to better understand your opinion: if the whole alignment community tried to realize the Political Plan with all the effort they now put into aligning AI directly, what do you think the probability of successful alignment would be?

To summarize our discussion:
There may be a way to get the right government action and greatly improve our chances of alignment. But it requires a number of steps, some of which may never have been done by our society before; they may be impossible.
These steps include: 1) learning how to effectively change people's minds with videos (maybe something bordering on dark epistemology); 2) convincing tens of percent of the population of the right memes about alignment via social media (primarily YouTube); 3) changing the minds of interlocutors in political debates (by stating epistemological principles in the introduction to the debate??); 4) using that broad public support to lobby for adequate laws that help alignment.
So we need to allocate a few people to think this option through and see whether each step can be accomplished. If it can, we should then communicate the plan to as many rationalists as possible, so that as many talented video makers as possible can try to implement it.

I agree that there are pitfalls, and it will take several attempts for the laws to start working.

If the US government allocates a significant amount of money for (good) AI alignment research in combination with the ban, then our chances will increase from 0% to 25% in a scenario without black swans.

The problem is that we don't know what regulations we need to actually achieve the goal. 

Will it work to ban all research aimed at increasing AI capabilities, except research that brings us closer to alignment? And also to ban the creation of AI systems with capabilities greater than X, with X gradually decreasing?

There are many ways to increase the number of AI alignment researchers that end up with those researchers focusing on questions like algorithmic gender and race bias without actually making progress on the key problem.

The idea is to create videos that fully describe the goals of AGI alignment, so that viewers understand the context.
 

I don't understand the specific mechanism that makes us need rest days. I don't see the gears.

So even if politicians pass the regulation we need and increase the number of AI alignment researchers, it doesn't increase our chances much?

Why?
