All of knowsnothing's Comments + Replies

3 johnswentworth
As I mentioned at the end, it's not particularly relevant to my own models either way, so I don't particularly care. But I do think other people should want to run this experiment, based on their stated models.

Thank you for doing this. Would you mind if this is added to the Misalignment Database?

" For the most part, Roko's posts not only fail to engage with any scientific literature on the subject, but employ an extremely naive and ultimately misleading model that does not hold up to empirical and theoretical scrutiny. "

That can be applied generally.

Been doing this. Reading less. Writing a LOT less. Memory has improved a lot.

The alienation is something I felt for a bit, until I started working on my project and working with folks, talking to folks, etc. Also, I've been very pleasantly surprised by how receptive non-AI/non-tech folks are when talking to them about AI risk, as long as it's framed in a down-to-earth, relatable manner, introduced organically, etc.

5 Igor Ivanov
Thanks for sharing your experience. My experience is that talking with people outside AI safety is similar to conversations about global warming: if someone tells me about it, I agree it's an important issue, but I honestly don't invest much effort in fighting it. This is my experience, and yours might be different.

I think a lot of people think Sydney/Bing Chat is GPT-4.

Humans can manipulate animals and make them do what they want. So could an AI.

4 the gears to ascension
then we must end loneliness...
-2 Nathan Helm-Burger
Oh, wonderful, all we have to do is make sure that nobody in charge of the dangerously powerful future AI is ever... lonely or otherwise emotionally vulnerable enough to be temporarily deceived and thus make a terrible error that can't be taken back. Um, I hope your comment was just sarcasm in poor taste and not actually a statement about why you are hopeful that nothing is going to go wrong.

Or they're desperate. And/or don't trust themselves to be ok if the idea fails.

"mistrust" is not the best approach here. Mistrusting yourself or your ideas can lead to misery and feeling lackadaisical. Could lead to lacking motivation to pursue an idea as hard as you otherwise might.

"Openness" is a better idea imo. Openness to the idea failing or taking some adjustment to reach success, openness to it succeeding as well. Looking at ideas not just as a way to achieve success, but test the view you have on the world, a way to learn something new about the world through testing it and working on it.

Is trying to reduce internet usage, and thereby the amount of data AI companies have to work with, at all feasible?

1 Super AGI
Reducing internet usage and limiting the amount of data available to AI companies might seem like a feasible approach to regulating AI development. However, implementing such measures would likely face several obstacles, e.g.:

* AI companies purchase internet access like any other user, which makes it challenging to target them specifically for data reduction without affecting other users. One potential mechanism could involve establishing regulatory frameworks that limit the collection, storage, and usage of data by AI companies. However, these restrictions might inadvertently affect other industries that rely on data processing and analytics.
* A significant portion of the data utilized by AI companies is derived from open-source resources like Common Crawl and WebText2. These companies have normally already acquired copies of this data for local use, meaning that limiting internet usage would not directly impact their access to these datasets.
* If any country were to pass a law limiting the network data available to AI companies, those companies would likely relocate to other countries with more lenient regulations. This would render such policies ineffective on a global scale, while potentially harming the domestic economy and innovation in the country implementing the restrictions.

In summary, while the idea of reducing the amount of data AI companies have to work with might appear feasible, practical implementation faces significant hurdles. A more effective approach to regulating AI development could involve establishing international standards and ethical guidelines, fostering transparency in AI research, and promoting cross-sector collaboration among stakeholders. This would help ensure the responsible and beneficial growth of AI technologies without hindering innovation and progress.
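To illustrate the second point about open data: here is a minimal sketch, assuming the public Common Crawl CDX index API (the crawl ID and target URL below are purely illustrative), showing that anyone can query the same index these companies draw from, no special access required.

```python
# Minimal sketch: query the public Common Crawl CDX index server.
# Assumptions: the index.commoncrawl.org CDX API is reachable, and the
# crawl ID below exists; real crawl IDs follow the CC-MAIN-YYYY-WW pattern
# and are listed at https://index.commoncrawl.org/
import requests

CRAWL_ID = "CC-MAIN-2023-14"  # illustrative crawl ID
TARGET_URL = "example.com"    # illustrative lookup target

resp = requests.get(
    f"https://index.commoncrawl.org/{CRAWL_ID}-index",
    params={"url": TARGET_URL, "output": "json"},
    timeout=30,
)
resp.raise_for_status()

# The response is newline-delimited JSON; each line describes one
# archived capture of the target URL in that crawl.
for line in resp.text.splitlines():
    print(line)
```

Since the underlying archives are openly downloadable in bulk, any policy that throttles live internet access does little to restrict a company that has already mirrored these datasets locally.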