The people who make general artificial intelligence models believe that these models could mean the end of mankind. As an example of how that could happen, a future version of ChatGPT might be so smart that the advanced chatbot could create enormous danger. It could disseminate information like how to...
I'd love to explain but sadly I was just told not to
Estimating the probability of the moonrise problem is impossible, and a system failure would be world-ending. The US and China recently agreed that humans, not AI, should control nuclear weapons, and I sleep better at night for it. Under what circumstances would you choose a machine over humans for this?
https://www.reuters.com/world/biden-xi-agreed-that-humans-not-ai-should-control-nuclear-weapons-white-house-2024-11-16/
Utopians are on their way to ending life on earth because they don't understand that iterated x-risk leads to x: take a small chance of extinction enough times and it eventually comes up.
What do you think is realistic if alignment is possible? Would the large corporations build a loving machine, or a machine aligned with money and themselves?
Did you use EFA to conclude that EFA is the worst, common bad argument?
How would this work with European airlines, or airlines from countries where credit card payments are much less common?
What if you're wrong?
The effect is hard, if not impossible, to determine, but the Netherlands has one of the lowest unemployment rates in Europe.
https://en.m.wikipedia.org/wiki/List_of_sovereign_states_in_Europe_by_unemployment_rate
Guido has already protested repeatedly, and even been arrested multiple times, in front of OA. I don't know his exact reasons for choosing Anthropic now, but spreading the protests over the different actors makes sense to me.
People also asked the same kind of 'why not ...' question when he and others repeatedly protested OA. In the end, whatever reasons there may be to go somewhere else, you can only be in one place.
There is now one hunger striker in front of Anthropic and two in front of Google DeepMind.
https://x.com/DSheremet_/status/1964749851490406546
The people who make general artificial intelligence models believe that these models could mean the end of mankind. As an example of how that could happen, a future version of ChatGPT might be so smart that the advanced chatbot could create enormous danger. It could disseminate information like how to easily build weapons of mass destruction, or even just build those weapons itself. For instance, after ChatGPT was first released, many people used it as the basis for agents that could roam the internet by themselves. One, called ChaosGPT, was tasked with creating a plan for taking over the world. The idea was funny, but would have been less funny if the...
Could you show some examples and/or say how you come up with a comment that gets a lot of likes?