Good point. I don't personally think that knowing there is a possibility you will die, without being able to do anything to reverse course, adds any value — unless you mean a worldwide social revolt against all nations to stop the AI labs?
But how do we get this message across? The fact that it appears only in obscure LW forums reinforces the point of my article that not enough is being done.
Why would something that you admit is temporary (for now) matter on an exponential curve? It's like saying it's okay to go out for drinks on March 5, 2020. Sure, but ten days later it wasn't. An argument must hold for a very long period of time, or it's better left unsaid. And that is the best argument for why we should be cautious: a) we don't know for sure, and b) things change extremely fast.
Sure thing, you can contact me. In Greek we use several Spanish words in their original form, and I was wondering whether that's because we had many shipping ties with Spain, and whether it was the marineros (sailors) who brought them over. Words like timón, barca, galleta, etc.
I actually find my Latin American friends easier to understand. They do use some unfamiliar words, but they speak much more slowly, especially compared to the Andalusians.
Great article. I hope you realize your startup research/idea. One comment: I think the salaries derail the whole budget plan. From what I've seen of the startup world I've been involved in, founders make big sacrifices to get their thing going, in return for a large equity stake in the startup they believe will someday become a unicorn.
Regardless of content, I would say that I, along with what I suspect is the majority of people, have a natural aversion to titles starting with "No." It is confrontational and shows that the author has a strong conviction about something that is clearly not binary, and wants to shove the negative word in your face right from the start. I would urge everyone to refrain from titles like that.
Has anyone seen MI7? I guess Tom is not the most popular guy on this forum, but the storyline of a rogue AI as presented (within the limits of a Mission: Impossible blockbuster) sounds not only plausible but also like a great story for raising awareness among crowds about the dangers. It shows the inability of governments to stop it (although obviously it will be stopped in the upcoming movie), their eagerness to control it in order to rule the world while the AI just wants to bring chaos (or does it have an ultimate goal?), and also how some humans will align with and obey it, regardless of whether it takes them to their own doom too. Thoughts?
"I suspect that AGI is decades away at minimum." Can you talk more about this? I mean, if I were to say something against the general scientific consensus — which is a bit blurry right now, but certainly most of the signatories of the latest statements do not think it's that far away — I would need to consider myself at least at the level of Bengio, Hinton, or at least Andrew Ng. How can someone who is not remotely as accomplished as all the labs producing the AI we're talking about speculate contrary to their consensus? I am really curious.
Another example w...
After Hinton's and Bengio's articles, which I consider a moment in history, I struggle to understand how most people in tech dismiss them. If Einstein had written an article about the dangers of nuclear weapons in 1939, you wouldn't have had people without a physics background saying "nah, I don't understand how such a powerful explosion could happen." Hacker News is supposed to be the place for developers, startups, and such, and you can see comments there that make me despair. They range from "alarmism is boring" to "I have programmed MySQL databases and I know tech, and this can't happen." I wonder how much I should update my view on the intelligence and biases of humans right now.
I think the Stoics (Seneca's letters, Marcus Aurelius's Meditations) talk a lot about how to live in the moment while awaiting probable death. The classic psychology book The Denial of Death would also be relevant. I guess The Myth of Sisyphus would be too, but I haven't read it yet. The Metamorphosis of Prime Intellect is also a very interesting book, discussing how mortality can be preferable to immortality, and so on.
What is the duration of P(doom)?
What do people mean by that metric? Is it x-risk for the century? Forever? For the next 10 years? Until we figure out AGI, or after AGI, on the road to superintelligence?
To me these are fundamentally different, because P(doom) forever must be much higher than P(doom) over the next 10-20 years. Or is the implication that surviving the next period means we have figured out alignment eternally, for all next-generation AIs? It's confusing.
No. 2 is much more important than academic ML researchers, who make up the majority of the surveys done. When someone delivers a product, is the only one building it, and tells you X, you should believe X unless there is a super strong argument to the contrary, and there just isn't one.