About BOINC
Has anyone here done any volunteer computing using the Berkeley Open Infrastructure for Network Computing (BOINC) software? If so, which project did you choose, and why?
Is the Jeffrey Epstein who donated $50,000 to SIAI the same person as the now-famous one? PS: I realize this question is off-topic for our forum, but I could not control my curiosity.
Imagine that research into creating a provably Friendly AI fails. At some point in the 2020s or 2030s it seems that the creation of UFAI is imminent. What measures then could the AI Safety community take?
This is not the first time people have tried to stop an economic process by campaigning, and it will not be the last time such a campaign fails.
The development of AI is a race.
The accelerationist and the doomer souls living in one body apparently.
How about coordinated legal action to have chatbots banned, as ChatGPT was banned in Italy? Is that possible?
If successful, this would weaken the incentive to develop new AI models.
According to one survey, 7% of Nigerian Christians view ISIS favorably. That is another example of this phenomenon.
Q&A sessions.
Some people in this community are thinking outside the box. I fully support this; all options must be on the table.
However, one cannot make universal statements. The efficacy of violent and nonviolent methods depends on the exact context. For someone who believes in an imminent hard takeoff and gives high credence to Doom, violent activity may be rational.
Exactly my thought, but how do we get it done?
Difference in criteria.
It is not actually insane. Sir Winston Churchill also advocated a preventive nuclear war against the USSR in the late 1940s. The rapid rise of world Communism, whether in Europe (including record-high Communist vote shares in Western Europe), in China, Korea, Indochina, or elsewhere, convinced many that the overriding ethical goal was the destruction of the USSR.
Stalin remained a thoroughly criminal despot in this period: the Leningrad Purge, the Doctors' Plot, the Night of the Murdered Poets, and the establishment of Communist dictatorships across Eastern Europe, with their own ruthless purges and repressions.
The USA had the atomic bomb. To its advocates, preventive war seemed the only solution to protect liberty.