André Ferretti

What do AI risks, pandemics, and animal welfare have in common? They're all in my quizzes! Test your knowledge on global health, animal welfare, and existential risks at Quizmanity. Who knew saving the world could be as simple as acing a quiz?

Comments

Thanks for the suggestion! I updated the title to match the original wording.

Hi Connor,

Thank you for taking the time to share your insights. I've updated the post to incorporate your comment. I removed the phrase suggesting a change in beliefs between EleutherAI and Conjecture, and added a paragraph that clarifies EleutherAI's approach to open-sourcing. I also made sure to clearly state that CarperAI is a spinoff of EleutherAI but operates independently.

I appreciate your feedback, and I hope these changes better represent EleutherAI's position.

Hi Chris,

Thank you for your comment. I am not entirely convinced that open-sourcing advanced AI is a good idea; much as with nuclear technology, my preference is for such powerful technologies to remain difficult to access in order to mitigate potential risks.

That being said, I agree that it's important to explore solutions that align with the open source community's strengths, such as violet teaming. I'll consider your input as I continue to refine my thoughts on this matter.

Do you mean confirmation bias, or implicit bias? I cannot find 'bias of no self-skepticism' on Google.

Thanks for the insightful comment! I added these four biases to the Doc:

Single Cause Fallacy
You think there's one cause when there are many.

Attribute Substitution
You answer a hard question by substituting an easier one, e.g. valuing insurance against death by terrorism more highly than insurance against death from any cause.

Introspection Illusion
You confidently but falsely explain the origin of your beliefs.

Status Quo Bias (aka homeostatic prior)
You want things to stay as they are.

Quote for you on inaction bias: "The mistakes that have been most extreme in Berkshire's history are mistakes of omission." -Warren Buffett