Note: I am completely unaffiliated with ARIA. I figured I'd post this since applications are closing soon and I didn't see anyone else post about it. My takeaways: * ARIA is funding the development of Safeguarded AI, which is an update to, and a specific implementation of, davidad's Open Agency Architecture. *...
Summary Language model agents (LMAs) like AutoGPT have promising safety characteristics compared to traditional conceptions of AGI. The LLMs they are composed of plan, think, and act in highly transparent and correctable ways, although not maximally so, and it is unclear whether their safety will increase or decrease in the future....
Background I've been working on a knowledge management/representation system for ~1.5 years with the initial goal of increasing personal and collective intelligence. I discovered that this work could be highly applicable to AI safety several weeks ago through Eric Drexler's work on QNRs, which pertains to knowledge representation, and then...
Hi LessWrong, Two years ago, when I travelled to Belize, I came up with an idea for a self-sufficient, scalable program to address poverty. I saw how many people in Belize were unemployed or being paid very low wages, but I also saw how skilled they were, a result of...
Hi Less Wrong, I've got the opportunity to promote books and other forms of content to a largely teenage audience. I'm looking for some good book recommendations and recommendations for a limited amount of other media (websites, movies, etc) that will spread awareness of positive ideas and issues in the...
Article Prerequisite: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality Introduction The goal of this post is to explore the idea of rationality training; feedback and ideas are greatly appreciated. Less Wrong's stated mission is to help people become more rational, and it has made progress toward that...