By 'bedrock liberal principles', I mean things like: respect for individual liberties and property rights, respect for the rule of law and equal treatment under the law, and a widespread / consensus belief that authority and legitimacy of the state derive from the consent of the governed. Note that "consent...
My sense is that an increasingly common viewpoint around here is that the last ~20 years of AI development and AI x-risk discourse are well-described by the following narrative: > Eliezer Yudkowsky (and various others who were at least initially heavily influenced by his ideas) developed detailed models of key...
Update 2: this is kind of stale given more recent developments and I'm not planning to update it further; probably any further discussion should happen in OpenAI: Facts from a Weekend or other newer posts. Update: un-paywalled article from the Verge: https://www.theverge.com/2023/11/20/23967515/sam-altman-openai-board-fired-new-ceo This is still breaking as of 9:30 PT...
> "There was a threshold crossed somewhere," said the Confessor, "without a single apocalypse to mark it. Fewer wars. Less starvation. Better technology. The economy kept growing. People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from. They came even to...
Max H commented on the dialogues announcement, and we were both interested in having a conversation where we try to do something like "explore the basic case for AI X-risk, without much reference to long-chained existing explanations". I've found conversations like this valuable in the past,...
Introduction Much has been written about the implications and potential safety benefits of building an AGI based on one or more Large Language Models (LLMs). Proponents argue that because LLMs operate on human-readable natural language, they will be easier to understand, control, and scale safely compared to other possible approaches...
Background: this post is a response to a recent post by @Zack_M_Davis, which is itself a response to a comment on another post. I intentionally wrote this post in a way that tries to decontextualize it somewhat from the original comment and its author, but without being at least a...