I have discussions that ignore the future disruptive effects of AI all the time. The national debt is a real problem. Social security will collapse. The environment is deteriorating. You haven't saved enough for your pension. What is my two-year-old going to do when she is twenty? Could Israel...
Weak AGIs Kill Us First, a.k.a. Deadly pre-Superintelligence: Paths, Dangers, Strategies. Epistemic status: This has not been edited or commented on by anyone other than ChatGPT as of publishing; I hope the arguments stand on their own. When people talk about the dangers of AGI, they often refer to the...
Short- and long-term goals have different implications regarding instrumental convergence. If I have the goal of immediately taking a bite of an apple that is in my hand right now, I don't need to gather resources or consider strategies; I can just do it. On the other hand,...
Title from: https://twitter.com/arankomatsuzaki/status/1529278581884432385. This is not quite a linkpost for this paper. Nonetheless, the abstract is: > Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a...