A festival of truth-seeking, optimization, and blogging. We'll have writing workshops, rationality classes, puzzle hunts, and thoughtful conversations across a sprawling fractal campus of nooks and whiteboards.
It is easier to ask than to answer.
That’s my whole point.
It is much cheaper to ask questions than to answer them, so beware of situations where it is implied that asking and answering are equal.
Let's say there is a maths game. I get a minute to ask questions. You get a minute to answer them. If you answer them all correctly, you win; if not, I do. Who will win?
Preregister your answer.
Okay, let's try. These questions took me roughly a minute to come up with.
What's 56,789 * 45,387?
What's the integral from -6 to 5π of sin(x cos^2(x))/tan(x^9) dx?
What's the prime factorisation of 91435293173907507525437560876902107167279548147799415693153?
Good luck. If I understand correctly, that last one's gonna take you at least an hour[1] (or however long it takes to threaten...
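To make the asymmetry concrete, here is a minimal sketch in Python (my own illustration, not from the original post; the prime sizes are deliberately toy-sized so the demo finishes quickly). Composing a factorisation question is a single multiplication; answering it means factoring, and the gap explodes as the primes grow:

```python
# Minimal sketch of the asking/answering asymmetry (illustrative only).
import time
import sympy

# Asking is cheap: even question 1 takes a computer microseconds.
print(56_789 * 45_387)  # 2577482343

# Pick two random ~40-bit primes. Multiplying them (i.e. *asking* the
# factorisation question) is effectively instant.
p = sympy.randprime(2**39, 2**40)
q = sympy.randprime(2**39, 2**40)
n = p * q

# Answering means factoring n. Even at this toy size it is visibly
# slower; at the ~60-digit size in the post it can take hours or more.
start = time.perf_counter()
factors = sympy.factorint(n)
print(f"factored {n} -> {factors} in {time.perf_counter() - start:.3f}s")
```

The same asymmetry is what RSA encryption rests on: multiplying two large primes is trivial, while recovering them from the product has no known efficient classical algorithm.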
Here's an example of a cheap question I just asked on Twitter. Maybe Richard Hanania will find it cheap to answer too, but part of the reason I asked it is that I expect him to find it difficult to answer.
If he can't answer it, he will lose some status. That's probably good - if his position in the OP is genuine and well-informed, he should be able to answer it. The question is sort of "calling his bluff", checking that his implicitly promised reason actually exists.
Co-Authors: @Rocket, @Ryan Kidd, @LauraVaughan, @McKennaFitzgerald, @Christian Smith, @Juan Gil, @Henry Sleight, @Matthew Wearden
The ML Alignment & Theory Scholars program (MATS) is an education and research mentorship program for researchers entering the field of AI safety. This winter, we held the fifth iteration of the MATS program, in which 63 scholars received mentorship from 20 research mentors. In this post, we motivate and explain the elements of the program, evaluate our impact, and identify areas for improving future programs.
Key details about the Winter Program:
Thanks for this (very thorough) answer. I'm especially excited to see that you've reached out to 25 AI gov researchers & already have four governance mentors for summer 2024. (Minor: I think the post mentioned that you plan to have at least 2, but it seems like there are already 4 confirmed and you're open to more; apologies if I misread something though.)
A few quick responses to other stuff:
Fecal Microbiota Transplant (FMT) is a procedure that involves transferring the stool of healthy people to the guts of unhealthy people. The bacteria in the healthy person’s stool help to rebalance the unhealthy person’s dysbiotic (imbalanced) gut microbiome, making their microbiome healthier, disease-resistant, and more youthful. Think of FMTs as a kind of super probiotic to optimize your gut health!
Since the microbiome affects almost all aspects of human health, functioning, and development, FMTs are a promising treatment for a huge variety of health conditions, including multiple sclerosis, ALS, neurodegenerative diseases like Alzheimer's, autism, chronic fatigue syndrome, long Covid, and many more. FMTs from young donors might even have anti-aging effects!
FMTs can easily and safely be done at home without a doctor - both for the donor and recipient.
FMT treatment could...
Authors: David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum
Abstract:
...Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components:
This sounds really intriguing. I would like someone who is familiar with natural abstraction research to comment on this paper.
Ilya Sutskever and Jan Leike have resigned. They led OpenAI's alignment work. Superalignment will now be led by John Schulman, it seems. Jakub Pachocki replaced Sutskever as Chief Scientist.
Reasons are unclear (as usual when safety people leave OpenAI).
The NYT piece and others I've seen don't really have details. Archive of NYT if you want to read it anyway.
OpenAI announced Sutskever's departure in a blogpost.
Pure speculation: The timing of these departures being the day after the big, attention-grabbing GPT-4o release makes me think that there was a fixed date for Ilya and Jan to leave, and OpenAI lined up the release and PR to drown out coverage. Especially in light of Ilya not (apparently) being very involved with GPT-4o.
Caspar Oesterheld came up with two of the most important concepts in my field of work: Evidential Cooperation in Large Worlds and Safe Pareto Improvements. He also came up with a potential implementation of evidential decision theory in boundedly rational agents called decision auctions, wrote a comprehensive review of anthropics and how it interacts with decision theory, which most of my anthropics discussions have built on, and independently decided to work on AI sometime in late 2009 or early 2010.
Needless to say, I have a lot of respect for Caspar’s work. I’ve often felt very confused about what to do in my attempts at conceptual research, so I decided to ask Caspar how he did his research. Below is my writeup from the resulting conversation.
Yes! Edited the main text to make it clear.
In an online discussion elsewhere today, someone linked this article, which in turn linked the paper Gignac & Zajenkowski, The Dunning-Kruger effect is (mostly) a statistical artefact: Valid approaches to testing the hypothesis with individual differences data (PDF) (ironically hosted on @gwern's site).
And I just don't understand what they were thinking.
Let's look at their methodology real quick in section 2.2 (emphasis added):
...2.2.1. Subjectively assessed intelligence
Participants assessed their own intelligence on a scale ranging from 1 to 25 (see Zajenkowski, Stolarski, Maciantowicz, Malesza, & Witowska, 2016). Five groups of five columns were labelled as very low, low, average, high or very high, respectively (see Fig. S1). Participants' SAIQ was indexed with the marked column counting from the first to the left; thus, the scores ranged from 1 to 25.
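For readers wondering what "statistical artefact" means here: the paper's core claim is that the classic Dunning-Kruger plot falls out of regression to the mean whenever self-assessment and measured ability are imperfectly correlated. A minimal simulation (my own sketch, not code from the paper; the r = 0.3 correlation is an assumed, typical value) reproduces the pattern with no psychology built in:

```python
# Hedged simulation of the "statistical artefact" reading of Dunning-Kruger:
# with only a weak ability/self-assessment correlation, regression to the
# mean alone makes low scorers look overconfident and high scorers look
# underconfident.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(100, 15, n)  # "true" IQ-like scores

# Self-assessment correlates with ability at only r = 0.3 (assumed value).
r = 0.3
self_assessed = 100 + 15 * (r * (ability - 100) / 15
                            + np.sqrt(1 - r**2) * rng.normal(0, 1, n))

# Bin by actual-ability quartile, as Dunning-Kruger-style plots do.
quartile = np.digitize(ability, np.quantile(ability, [0.25, 0.5, 0.75]))
for q in range(4):
    mask = quartile == q
    print(f"quartile {q + 1}: actual {ability[mask].mean():6.1f}, "
          f"self-assessed {self_assessed[mask].mean():6.1f}")
# The familiar pattern appears: the bottom quartile "overestimates" and the
# top quartile "underestimates", purely from imperfect correlation.
```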
This is the first in a sequence of four posts taken from my recent report: Why Did Environmentalism Become Partisan?
In the United States, environmentalism is extremely partisan.
It might feel like this was inevitable. Caring about the environment, and supporting government action to protect it, might seem inherently left-leaning. Partisanship has increased for many issues, so it might not be surprising that environmentalism became partisan too.
Looking more closely at public opinion polls makes it more surprising. Environmentalism in the United States is unusually partisan compared to other issues, compared to other countries, and compared to the United States itself at other times.
The partisanship of environmentalism was not inevitable.
Environmentalism is one of the, if not the, most partisan issues in the...
It's making environmentalism bipartisan.
It's too late to make environmentalism never have been partisan in the first place. And you can't just persuade the current members of the environmentalist movement to stop caring about every issue except the environment. That wouldn't work, and I don't think it would be net positive even if it did.
But there is still an opportunity for Republicans to have their own branch of environmentalism.