What are examples of “knowledge of building systems that are broadly beneficial and safe while operating in the human capabilities regime”?
I assume the systems mentioned are institutions such as courts, governments, corporations, or universities.
Charlotte thinks that humans and advanced AIs are both universal Turing machines, so predicting capabilities is not a question of whether a capability is present at all, but of whether it can be exercised in finite time with a low enough error rate.
I have a similar thought. If an AI has human-level capabilities, and part of its job is to write texts, but it writes long texts in seconds and can work 24/7, is it still within the range of human capabilities? (See the rough sketch below.)
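To make that intuition concrete, here is a minimal back-of-the-envelope sketch in Python. It compares the effective daily output of a human and an AI that are equally capable per attempt (same success rate) but differ in speed and uptime; every number below is a hypothetical placeholder, not a claim about any real system.

```python
# Compare effective output of a human vs. an AI with identical per-task
# capability (same success rate), differing only in speed and uptime.
# All parameter values below are hypothetical.

def successful_tasks_per_day(seconds_per_attempt: float,
                             hours_per_day: float,
                             success_rate: float) -> float:
    """Expected number of successful tasks per day.

    With independent attempts succeeding with probability `success_rate`,
    the expected cost of one success is seconds_per_attempt / success_rate.
    """
    seconds_available = hours_per_day * 3600
    return seconds_available / (seconds_per_attempt / success_rate)

# Same capability level: 90% of drafts are acceptable for both agents.
human = successful_tasks_per_day(seconds_per_attempt=3600,  # 1 hour per draft
                                 hours_per_day=8,
                                 success_rate=0.9)
ai = successful_tasks_per_day(seconds_per_attempt=30,       # 30 s per draft
                              hours_per_day=24,
                              success_rate=0.9)

print(f"human: {human:.1f} successful drafts/day")  # -> 7.2
print(f"ai:    {ai:.1f} successful drafts/day")     # -> 2592.0
```

Under these made-up parameters, per-attempt capability is identical, yet the AI's effective throughput is 360x the human's; that gap is the sense in which a "human-level" system might still sit outside the human capabilities regime.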
Conjecture recently released an AI safety proposal. The three of us spent a few hours discussing the proposal and identifying questions that we have. (First, we each re-read the post and independently brainstormed a few questions. Then we discussed the post, exchanged questions and uncertainties, and consolidated our lists.)
Conjecture's post is concise, which means it leaves out many details. Many of our questions are requests for additional detail that would allow us (and others) to better understand the proposal and evaluate it more thoroughly.
Requesting examples and details
Conceptual questions
Other questions
Satisfactory answers to some of these questions might involve infohazards, but we're hopeful that many of them can be addressed without revealing anything hazardous.