(cross-posted from the EA Forum)
We appreciate you sharing your impression of the post. It's definitely valuable for us to understand how the post was received, and we'll be reflecting on it for future write-ups.
1) We agree it's worth taking into account aspects of an organization other than their output. Part of our skepticism towards Conjecture (and we should have made this more explicit in our original post, which we will be updating) is the limited research track record of their staff, including their leadership. By contrast, even if we accept for the sake o...
(cross-posted from the EA Forum)
Regarding your specific concerns about our recommendations:
1) We address this point in our response to Marius (5th paragraph).
2) As we note in the relevant section: “We think there is a reasonable risk that Connor and Conjecture’s outreach to policymakers and media is alarmist and may decrease the credibility of x-risk.” This kind of relationship-building is unilateralist when it can decrease goodwill amongst policymakers.
3) To be clear, we do not expect Conjecture to have the same level of “organizatio...
(cross-posted from the EA Forum)
We appreciate your detailed reply outlining your concerns with the post.
Our understanding is that your key concern is that we are judging Conjecture based on their current output, whereas, since they are pursuing a hits-based strategy, we should not expect impressive output from them in the median case. In general, we are excited by hits-based approaches, but we echo Rohin's point: how are we meant to evaluate organizations if not by their output? It seems healthy to give promising researchers sufficient ...
Thanks for commenting and sharing your reactions, Mishka.
Some quick notes on what you've shared:
Although one has to note that their https://www.conjecture.dev/a-standing-offer-for-public-discussions-on-ai/ is returning a 404 at the moment. Is that offer still standing?
In their response to us, they told us this offer was still standing.
A lot of upvotes on such a post without substantial comments seems... unfair?
As of the time of your comment, we believe there were about 8 votes and 30 karma, and the post had been up for a few hours. We are not sure what voti...
Hi TurnTrout, thanks for asking this question. We're happy to clarify:
We do not consider Conjecture to be at the same level of expertise as other organizations such as Redwood, ARC, researchers at academic labs like CHAI, and the alignment teams at Anthropic, OpenAI, and DeepMind. This is primarily because we believe their research quality is low.
This isn't quite the right thing to look at IMO. In the context of talking to governments, an "AI safety expert" should have thought deeply about the problem, have intelligent things to say about it, know the range of opinions in the AI safety community, have a good understanding of AI mor...
Quick updates:
We've crossposted the full text on LessWrong here: https://www.lesswrong.com/posts/SuZ6Guuos7CjfwRQb/critiques-of-prominent-ai-safety-labs-redwood-research
Note that we don't criticize Connor specifically, but rather the lack of a senior technical expert on the team in general (including Connor). Our primary criticisms of Connor don't have to do with his leadership skills, which we don't comment on at any point in the post.