
> partly as a result of other projects like the Existential Risk Persuasion Tournament (conducted by the Forecasting Research Institute), I now think of it as a data-point that “superforecasters as a whole generally come to lower numbers than I do on AI risk, even after engaging in some depth with the arguments.”

I participated in the Existential Risk Persuasion Tournament, and I disagree that most superforecasters in that tournament engaged with the arguments in any depth. I also disagree with the phrase "even after arguing about it": barely any arguing happened, at least in my subgroup. Much less effort went into these estimates than it would be natural to assume from how the tournament has been written about by EAs, journalists, and others.

Thanks, yes, this is a helpful type of feedback. We'll think about how to make that section clearer to readers without background knowledge. The site is aimed at all audiences, which means navigating tradeoffs between leaving claims under-justified, running too long, and losing the scope needed for an overview. In this case, it does look like we could err on the side of adding a bit more text and links. Your point about the glossary sounds reasonable and I'll pass it along. (I guess the tradeoff there is that people might see an unexplained term and not realize that an earlier instance of it had a glossary link.)

You're right that it's confusing, and we've been planning to change how collapsing and expanding works. I don't think specifics have been decided on yet; I'll pass your ideas along.

I don't think there should be "random" tabs, unless you mean the ones that appear from the "show more questions" option at the bottom. In some cases, the content of child questions may not relate in an obvious way to the content of their parent question; is that what you mean? If questions are appearing that are neither 1) linked anywhere below "Related" in the doc corresponding to the question that was expanded, nor 2) left over from a different question that was expanded earlier, then I think that's a bug, and I'd be interested in an example.

Quoting from our Manifund application:

We have received around $46k from SHfHS and $54k from LTFF, both for running content writing fellowships. We have been offered a $75k speculation grant from Lightspeed Grants for an additional fellowship, and made a larger application to them for the dev team which has not been accepted. We have also recently made an application to Open Philanthropy.

If there's interest in finding a place for a few people to cowork on this in Berkeley, please let me know.

Thanks, I made a note on the doc for that entry and we'll update it.

Traffic is pretty low currently, but we've been improving the site during the distillation fellowships and we're hoping to make more of a real launch soon. And yes, people are working on a Stampy chatbot. (The current early prototype isn't finetuned on Stampy's Q&A; instead, it searches the alignment literature and passes the retrieved passages into a GPT context window.)
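For anyone curious what "searches the literature and passes things to a context window" means in practice, here's a minimal sketch of that retrieval-then-context pattern. To be clear, this is not the actual prototype's code: every name in it (embed, ALIGNMENT_CORPUS, build_prompt) is a hypothetical stand-in, and a real system would call an embedding model and an LLM API rather than the placeholder below.

```python
# Minimal sketch of a retrieval-then-context pipeline.
# All names here are hypothetical placeholders, not the Stampy chatbot's code.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    # This deterministic random vector just makes the sketch runnable.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

# Hypothetical stand-in for a corpus of alignment literature excerpts.
ALIGNMENT_CORPUS = [
    "Excerpt A from the alignment literature...",
    "Excerpt B from the alignment literature...",
    "Excerpt C from the alignment literature...",
]
CORPUS_VECTORS = np.stack([embed(doc) for doc in ALIGNMENT_CORPUS])

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank corpus passages by cosine similarity to the question
    # (vectors are unit-normalized, so a dot product suffices).
    q = embed(question)
    scores = CORPUS_VECTORS @ q
    top = np.argsort(scores)[::-1][:k]
    return [ALIGNMENT_CORPUS[i] for i in top]

def build_prompt(question: str) -> str:
    # Stuff the retrieved passages into the model's context window;
    # the prompt, not finetuning, carries the domain knowledge.
    context = "\n\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Why might advanced AI be hard to control?"))
```

The design point is that the model itself stays frozen: all the domain knowledge arrives through the prompt, which is why no finetuning on Stampy's Q&A is needed.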

Yes, but we decided to reschedule it before making the announcement. Apologies to anyone who found the event in some other way and was planning on it being around the 11th; if Aug 25-27 doesn't work for you, note that there's still the option to participate early.

Since somebody was wondering if it's still possible to participate without having signed up through alignmentjam.com:

Yes, people are definitely still welcome to participate today and tomorrow, and are invited to head over to Discord to get up to speed.
