It's not my paper — I just wanted to share the link on LW as it seemed likely to be of interest to some people here. Thanks for the feedback, and sorry for the confusion!
In the future, there should be some organization or group of individuals in the LW community who raise awareness about these sorts of opportunities and offer content and support to ensure submissions from the most knowledgeable and relevant actors. This seems like very low-hanging fruit and is something several groups I know are doing.
As I understand it, there are several studies showing that it works for depression. Of course, it may be the case that this is only for people for whom anxiety caused the depression.
E.g.,
https://pubmed.ncbi.nlm.nih.gov/26718792/
https://link.springer.com/article/10.1007/s00406-024-01783-2
I recommend that anyone considering SSRIs also consider Silexan. See this write-up from Scott Alexander, which covers the research. Someone I know has replaced their SSRI with Silexan, and several people I know have had good experiences. The main side effect, which not everyone gets, is lavender-flavored burps. I don't get them, and they weren't bad when I did. No one I know has had a bad experience. Overall, the downside risk appears to be relatively low.
I also strongly endorse this based on my experience. I was a research consultant who created evidence summaries for decision makers in industry and government. This usually involved searching for published content. Anything that wasn't indexed by Google Scholar/publication repositories was almost always excluded.
Yeah, I found this helpful. In general, I'd like to see more of these dialogues. I think that they do a good job of synthesising different arguments in an accessible way. I feel that's increasingly important as more arguments emerge.
As an aside, I like the way that the content goes from relatively accessible high level discussion and analogy into more specific technical detail. I think this makes it much more accessible to novice and non-technical readers.
[Reposting from a Facebook thread discussing the article because my thoughts may be of interest]
I woke to see this shared by Timnit Gebru on my LinkedIn feed, where it was getting hundreds of engagements. https://twitter.com/xriskology/status/1642155518570512384
It draws a lot of attention to the airstrikes comment which is unfortunate.
Stressful to read
A quick comment on changes that I would probably make to the article:
Make the message less about EY so it is harder to attack the messenger and undermine the message.
Reference other supporting authorities and sources of evidence...
Anonymous submission: I have pretty strong epistemics against the current approach of “we’ve tried nothing and we’re all out of ideas”. It’s totally tedious seeing reasonable ideas get put forward, some contrarian position get presented, and the community revert to “do nothing”. That recent idea of a co-signed letter about slowing down research is a good example of the intellectual paralysis that annoys me. In some ways it feels built on a perhaps sound analytical foundation, but a poor understanding of how humans, psychology, and policy change actually work.
Thanks for this.
Is anyone working on understanding LLM Dynamics or something adjacent? Is there early work that I should read? Are there any relevant people whose work I should follow?
Hey Hoagy, thanks for replying, I really appreciate it!
I fixed that link, thanks for pointing it out.
Here is a quick response to some of your points:
My feeling with the posts is that given the diversity of situations for people who are currently AI safety researchers, there's not likely to be a particular key set of understandings such that a person could walk into the community as a whole and know where they can be helpful.
I tend to feel that things could be much better with little effort. As an analogy, consider the difference between trying ...
Anonymous submission:
I only skimmed your post so I very likely missed a lot of critical info. That said, since you seem very interested in feedback, here are some claims that are pushing back against the value of doing AI Safety field building at all. I hope this is somehow helpful.
- Empirically, the net effects of spreading MIRI ideas seem to be squarely negative, both from the point of view of MIRI itself (increasing AI development, pointing people towards AGI), and from other points of view.
- The view of AI safety as expounded by MIRI, Nick Bostrom, e...
I just want to say that this seems like a great idea, thanks for proposing it.
I have a mild preference for you to either i) do this in collaboration with a project like Stampy or ii) plan how to integrate what you do with another existing project in the future.
In general, I think that we should i) minimise the number of education providers and ii) maximise uniformity of language and understanding within the AI existential risk educational ecosystem.
Also, just as feedback (which probably doesn't warrant any changes unless similar feedback is provided by others), I will flag that it would be good to be able to see the posts this is mentioned in ranked by recency rather than total karma.
Is there a plan to review and revise this to keep it up to date? Or is there something similar that I can look at which is more current? I have this saved as something to revisit, but I worry that it could be out of date and inaccurate given the speed of progress.
Thanks! Quick responses:
I think these results, and the rest of the results from the larger survey that this content is a part of, have been interesting and useful to people, including Collin and me. I'm not sure what I expected beforehand in terms of helpfulness, especially since there's a question of "helpful with respect to /what/", and I expect we may have different "what"s here.
Good to know. When discussing some recent ideas I had for surveys, several people told me that their survey results underperformed their expectations, so I was curious if you would ...
Yeah, I agree with Kaj here. We do need to avoid the risk of using misleading or dishonest communication. However, it also seems fine and important to optimise relevant communication variables (e.g., tone, topic, timing, concision, relevance, etc.) to maximise positive impact.
Thanks for doing/sharing this Vael. I was excited to see it!
I am currently bringing something of a behaviour change/marketing mindset to thinking about AI Safety movement building, and therefore feel that testing how well different messages and materials work for different audiences is very important. I'm not sure if it will actually be as useful as I currently think, though.
With that in mind, I'd like to know:
Two ideas: I wonder if it would be valuable to first test predictions...
Thanks for writing this up Simeon, it's given me a lot to think about. The table is particularly helpful.
Hi, thanks for writing this. Sorry to hear that things are hard. I would really appreciate it if you could help me understand these points:
A few days later, I saw this post. And it reminded me of everything that bothers me about the EA community. Habryka covered the object level problems pretty well, but I need to communicate something a little more... delicate.
What bothers you about the EA community specifically? At times, I am not sure if you are talking about the EA community, the AIS technical research community, the rationalist community or the Berkeley...
I have updated this and made some explainer videos. Please see here (I will add them to the post when I get a chance):
https://www.linkedin.com/posts/peterslattery1_annual-reviewplanner-spreadsheet-2020-template-activity-6752440729035005952-iez9
This post might be useful for you. See the last paragraph, where I linked to my daily trackers. I have some comments in them.
Let me know if you have any questions!
Glad it is useful! Did you see the comments in the Google Sheet? Just hover over the cells. "Overall" is a measure of how the day was as a hedonic experience, while "life satisfaction" is satisfaction with life as a whole on that particular day.
Thanks! Link sharing should be fixed now. Let me know if not!
I think you should make a new version of this for your website. You are becoming more of a public figure, and communicating your forecasting record better will help make you, and outputs like AI 2027, more credible.