Have you considered booking a call with AI Safety Support or applying to speak to 80,000 Hours?
You can also express interest for the next round of the AGI Safety Fundamentals course.
BTW, great job with the babble! Just have to make sure you do a good job with the prune.
As a Babble this is excellent, and many of these (e.g. optimizing income streams, motivating/participating-in groups) seem to be necessary prerequisites for being in a position to make progress on X-risk problems.
But I think the nature of such problems (many individuals have already attempted them, and there are at least some centralized organizations where those individuals share their experiences to avoid duplicating effort) means that any undirected Babble will primarily encounter lines of inquiry that have already been addressed, as many of the more direct (non-resource-gathering) suggestions here seem to be.
As a point of methodology, I would suggest trying for much larger Babble lists when approaching these problems, perhaps on the scale of a few hundred ideas, or alternatively Babbling recursively, generating a sub-list for each individual point at every level (e.g. 100 points, each with 100 points, each with 100 points...), so that the process is more likely to produce unique [and thus useful] approaches.
I became aware of AI safety as a cause area relatively recently, and despite it likely being the 11th hour, I want to contribute.
PSA: Lots of people disagree with Eliezer about timelines, and Eliezer famously does not want you to adopt his positions without questioning.
Great context that I wasn’t aware of! I changed the language to reflect a level of uncertainty, since I’ve yet to form my own solid timeline. Also because “likely” has an actual meaning as a word that I didn’t consider.
Background
It isn’t obvious what contributing to The Good Future from a non-standard background looks like.
Especially on a short timeline.
I am a first-time poster with a background in the arts. I became aware of AI safety as a cause area relatively recently, and despite it possibly being the 11th hour, I want to contribute. I would consider myself a baby rationalist - there are years' worth of content to digest still. This feels like relevant information for this post because I expect the results of my Babble session and reflection to be very different from (what I perceive to be) the average LessWrong poster's.
Why even try?
Because humans are worth it, and if I can tip the scale even a little bit then I should.
I am motivated to post now for the first time because of the Good Hearts experiment. One of the desirable outcomes seems to be pulling us lurkers out of the woodwork to see what the results of the week look like, and as someone who fits that description I feel that I can contribute to the experiment. Normally I feel like I should not participate in posting because of the very high standard of discourse, but this experiment feels like permission to post and try for that high standard anyway (while being okay with being corrected on wrongness).
Process
My intention is to write somewhere in the ballpark of 50 ways I can contribute to obtaining The Good Future without auto-pruning, especially on my instincts of “reasonableness” and “feasibility.” I will then go through and analyze my thinking by categorizing the items, with the intention of learning what pathways are under-represented in my thoughts, or that I may be completely blind to. I may find other ways to evaluate the Babble that aren’t obvious to me beforehand.
I also hope to find something actionable.
Babble
50 Ways To Contribute
Reflection
Since time is of the essence, one obvious way to evaluate this challenge is to look at what is actionable now vs. in the future. When I reorder the list with that in mind, there are a few items I can do immediately (optimizing at home); a large group of items that are right outside of my scope of abilities and could be executed in the near future (learning relevant math words for transcription); and a few things that are true shots in the dark, outside of my current abilities (becoming an expert). That middle section is what interests me most, because I feel like I’m juuuust lacking the right information to evaluate what would be most effective. I am unsure how to gain that information.
Around half of the items require heavy communication with others. This isn’t something I would have named beforehand as a skill I could utilize, so it feels like a surprising result! When I reflect on why this may be a common theme for me, I want to say it is because I have time to try and make connections; I identify lack of communication as an obvious gap; and my model of myself includes “great supporting character.” I have some evidence that this model is true, and could lean into that skill.
I would evaluate my Babble items as being 80% usual and 20% unusual (for me) thinking. The usual category includes things that my model of a typical LWer might say (write ratfic; be a data point in tests of productivity methods). When I was able to push more toward unusual thinking, those ideas were also ones I would evaluate as much less feasible (survive paperclip maximization). I expect that pushing further into the unusual is where a truly good idea hides; I will need to do more work to get there, however. That work may include a longer Babble, because the end of the list feels like it pushed my thinking more than the beginning.
Free time is what I would name as my most valuable available resource. I can tackle many of these items by utilizing it. Perhaps this is an area to concentrate on.
It is more difficult to identify the categories of possibilities that I’m not seeing; perhaps this is obvious in retrospect, but I did state at the beginning that I expected to gain some information about this. However, I still expect it to crop up at some point while reflecting in the future, since I will be on the lookout for ideas that don’t fit those common categories. Maybe that is naive, and I should take this singular exercise as weak evidence that I shouldn’t expect better ideas through pure reflection (rather than action).
Did I find this exercise useful?
Yes! I am surprised at how many of these ideas seem both actually useful and doable. I feel capable of making actual, if tiny, progress - and when it comes to x-risk, I am completely biased to say it is worth adding my little bit of contribution to the pile. (Are our actions additive when we combine them? Gosh, so much to learn.)