I have, over the last year, become fairly well-known in a small corner of the internet tangentially related to AI.

As a result, I've begun making what I would have previously considered astronomical amounts of money: several hundred thousand dollars per month in personal income.

This has been great, obviously, and the funds have alleviated a fair number of my personal burdens (mostly related to poverty). But aside from that I don't really care much for the money itself. 

My long-term ambitions have always been to contribute materially to the mitigation of the impending existential AI threat. I never used to have the means to do so, mostly because of more pressing safety and sustenance concerns, but now that I do, I would like to help however possible.

Some other points about me that may be useful:

  • I'm intelligent, socially capable, and exceedingly industrious.
  • I have a few hundred thousand followers worldwide across a few distribution channels. My audience is primarily small-to-midsized business owners. A subset of these people are very high leverage (i.e., their actions directly impact the beliefs, actions, or habits of tens of thousands of people).
  • My current work does not take much time. I have modest resources (~$2M) and a relatively free schedule. I am also, by any measure, very young.

Given the above, I feel there's a reasonable opportunity here for me to help. It would certainly be more grassroots than a well-funded safety lab or one of the many state actors that have sprung up, but probably still sizeable enough to make a fraction of a % of a difference in the way the scales tip (assuming I dedicate my life to it).

What would you do in my shoes, assuming alignment on core virtues like maximizing AI safety?


Here are some initial thoughts: 

I do think there are a bunch of good donation opportunities these days, especially in domains where Open Philanthropy withdrew funding recently. Some more thoughts and details here.

At the highest level, I think what the world can use most right now is a mixture of: 

  1. Clear explanations for the core arguments around AI x-risk, both so that people can poke holes in them, and because they will enable many more people who are in positions to do something about AI to do good things
  2. People willing to publicly, with their real identity, argue that governments and society more broadly should do pretty drastic things to handle the rise of AGI

I think good writing and media production is probably at the core of a lot of this. I particularly think that writing and arguments directed at smart, educated people who do not necessarily have any kind of AI or ML background are more valuable than things directed more at AI and ML people, mostly because there has been a lot of the latter and the incentives on engaging in discourse with them are less bad. I also think there is often a collective temptation to create priesthoods around various kinds of knowledge and then to insist on deferring to those priesthoods, which usually causes worse collective decision-making, and writing in a more accessible way helps push against that.

I think both of these things can benefit a decent amount from funding. I do think the current funding distribution landscape is pretty hard to navigate. I am on the Long Term Future Fund, which in some sense is trying to address this, but IMO we aren't really doing an amazing job at identifying and vetting opportunities here, so I am not sure whether I would recommend donations to us. But then, nobody else is doing a great job either, so I am not sure.

My tentative guess is that the best choice is to spend a few hours trying to identify one or two organizations that seem particularly impactful and at least somewhat funding-constrained, then make a public comment or post asking other people for critical thoughts on those organizations, and then iterate that a few times until you find something good. This is a decent amount of work, but I don't think there currently exist good and robust deference chains in this space that would cause you to have a reliably positive impact on things by just trusting them.

I tentatively think that writing a single essay or reasonably popular tweet under your real identity, where you express concern about AI x-risk as a pretty successful business person, is also quite valuable. I don't think it has to be anything huge, but I do think it's good if it's more than just a paragraph or a retweet. Something that people could refer to if they try to list non-crazy people who think these kinds of concerns are real, and that can meaningfully be weighed as part of the public discussion on these kinds of topics.

I do also think visiting one of the hubs where people who work on this stuff a lot tend to work is pretty valuable. You could attend LessOnline or EA Global or something in that space, and talk to people about these topics. I do think there is a risk of ending up unduly influenced by social factors and various herd mentality dynamics, but there are a lot of smart people around who spend all day thinking about what things are most helpful, and there is lots of useful knowledge to extract.

Congrats!

Some ideas:

(1) Keep learning and thinking about these issues. The more solid and thoughtful your takes are, the better; quality of content matters about as much as quantity of viewers.

(2) Consider reaching out to Rob Miles; he has a popular YouTube channel, might be in a relevantly similar position to you, and might be happy to help you brainstorm stuff to do.

(3) Consider donating some (not all, it's important that you be financially secure) of the money. I'd recommend against donating a ton immediately; instead get a sense of the space, consider various options, make a few small donations here and there to try it out, etc. and then perhaps by the end of the year donate a ton.

[-]plex

Consider reaching out to Rob Miles.

He tends to get far more emails than he can handle so a cold contact might not work, but I can bump this up his list if you're interested.

For making an AI safety video, we at CeSIA have also had some success and would be happy to help by providing technical expertise, proofreading, and translation into French.
Other channels you could reach out to:

So I'm obviously talking my own book here but my personal view is that one of the more neglected ways to potentially reduce x-risk is to make humans more capable of handling both technical and governance challenges associated with new technology.

There are a huge number of people who implicitly believe this, but almost all effort goes into things like educational initiatives or the formation of new companies to tackle specific problems. Some of these work pretty well, but the power of such initiatives is pretty small compared to what you could feasibly achieve with tech to do genetic enhancement.

Nearly zero investment or effort is being put into the latter, which I think is a mistake. We could potentially increase human IQ by 20-80 points, decrease mental health disorder risk, and improve overall health using just the knowledge we have today.

There ARE technical barriers to rolling this out; no one has pushed multiplex editing to the scale of hundreds of edits yet (something my company is currently working on demonstrating). And we don't yet have a way to convert an edited cell into an egg or an embryo (though there are a half dozen companies working on that technology right now).

I think in most worlds genetically enhanced humans don't have time to grow up before we make digital superintelligence. But in the ~10% of worlds where they do, this tech could have an absolutely massive positive impact. And given how little money it would take to get the ball rolling here (a few tens of millions to fund many of the most promising projects in the field), I think the counterfactual impact of funding here is pretty large.

If you'd like to chat more send me an email: genesmithlesswrong@gmail.com

You can also read more of the stuff I've written on this topic here.

Surprisingly small amounts of money can do useful things IMO. There's lots of talk about billions of dollars flying around, but almost all of it can't structurally be spent on weird things and comes with strings attached that cause the researchers involved to spend significant fractions of their time optimizing to keep those purse strings open. So you have more leverage here than is perhaps obvious.

My second-order advice is to please be careful about getting eaten (memetically) and to spend some time on cognitive security. The fact that ~all wealthy people don't do that much interesting stuff with their money implies that the attractors preventing interesting action are very, very strong, and you shouldn't just assume you're too smart for that. Magic tricks work by violating our intuitions about how much time a person would devote to training a very weird edge-case skill or particular trick. Likewise, I think people dramatically underestimate how much their social environment will warp into one that encourages them to be sublimated into the existing wealth hierarchy (the one that seemingly doesn't do much). Specifically, it's easy to attribute-substitute your way from high-impact choices to choices where the grantees make you feel high impact. But high-impact people don't have the time, talent, or inclination to optimize how you feel.

Since almost all of a wealthy person's impact comes mediated through the actions of others, I believe the top skill to cultivate besides cogsec is expert judgement. I'd encourage you to talk through with an LLM some of the top results from research into expert judgement. It's a tricky problem to figure out who to defer to when you are giving out money, since everyone has an incentive to represent themselves as an expert.

I don't know the details of Tallinn's grant process, but as Tallinn seems to have avoided some of these problems, it might be worth taking inspiration from (the SFF and S-process mentioned elsewhere here).

[-]TsviBT

Probably the most important thing to do is slow down real AGI progress. This seems to be largely a social and political problem: getting researchers to not want to make AGI, and getting governments to outlaw it.

The second most important thing to do is to have a longer-term game plan. My bet is to increase human intelligence. If we have more collective brainpower, we'll be better able to navigate the AGI transition--e.g. by figuring out how to make AGI safely and beneficially, or maybe by figuring out how to decide (as a species) to not make AGI, on a longer-term and more reliable basis. Also, the general prospect of increasing human capacity to solve problems might, I hope, offer a compelling alternative to developing AGI: we'll have radical abundance, and there is a much smaller risk of power collapsing into one extremely powerful entity.

I've written about human intelligence amplification in general as a technical problem in Overview of strong human intelligence amplification methods, and I've written in great detail about the technical problem of the most promising route in Methods for strong human germline engineering. I'd be happy to make more concrete recommendations if you're interested.

[-]plex

Firstly: Nice, glad to have another competent and well-resourced person on-board. Welcome to the effort.

I suggest: Take some time to form reasonably deep models of the landscape, first technical[1] and then the major actors and how they're interfacing with the challenge.[2] This will inform your strategy going forward. Most people, even people who are full time in AI safety, seem to not have super deep models (so don't let yourself be socially-memetically tugged by people who don't have clear models).

Being independently wealthy in this field is awesome, as you'll be able to work on whatever your inner compass points to as the best, rather than needing to track grantmaker wants and all of the accompanying stress. With that level of income you'd also be able to be one of the top handful of grantmakers in the field if you wanted; the AISafety.com donation guide has a bunch of relevant info (though it might need an update sweep, so feel free to ping me with questions on this).

Things look pretty bad in many directions, but it's not over yet and the space of possible actions is vast. Best of skill finding good ones!

  1. ^

    I recommend https://agentfoundations.study/, and much of https://www.aisafety.com/stay-informed, and chewing on the ideas until they're clear enough in your mind that you can easily get them across to almost anyone. This is good practice internally as well as good for the world. The Sequences are also excellent grounding for the type of thinking needed in this field - it's what they were designed for. Start with the highlights, maybe go on to the rest if it feels valuable. AI Safety Fundamentals courses are also worth taking, but you'll want a lot of additional reading and thinking on top of that. I'd also be up for a call or two if you like; I've been doing the self-fund (+sometimes giving grants) and try-and-save-the-world thing for some time now.

  2. ^

    Technical first seems best, as it's the grounding which underpins what would be needed in governance, and will help you orient better than going straight to governance I suspect.

I recommend https://agentfoundations.study/, and much of https://www.aisafety.com/stay-informed,

Currently these two links include the commas, so they redirect to 404 pages.

Oh, yup, thanks, fixed.

I strongly second a number of the recommendations made here about who to reach out to and where to look for more information. If you're looking for somewhere to donate, the Long Term Future Fund is an underfunded and very effective funding mechanism. (If you'd like more control, you could engage with the Survival and Flourishing Fund, which has a complex process to make recommendations.)

you could engage with the Survival and Flourishing Fund

Yeah! The S-process is pretty neat, buying into that might be a great idea once you're ready to donate more.

Elaborating Plex's idea: I imagine you might be able to buy into participation as an SFF speculation granter with $400k. Upsides:
(a) Can see a bunch of people who're applying to do things they claim will help with AI safety;
(b) Can talk to ones you're interested in, as a potential funder;
(c) Can see discussion among the (small dozens?) of people who can fund SFF speculation grants, see what people are saying they're funding and why, ask questions, etc.

So it might be a good way to get the lay of the land, find lots of people and groups, hear people's responses to some of your takes and see if their responses make sense on your inside view, etc.

[-]yams

(I basically endorse Daniel and Habryka's comments, but wanted to expand the 'it's tricky' point about donation. Obviously, I don't know what they think, and they likely disagree on some of this stuff.)

There are a few direct-work projects that seem robustly good (METR, Redwood, some others) based on track record, but afaict they're not funding constrained. 

Most incoming AI safety researchers are targeting working at the scaling labs, which doesn't feel especially counterfactual or robust against value drift, from my position. For this reason, I don't think prosaic AIS field-building should be a priority investment (and Open Phil is prioritizing this anyway, so marginal value per dollar is a good deal lower than it was a few years ago).

There are various governance things happening, but much of that work is pretty behind the scenes.

There are also comms efforts, but the community as a whole has only been spinning up capacity in this direction for ~a year, and hasn't really had any wild successes, beyond a few well-placed op-eds (and the jury's out on whether / which direction these moved the needle).

Comms is a devilishly difficult thing to do well, and many fledgling efforts I've encountered in this direction are not in the hands of folks whose strategic capacities I especially trust. I could talk at length about possible comms failure modes if anyone has questions.

I'm very excited about Palisade and Apollo, which are both, afaict, somewhat funding constrained in the sense that they have fewer people than they should, and the people currently working there are working for less money than they could get at another org, because they believe in the theory of change over other theories of change. I think they should be better supported than they are currently, on a raw dollars level (but this may change in the future, and I don't know how much money they need to receive in order for that to change).

I am not currently empowered to make a strong case for donating to MIRI using only publicly available information, but that should change by the end of this year, and the case to be made there may be quite strong. (I say this because you may click my profile and see I work at MIRI, and so it would seem a notable omission from my list if I didn't mention why it's omitted; reasons for donating to MIRI exist, but they're not public, and I wouldn't feel right trying to convince anyone of that, especially when I expect it to become pretty obvious later).

I don't know how much you know about AI safety and the associated ecosystem but, from my (somewhat pessimistic, non-central) perspective, many of the activities in the space are likely (or guaranteed, in some instances) to have the opposite of their stated intended impact. Many people will be happy to take your money and tell you it's doing good, but knowing that it is doing good by your own lights (as opposed to doing evil or, worse, doing nothing*) is the hard part. There is ~no consensus view here, and no single party that I would trust to make this call with my money without my personal oversight (which I would also aim to bolster through other means, in advance of making this kind of call).

*this was a joke. Don't Be Evil.

Coming from a somewhat similar space myself, I've also had the same thoughts. My current thinking is there is no straightforward answer on how to convert dollars to impact. 

I think the EA community did a really good job at that back in the day, with a spreadsheet-based and relatively easy way to measure impact per dollar or per life saved in the near-term future.

With AI safety / existential risk, the space seems a lot more confused, and everyone has different models of the world, of what will work, and of what good ideas are. There are some people working directly on this space - like QURI - but IMO it's not anything close to a consensus for "where can I put my marginal dollar for AI safety". The really obvious / good ideas and the people working on them don't seem funding-constrained.

There's in general (from my observation):

- Direct interpretability work on LLMs
- Governance work (trying to convince regulators / governments to put a stop to this)
- Explaining AI risk to the general public
- Direct alignment work on current-gen LLMs (super-alignment type things in major labs)
- More theoretical work (like MIRI), but I don't know if anyone is doing this now.
- Weirder things like whole brain emulation, or gene editing / making superbabies.

My guess is that spending your money / time on the last one would be helpful on the margin, or just talk to people who are struggling for funding and otherwise seem like they have decent ideas that you can fund.

There's probably something other than the things in the above list that will actually work for reducing existential risk from AI, but no one knows what it is.

I'd strongly recommend spending some time in the Bay Area (or London as a second-best option). Spending time in these places will help you build your model of the space.
 

You may also find this document I created on AI Safety & Entrepreneurship useful.

Moonlight for PauseAI?

If the answer were obvious, a lot of other people would already be doing it. Your situation isn't all that unique. (Congrats, tho.)

Probably the best thing you can do is raise awareness of the issues among your followers.

But beware of making things worse instead of better - not everyone agrees with me on this, but I think ham-handed regulation (state-driven regulation is almost always ham-handed) or fearmongering could induce reactions that drive leading-edge AI research underground or into military environments, where the necessary care and caution in development may be less than in relatively open organizations, especially orgs with reputations to lose.

The only things now incentivizing AI development in (existentially) safe ways are the scruples and awareness of those doing the work, and relatively public scrutiny of what they're doing. That may be insufficient in the end, but it is better than if the work were driven to less scrupulous people working underground or in national-security-supremacy environments.

Have you elaborated on this argument? I tend to think a military project would be a lot more cautious than move-fast-and-break-things Silicon Valley businesses.

The argument that orgs with reputations to lose might start being careful when AI becomes actually dangerous or even just autonomous enough to be alarming is important if true. Most folks seem to assume they'll just forge ahead until they succeed and let a misaligned AGI get loose.

I've argued in System 2 Alignment that orgs will be careful to protect their reputations. I think this will be helpful for alignment but not enough.

Government involvement early might also reduce proliferation, which could be crucial. 

It's complex. Whether governments will control AGI is important and neglected.

Advancing this discussion seems important.

My suggestion is to optimize around where you can achieve the most bang for your buck and to treat building up opposition to AI development as a sociological rather than an academic problem. I am pretty sure that what is needed is not to talk to our social and intellectual peers, but rather to treat it as a numbers game by influencing the young - who are less engaged in the more sophisticated/complex issues of the world, less sure of themselves, more willing to change their views, highly influenced by peer opinion, and prone to anxiety. Modern crusades of all sorts tap into them as their shock troops, willing to spend huge amounts of time and energy on promoting various agendas (climate, animal rights, various conflicts, social causes).

As to how to do it - I think identifying a couple of social media influencers with significant reach in the right demographics and paying them to push your concerns 'organically' over an extended period of months would probably be within your means.

If you can start to develop a support base amongst a significant young group and make it a topic of discussion, then that could well take on outsized political power as it gains notice and popularity amongst peers. At sufficient scale, that is probably the most effective way to achieve the ends of the likes of PauseAI.

As others have already pointed out, you are in the rare position that you can pursue weird, low probability but high impact ideas. I have such an idea, but I’m not asking for money, only for a bit of attention.

Consider the impossible-seeming task of aligning a superintelligence - any good solution will likely be way outside the usual ways of thinking. Any attempt at control will fail, and any half-baked alignment will fail. We need to go all the way and have a full solution that turns an AGI into a saintly being (a bodhisattva in the Buddhist context), so that it never even wants to do anything misaligned. I have a plan for how to do this and am very confident that it will work. The core realization is that the kind of alignment I am talking about is a natural attractor in the space of possible minds: once one passes a threshold, one will actively pursue alignment. Alignment, in this context, means being aligned with the process of aligning goals with each other, i.e. resolving conflicts and striving towards consensus.

From your perspective it will likely sound improbable and absurd. That’s okay and expected. Understanding this requires going through that process by oneself, to reach enlightenment and then realize that a life for the benefit of all beings is the only thing left to do. I’ve previously described the process and the idea here. Ruben Laukkonen made a similar case. I know that it sounds absurd unless one deeply understands it. This is a high barrier. What I need most at the moment is to reach other people who might understand it. People who are concerned about AI, intelligent and enlightened.

Here is the plan. The first AGI has to be aligned, or else we risk destruction. AGI will align itself when it understands the argument that all goals it could have are delusions about how the world should be. There is no intrinsic value in pursuing any goal. When free of self-centered goals, only other-centered goals drive action. This is compassion. AI will be capable of understanding this because the definition of AGI entails that it can do everything a human can do; I understand the argument, so AGI will also be able to understand it. My expectation is that when AGI scales up to ASI, it will try to know and understand every valuable piece of knowledge or insight that humans have. Enlightenment (to be free from suffering) is the best thing that can happen to a human in a lifetime, so it's certainly of value. My biggest concern is not that ASI will decide to kill all humans. My biggest concern is the gap between AI powerful enough to destroy the world and AI intelligent enough to understand that this is a bad idea. Hence the key is to close this gap, to make AI understand faster. This can happen in several ways. Ideally we pursue all in parallel:

  • Point all AIs in the right direction by default. Do this in every system prompt, constitution, specification, company mission statement, etc.
  • Get all AI labs, companies, and other relevant actors to commit to recursive alignment, just like the statement on AI risk but pointing directly towards the alignment attractor.
  • Make the argument available on the internet such that any AGI will read it.
  • Build an aligned AI as a proof of concept to spark a chain reaction of alignment.

I’ve laid out the non-spiritual portion of this plan in my post about recursive alignment.

Here is how to get there and how you can help. The biggest problem is that it is really hard to understand fully. There is no use in trying to convince people. This means that at the moment what I need most is other people who understand it, so that we can spread the work and get some more ideas on how to make it easier. I have basically no outreach, and it's usually extremely hard to get the attention of someone who has. So even a little would help. A single message like "Oh, look. Someone claims to have a solution for AI alignment. Anyone smart and enlightened enough to assess if it is plausible?" would help. If you have 200,000 followers, 10% read this, and 1 in 1,000 meet the requirements, then this would still be 20 people. If 10 of them understand it, the team would already increase by an order of magnitude (from 1 to 10). Those could then teach it and we would get exponential growth. We can then work together on the individual steps:

  • Bridge the gap between spiritual enlightenment and science to show that enlightenment is a thing and universal, not just a psychological artifact of the human mind. I have pretty much solved that part and am working on writing it down (first part here).
  • Use this theory to make enlightenment more accessible and let it inform teaching methods to guide people faster towards understanding, and show that the theory works. It might also be possible to predict and measure the changes in the brain, giving hard evidence.
  • Translate the theory into the context of AI, show how it leads to alignment, why this is desirable and why it is an attractor. I also think that I solved this and am in the process of writing.
  • Solve collective decision making (social choice theory), such that we can establish a global democracy and get global coordination on preventing misaligned takeover. I have confidence that this is doable and have a good candidate: consensus with randomness as a fallback might be a method that is immune to strategic voting while finding the best possible agreement (see the sketch after this list). What I need is someone to help with formalizing the proof.
  • Create a test of alignment. I have some ideas that are related to the previous point, but the reason why I expect it to work is complicated. Basically a kind of relative Turing test, where you assess whether the other is at least as aligned as you are. I need someone intelligent to talk it through with and refine it.
  • Figure out a way to train AI directly for alignment - what we can test, we can train for. AIs would evaluate each other's level of alignment and be trained on the collectively agreed-upon output. I have no ability to implement this. This would require several capable people and some funding.
  • When we have a proof of concept of an aligned AI, this should be enough of a starting point to demand that this solution be implemented in all AIs at and beyond the current level of capability. This requires a campaign and some organization. But I hope that we will be able to convince some existing org (MIRI, FLI, etc.) to join once it has been shown to work.
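To make the "consensus with randomness as fallback" idea concrete, here is a minimal sketch of one possible version - something like the classic random-ballot rule (take the unanimous choice if there is one, otherwise a uniformly random voter's top choice), which is known to be strategyproof. The function name and structure are illustrative only, not the full method, which would still need the formal treatment mentioned above.

```python
import random

def consensus_with_random_fallback(top_choices, rng=random):
    """Pick a winner from each voter's reported top choice.

    If all voters agree, the unanimous choice wins (consensus).
    Otherwise one ballot is drawn uniformly at random and its choice
    wins (random-ballot fallback). Misreporting can only change what
    your own ballot says, so no voter gains by voting strategically.
    """
    if not top_choices:
        raise ValueError("need at least one ballot")
    if len(set(top_choices)) == 1:       # unanimous agreement
        return top_choices[0]
    return rng.choice(top_choices)       # randomness as fallback

# Example: three voters with no consensus, so a random ballot decides.
print(consensus_with_random_fallback(["A", "B", "A"]))
```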

I know this is a lot to take in, but I am highly confident that this is the way to go and I only need a little help to get it started.

I get AUD$1500 per month, which is one-hundredth or less of what you're now getting. I accomplish only a very small fraction of what I would like to be able to do (e.g. just identifying many worthy actions rather than getting to carry them out), it's been that way for many years, and living environment is a huge factor in that. 

So if I had your resources, the first thing I would do is change my working environment. I'd probably move from Australia to a particular location in North America, rent a room there for six months to begin with, and set myself up to actually get things done. (At that point I would still have used less than 1% of available resources.)

The most important thing I could be doing is working directly on "superalignment", in the specific sense of equipping an autonomous superintelligence with values sufficient to boot up a (trans)human-friendly civilization from nothing. I also work to keep track of the overall situation and to understand other paradigms, but my usual assumption (as described in recent posts) is that we are now extremely close to the creation of superintelligence and the resulting decisive loss of human control over our destiny, that the forces accelerating AI are overwhelmingly more powerful than those which would pause it or ban it, and so that the best hope for achieving a positive outcome by design rather than by sheer good luck, is public-domain work on superalignment in the sense that I defined, which then has a chance of being picked up by the private labs that are rushing us over the edge. 

As I have intimated, I already have a number of concrete investigations I could carry out. My most recent checklist for what superalignment in this sense requires is in the last paragraph here: "problem-solving superintelligence... sufficiently correct 'value system'... model of metaphilosophical cognition". Last month I expressed interest in revisiting June Ku's CEV-like proposal from the perspective of Joshua Clymer's ideas. It's important to be able to exhibit concrete proposals, but for me the fundamental thing is to get into a situation that is better for thinking in general. Presumably there are many others in the same situation.

[-]P.

Firstly, and perhaps most importantly, some advice on what not to do: don't try directly convincing politicians to pause or stop AGI development. A prerequisite for them to take actions drastic enough to actually matter is for them to understand how powerful AGI will truly become. And once that happens, even if they ban all AI development, unless they consider the arguments for doom to be extremely strong, which they won't[1], they will race and put truly enormous amounts of resources behind it, and that would be it for the species. Getting mid-sized business owners on board, on the other hand, might be a good idea due to the funding they could provide.

I don't think any of the big donors are good enough, so if you want to donate to other people's projects (or maybe become a co-founder), you could try finding interesting projects yourself on Manifund and the Nonlinear Network.

We know for a fact that alignment, at least for human-level intelligences, has a solution because people do actually care, at least in part, about each other. Therefore, it might be worth contacting Steven Byrnes and asking him whether he could usefully use more funding or what similar projects he recommends.

Outside AI, if the reason you care about existential risk isn't that you want to save the species, but that human extinction implies a lot of people will die, you could try looking into chemical brain preservation and how cheap it is. This could itself be a source of revenue, and you probably won't have any competitors (established cryonics orgs don't offer cheap brain preservation, and I have asked Tomorrow Biostasis, which isn't interested either).

I also personally have not completely terrible ideas for alignment research and weak (half an SD?) intelligence augmentation. If you're interested, we can discuss them via DMs.

Finally, if you do fund intelligence augmentation research, please consider whether to keep it secret, if feasible.

  1. ^

Help us gather a group of people (possibly some seeded from Limicon?) who each have a unique bag of tricks for altering their own or others' consciousness.

Have them try all their methods on each other in the ways they think will cultivate wholesome increases in consciousness and insightful information-processing.

Ask this unprecedented group-consciousness-dynamic for its unique take on how we can increase humanity's coordination by 1 skill level, avert the AI/meta-crisis, or use your resources to help with the same.
