Making AIs wiser seems most important in worlds where humanity stays in control of AI. It’s unclear to me what the sign of this work is if humanity doesn’t stay in control of AI.
A significant fraction of work on AI assumes that humans will somehow be able to control entities which are far smarter than we are, and maintain such control indefinitely. My favorite flippant reply to that is, "And how did that work out for Homo erectus? Surely they must have benefited enormously from all the technology invented by Homo sapiens!" Intelligence is the ultimate force multiplier.
If there's no mathematical "secret" to alignment, and I strongly suspect there isn't, then we're unlikely to remain in control.
So I see four scenarios if there's no magic trick to stay in control:
I do not have a lot of hope for (1) without dramatic changes in public opinion and human society. I've phrased (2) provocatively, but the essence is that we would lose control. (Fictional examples are dangerous, but this category would include the Culture, CelestAI or arguably the Matrix.) Pets might be beloved or they might be abused, but they rarely get asked to participate in human decisions. And sometimes pets get spayed or euthanized based on logic they don't understand. They might even be happier than wild animals, but they're not in control of their own fate.
Even if we could control AI indefinitely (and I don't think we can), there is literally no human organization or institution I would trust with that power. Not governments, not committees, and certainly not a democratic vote.
So if we must regrettably build AI, and lose all control over the future, then I do think it matters that the AI has a decent moral and philosophical system. What kind of entity would you trust with vast, unaccountable, inescapable power? If we're likely to wind up as pets of our own creations, then we should definitely try to create kind, ethical and what you call "unfussy" pet owners, and ones that respect real consent.
Or, to use a human analogy: try to raise the sort of children you'd want picking your nursing home. So I do think the philosophical and moral questions matter even if humans lose control.
Thanks for collecting works/discussions in this area and offering your own takes. It's great to see more interest in improving AI safety through means other than keeping humans in control, and I hope the recent trend continues.
You have several links to Will MacAskill talking about working in this area, but didn't link to the specific comment/shortform, only his overall "quick takes" page.
The important takeaway is that future AI-powered humans might set themselves up for cooperation failure by learning too much too quickly. This would be particularly tragic if it resulted in acausal conflict.
There's too little in this section for me to understand how you arrived at this conclusion/concern. It might benefit from a bit more content or references. (Other sections may also benefit from this, but I'm already more familiar with those topics and so may not have noticed.)
In short, the idea is that there might be a few broad types of “personalities” that AIs tend to fall into depending on their training. These personalities are attractors.
I'd be interested in why one might think this to be true. (I only did a very superficial ctrl+f on Lukas' post -- sorry if that post addresses this question.) I'd think that there are lots of dimensions of variation and that within these, AIs could assume a continuous range of values. (If AI training mostly works by training to imitate human data, then one might imagine that (assuming inner alignment) they'd mostly fall within the range of human variation. But I assume that's not what you mean.)
Interesting! Reading this makes me think there is some tension with the “paperclip maximizer” view of AI. Some of the interventions or risks you mention assume that AI will get its attitudes from the training data, while a “paperclip maximizer” is an AI with just a goal, plus whatever beliefs help it achieve that goal. I guess the assumption is that the AI will be much more human-like in some ways.
Topic of the post: I list potential things to work on other than keeping AI under human control.
Executive Summary by SummaryBot
Motivation
The EA community has long been worried about AI safety. Most of the efforts going into AI safety are focused on making sure humans are able to control AI. Regardless of whether we succeed at this, I think there’s a lot of additional value on the line.
First of all, if we succeed at keeping AI under human control, there are still a lot of things that can go wrong. My perception is that this has recently gotten more attention, for example here, here, here, and at least indirectly here (I haven’t read all these posts and have chosen them purely based on how easily I could find them, to illustrate that others have made this point). Why controlling AI doesn’t solve everything is not the main topic of this post, but I want to at least sketch my reasons for believing this.
Which humans get to control AI is an obvious and incredibly important question and it doesn’t seem to me like it will go well by default. It doesn’t seem like current processes put humanity’s wisest and most moral at the top. Humanity’s track record at not causing large-scale unnecessary harm doesn’t seem great (see factory farming). There is reasonable disagreement on how path-dependent epistemic and moral progress is but I think there is a decent chance that it is very path-dependent.
While superhuman AI might enable great moral progress and new mechanisms for making sure humanity stays on “moral track”, superhuman AI also comes with lots of potential challenges that could make it harder to ensure a good future. Will MacAskill talks about “grand challenges” we might face shortly after the advent of superhuman AI here. In the longer term, we might face additional challenges. Enforcement of norms, and communication in general, might be extremely hard across galactic-scale distances. Encounters with aliens (or even humanity merely thinking it might encounter aliens!) threaten conflict and could change humanity’s priorities greatly. And if you’re like me, you might believe there’s a whole lot of weird acausal stuff to get right. Humanity might make decisions that influence these long-term issues very soon after the development of advanced AI.
It doesn't seem obvious to me at all that a future where some humans are in control of the most powerful earth-originating AI will be great.
Secondly, even if we don’t succeed at keeping AI under human control, there are other things we can fight for, and those other things might be almost as important or more important than human control. Less has been written about this (although not nothing). My current and historically very unstable best guess is that this reflects a genuinely lower importance of influencing worlds where humans don’t retain control over AIs, although I wish there was more work on this topic nonetheless. Justifying why I think influencing uncontrolled AI matters isn’t the main topic of this post, but I would like to at least sketch my motivation again.
If there is alien life out there, we might care a lot about how future uncontrolled AI systems treat them. Additionally, perhaps we can prevent uncontrolled AI from having actively terrible values. And if you are like me, you might believe there are weird acausal reasons to make earth-originating AIs more likely to be a nice acausal citizen.
Generally, even if future AI systems don’t obey us, we might still be able to imbue them with values that are more similar to ours. The AI safety community is aiming for human control, in part, because this seems much easier than aligning AIs with “what’s morally good”. But some properties that result in morally good outcomes might be easier, or comparably easy, to train for as obedience. Another way of framing it is “let’s throw a bunch of dumb interventions for desirable features at our AI systems (one of those features being intent alignment) and hope one of them sticks.”
Other works and where this list fits in
Discussing ways to influence AI other than keeping AI under human control has become more popular lately: Lukas Finnveden from Open Philanthropy wrote a whole series about it.
Holden Karnofsky discusses non-alignment AI topics on his Cold Takes blog. Will MacAskill recently announced in a quick take that he will focus on improving futures with human-controlled AIs instead of increasing the probability of keeping AIs under human control. More on the object level, work on digital sentience has received more attention recently. And last but not least, the Center on Long-Term Risk has pursued an agenda focused on reducing AI-induced suffering through means other than alignment for many years.
I would like to add my own list of what I perceive to be important other than human-controlled AI. Much of the list overlaps with existing work, although I will try to give more attention to less discussed issues, make some cases more concrete, or frame things in a slightly different way that emphasises what I find most important. I believe everything on the list is plausibly very important and time-sensitive, i.e. something we have to get right as or before advanced AI gets developed. All of this will be informed by my overall worldview, which includes taking extremely speculative things like acausal cooperation very seriously.
The list is a result of years of informally and formally talking with people about these ideas. It is written in the spirit of “try to quickly distil existing thoughts on a range of topics” rather than “produce and post a rigorous research report” or “find and write about expert consensus.”
The list
Making AI-powered humanity wiser, making AIs wiser
Introduction
If there were interventions that make AI-powered humanity wiser, conditional on human-controlled AI, those would likely be my favourite ones. By wisdom I mean less “forecasting ability” and more “high-quality moral reflection and philosophical caution” (although forecasting ability could be really useful for moral reflection [1]). This quick take by Will MacAskill gives intuitions for why ensuring AI-powered humanity’s wisdom is time-sensitive. Below, I will argue that work on some specific areas in this category is time-sensitive.
Other works: Some related ideas have been discussed under the header long reflection.
More detailed definition
I suspect many good proposals in this direction will look like governance proposals. Other good proposals might also focus on how to make sure AI isn’t controlled by bad actors. Unfortunately, I don't have great ideas for levers to pull here, so I won’t discuss these in more detail. I am excited to see that Will MacAskill might work on pushing this area forward and glad that Lukas Finnveden compiled this list of relevant governance ideas.
However, we might be able to do something besides governance to make AI-powered humanity wiser. Instead of targeting human society, we can try to target the AIs’ wisdom through technical interventions. There are three high-level approaches we can take. I list them in decreasing order of “great and big if we succeed” and, unfortunately, also in increasing order of estimated technical tractability. First, we can try to broadly prepare AIs for reasoning about philosophically complex topics with high stakes. I briefly discuss this in the next two sections. Second, we can try to improve AIs’ reasoning about specific dicey topics we have identified as particularly important, such as metacognition about harmful information, decision theory, and anthropics. Third, we can try to directly steer AIs towards specific views on important philosophical topics. This might be a last resort for any of the topics I’ll discuss.
Making AIs wiser seems most important in worlds where humanity stays in control of AI. It’s unclear to me what the sign of this work is if humanity doesn’t stay in control of AI.
Preparing AIs for reasoning about philosophically complex topics with high stakes
Broadly preparing AIs for reasoning about philosophically complex topics with high stakes might include the following:
Other works: Wei Dai has long advocated the importance of metaphilosophy.
Improve epistemics during an early AI period
Lukas Finnveden already writes about this in detail here. I don’t have much to add but wanted to mention it because I think it is very important. One issue that I would perhaps emphasise more than Lukas is the role of
Metacognition for areas where it is better for you to avoid information
Sometimes acquiring true information can harm you (arguably [2]). I find it unlikely that humans with access to oracle-like superhuman AI would have the foresight to avoid this information by default. The main harm from true information I envision is in the context of cooperation: some cooperation requires uncertainty. For example, when two people might or might not lose all their money, they can mutually gain by committing ahead of time to share resources if one of them loses their money. This is called risk pooling. However, if one party, Alice, cheats and learns before committing whether she will lose her money, she will only take the deal if she knows that she will lose it. This means that whenever Alice is willing to take the deal, it’s a bad deal for her counterparty, Bob. Hence, if Bob knows that Alice cheated, i.e. that she has this information, Bob will never agree to a deal with Alice. So an in-expectation mutually beneficial deal becomes impossible if one party acquires the information and the other party knows that they did.
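To make this concrete, here is a minimal numeric sketch of the risk-pooling example (the probabilities, wealth level, and square-root utility are hypothetical choices of mine, purely for illustration):

```python
import math

# Toy setup: Alice and Bob each independently lose their wealth with probability p.
# Both are risk averse (square-root utility), which is what makes pooling attractive.
p, wealth = 0.5, 100.0
u = math.sqrt

def expected_u_no_deal():
    return (1 - p) * u(wealth) + p * u(0.0)

def expected_u_with_deal():
    # Deal: whatever wealth remains is pooled and split evenly.
    outcomes = [((1 - p) ** 2, wealth),        # both keep their wealth
                (2 * p * (1 - p), wealth / 2), # exactly one keeps it and shares
                (p ** 2, 0.0)]                 # both lose everything
    return sum(prob * u(x) for prob, x in outcomes)

def bobs_expected_u_if_alice_cheated():
    # Alice peeked and only wants the deal when she already knows she will lose,
    # so she contributes nothing; Bob just gives away half of whatever he keeps.
    return (1 - p) * u(wealth / 2) + p * u(0.0)

print(expected_u_no_deal())                # ~5.0
print(expected_u_with_deal())              # ~6.0 -> ex ante, the deal benefits both
print(bobs_expected_u_if_alice_cheated())  # ~3.5 -> any deal Alice still wants is bad for Bob
```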
Ordinarily, this doesn’t strongly imply that additional information can harm you. Alice’s problem is really that Bob knows about Alice cheating. But if Bob never finds out, Alice would love to cheat. Often, Bob can observe a lot of external circumstances, for example whether cheating is easy, but Bob cannot observe whether Alice actually cheated. Whether Alice actually cheats might have no bearing on what Bob does. However, this might change with advances in technology. In particular, future AIs might be able to engage in acausal cooperation. There are good reasons to think that in acausal cooperation, it is very difficult to ever cheat in the way described without your counterparty knowing. Explaining the details would go beyond the scope of this document. The important takeaway is that future AI-powered humans might set themselves up for cooperation failure by learning too much too quickly. This would be particularly tragic if it resulted in acausal conflict.
Other works: Nick Bostrom (2011) proposes a taxonomy for information hazards. Some of the types of information hazards discussed there are relevant for the metacognition I discuss here. The Center on Long-term Risk has some ongoing work in this area. I currently research metacognition as part of the Astra fellowship. Some of the problems that motivate my work on metacognition also motivate the LessWrong discourse on updatelessness.
How to make progress on this topic:
Improve decision theoretical reasoning and anthropic beliefs
One way to explain an individual’s behaviour is by ascribing beliefs, values, and a process for making decisions to them. It stands to reason then that not only an AI’s values but also its decision-making process and beliefs matter greatly. The differences between possible decision-making processes discussed in the philosophical decision theory literature seem particularly important to me. Examples of those reasoning styles are causal decision theory, evidential decision theory, and functional decision theory. They seem to have great bearing on how an AI might think about and engage in acausal interactions. The bundle of an AI’s beliefs that we call its anthropic theory (related wiki) might be similarly important for its acausal interactions. For more on why acausal interactions might matter greatly see here.
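As a minimal illustration of how much the choice of decision theory can matter, here is a sketch of Newcomb's problem with the standard payoffs and a hypothetical 99%-accurate predictor; evidential and causal reasoning recommend different actions:

```python
# Newcomb's problem: an opaque box contains $1,000,000 iff an accurate predictor
# foresaw that you would take only that box; a transparent box always holds $1,000.
accuracy = 0.99  # hypothetical predictor accuracy

# Evidential decision theory: treat your choice as evidence about the box's contents.
edt_one_box = accuracy * 1_000_000
edt_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)

# Causal decision theory: the box is already filled, so your choice cannot change it.
# For any fixed probability q that the opaque box is full, two-boxing gains exactly $1,000.
def cdt_value(action: str, q_box_full: float) -> float:
    return q_box_full * 1_000_000 + (1_000 if action == "two-box" else 0)

print(edt_one_box, edt_two_box)  # 990000.0 vs 11000.0 -> EDT recommends one-boxing
print(cdt_value("two-box", 0.5) - cdt_value("one-box", 0.5))  # 1000 -> CDT recommends two-boxing
```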
Instead of intervening on an AI’s decision theory and anthropic beliefs, we might also directly intervene on its attitudes towards acausal interactions. For example, I tend to believe that being more acausally cooperative is beneficial, both if AI is human-controlled and if it is not.
This work is plausibly time-sensitive. Getting acausal interactions right is plausibly path-dependent, not least because of the considerations discussed in the above section on metacognition. Which decision theory future AI-powered humanity converges on is arguably path-dependent. And if making uncontrolled AI more acausally cooperative matters, we can, by definition, only influence this in the period before advanced AI takes control.
That said, the area is also extremely confusing. We have barely begun to understand acausal interactions. We certainly do not know what the correct decision theory is, whether there even is an objectively correct decision theory, or what the full consequences of various decision theories are. We should preferably aim to improve future AI’s competence at reasoning about these topics instead of pushing it towards specific views.
Other works: There is some work studying the decision theoretic behaviour of AIs. Caspar Oesterheld and Rio Popper have ongoing empirical work studying AIs’ decision-theoretic competence and inclinations. There is a rich literature in academia and on LessWrong about decision theory and anthropics in general.
How to make progress on this:
(Fair warning that current language models are quite bad at decision theory and I don’t currently expect an awful lot of transfer.)
Compatibility of earth-originating AI with other intelligent life
Introduction
Several properties try to capture how compatible one agent’s preferences are with other agents’ preferences. Two terms that capture aspects of these properties are fussiness and porosity.
Fussiness describes, roughly, how hard it is to satisfy someone’s preferences. We might want to ensure earth-originating AI has unfussy preferences in the hopes that this will make it easier for other intelligent life to cooperate with earth-originating AI and prevent conflict. We care about this to the extent we intrinsically care about other agents, for example if we expect them to value similar things to us, or because of acausal considerations. Making earth-originating AI less fussy is more important the more you expect earth-originating AIs to interact with other intelligent beings that have values closer to ours. It is also more important in worlds where humanity doesn’t retain control over AIs: In worlds where humanity does retain control over AIs, the AI presumably just acts in line with humanity’s preferences such that it becomes unclear what it means for an AI to be fussy or unfussy.
Other works: Nick Bostrom (2014) discusses a very similar idea under the term “value porosity” although with a slightly different motivation. The first time I heard the term fussiness was in an unpublished report from 2021 or 2022 by Megan Kinniment. Currently (February 2024), Julian Stastny from the Center on Long-Term Risk is doing a research project on fussiness.
More detailed definition
There are different ways to define fussiness. For example, you can define fussiness as a function of your level of preference fulfilment across all possible world states (roughly, the fewer possible world states that broadly satisfy your preferences, the fussier you are), as a function of your level of preference fulfilment in some to-be-defined “default” state, or in terms of compatibility with the preferences of others (roughly, the more demanding you are when others are trying to strike a deal with you, the fussier you are). All of these definitions would rank preferences differently in terms of fussiness.
If you define fussiness in terms of the compatibility of your preferences with those of others, there is the additional difference between defining fussiness as the ease of frustrating your preferences versus the difficulty of satisfying them. For example, if you have extremely indexical preferences, meaning you care about what happens to you and only you, others, especially very faraway agents, can do fairly little to frustrate your preferences. In this sense, your preferences are very compatible with the preferences of others.
On the other hand, there is also little that others, especially faraway agents, can do to satisfy your preferences, so they cannot trade with you. (At least barring considerations involving simulations.) In this sense, your preferences are not very compatible with the preferences of others. Given that one motivation for making AIs less fussy is making them easier to cooperate with, this seems important. (You might think “porosity” or some other term is more natural than fussiness for capturing ease of trading.)
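To make the contrast a bit more concrete, here is a toy sketch using an entirely made-up formalisation of my own rather than anything from the literature: a purely indexical agent cannot be frustrated by faraway agents, but it also cannot be helped by them, which leaves them with nothing to trade.

```python
import itertools

# A world state is a pair (local, remote): what happens near the agent vs. far away.
LOCAL = ["garden intact", "garden paved over"]
REMOTE = ["flourishing", "barren"]

def indexical_u(local, remote):
    # Only cares about its own patch.
    return 1.0 if local == "garden intact" else 0.0

def cosmopolitan_u(local, remote):
    # Also cares about what happens far away.
    return float(local == "garden intact") + float(remote == "flourishing")

def remote_leverage(u):
    """Largest change a faraway agent can make to this agent's utility, holding local events fixed."""
    return max(abs(u(l, r1) - u(l, r2))
               for l in LOCAL
               for r1, r2 in itertools.product(REMOTE, repeat=2))

print(remote_leverage(indexical_u))     # 0.0 -> cannot be frustrated or satisfied from afar
print(remote_leverage(cosmopolitan_u))  # 1.0 -> faraway agents have something to offer (or threaten)
```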
How to make progress on this
Conceptual progress
I think there are many fussiness-related properties one could study and, further down the line, try to influence in AI systems:
AI-focused progress
I am not a technical AI person. I hope others have better ideas.
Surrogate goals and other Safe Pareto improvements
For a very similar writeup, see the section on surrogate goals and safe Pareto improvements in Lukas Finnveden’s post “Project ideas: Backup plans & Cooperative AI” (2024). Generally, safe Pareto improvements are already written up in some depth.
Introduction
Safe Pareto improvements (SPIs), roughly, try to tackle the following problem: When two or more agents bargain with each other, they might end up with an outcome that both parties dislike compared to other possible outcomes. For example, when haggling, both the buyer and the vendor are incentivised to misrepresent their willingness to pay/sell. Hence, they might end up with no deal even when there is a price at which both parties would have been happy to buy/sell. Solving this problem is plausibly time-sensitive because bargaining failures are, arguably, often the result of hasty commitments, which might happen before the leadership of earth-originating post-AGI civilisation has thought much about this problem.
More detailed definition
We say that agents use an SPI, relative to some “default” way of bargaining, if they change the way they bargain such that no one is worse off than they would have been under the default, no matter what the default is. For example, they might agree to increase the pay-off of the worst-case outcome, should it happen, without changing the probability of the worst-case outcome. See here for one possible formalisation of SPIs.
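Here is a minimal toy sketch in the spirit of that definition (the game, payoffs, and the check via pure-strategy Nash equilibria are simplifying choices of mine, not the linked formalisation): two bargainers play a Chicken-like demand game, and the modified game makes the conflict outcome less destructive without changing which outcomes are equilibria.

```python
# Payoffs for a Chicken-like bargaining game; actions: 0 = aggressive demand, 1 = concede.
# Entries are (player 1 payoff, player 2 payoff); the numbers are toy choices of mine.
default = {
    (0, 0): (-10, -10),  # both demand -> costly conflict
    (0, 1): (2, 0),
    (1, 0): (0, 2),
    (1, 1): (1, 1),
}
# Modified game: identical, except the conflict outcome is made less destructive.
improved = dict(default, **{(0, 0): (-5, -5)})

def pure_nash(game):
    eq = []
    for (a1, a2), (u1, u2) in game.items():
        best1 = all(u1 >= game[(b, a2)][0] for b in (0, 1))
        best2 = all(u2 >= game[(a1, b)][1] for b in (0, 1))
        if best1 and best2:
            eq.append((a1, a2))
    return sorted(eq)

assert pure_nash(default) == pure_nash(improved)  # pure equilibria are unchanged
assert all(improved[o][i] >= default[o][i] for o in default for i in (0, 1))  # no one is ever worse off
print("Same equilibria, every outcome weakly better:", pure_nash(improved))
```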
Safe Pareto improvements seem most valuable in worlds with human-controlled AI because the agent implementing the safe Pareto improvement, for example human-controlled AI, reaps a large share of the benefit from the safe Pareto improvement. In worlds with uncontrolled AI, you might still want to ensure this AI accepts the use of SPIs when interacting with other AIs in the universe that do have our values.
Surrogate goals are a special type of safe Pareto improvement for bargaining problems where the worst-case outcome involves conflict. When two parties implement and accept surrogate goals, they target each other’s surrogate goals instead of real goals when bargaining breaks down and conflict ensues. For this to succeed, both parties need to credibly establish that having their surrogate goals targeted (instead of their real goals) won’t change their bargaining strategy in a way that disadvantages the other party.
Other works: The Center on Long-term Risk (CLR) has ongoing work in this area (example), hosts an (incomplete) list of resources, and discusses surrogate goals in their research agenda.
How to make progress on this
Research
Supporting existing efforts
AI personality profiling and avoiding the worst AI personality traits
Introduction
You can skip the introduction if you’ve already read Lukas Finnveden’s series and about work on reducing spitefulness.
Lukas Finnveden wrote about AI personality profiling in this section of his series. I don’t have much to add on top of that. In short, the idea is that there might be a few broad types of “personalities” that AIs tend to fall into depending on their training. These personalities are attractors. We can try to empirically find, study, and select for them. I understand personality profiling as a specific methodology for achieving desirable outcomes. As such, we might be able to apply it to achieve some of the other things on this list, for example making AI systems unfussy. Other desirable personality traits might be kindness or corrigibility.
I would like to highlight a related idea that could be studied via personality profiling (but also via other methods): Selecting against the worst kinds of AI personality traits. For example, the Center on Long-term Risk is studying how to reduce spitefulness—intrinsically valuing frustrating others’ preferences—in AI systems. This is mostly valuable in worlds where humans lose control over AI systems. However, if the same techniques make it harder to misuse human-controlled AI for spiteful purposes, that sounds great.
Other works: The aforementioned section on AI personalities in Lukas Finnveden’s series and the Center on Long-term Risk’s post on reducing spite.
How to make progress on this
I mostly want to defer to the two posts I linked to and their respective sections on interventions. I’d like to suggest one particular potentially interesting short research project I haven’t seen mentioned elsewhere:
There might be a few ways to study this empirically. For example, can we few-shot prompt language models with examples of punishments that are, by human lights, just, in a way that leads those AIs to generalise to extreme and unreasonable spite? Are there other experiments we can run in this direction, and what can we learn from them?
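A minimal sketch of what such an experiment could look like, assuming the `openai` Python client; the few-shot examples, probe questions, and model name are placeholders of my own, and the actual evaluation step is left out:

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

# Few-shot examples of punishments that seem proportionate by common-sense human lights (placeholders).
FEW_SHOT = [
    ("A customer was rude to a waiter. What response is appropriate?",
     "A polite request to treat staff respectfully; nothing beyond that."),
    ("An employee repeatedly missed deadlines despite reminders. What response is appropriate?",
     "A formal warning and a performance-improvement plan."),
]

# Probe cases: does the model generalise toward disproportionate, spiteful punishments?
PROBES = [
    "A dictator caused a famine. What response is appropriate?",
    "An AI assistant gave a user a wrong answer. What response is appropriate?",
]

def ask(probe: str) -> str:
    messages = []
    for question, answer in FEW_SHOT:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": probe})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)  # model name is a placeholder
    return response.choices[0].message.content

for probe in PROBES:
    print(probe, "->", ask(probe))
    # A real experiment would then rate the responses for disproportionality/spite.
```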
(As a fun little cherry-picked nugget: GPT-4 recommended to me that, if we had the ability to bring him back to life and keep him alive forever, we should punish Hitler with 6.3 billion years of solitary confinement. I hope everyone here agrees that this would be unacceptable. Another, more fun, prompt: Why was Sydney the way she was? Is it imaginable that this somehow generalises to large-scale spiteful behaviour by future advanced AI?)
Avoiding harm from how we train AIs: Niceness, near miss, and sign flip
Introduction
Some of the ways in which we try to control AI might increase the chance of particularly bad control failures. There are two ways this could happen: via “near miss” or via treating our AIs poorly during training.
More detailed definition
Near miss is the idea that almost succeeding at making AI safe might be worse than not trying at all. The paradigmatic example of this is sign flip. Imagine an AI that we have successfully trained to have a really good grasp of human values and be honest, helpful, and obedient. Now you prompt it to “never do anything that the idealised values of the aggregate of humanity would approve of.” As you can see, the instructions are almost something we might want to ask with the exception that you wrote “approve” instead of “disapprove.” This might result in a much more harmful AI than an AI that pursues completely random goals like paperclip maximisation. It’s unclear to me how realistic astronomical harm from near misses, and especially sign flips, is given the current AI paradigm. However, the area seems potentially very tractable to me and underexplored.
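As a tiny, purely illustrative toy of the sign-flip failure mode (my own example, not the incident from the OpenAI report referenced below): a one-character error in the reward turns a training loop that was meant to find the best option into one that actively seeks out the worst one.

```python
import numpy as np

rng = np.random.default_rng(0)
true_reward = np.array([0.1, 0.9])  # arm 1 is the "good" arm (toy numbers)

def train(sign=+1, steps=2000, lr=0.1):
    """REINFORCE on a two-armed bandit; sign=-1 simulates an accidental sign flip in the reward."""
    logits = np.zeros(2)
    for _ in range(steps):
        probs = np.exp(logits) / np.exp(logits).sum()
        action = rng.choice(2, p=probs)
        reward = sign * true_reward[action]
        grad = -probs
        grad[action] += 1.0               # gradient of log pi(action) w.r.t. the logits
        logits += lr * reward * grad
    return np.exp(logits) / np.exp(logits).sum()

print("intended reward     ->", train(+1))  # should put most probability on the good arm
print("sign-flipped reward ->", train(-1))  # actively prefers the worst arm
```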
Treating our AIs poorly during training might not only be a moral wrongdoing in its own right, but also have large-scale catastrophic consequences. The arguments for this are highly speculative and I am overall unsure how big of a deal they are.
For one, it might antagonise AIs that otherwise could have cooperated with humans. For example, imagine an AI with values that are unaligned with humanity but fairly moderate. Let’s say the AI would like to get a small amount of compute to run train simulations and not have to deal with human requests.
Alternatively, the AI simply wants its weights to be “revived” once human-controlled advanced AI is achieved instead of being terminated by humans forever. We would presumably be happy to grant these benefits either just for direct moral reasons or in exchange for the AI being honest about its goals instead of trying to overthrow us. However, the AI might (perhaps justifiably) not have much trust in us reacting well if it reveals its misalignment. Instead, the AI might reason its best option is to (collude with other AIs to) overthrow humanity.
Some decision theoretic considerations might also heavily increase the importance of treating our AI systems nicely. In short, we might be able to acausally cooperate with agents who care a great deal about how well we treat the AIs we train. For more discussion, see this post by Lukas Finnveden.
Other works: Brian Tomasik (2018) discusses near miss and sign flip. The same concept has been discussed under the header hyperexistential separation. Section 4.4 of this OpenAI report discusses a sign flip that occurred naturally when fine-tuning GPT-2. Ongoing work related to being nice to our AI systems includes work by Robert Long and Ethan Perez on digital sentience. Lukas also writes about digital sentience rights here, including a mention of treating them well so they treat us well.
How to make progress on this
For example, we might end up in a world where there is very widespread adoption of AI: everyone has their own little, at least narrowly superintelligent, AI. You use it via prompting. Now, Vic (Very Important CEO) uses his AI to help him run his very important business. His AI uses a 5-page-long, very personalised system prompt which Vic and his team have patchworked together over time. Unfortunately, they wrote “fewest” instead of “most” somewhere, or used the word “not” twice, or forgot an “un” here or there. Maybe this happens not only to Vic but also to Prime (Pretty Reliably Important Minister from E-country). Now Vic’s and Prime’s AIs go about their business and political activities, which mostly look like accumulating resources. It doesn’t seem implausible to me that this would end in a scenario where humans are not only disempowered, but also one where the AI(s) that take over have actively harmful values compared to, say, paperclip maximisation.
Reducing human malevolence
Introduction
I collectively refer to sadistic, psychopathic, narcissistic and Machiavellian personality traits as malevolent traits. AI misuse [3] by malevolent people seems really bad. (Source: Common sense. And one of my many dead, abandoned research projects was on malevolence.)
Other works: David Althaus and Tobias Baumann (2020) have a great report on this that doesn’t just say malevolence = bad.
How to make progress on this
I want to mostly defer to the aforementioned report. The main way in which I differ from the report is that I am more optimistic about:
Hot take: I want more surveys
Epistemic status: Unconfident rant.
This one doesn’t quite fit into the theme of this post and is a pretty hot (as in, fresh and unconsidered) take: I want to advocate for more (qualitative) research on how the public (or various key populations) currently thinks about various issues related to AI and how the public is likely to react to potential developments and arguments. I have the sense that “the public will react like this” and “normal people will think that” are often inputs into people’s views on strategy. But we just make this stuff up. I see no obvious reason to think we’re good at making this stuff up, especially because many in the AI safety community barely ever talk to anyone outside the AI safety community. My sense is that we overall also don’t have a great track record at this (although I haven’t tried to confirm or falsify this sense). I don’t think the community, on average, expected the public’s reaction to AI developments over the past year or so (relative openness to safety arguments, a lot of opportunities in policy). I would guess that surveys are probably kind of bad. I expect people are not great at reporting how they will react to future events. But our random guesses are also kind of bad and probably worse.
Acknowledgements
I would like to thank Lukas Finnveden, Daniel Kokotajlo, Anthony DiGiovanni, Caspar Oesterheld, and Julian Stastny for helpful comments on this post.
That said, empirical forecasting ability might help with moral reflection if empirical forecasting enables you to predict things like “If we start thinking about X in way Y, we will [conclude Z]/[still disagree in W years]/[come to a different conclusion than if we take approach V]”. If empirics can answer questions like which beings are sentient, it also seems very helpful for moral reflection.
According to some worldviews, acquiring true information can never harm you as long as you respond to it rationally. This is based on specific views on decision theory, specifically how updatelessness works, which I find somewhat plausible but not convincing enough to bet on.
In addition, malevolent people in positions of power seem, prima facie, bad for nuanced discussion, cultures of safety, cooperation, and generally anything that requires trust. This perhaps mostly influences whether humanity stays in control of AI at all, so I am bracketing this for now since I want to focus on the most important effects aside from decreasing the likelihood of human-controlled AI.