I decided to make a summary of non-obvious insights from my work so far. (A lot of this work is not original but borrowed from others.)
Scroll to the last section of the page (software and society) to see what I'm most keen on working on.
Information and society
How?
The falling cost of computing hardware has reduced the cost-to-benefit ratio of cyberhacking and espionage. Individuals and organisations will find it harder to keep secrets in the coming world.
Both software and hardware methods fail to protect a computer against corporations and nation states. The only defences are physical - smash hard disks, switch off electricity, meet in person, etc.
Espionage is now within the reach of individuals operating independently of any corporation or nation state. Examples: Ulrich Larsen, Edward Snowden.
Information, once acquired, is rarely lost or destroyed. Every cyberhack or intelligence operation increases the number of actors (N) in the world who have access to that information. Since N goes up but never down, on a long enough timescale N grows large (say, to 50), at which point the information is all but guaranteed to reach the public eye.
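As a toy model (the per-actor leak rate p is my own illustrative assumption, not a number from any source): if each of the N actors independently leaks the information with probability p in a given year, then

```latex
P(\text{still secret after } t \text{ years}) = (1 - p)^{N t}
```

Even a small per-actor leak rate collapses secrecy once N is large: with N = 50 and p = 0.05, the information has only a (0.95)^50 ≈ 8% chance of staying secret for a single year.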
Individuals and organisations that operate in public get various benefits, such as proving trustworthiness and receiving better feedback.
The difference between 99% surveillance and 100% surveillance is very large.
100% surveillance may enable dictatorships with much longer half-lives. Biologically implanted cameras and mics may be one pathway to achieve this.
99% surveillance seems like the more likely scenario. Drone, airplane and satellite footage, smartphones connected to the internet, etc. all achieve 99% surveillance. Especially motivated actors such as political dissidents will likely still incur the costs (social, psychological, financial) required to avoid being surveilled.
Consequences
Increased surveillance may fix law and order in many countries.
Stable economic growth requires law and order; see Acemoglu's work, for example.
Lack of law and order has multigenerational psychological effects. Physical safety sits near the base of Maslow's hierarchy and is more important to most people than an abundance of consumer goods.
Economic metrics such as GDP are a weak proxy for human happiness; including political metrics such as the stability of law and order would make the proxy better.
I'm still unsure who will get to define what "crime" is in the new equilibrium. Which morality becomes the majority morality? It is possible that each nuclear-armed nation and its dependents ends up homogenising moral ideas among its members, yielding one universal morality per bloc. Liberalism and various religious moralities are top contenders.
Increased surveillance of elites may reduce the freedom of elites and establish a more direct democracy. Both corporations and governments will be more controllable by the general population.
This has similarities to town life versus city life. The internet is homogenising global culture and morality by imposing town-like incentives across the globe.
Truth as a moral virtue is likely to thrive in a highly transparent society as long as multiple actors can defend themselves long enough to persuade others. Nuclear weapons allow this at the national level between nuclear-armed countries. Guns allow this at the individual level in countries that tolerate guns.
Geopolitical power is built largely by maintaining lead time over competitors in various technologies. The leading manufacturer of any product earns export revenue from across the world; the second-leading manufacturer, with a product six months behind, gets zero revenue.
99% surveillance will significantly reduce lead times, but not eliminate them. Competent actors will likely still be able to remain on the leading edge and get the same geopolitical power they did before.
Replicating entire supply chains from scratch in competing nations might become possible with a smaller lag time.
For example: if an arms race starts in human genetic engineering, there will be a smaller difference in relative power between competing nations, as the entire biotech supply chain will be replicated within a few years in multiple nations.
A lot of important information does not reach the public eye, because people have incentives to hide it. I'll call this "social dark matter". See also: Duncan Sabien's article.
Some professions have higher access to social dark matter, such as psychologists, religious leaders, tech CEOs etc.
99% surveillance might forcibly bring a lot of social dark matter into the public eye.
Social dark matter is very useful for empathising with individuals and giving them useful advice. A society fails to make progress on an issue when consensus on it cannot be established in public, and social dark matter by definition is not in public view.
Establishing consensus allows dominant paradigms of thought to fall and new ones to replace them. Common knowledge is needed, not just widespread knowledge. See the Blue Eyes puzzle for an example of this.
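A minimal sketch of the Blue Eyes dynamic (standard puzzle setup; the code simply encodes the inductive argument, and the islander counts are illustrative):

```python
# Blue Eyes puzzle: everyone can SEE blue eyes (widespread knowledge),
# but nothing happens until an oracle publicly announces "at least one
# of you has blue eyes", creating COMMON knowledge and starting a clock.

def departure_night(num_blue: int) -> int:
    """Night on which all blue-eyed islanders leave, counted from the announcement."""
    night = 0
    while True:
        night += 1
        seen = num_blue - 1  # blue-eyed others each blue-eyed islander sees
        # Inductive rule: "if I were not blue-eyed, the `seen` islanders
        # I can see would have left on night `seen`". When that night
        # passes with no departure, each deduces their own eye colour
        # and leaves the next night.
        if night == seen + 1:
            return night

for k in (1, 2, 5, 100):
    print(f"{k} blue-eyed islanders all leave on night {departure_night(k)}")
```

For num_blue ≥ 2 the announcement adds no new first-order fact (everyone already sees blue eyes), yet it changes the outcome; that gap is exactly the difference between widespread and common knowledge.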
Technology
Most fields of science and technology get accelerated when someone invents a tool that allows data collection on a system at a cost and resolution that was previously not possible. Examples: the electron microscope, the optical telescope, the cyclotron, phosphorescent DNA tagging, etc.
The cost of electrical components such as transistors, actuators, inductors, etc. has gone down, which will tend to accelerate all scientific fields, as it could lead to the invention of new data collection instruments.
Materials science is underrated, as it plays a significant role in the invention of data collection instruments.
Intelligence-enhancing technologies are worth paying special attention to, as a small differential in intelligence leads to a large differential in power of every kind - offensive and defensive, scientific, engineering, military and political.
Key intelligence-enhancing technologies: superintelligent AI, human genetic engineering, human brain-connectome mapping, cognitive-enhancing drugs, nanotechnology, ?
Research into superintelligent AI is already ongoing at full pace; AlexNet in 2012 was a key milestone. If built, this will in a sense be the last invention of human history, as the AI will then be faster than us at making new inventions.
I estimate a 15% probability of superintelligent AI being built by 2030. Scaling laws seem to work, but nobody knows why they work or how long they'll keep working.
Research into human genetic engineering has stalled due to a lack of consensus in academia on its political consequences. CRISPR, invented in 2012, was a key milestone. This pause is fragile, and powerful actors will be able to accelerate the field soon.
CRISPR may also enable human-animal hybrids, and the enhancement of human traits beyond just cognitive ones (IQ, memory, etc.).
Gene drives can cause the extinction or genetic modification of entire populations, not just individual members. This works better in species with short generation times (i.e. not humans).
Both human genetic engineering and gene drives have massive implications for warfare, economic growth, political structure of society etc.
Human brain simulation might be possible within 30 years, but no one really knows. The fruit fly connectome was mapped between 2017 and 2023, and neuroscientists are currently trying to understand its implications. Connectome data includes the connections between neurons but not the signals going through them.
Research into brain-computer interfaces is ongoing. I have not studied it deeply.
Research into nanotechnology seems to have slowed to a crawl. Unsure why. Fundamental breakthroughs are likely needed.
Research into cognitive-enhancing drugs is not something I've looked into much. Many such programs were run illegally in the 20th century, which may make it harder to obtain grants today. In general, we lack knowledge of biochemical pathways that directly affect the higher-level rational brain, rather than affecting the lower-level emotional brain and thereby the rational brain indirectly.
Extinction-related technologies are worth paying special attention to.
CRISPR, invented in 2012, may make it possible to produce bioweapons within the next 10 years, which could cause human extinction.
Gene drives may also cause significant population-level changes, which could affect food supply, the incidence of natural disease, etc. This could affect the human population significantly, but is unlikely to cause human extinction.
Superintelligent AI, if invented, could cause human extinction. My blind guess is that this has a 30% probability of occurring, conditional on superintelligent AI being invented.
The reduced cost of surveillance may have some influence on the nuclear balance of power, as all nations will get much better visibility into each other's deployed nuclear arsenals, manufacturing facilities and supply chains. This is unlikely to change the fundamental rules IMO, so the odds of human extinction are not significantly affected.
If you want to predict or influence societal structure far into the future, you should probably study the offense-defense balances inherent in technology.
Culture shapes laws
Law enforcement cannot enforce a law if most of the lawyers, policemen and general population of a region don't believe that law is moral. Eventually the law will be changed in favour of the new culture.
Incentives shape culture
Incentives don't easily change a person's values, but they may change a person's behaviour. Incentives do, however, create selection effects for people who already agree with the type of behaviour being rewarded, and those people become high-status in society. People who have not yet decided their values are more likely to copy and internalise the values of whoever is high-status.
Sometimes there is a clash between financial incentives and social incentives: people alter their behaviour to make money at the expense of the behaviour their social circle expects of them. This tends to make them lonely in the short term, but in the long term their social circle too is likely to emulate the same behaviour.
In the absence of incentives grinding culture into a specific form, culture progresses via mutation and remixing. Most new ideas are near neighbours of old ideas. Human brains are machines whose output depends on input. Fundamentally new ideas that don't depend on old ideas don't usually exist.
Affecting the distribution of popular ideas in collective attention affects the likely new ideas your society comes up with, even though you can't predict the new ideas themselves.
Many technologies in today's society seem a direct consequence of the cultural environment their inventors grew up in. For example, superintelligent AI research is ongoing today partly because Yudkowsky and other singularitarians increased the collective attention focussed on these ideas.
Morality is a key aspect of how culture transmits.
When two cultures clash, how much members of both cultures tolerate each other depends on the moral judgement they assign to the opposing culture. Your culture has won a person to it when it has shaped that person's morality.
More tolerant cultures spread under certain circumstances and less tolerant cultures spread under others. Often a highly self-replicating culture has both more and less tolerant versions of itself, so it can spread in both environments. (There's probably some link between these ideas and ideas around common knowledge and preference cascades which I haven't figured out yet.)
Competition for capital and attention exists at every level of societal organisation, not just between corporations and governments: between individuals, families, ethnic groups, etc. Competitions at different levels of structure affect each other. A nation in wartime competition to produce more steel than its opponent is also likely to force more competition between its citizen steel workers, for example.
If you want to influence society in any way, you have to at least be competitive enough to survive. What "competitive enough to survive" looks like depends on the situation.
Some technologies can be produced and used by small groups, whereas others can only be used by large groups.
For example: uranium centrifuges and solar PV modules require a large group of people to manufacture them, whereas guns and radio can be manufactured by a small group of people.
Tech that can only be produced by large groups fuels a lot of geopolitics. Nations and corporations try to become the first to build some tech and use it as a bargaining chip to export whatever morality holds that group together in the first place.
Deliberately choosing to build tech that can be produced and used by small groups has consequences for societal structure. The open source software movement is one example of this.
Software and society
I (Samuel) am most keen on building new governance structures via software.
Naval Ravikant says there are three types of leverage in society: labour, capital, and internet-copyable products such as books, videos and software.
Internet-copyable products are the newest and least competed for.
The internet can transmit incentives and culture. People can be paid over the internet. People can get social approval over the internet. People can be influenced by ideology over the internet.
Therefore new forms of governance can be built via software. Early examples: cryptocurrency, Twitter-influenced public policy.
Big Tech companies currently write the software that governs society, whether their execs are fully aware of it or not. Computer hardware is expensive and software developers are expensive; this has meant that only a company with a lot of capital could manage society's software and hardware stacks.
Hardware costs
Within the next 10-20 years, it will be possible to store every word spoken by every person on Earth on a home server affordable to a group of friends. If information and software get open sourced, it will be possible to build governance using open source software, rather than letting Big Tech alone govern society.
This is not true for video data, however. Video of every person on Earth is still too expensive to store on a home server. Big Tech may therefore retain some influence over how society is governed, by defining the rules of access for video storage.
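A rough back-of-envelope supports the text/video gap. All constants below are my own illustrative assumptions (~10,000 spoken words per person per day, ~1 byte per word after text compression, heavily compressed video at ~0.25 Mbit/s for 16 waking hours), not figures from any source:

```python
# Back-of-envelope: transcribed speech vs raw video storage, per year.
# Every constant here is a rough illustrative assumption.
PEOPLE = 8e9
WORDS_PER_DAY = 10_000                        # spoken words per person per day
BYTES_PER_WORD = 1                            # compressed text, ~1 byte/word
VIDEO_BYTES_PER_DAY = 16 * 3600 * 0.25e6 / 8  # 16 h/day at ~0.25 Mbit/s

text_py = WORDS_PER_DAY * BYTES_PER_WORD * 365   # bytes per person-year
video_py = VIDEO_BYTES_PER_DAY * 365

print(f"text,  per person-year:   {text_py / 1e6:.1f} MB")
print(f"video, per person-year:   {video_py / 1e9:.0f} GB")
print(f"text,  everyone, per year: {text_py * PEOPLE / 1e15:.0f} PB")
print(f"video, everyone, per year: {video_py * PEOPLE / 1e21:.1f} ZB")
```

On these assumptions, transcribed speech for everyone on Earth runs to tens of petabytes per year, which falling storage costs could plausibly bring within reach of a small well-funded group, while video is roughly five orders of magnitude larger (zettabytes per year).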
Software costs
Software is expensive to write because of complexity. AI + video data + cheap hardware might reduce the complexity of various popular applications.
Often software is complex because hardware is expensive and hence optimisation is needed. Cheap hardware for text-based data may make it possible to use less efficient but also less complex ways of writing software.
Search is the most popular application of the internet, be it searching for partners, employers, food or household products. Embedding search is a low-complexity way of solving search.
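A minimal sketch of embedding search (assuming the open-source sentence-transformers library; the model name, documents and query are placeholders I chose for illustration):

```python
# Minimal embedding search: embed documents and a query into vectors,
# then rank documents by cosine similarity to the query.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model

docs = [
    "plumber available for household repairs",
    "software engineer looking for remote work",
    "fresh vegetables delivered weekly",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["who can fix my kitchen sink?"], normalize_embeddings=True)
scores = doc_vecs @ query_vec[0]  # dot product = cosine similarity (vectors normalised)
for i in np.argsort(-scores):
    print(f"{scores[i]:.2f}  {docs[i]}")
```

The whole pipeline is one model call plus a dot product, which is what makes it low-complexity compared to traditional hand-built search infrastructure.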
Identity is a necessary application for software governance. Cheap video capture and storage may allow for decentralised identity: each person can just upload their own video in public.
A lot of public discussions (including on political topics) would be higher trust if they happened via video instead of text.