This doc is a mix of existing world models I have and holes in said models. I'm trying to fill some of these holes. The doc is not very well organised relative to how organised a doc I could produce if needed. Often the more time I spend on a doc, the shorter it gets. I'm hoping that happens here too.
I'm mostly going to study this stuff by myself. However, if you would like to help me by speeding up the process, please [contact me](../contact_me.md). If your attempt to help me answer these questions is in good faith, I will be grateful to you no matter how successful or unsuccessful it is.
*tldr* How do we safely navigate technological progress or personal growth in a world without privacy?
DISCLAIMER
It is difficult to predict the future without altering it. My writings may have unintended effects on the future. (I'd like more accurate likelihood estimates of these effects, both mean outcome and tail outcomes.)
- I am aware that simply by thinking of a question like "will some dictator implant microphones in everyone", I am personally increasing the probability that this ends up happening. Once I have thought something I'm unlikely to forget it, and will eventually say it to others. Eventually one of them may leak it to the internet and eventually the idea may reach the relevant politically powerful people who can implement it in real life. (LLM embedding search >> Google, don't underestimate it.)
- This is unfortunate, as my platonic ideal is to be able to think through various possible futures (alone, or with a group of research collaborators) without actually influencing the world, pick the best future, and then only start taking steps that push the world towards that future.
- However I'm still going to write publicly about certain topics as that's one of the best ways for someone in my situation to get feedback.
Topic: Which organisations are capable of keeping secrets in present and near future (10-20 years from now)? What are the consequences of this reduced secrecy?
Specific questions
- How easy is it for TSMC to backdoor all their chips so they can secretly capture private keys, for example?
- How many S&P500 companies have publicly available evidence of their key business knowledge being leaked to China? (Be it via hacking or espionage or voluntary disclosure by ex-employees etc)
- Is it possible to read WiFi IP packets using handmade radio?
- Is it technically possible to implant microphones in the human body? What about cameras?
Broader questions
- **Assuming no organisation can maintain significant lead time on any technology (and it will immediately get copied by orgs united by a different morality and culture), what are the implications for technological progress in the future?**
- There is an assumption embedded here, that no org can keep secrets. I'm unsure if it is true. Supposing it is true though, what are its implications?
- The most obvious real world example of this is US versus China, neither seems able to keep significant secrets from the other.
- However I want to figure out general principles here, and not spend too much time studying individual examples like Obama or Michael Hayden or whoever. Metaphorically speaking, I want to study the dynamics of a particular initial position of Chess960, not how Magnus Carlsen plays that particular initial position. This also connects to ideas on theories of history. Whether one should study game theory, sociology etc, versus the psychology of individual leaders, depends on which theory of history one subscribes to.
- How much time does it take to write code that understands metadata?
- Suppose all the world's computers were hacked and their data ended up in NSA datacentres (or their Chinese equivalent, which keeps getting renamed). Suppose all text-based formats are converted to plaintext, existing metadata the users may have left is preserved as is, and NSA appends metadata of the MAC, IP, timestamp, etc of capture.
- How much software developer time would be required to make sense of most of this metadata? This could be to answer individual queries on "suspicious" individuals or to analyse aggregate trends (such as societal responses to certain govt policies).
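To make the question concrete, here is a toy sketch of the smallest possible version of "making sense of metadata". The record schema and field names are my invention, not anything from a real system:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical capture records. The schema and field names are invented
# purely for illustration -- no real system is being described here.
records = [
    {"mac": "aa:bb:cc:00:00:01", "ip": "203.0.113.5", "ts": 1700000000, "path": "/home/u1/notes.txt"},
    {"mac": "aa:bb:cc:00:00:01", "ip": "203.0.113.5", "ts": 1700003600, "path": "/home/u1/diary.txt"},
    {"mac": "aa:bb:cc:00:00:02", "ip": "198.51.100.7", "ts": 1700090000, "path": "/tmp/draft.md"},
]

def daily_activity(records):
    """Count captured files per (ip, UTC date) -- a crude 'aggregate trend' query."""
    counts = Counter()
    for r in records:
        day = datetime.fromtimestamp(r["ts"], tz=timezone.utc).date().isoformat()
        counts[(r["ip"], day)] += 1
    return counts

print(daily_activity(records))
```

A real system would additionally need schema inference across thousands of file formats, entity resolution (which MACs and IPs belong to the same person), and so on, which is presumably where most of the developer time would go.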
- Will there be more espionage orgs operating independent of nation states or elites (major politicians, billionaires) in the future?
- Internet and information tech has reduced the financial cost of both journalism and espionage. Cameras, hard disks, internet bandwidth, smartphone processors are all much cheaper than 10 or 20 years ago.
- Independent journalism has clearly increased in many countries, most notably the US.
- Independent espionage has also increased, see The Mole (anti-North Korea) or Edward Snowden (anti-US), but it is less clear to me if these are isolated examples or the beginning of a trend.
- Theory (based on incentives and culture) says independent espionage should go up, I'm waiting on the empirical evidence though.
- Will inability of organisations to keep secrets force homogeneity in the behaviour of civilians, and punish people who behave in outlier ways? Will this reduce the rate of invention of technology in future?
- Many important technologies in history seem to be invented by individuals who the median member of that country or society would not be able to relate to, and not easily tolerate as a friend / neighbour / family member.
- This relates to another confusion of mine - how do you merge the high-trust benefits of living in (some) small towns with the individual freedoms of living in (some) cities? It seems to me like high trust and reduced individual freedom are both causally downstream of the same thing, namely a densely connected social graph that can gossip information about you.
- Individuals tolerated by society benefit a lot from being public. Hiring, fundraising, research feedback, making friends and dating, nearly everything goes better if you can do it on the internet.
- Same goes for orgs such as companies. Orgs that are tolerated by (the people with power in) society can move faster if they open-source a lot of their processes and outputs (except their key competitive advantages) - hiring, research, etc - and can thereby win races against orgs that try to maximise secrecy.
- What are the psychological effects of keeping secrets? What are the failure modes of various groups that try to keep secrets? This could be small groups like families or C-suite executives of a company, or big groups like military research projects or intelligence orgs.
- **I vaguely suspect that the best way to keep important secrets in the modern world is to found a ~~cult~~ community of a few hundred people that blackholes information as follows:** people disallowed from leaving the geographic area for >30 years, internet download allowed but upload disallowed, everyone is forced to find both work relationships and personal relationships inside the area, raise families within the area, etc.
- I want more data on previous attempts at founding secret-keeping orgs in order to prove my hypothesis right or wrong.
- Some major concerns of founding such a group are ensuring people in it lead emotionally healthy lives, ensuring ideological diversity (in both thought and action), and allowing people to leave relationships that don't suit them to find new ones. Hence I'm biased towards inviting a few hundred people rather than just two (such as a marriage) or ten (such as the C-suite executives of a company).
- How do you actually secure a computer against adversaries with billions in funding?
- Physical methods in cybersecurity seem to trump both hardware-based and software-based methods. Hardware-based methods can be beaten by hardware backdoors installed by manufacturers. It seems better to assume there's an evil demon possessing your computer, and develop security with that in mind.
- Most secure way of erasing a private key from RAM is to cut the electricity. Otherwise cold boot attack is possible.
- Most secure way of erasing a private key from disk is to smash it with a hammer. Otherwise a microscope may be able to recover the data from disk.
- Most secure way of verifying someone's public key is to meet them in person. Video footage with your face and the key is the second-best option, at least while AI cannot produce convincing deepfakes.
- Most secure way of ensuring no information leaves the machine is to weld the machine into a Faraday cage.
- Most secure way of sending a message to another user without third parties recording metadata is probably printing it on paper and sending it by post. Copying it to disk and sending that by post is second-best. Sending the message over the internet is worst in terms of preventing third parties from capturing the message and associated metadata (timestamp, message size, sender and receiver identities). The server host and any other intermediary servers that are hit (think Google Analytics or Cloudflare) can sell this data to data brokers, fiber optic cables can be tapped, wireless signals can be triangulated and routers can be hacked.
- I wonder if there's a way to build a physical firewall that requires near-zero trust in other people to work. Like, build radio or networking equipment that isn't manufactured in some centralised backdoorable factory, so you can verify what's inside every IP packet exiting and entering your machine. It seems okay for the typical use case if this slows down internet speed by many orders of magnitude.
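For the in-person key-verification step above, one standard trick (my addition, not something from the text) is to compare a short hash fingerprint of the key rather than the full key, since a few groups of hex digits can be read aloud and checked by eye:

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Short SHA-256 fingerprint of raw public key bytes, grouped so two
    people can read it aloud and compare it when meeting in person."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()[:32]
    return ":".join(digest[i:i + 4] for i in range(0, 32, 4))

# Placeholder key bytes for demonstration -- not a real key.
example_key = bytes.fromhex("04" + "ab" * 64)
print(fingerprint(example_key))  # both parties compute this independently and compare
```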
- As a dictator, how to build 100% surveillance instead of 99%, to increase the stability of your dictatorship?
- This is basically a redteaming exercise. Put yourself in the dictator's shoes and then put yourself in the shoes of someone trying to evade him.
- Information analysis is becoming easier with LLM-based search, language translation, etc. Information capture - both the technical level and culture engineering - is the hard part.
- I currently model various levels of democracy as states in a Markov chain; each state has some probability of being stable or decaying to a different state.
- The difference in half-life of a dictatorship with 99% surveillance and 100% surveillance is large. People organising revolts are the 1% who will endure the maximum inconvenience to bypass surveillance.
- Very little technology is needed to organise a revolt. Just a few people (then a few hundred, then a few thousand etc) meeting at a common location is enough. There are ways to organise a revolt even if surveillance can successfully enforce curfew. Mics alone are not enough because this 1% of people can communicate using pen and paper when organising. Cameras are needed for surveillance.
- LLMs are not yet smart enough to fully automate surveillance against this 1% of people who will take maximum precautions. This is true even if you could insert an LLM in every microprocessor in your country. An army of human censors must assist the system.
- Biologically implanted microphones and cameras seem like one obvious way to do perfect surveillance, assuming you could engineer culture to the point everyone was okay with it.
- A person moves around a lot.
- Mics and cameras in static locations can be escaped. Unless you literally imprison people, total land area to cover with mics and cameras is too large to do perfect surveillance this way.
- Having the person carry the mic/camera themselves is possible (such as a smartphone or an implant).
- As of 2024, battery tech is not good enough to justify mics/cameras travelling by themselves in the air. If the mic/camera travels on the ground there can be charging stations, but robotics knowledge in 2024 is not good enough to traverse uneven terrain at low wattage.
- You can engineer incentives such that everyone reports on themselves or on each other (example: Stalinist Russia), but I'm unsure how you get beyond 99% surveillance with this sort of system either. A group of close friends and family can collectively choose not to report each other, and distance themselves from the rest of society so no one else can report them. Can you prevent people from distancing themselves from others? Maybe I should read more about the historical examples where this stuff has been tried.
- North Korea's technique of keeping the population illiterate and starving is effective, but still only reaches <99%. There will need to be a ~1% of civilians who are well-fed and educated from diverse intellectual sources.
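The Markov-chain framing from earlier in this list can be sketched numerically. This is a toy simulation with invented transition probabilities, only meant to show why the 99% vs 100% gap matters: any nonzero per-year revolt probability gives the regime a finite median lifetime, while zero does not:

```python
import random

def median_lifetime_years(p_revolt_per_year, trials=10000, horizon=1000, seed=0):
    """Median years until the regime falls, over many simulated runs.
    Returns `horizon` if the regime survives the whole run."""
    rng = random.Random(seed)
    lifetimes = []
    for _ in range(trials):
        for year in range(horizon):
            if rng.random() < p_revolt_per_year:  # a revolt succeeds this year
                lifetimes.append(year)
                break
        else:
            lifetimes.append(horizon)  # regime never fell within the horizon
    lifetimes.sort()
    return lifetimes[trials // 2]

# 99% surveillance: the uncovered 1% sustains some nonzero revolt probability.
print(median_lifetime_years(0.02))  # finite median lifetime, a few decades
# 100% surveillance: revolt probability drops to zero, regime lasts to the horizon.
print(median_lifetime_years(0.0))
```

The numbers here are placeholders; the point is only the qualitative discontinuity between "small" and "zero" revolt probability.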
Topic: How much power do science fiction writers and early enthusiasts have in deciding which technologies humanity chooses to pursue?
Specific questions
- Would Shane Legg and Demis Hassabis have cofounded Deepmind if Eliezer Yudkowsky hadn't talked about AI at all in time interval 2000-2008?
- Shane Legg claims he was inspired by Ray Kurzweil. Yudkowsky helped broadcast views of people like Ray Kurzweil by organising MIRI and Singularity Summit.
- Yudkowsky got funding and attention from Peter Thiel, and may have also helped Deepmind get their seed round from Thiel. (As of 2014 Founders Fund owned over 25% of Deepmind.)
Broader questions
- I generally want to read 1990-2015 history of biotech. Who or what inspired Illumina's parent companies that worked on next generation sequencing? Who or what inspired Kary Mullis to work on PCR? Who inspired the inventors of CRISPR? Who inspired Kevin Esvelt to work on gene drives?
- The standard pipeline for how technologies come into society: scifi -> theory -> practical (lab demonstration) -> engineering (scale up). If an individual of my socioeconomic class wanted to maximise their influence on this pipeline, my hypothesis is that they should study the scifi and scifi -> theory stages. I would like evidence that proves my hypothesis wrong.
- Example of evidence that would prove me wrong: a list of technologies that had scifi writers and early enthusiasts, got proven in lab demos, failed to obtain funding for scale up at first, got scaled up many decades later and significantly changed society when they did. This would prove that studying the engineering/scaleup and funding landscape is more important.
- Another example of evidence that would prove me wrong: a list of technologies that had scifi writers and early enthusiasts, got many researchers interested who ran experiments, did not achieve successful lab demos, but got proven in the lab years or decades later once some other necessary precursor technology was invented. This would prove that studying the practical research is more important, as many plausibly good ideas turn out to just not work despite inspiring people.
- **If my hypothesis is right, could a handful of people consistently meme-ing in favour of BCIs or gene drives or whatever for five years, basically bring these technologies into existence?** Assume the memes are technical enough and interesting enough to attract the curiosity of researchers in the relevant research fields. And assume most outlier-brilliant researchers are driven primarily by curiosity not altruism or money or fame, which I think has been true throughout history.
Topic: Which technologies can possibly influence the future of humanity?
Specific STEM questions:
- What is the consensus among neuroscientists for Neuralink's timelines?
- Did MKULTRA actually discover anything useful? Could it have discovered anything useful, if ran for more time with more funding?
- Many documents are FOIA-ed but I haven't spent enough time reading them. My guess is they didn't achieve much.
- How much useful work did Biopreparat actually do?
- My guess is they didn't achieve much, but I wanna know the facts.
Broader technical questions
- I'd like to study pharmacology and neuroscience till I'm no longer at a beginner level, as those are the two of the six categories below that I have the least knowledge about.
- Human (or human-like) brains are likely to shape the future. Technology that will directly alter what human brains do seems worth paying special attention to.
1. Information tech - search engines, interest-based communities etc
2. Digital minds - superintelligent AI, mind uploads, etc
3. Pharmacology - drugs that alter human brains, etc
4. Neuroscience - brain-computer interfaces such as Neuralink, etc
5. Genetics - CRISPR, etc especially if done to alter human brains
6. Nanotechnology - especially bionanomachines
- I'm particularly interested in studying MKULTRA and the history of barbiturates and psychedelics. MKULTRA is AFAIK a rare example of pharmacology research with the explicit goal of altering human brains, and human society as a result. It was aimed at changing human brains, not fixing "disabilities".
- Are there ethical pharma research agendas not aimed at fixing disabilities?
- I want to study more about bioweapons research. I suspect it's mostly borrowing techniques from biotech that I'm already vaguely aware of, but I wanna study more and confirm.
- I want to study more about the possibilities for biotech automation.
- DNA sequencing is automated and cheap but the process to figure out whether any given sequence is actually useful (often gene cloning and protein expression) is not fully automated or cheap. Current cost is ~$100 for reagents and 10-100 researcher hours.
- This seems like the Hamming question for biotech (as per my limited knowledge) so I'd like to look more into it.
- Update: Nuclera seems relevant. [Demo video](https://www.nuclera.com/resource-library/how-to-set-up-a-run/) Credits: a friend
- I want to study more materials science. I know very little about it today.
- Most STEM research fields go through three phases:
1. Invent new tool to (cheaply) acquire lots of data from some physical system
2. Acquire lots of data - from nature or from experiments
3. Understand the physical system using all this data
- Step 2 and step 3 often inform each other and run in an iterative loop
- Step 1 could be the invention of microscope or cyclotron or radio telescope or anything else really.
- Step 1 usually depends heavily on getting the right materials
- A lot of practical inventions also seem to depend on materials science. For instance fusion energy research is AFAIK basically containing 10M Kelvin plasma using fields; an alternative pathway might (???) be discovering materials that can contain it. Quantum computing research will benefit from having better nanomaterials and better superconducting materials, I guess?
- I understand an intro to materials science textbook won't teach me about better superconductors or whatever, but it still seems worthwhile to study.
Broader non-STEM questions
- I'd like to build a "gears-level" high-level framework of the more indirect ways technology shapes society. (Not the stuff listed in the six categories above)
- Often technology shifts offense-defence balances between various actors in society - individuals, small groups and large groups. An oversimplified way of categorising some historical examples would be as follows:
- Tech that increases power of individuals relative to small groups: cities (drainage systems, etc), printing press, guns, cheap airplane fuel
- Tech that increases power of large groups relative to individuals: radio, social media ?
- Tech that increases power of large groups relative to both small groups and individuals: nuclear bombs, nuclear energy, cheap steel
- Also some technology gives power to certain individuals over others:
- Tech that increases power of old people relative to young people: elderly healthcare (treatments for cancer, poor eyesight, neuro disorders etc), anti-aging if ever discovered
- Tech that increases power of women relative to men: condoms?
- Tech that gives power to large groups of people (relative to small groups and individuals) fuels most of geopolitics as far as I understand
- Countries and large corporations want to be the first to discover and deploy some tech and then use their military, spies, immigration policy, export controls, R&D budget etc etc to monopolise or maintain lead time on tech. US tech policymaking is the most obvious example.
- Large groups that have achieved monopoly or lead time in some tech often use this as a bargaining chip to export their culture or religion or whatever morality unites that group in the first place.
- Very often a large group of people controls production of some tech (individuals or small groups can't produce it), but once produced, individual units are sold as a commodity which gives power to individuals. Tech with centralised production and decentralised ownership is very common, and has geopolitical dynamics more predictable than tech that is not like this. For example, geopolitics of solar PV modules is easier to model than geopolitics of railway networks IMO.
- I want a framework that I can fit all the historical examples into, right now my framework is messy (not "gears-level").
Topic: Information people don't feel safe enough to share
Specific questions
- Is there any way to increase public access to therapy-client records from over 30-60 years ago? Is it a good idea to do this? What about personal diaries and letters?
- Is there any way to increase the number of therapy-client records collected from today onwards that will be released publicly 30-60 years from now? Is it a good idea to do this?
Broader questions
- How do you design societies where more people feel safe enough to share more information about their personal lives publicly?
- A lot of information about individual human experiences does not reach the public domain because people don't feel safe enough to share it publicly. (There are many reasons for this, and they're often valid from the perspective of that individual.)
- This information is however extremely useful, be it to empathise with other individuals at a personal level, provide them useful advice on their life problems, make policy recommendations to govts that benefit individuals, or even design new forms of govt more conducive to individuals.
- Iteration speed of psychology as a field is slower than it would be if there were public transcripts of conversations. Each therapist must form hypotheses based on the limited private data they have, and their guesses of whether to trust hypotheses from other therapists who also work with private data. (This is related to my posts on knowledge versus common knowledge, common knowledge can bring down govts or dominant research paradigms for example, widespread knowledge alone cannot).
- This also applies broadly to individuals trying to help other individuals with personal advice (which is often at least partly based on psychology). It doesn't have to be restricted to people trained as psychologists/therapists/whatever.
- How to best nudge people to leave behind their private information (such as that shared only with friends and family), so that some years after they die we get this information in public domain?
- I want to study more about the culture around this, in different countries. What are the different cultural attitudes to personal and sensitive information?
- I should also probably look into succession planning for big tech companies. What happens once (say) Mark Zuckerberg dies and his (Facebook's) entire plaintext database fits inside a football? Who gets the football next?
- How to better organise all the historical information we do have on personal and emotionally sensitive matters? I would like to spend some time looking at existing datasets, to see if I can convert everything to plaintext and embedding search it.
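As a sketch of the "convert everything to plaintext and embedding search it" idea above: the snippet below ranks documents by cosine similarity to a query. The `embed` function here is a crude bag-of-words stand-in of my own; a real pipeline would swap in an actual embedding model at that point:

```python
import math
from collections import Counter

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words count vector.
    A real pipeline would call an actual embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query, documents):
    """Rank plaintext documents by similarity to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)

# Invented example documents standing in for digitised letters/diaries.
docs = [
    "letter to my sister about leaving home",
    "recipe for lentil soup",
    "diary entry about grief in my family",
]
print(search("personal letters about family", docs)[0])  # the diary entry ranks first
```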
Topic: Interplay between incentives and culture
Definition: In general when I talk about incentives I usually mean these three: social (people giving you respect/compassion/admiration/sex/etc), financial (people giving you money/food/goods/place to live/etc) and safety (people imprisoning/injuring/raping/murdering you, or protecting you from others who might). Doing "X" gets you more respect or money or safety, or not doing "X" gets you less of it. Maslow's hierarchy is a decent model, if you ignore the ordering of the hierarchy.
Broader questions
- How much power do elites have to take decisions that go against their local incentives and local culture?
- (For example if the prime minister of a country is in favour of declaring war but other people in his party and other parties are not, how much power does this person have to single-handedly shift the situation?)
- What are the psychological traits required to do this? How do you train more of our elites with these traits?
- What is the political knowledge required to do this sort of manoeuvre? Can we teach our elites to do more of this?
- (Yes I am biased lol, I think most elites don't do anything interesting with their lives. This is causally downstream of the incentives and culture of the people around them. "Interesting" is defined as per my tastes; of course each elite may have their own unique tastes.)
- How do you ethically run experiments to see the outcomes of unusual incentives (social, financial, safety) and culture on people?
- There is a lot of existing data available to be collected, on how existing incentives and culture influence people. The three socioeconomic classes have different cultures and incentives, people in different countries have different cultures and incentives, people in different professions have different cultures and incentives.
- But this data is finite, and it would help to be able to run experiments of different circumstances not occurring naturally.
- Ethical problems abound, for instance threatening someone's life or disrespecting them or depriving them of important information about the world is usually considered unethical in the context of a research experiment. What are some techniques to bypass this?
- Theory goes only so far when predicting human behaviour; experimentation is needed. (I mean, I basically see STEM versus non-STEM as prediction and control of systems excluding and including human beings, respectively. Human brains are the most complex known objects in the observable universe, and predicting them with >90% probability is hard in many situations.)
- Hmm, I should probably first make a list of experiments I'd love to run, assuming ethics is not an issue, then filter the list on ethics. Will update this section when I do.
- How to think about morality and how to teach morality in a world where morality is dependent on circumstances?
- Different people face different incentives and culture. A moral principle that is easy to follow in one person's situation is difficult to follow in another person's situation. For example honesty is generally easier when you have some money saved than if you don't, because if someone dislikes your honesty and is abusive in response, you have more options to escape them or fight back.
- A significant threshold for whether an ideology or institution has power over you is whether it has shaped your sense of right and wrong. For example (some) communists believing private property is bad and theft is okay, or (some) anarchists believing big govts are bad and tax evasion is okay, or (some) religious people believing sex before marriage is not okay and hence denying unmarried couples houses for rent, etc.
- Morality is a political question: whichever ideology or group can recruit more soldiers who are morally okay killing enemy soldiers in its name will be more powerful. Political circumstances of a society change with time, and this correlates with changes in the moral thinking of a society.
- People generally suck at understanding the is-ought distinction.
- People (including me) also suck at imagining what they would be like if they were born in hypothetical cultures they are not actually a part of.
- The practical result is people find it very hard to understand what morality is like from the perspective of someone in a sufficiently different circumstance than them.
- Will the internet force homogenisation of our ideas of morality worldwide? Or does an eternal culture war just become the new normal? I'm guessing it'll be a mix of both. I want to build a more gears-level model for memetics with a focus on morality.
Topic: Miscellaneous
- What do "replicators" in non-STEM look like?
- Businesses that hire very few people and sell self-contained products are easier to replicate than other businesses, because people are harder to predict or control than physical systems. For example: a large farm with automated equipment is easier to manage than a farming village with thousands of labourers.
- What are some easy-to-replicate involve-less-people playbooks in politics or non-STEM more broadly? A lot of political events seem to me to be one-off events without an underlying theory that will enable replicating them in other contexts.
- I would love to discover/invent playbooks for regime change or good tech policy or maintaining law and order etc. that are replicable across multiple cultural contexts.
- Why didn't the US nuke USSR cities immediately after nuking Japan to establish a nuclear monopoly, before the USSR got nukes? Are the transcripts of these conversations available? (Between the people who were pro-nuke and the people who were anti-nuke.)
- Should I just stop caring as much about grammar and spelling in my writing, and invent more shorthands?
- English in 2024 is more concise than English from the Middle Ages; this is good as it reduces cognitive load and saves time.
- I sometimes want to invent jargon for concepts. I want to skip articles (a, an, the) and not worry about grammar. I suspect future humans will be doing this anyway.
- I don't want to raise the entry barrier for people viewing my work though, at least while my work is not that popular.
- How good are Israeli research universities exactly?
- After the US, UK and China, Israel seems like it might occupy 4th place in any tech race. Israel is nuclear-armed (hence won't listen to the US or China) + great at cyberhacking/espionage (so they can steal everyone's research without much lag time) + has decent research talent (so they can implement stolen research).
2024-12-26
This doc is a mix of existing world models I have and holes in said models. I'm trying to fill some of these holes. The doc is not very well organised relative to how organised a doc I could produce if needed. Often the more time I spend on a doc, the shorter it gets. I'm hoping that happens here too.
I'm mostly going to study this stuff by myself. However if you would like to help me by speeding up the process, please [contact me](../contact_me.md). If your attempt to help me answer these questions is in good-faith, I will be grateful to you no matter how successful or failed your attempt is.
*tldr* How do we safely navigate technological progress or personal growth in a world without privacy?
DISCLAIMER
It is difficult to predict the future without altering it. My writings may have unintended effects on the future. (I'd like more accurate likelihood estimates of these effects, both mean outcome and tail outcomes.)
- I am aware that simply by thinking of a question like "will some dictator implant microphones in everyone", I am personally increasing the probability that this ends up happening. Once I have thought something I'm unlikely to forget it, and will eventually say it to others. Eventually one of them may leak it to the internet and eventually the idea may reach the relevant politically powerful people who can implement it in real life. (LLM embedding search >> Google, don't underestimate it.)
- This is unfortunate, as my platonic ideal is to be able to think through various possible futures (alone, or with a group of research collaborators) without actually influencing the world, pick the best future, and then only start taking steps that push the world towards that future.
- However I'm still going to write publicly about certain topics as that's one of the best ways for someone in my situation to get feedback.
Topic: Which organisations are capable of keeping secrets in present and near future (10-20 years from now)? What are the consequences of this reduced secrecy?
Specific questions
- How easy is it for TSMC to backdoor all their chips so they can secretly capture private keys, for example?
- How many S&P500 companies have publicly available evidence of their key business knowledge being leaked to China? (Be it via hacking or espionage or voluntary disclosure by ex-employees etc)
- Is it possible to read WiFi IP packets using a handmade radio?
- Is it technically possible to implant microphones in the human body? What about cameras?
Broader questions
- **Assuming no organisation can maintain significant lead time on any technology (and it will immediately get copied by orgs united by a different morality and culture), what are the implications for technological progress in the future?**
- There is an assumption embedded here, that no org can keep secrets. I'm unsure if it is true. Supposing it is true though, what are its implications?
- The most obvious real world example of this is US versus China, neither seems able to keep significant secrets from the other.
- However I want to figure out general principles here, and not spend too much time studying individual examples like Obama or Michael Hayden or whoever. Metaphorically speaking, I want to study the dynamics of a particular initial position of Chess960, not how Magnus Carlsen plays that particular initial position. This also connects to ideas on theories of history. Whether one should study game theory, sociology etc, versus the psychology of individual leaders, depends on which theory of history one subscribes to.
- How much time does it take to write code that understands metadata?
- Suppose all the world's computers were hacked and their data ended up in NSA datacentres (or their Chinese equivalent, which keeps getting renamed). Suppose all text-based formats are converted to plaintext, existing metadata the users may have left is preserved as is, and NSA appends metadata of the MAC, IP, timestamp, etc of capture.
- How much software developer time would be required to make sense of most of this metadata? This could be to answer individual queries about "suspicious" individuals, or to analyse aggregate trends (such as societal responses to certain govt policies).
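As a toy illustration of the cheap part of this task, here is a minimal Python sketch that indexes captured records by device and time. All field names (`mac`, `captured_at`, etc.) are hypothetical; the expensive developer time presumably goes into inferring schemas across millions of heterogeneous formats, not into this kind of indexing:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical captured record: plaintext body plus capture metadata.
records = [
    {"mac": "aa:bb:cc:00:00:01", "ip": "10.0.0.2",
     "captured_at": "2024-01-05T10:00:00+00:00", "body": "hello"},
    {"mac": "aa:bb:cc:00:00:01", "ip": "10.0.0.9",
     "captured_at": "2024-02-01T12:30:00+00:00", "body": "meet at noon"},
]

def index_by_mac(recs):
    """Group records by device MAC address, sorted by capture time."""
    idx = defaultdict(list)
    for r in recs:
        idx[r["mac"]].append(r)
    for mac in idx:
        idx[mac].sort(key=lambda r: datetime.fromisoformat(r["captured_at"]))
    return dict(idx)

idx = index_by_mac(records)
print(len(idx["aa:bb:cc:00:00:01"]))  # 2
```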
- Will there be more espionage orgs operating independent of nation states or elites (major politicians, billionaires) in the future?
- Internet and information tech has reduced the financial cost of both journalism and espionage. Cameras, hard disks, internet bandwidth, smartphone processors are all much cheaper than 10 or 20 years ago.
- Independent journalism has clearly increased in many countries, most notably the US.
- Independent espionage has also increased, see The Mole (anti-North Korea) or Edward Snowden (anti-US), but it is less clear to me if these are isolated examples or the beginning of a trend.
- Theory (based on incentives and culture) says independent espionage should go up, I'm waiting on the empirical evidence though.
- Will inability of organisations to keep secrets force homogeneity in the behaviour of civilians, and punish people who behave in outlier ways? Will this reduce the rate of invention of technology in future?
- Many important technologies in history seem to be invented by individuals who the median member of that country or society would not be able to relate to, and not easily tolerate as a friend / neighbour / family member.
- This relates to another confusion of mine - how do you merge the high-trust benefits of living in (some) small towns with the individual freedoms of living in (some) cities? It seems to me like high trust and reduced individual freedom are both causally downstream of the same thing, namely a densely connected social graph that can gossip information about you.
- Individuals tolerated by society benefit a lot from being public. Hiring, fundraising, research feedback, making friends and dating, nearly everything goes better if you can do it on the internet.
- Same goes for orgs such as companies. Orgs that are tolerated by the (people with power in) society can move faster if they opensource a lot of their processes and outputs (except their key competitive advantages), for example hiring, research, etc. They can thereby win races against orgs that try to maximise secrecy.
- What are the psychological effects of keeping secrets? What are the failure modes of various groups that try to keep secrets? This could be small groups like families or C-suite executives of a company, or big groups like military research projects or intelligence orgs.
- **I vaguely suspect that the best way to keep important secrets in the modern world is to found a ~~cult~~ community of a few hundred people that blackholes information as follows:** people disallowed from leaving the geographic area for >30 years, internet download allowed but upload disallowed, everyone is forced to find both work relationships and personal relationships inside the area, raise families within the area, etc.
- I want more data on previous attempts at founding secret-keeping orgs in order to prove my hypothesis right or wrong.
- Some major concerns of founding such a group are ensuring people in it lead emotionally healthy lives, ensuring ideological diversity (in both thought and action), and allowing people to leave relationships that don't suit them to find new ones. Hence I'm biased towards inviting a few hundred people rather than just two (such as a marriage) or ten (such as the C-suite executives of a company).
- How do you actually secure a computer against adversaries with billions in funding?
- Physical methods in cybersecurity seem to trump both hardware-based and software-based methods. Hardware-based methods can be beaten by hardware backdoors installed by manufacturers. It seems better to assume there's an evil demon possessing your computer, and develop security with that in mind.
- Most secure way of erasing a private key from RAM is to cut the electricity. Otherwise cold boot attack is possible.
- Most secure way of erasing a private key from disk is to smash it with a hammer. Otherwise a microscope may be able to recover the data from disk.
- Most secure way of verifying someone's public key is to meet them in person. Video footage with your face and the key is the second-best option, at least while AI cannot produce convincing deepfakes.
- Most secure way of ensuring no information leaves the machine is to weld the machine into a Faraday cage.
- Most secure way of sending a message to another user without third parties recording metadata is probably printing it on paper and sending it by post. Copying it to disk and sending that by post is second-best. Sending the message over the internet is worst in terms of preventing third parties from capturing the message and associated metadata (timestamp, message size, sender and receiver identities). The server host and any other intermediary servers that are hit (think Google Analytics or Cloudflare) can sell this data to data brokers, fiber optic cables can be tapped, wireless signals can be triangulated, and routers can be hacked.
- I wonder if there's a way to build a physical firewall that requires near-zero trust in other people to work. Like, build radio or networking equipment that isn't manufactured in some centralised backdoorable factory, so you can verify what's inside every IP packet exiting and entering your machine. It seems okay for the typical use case if this slows down internet speed by many orders of magnitude.
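On verifying public keys in person: what you actually compare face-to-face is a short fingerprint of the key, not the full key. A minimal sketch using the SHA-256 fingerprint construction OpenSSH uses (the key bytes below are a dummy placeholder, not a real key blob):

```python
import base64
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """SHA-256 fingerprint of a public key blob, OpenSSH-style:
    base64-encoded digest with trailing '=' padding stripped."""
    digest = hashlib.sha256(pubkey_bytes).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# With a real key you would hash the raw wire-format key blob.
print(fingerprint(b"dummy-public-key-bytes"))
```

Two people each compute this locally and read the short string aloud to each other; any tampering with the key changes the fingerprint.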
- As a dictator, how to build 100% surveillance instead of 99%, to increase the stability of your dictatorship?
- This is basically a redteaming exercise. Put yourself in the dictator's shoes and then put yourself in the shoes of someone trying to evade him.
- Information analysis is becoming easier with LLM-based search, language translation, etc. Information capture, both at the technical level and via culture engineering, is the hard part.
- I currently model various levels of democracy as states in a markov chain, each state has some probability of being stable or decaying to a different state.
- The difference in half-life of a dictatorship with 99% surveillance and 100% surveillance is large. People organising revolts are the 1% who will endure the maximum inconvenience to bypass surveillance.
- Very little technology is needed to organise a revolt. Just a few people (then a few hundred, then a few thousand, etc.) meeting at a common location is enough. There are ways to organise a revolt even if surveillance can successfully enforce curfew. Mics alone are not enough because this 1% of people can communicate using pen and paper when organising. Cameras are needed for surveillance.
- LLMs are not yet smart enough to fully automate surveillance against this 1% of people who will take maximum precautions. This is true even if you could insert an LLM in every microprocessor in your country. An army of human censors must assist the system.
- Biologically implanted microphones and cameras seem like one obvious way to do perfect surveillance, assuming you could engineer culture to the point everyone was okay with it.
- A person moves around a lot.
- Mics and cameras in static locations can be escaped. Unless you literally imprison people, total land area to cover with mics and cameras is too large to do perfect surveillance this way.
- Having the person carry the mic/camera themselves is possible (such as a smartphone or an implant).
- As of 2024, battery tech is not good enough to justify mics/cameras travelling by themselves in the air. If the mic/camera travels on the ground there can be charging stations, but robotics knowledge in 2024 is not good enough to traverse uneven terrain at low wattage.
- You can engineer incentives such that everyone reports on themselves or on each other (example: Stalinist Russia), but I'm unsure how you get beyond 99% surveillance with this sort of system either. A group of close friends and family can collectively choose to not report each other, and distance themselves from the rest of society so no one else can report them. Can you prevent people from distancing themselves from others? Maybe I should read more about the historical examples where this stuff has been tried.
- North Korea's technique of keeping the population illiterate and starving is effective, but still only reaches <99%. There will need to be a 1% of civilians who are well-fed and educated from diverse intellectual sources.
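The Markov-chain intuition above (99% versus 100% surveillance producing very different half-lives) can be made concrete with a toy model. The per-year collapse probabilities below are made-up numbers, purely for illustration:

```python
def expected_lifetime_years(p_collapse_per_year: float) -> float:
    """Mean lifetime of a regime modelled as a two-state Markov chain:
    each year it stays 'stable' with probability 1 - p, or absorbs into
    'collapsed' with probability p. Lifetime is geometrically
    distributed, so the mean is (1 - p) / p."""
    p = p_collapse_per_year
    return (1 - p) / p

# Made-up numbers: suppose 99% surveillance leaves a 2%/year revolt
# channel open, while 100% surveillance shrinks it to 0.1%/year.
print(expected_lifetime_years(0.02))   # ~49 years
print(expected_lifetime_years(0.001))  # ~999 years
```

The point of the toy model: expected lifetime scales roughly as 1/p, so squeezing the last 1% of surveillance multiplies regime lifetime by an order of magnitude or more.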
Topic: How much power do science fiction writers and early enthusiasts have in deciding which technologies humanity chooses to pursue?
Specific questions
- Would Shane Legg and Demis Hassabis have cofounded Deepmind if Eliezer Yudkowsky hadn't talked about AI at all in time interval 2000-2008?
- Shane Legg claims he was inspired by Ray Kurzweil. Yudkowsky helped broadcast views of people like Ray Kurzweil by organising MIRI and Singularity Summit.
- Yudkowsky got funding and attention from Peter Thiel, and may have also helped Deepmind get their seed round from Thiel. (As of 2014 Founders Fund owned over 25% of Deepmind.)
Broader questions
- I generally want to read 1990-2015 history of biotech. Who or what inspired Illumina's parent companies that worked on next generation sequencing? Who or what inspired Kary Mullis to work on PCR? Who inspired the inventors of CRISPR? Who inspired Kevin Esvelt to work on gene drives?
- The standard pipeline for how technologies come into society: scifi -> theory -> practical (lab demonstration) -> engineering (scale up). If an individual of my socioeconomic class wanted to maximise their influence on this pipeline, my hypothesis is they should study the scifi and scifi -> theory stages. I would like evidence that proves my hypothesis wrong.
- Example of evidence that would prove me wrong: a list of technologies that had scifi writers and early enthusiasts, got proven in lab demos, failed to obtain funding for scale up at first, got scaled up many decades later and significantly changed society when they did. This would prove that studying the engineering/scaleup and funding landscape is more important.
- Another example of evidence that would prove me wrong: a list of technologies that had scifi writers and early enthusiasts, got many researchers interested who ran experiments, did not achieve successful lab demos, but got proven in the lab years or decades later once some other necessary precursor technology was invented. This would prove that studying the practical research is more important, as many plausibly good ideas turn out to just not work despite inspiring people.
- **If my hypothesis is right, could a handful of people consistently meme-ing in favour of BCIs or gene drives or whatever for five years, basically bring these technologies into existence?** Assume the memes are technical enough and interesting enough to attract the curiosity of researchers in the relevant research fields. And assume most outlier-brilliant researchers are driven primarily by curiosity not altruism or money or fame, which I think has been true throughout history.
Topic: Which technologies can possibly influence the future of humanity?
Specific STEM questions:
- What is the consensus among neuroscientists for Neuralink's timelines?
- Did MKULTRA actually discover anything useful? Could it have discovered anything useful, if run for more time with more funding?
- Many documents are FOIA-ed but I haven't spent enough time reading them. My guess is they didn't achieve much.
- How much useful work did Biopreparat actually do?
- My guess is they didn't achieve much, but I wanna know the facts.
Broader technical questions
- I'd like to study pharmacology and neuroscience till I'm no longer at a beginner level, as those are the two of the six categories below that I know least about.
- Human (or human-like) brains are likely to shape the future. Technology that will directly alter what human brains do seems worth paying special attention to.
1. Information tech - search engines, interest-based communities etc
2. Digital minds - superintelligent AI, mind uploads, etc
3. Neuroscience - brain computer interfaces, etc
4. Pharmacology - barbiturates ("truth serum"), psychedelics, opiates etc
5. Genetics - CRISPR, etc especially if done to alter human brains
6. Nanotechnology - especially bionanomachines
- I'm particularly interested in studying MKULTRA, the history of barbiturates and the history of psychedelics. MKULTRA is AFAIK a rare example of pharmacology research with the explicit goal of altering human brains, and human society as a result. Also, it aimed at changing human brains, not fixing "disabilities".
- Are there ethical pharma research agendas not aimed at fixing disabilities?
- I want to study more about bioweapons research. I suspect it's mostly borrowing techniques from biotech that I'm already vaguely aware of, but I wanna study more and confirm.
- I want to study more about possibilities for biotech automation.
- DNA sequencing is automated and cheap but the process to figure out whether any given sequence is actually useful (often gene cloning and protein expression) is not fully automated or cheap. Current cost is ~$100 for reagents and 10-100 researcher hours.
- This seems like the Hamming question for biotech (as per my limited knowledge) so I'd like to look more into it.
- Update: Nuclera seems relevant. [Demo video](https://www.nuclera.com/resource-library/how-to-set-up-a-run/) Credits: a friend
- I want to study more materials science. I know very little about it today.
- Most STEM research fields go through three phases:
1. Invent new tool to (cheaply) acquire lots of data from some physical system
2. Acquire lots of data - from nature or from experiments
3. Understand the physical system using all this data
- Step 2 and step 3 often inform each other and run in an iterative loop
- Step 1 could be the invention of microscope or cyclotron or radio telescope or anything else really.
- Step 1 usually depends heavily on getting the right materials
- A lot of practical inventions also seem to depend on materials science. For instance fusion energy research is AFAIK basically containing 10M Kelvin plasma using magnetic fields; an alternative pathway might (???) be discovering materials that can contain it. Quantum computing research will benefit from having better nanomaterials and better superconducting materials, I guess?
- I understand an intro to materials science textbook won't teach me about better superconductors or whatever, but it still seems worthwhile to study.
Broader non-STEM questions
- I'd like to build a "gears-level" high-level framework of the more indirect ways technology shapes society. (Not the stuff listed in the six categories above)
- Often technology shifts offense-defence balances between various actors in society - individuals, small groups and large groups. An oversimplified way of categorising some historical examples would be as follows:
- Tech that increases power of individuals relative to small groups: cities (drainage systems, etc), printing press, guns, cheap airplane fuel
- Tech that increases power of large groups relative to individuals: radio, social media ?
- Tech that increases power of large groups relative to both small groups and individuals: nuclear bombs, nuclear energy, cheap steel
- Also some technology gives power to certain individuals over others:
- Tech that increases power of old people relative to young people: elderly healthcare (treatments for cancer, poor eyesight, neuro disorders etc), anti-aging if ever discovered
- Tech that increases power of women relative to men: condoms?
- Tech that gives power to large groups of people (relative to small groups and individuals) fuels most of geopolitics as far as I understand
- Countries and large corporations want to be the first to discover and deploy some tech and then use their military, spies, immigration policy, export controls, R&D budget etc etc to monopolise or maintain lead time on tech. US tech policymaking is the most obvious example.
- Large groups that have achieved monopoly or lead time in some tech often use this as a bargaining chip to export their culture or religion or whatever morality unites that group in the first place.
- Very often a large group of people controls production of some tech (individuals or small groups can't produce it), but once produced, individual units are sold as a commodity, which gives power to individuals. Tech with centralised production and decentralised ownership is very common, and has geopolitical dynamics more predictable than tech that is not like this. For example, the geopolitics of solar PV modules is easier to model than the geopolitics of railway networks IMO.
- I want a framework that I can fit all the historical examples into, right now my framework is messy (not "gears-level").
Topic: Information people don't feel safe enough to share
Specific questions
- Is there any way to increase public access to therapy-client records from over 30-60 years ago? Is it a good idea to do this? What about personal diaries and letters?
- Is there any way to increase the number of therapy-client records collected from today onwards that will be released publicly 30-60 years from now? Is it a good idea to do this?
Broader questions
- How do you design societies where more people feel safe enough to share more information about their personal lives publicly?
- A lot of information about individual human experiences does not reach the public domain because people don't feel safe enough to share it publicly. (There are many reasons for this, and they're often valid from the perspective of that individual.)
- This information is however extremely useful, be it to empathise with other individuals at a personal level, or provide them useful advice with their life problems, or make policy recommendations to govts that benefit individuals, or even design new forms of govt more conducive to individuals.
- Iteration speed of psychology as a field is slower than it would be if there were public transcripts of conversations. Each therapist must form hypotheses based on the limited private data they have, and their guesses of whether to trust hypotheses from other therapists who also work with private data. (This is related to my posts on knowledge versus common knowledge, common knowledge can bring down govts or dominant research paradigms for example, widespread knowledge alone cannot).
- This also applies broadly to individuals trying to help other individuals with personal advice (which is often at least partly based on psychology). It doesn't have to be restricted to people trained as psychologists/therapists/whatever.
- How to best nudge people to leave behind their private information (such as that shared only with friends and family), so that some years after they die we get this information in public domain?
- I want to study more about the culture around this, in different countries. What are the different cultural attitudes to personal and sensitive information?
- I should also probably look into succession planning for big tech companies. What happens once (say) Mark Zuckerberg dies and his (Facebook's) entire plaintext database fits inside a football? Who gets the football next?
- How to better organise all the historical information we do have on personal and emotionally sensitive matters? I would like to spend some time looking at existing datasets, to see if I can convert everything to plaintext and embedding search it.
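As a toy illustration of the plaintext-plus-embedding-search idea, here is a minimal bag-of-words cosine-similarity search. A real system would use a neural embedding model rather than word counts, and the documents below are invented examples:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector. A real system
    would use a neural sentence-embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented example corpus of personal documents.
docs = [
    "letter to my sister about grief",
    "diary entry about starting a new job",
    "notes from a therapy session about grief",
]

def search(query: str, corpus: list) -> str:
    """Return the document most similar to the query."""
    q = embed(query)
    return max(corpus, key=lambda d: cosine(q, embed(d)))

print(search("therapy session about grief", docs))
# -> "notes from a therapy session about grief"
```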
Topic: Interplay between incentives and culture
Definition: In general when I talk about incentives I usually mean these three: social (people giving you respect/compassion/admiration/sex/etc), financial (people giving you money/food/goods/place to live/etc) and safety (people imprisoning/injuring/raping/murdering you, or protecting you from others who might). Doing "X" gets you more respect or money or safety, or not doing "X" gets you less of it. Maslow's hierarchy is a decent model, if you ignore the ordering of the hierarchy.
Broader questions
- How much power do elites have to take decisions that go against their local incentives and local culture?
- (For example if the prime minister of a country is in favour of declaring war but other people in his party and other parties are not, how much power does this person have to single-handedly shift the situation?)
- What are the psychological traits required to do this? How do you train more of our elites with these traits?
- What is the political knowledge required to do this sort of manoeuvre? Can we teach our elites to do more of this?
- (Yes I am biased lol, I think most elites don't do anything interesting with their lives. This is causally downstream of the incentives and culture of the people around them. "Interesting" is defined as per my tastes, of course each elite may have their own unique tastes.)
- How do you ethically run experiments to see the outcomes of unusual incentives (social, financial, safety) and culture on people?
- There is a lot of existing data available to be collected, on how existing incentives and culture influence people. The three socioeconomic classes have different cultures and incentives, people in different countries have different cultures and incentives, people in different professions have different cultures and incentives.
- But this data is finite, and it would help to be able to run experiments of different circumstances not occurring naturally.
- Ethical problems abound, for instance threatening someone's life or disrespecting them or depriving them of important information about the world is usually considered unethical in the context of a research experiment. What are some techniques to bypass this?
- Theory goes only so far when predicting human behaviour, experimentation is needed. (I mean, I basically see STEM versus non-STEM as prediction and control of systems not including and including human beings respectively. Human brains are the most complex known object in the observable universe and predicting them with >90% probability is hard in many situations.)
- Hmm I should prolly first make a list of experiments I'd love to run, assuming ethics is not an issue. Then filter the list on ethics. Will update this section when I do.
- How to think about morality and how to teach morality in a world where morality is dependent on circumstances?
- Different people face different incentives and culture. A moral principle that is easy to follow in one person's situation is difficult to follow in another person's situation. For example honesty is generally easier when you have some money saved than if you don't, because if someone dislikes your honesty and is abusive in response, you have more options to escape them or fight back.
- A significant threshold for whether an ideology or institution has power over you is whether it has shaped your sense of right and wrong. For example (some) communists believing private property is bad and theft is okay, or (some) anarchists believing big govts are bad and tax evasion is okay, or (some) religious people believing sex before marriage is not okay and denying couples houses for rent, etc.
- Morality is a political question, as whichever ideology or group can recruit more soldiers to be morally okay killing enemy soldiers in its name is one that will be more powerful. Political circumstances of a society change with time, and this correlates with changes in moral thinking of a society.
- People generally suck at understanding is-ought distinction.
- People (including me) also suck at imagining what they would be like if they were born in hypothetical cultures they are not actually a part of.
- The practical result is people find it very hard to understand what morality is like from the perspective of someone in a sufficiently different circumstance than them.
- Will the internet force homogenisation of our ideas of morality worldwide? Or does an eternal culture war just become the new normal? I'm guessing it'll be a mix of both. I want to build a more gears-level model for memetics with a focus on morality.
Topic: Miscellaneous
- What do "replicators" in non-STEM look like?
- Businesses that hire very few people and sell self-contained products are easier to replicate than other businesses, because people are harder to predict or control than physical systems. For example: a large farm with automated equipment is easier to manage than a farming village with thousands of labourers.
- What are some easy-to-replicate involve-less-people playbooks in politics or non-STEM more broadly? A lot of political events seem to me to be one-off events without an underlying theory that will enable replicating them in other contexts.
- I would love to discover/invent playbooks for regime change or good tech policy or maintaining law and order etc. that are replicable across multiple cultural contexts.
- Why didn't the US nuke USSR cities immediately after nuking Japan, to establish a nuclear monopoly before the USSR got nukes? Are the transcripts of these conversations available? (Between the people who were pro-nuke and the people who were anti-nuke.)
- Should I just stop caring as much about grammar and spelling in my writing, and invent more shorthands?
- English in 2024 is more concise than English from the Middle Ages; this is good as it reduces cognitive load and saves time.
- I sometimes want to invent jargon for concepts. I want to skip articles (a, an, the) and not worry about grammar. I suspect future humans will be doing this anyway.
- I don't want to raise the entry barrier for people viewing my work though, atleast while my work is not that popular.
- How good are Israeli research universities exactly?
- After the US, UK, and China, Israel seems like it might occupy 4th place in any tech race. Israel is nuclear-armed (hence won't listen to the US or China) + great at cyberhacking/espionage (so they can steal everyone's research without much lag time) + decent research talent (so they can implement stolen research).