In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states that it will be "impossible to decelerate AI capabilities," but Luke counters: "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." Before I read that dialogue, I had come up with three additional ideas on heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety," but I doubt he's right that it's impossible.
How do you prove something is impossible? You might prove that a specific METHOD of reaching the goal does not work, but that doesn't mean there isn't another method. You might prove that all the methods you know about do not work, but that doesn't prove there isn't some other option you don't see. "I don't see an option, therefore it's impossible" is an appeal to ignorance. It's a common one, but it's incorrect reasoning regardless. Think about it: can you think of a way to prove that a method that does work isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"? We can say "I don't see it, I don't see it, I don't see it!" all day long.
I say: "Then Look!"
How often do we push past this feeling to keep thinking of ideas that might work? For many, the answer is "never" or "only if it's needed". The sense that something is impossible is subjective and fallible. If we have no way of proving something is impossible, yet believe it to be impossible anyway, that is just a belief. What distinguishes it from bias?
I think it's a common fear that you may waste your entire life doing something that is, in fact, impossible. That's a valid concern, but it misses something obvious: as soon as you think of a plan to do the impossible, you'll be able to guess whether it will work. The hard part is THINKING of a plan to do the impossible. I'm suggesting that if we put our heads together, we can think of a plan to turn an impossible thing into a possible one. Not only that, I think we're capable of doing this on a worthwhile topic: an idea that will not only benefit humanity, but is good enough that the time, effort, and risk required to accomplish it are worth it.
Here's how I am going to proceed:
Step 1: Come up with a bunch of impossible project ideas.
Step 2: Figure out which one appeals to the most people.
Step 3: Invent the methodology by which we are going to accomplish said project.
Step 4: Improve the method as needed until we're convinced it's likely to work.
Step 5: Get the project done.
Impossible Project Ideas
- Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster. See Luke's ideas (starting with "Persuade key AGI researchers of the importance of safety") and my ideas.
- Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles in treating it.
- Syntax/static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities.
- Rational Agreement Software: If rationalists should ideally always agree, why not build an organized information resource designed to get us all to agree? It would track the arguments for and against ideas so that each piece can be verified logically and challenged; it would present the entire collection of arguments in an organized form, with no repetition and no useless information; and it would need to be editable by anybody, like a wiki, with the most rational outcome displayed prominently at the top. This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts. Also, Gwern mentions in a post about critical thinking that argument maps increase critical thinking skills.
- Discover unrecognized bias: This is especially hard since we'll be using our biased brains to try to detect it. We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
- Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
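To make the law-checker idea above more concrete, here is a minimal sketch of one sub-problem: flagging terms that are defined differently in different sections. Everything here is illustrative, not a real tool. It assumes, purely for the sake of the toy example, that definitions follow the convention `"term" means ...` (real statutes are far messier and would need actual parsing).

```python
import re
from collections import defaultdict

# Hypothetical convention: definitions look like `"term" means ...` and
# end with a period. Real legal text would need far more robust parsing.
DEFINITION_PATTERN = re.compile(r'"([^"]+)" means ([^.]+)\.')

def find_conflicting_definitions(sections):
    """Return terms that are given more than one distinct definition
    across the supplied statute sections."""
    definitions = defaultdict(set)  # term -> set of distinct wordings
    for section_id, text in sections.items():
        for term, meaning in DEFINITION_PATTERN.findall(text):
            definitions[term.lower()].add(meaning.strip().lower())
    # Keep only terms defined in two or more conflicting ways.
    return {term: sorted(defs)
            for term, defs in definitions.items() if len(defs) > 1}

# Toy statute with one inconsistency (invented example sections):
sections = {
    "s.1": '"Vehicle" means any motorized conveyance.',
    "s.7": '"Vehicle" means any conveyance, motorized or not.',
    "s.9": '"Operator" means the person in control of a vehicle.',
}

conflicts = find_conflicting_definitions(sections)
# "vehicle" is flagged (two distinct definitions); "operator" is not.
```

A real checker would need a grammar for legal drafting conventions, cross-reference resolution, and some way to detect *semantic* conflicts rather than mere wording differences, which is where most of the difficulty lives.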
Add your own ideas below (one idea per comment, so we can vote them up and down), make sure to describe your vision, then I'll list them here.
Figure out which one appeals to the most people.
Assuming each idea is put into a separate comment, we can vote them up or down. If they begin with the word "Idea" I'll be able to find them and put them on the list. Obviously, if your idea gets enough attention, it will at some point make sense to create a new discussion for it.
The goal is vague because I don't know how to get started with it.
I'm not quite sure what you're saying with the rest of your comment. I understand that economics and foreign policy are basically two different areas. However, the policies of both fields interact quite a lot, and both disciplines use many of the same tools, such as game theory and statistical analysis. I would perhaps even argue that IR studies would be improved overall if they were widely conceived of as a subdiscipline of economics. They also share many of the same problems.
For example, both fields have great difficulty comparing the results of economic and foreign policies against the results that other policies would counterfactually have had, because countries are radically different from one time period to another, and because any given policy is more or less appropriate for some countries than for others. Figuring out how to apply the lessons of one time and place to another is more or less what I was envisioning when I said that I wanted to make the social sciences more empirical.
There are also problems with measuring variables in both fields. In the physical sciences, it's relatively easy to determine the amount of energy output by a system, or the velocity of a specific object at a specific time. But in economics and IR, we have trouble even understanding exactly what the inputs and outputs are or would be, let alone understanding their relationship to one another. For example, uncertainty is hugely important in IR and in economics, but it seems almost impossible to measure. Even more concrete quantities, like the number of troops in a certain country or the number of jobs in a specific sector, are often debated intensely by people within these fields.
Without the ability to measure inputs or outputs of policy processes or the ability to compare those processes to the hypothetical effectiveness that other policies might have had, these fields are crippled. If there is any way to get around these problems or to minimize them, we really need to figure it out. This will be really really hard, if not impossible, but it's probably the most effective nonscientific thing that we can be doing to minimize existential risk.
TL;DR: I want to be Hari Seldon, except in real life.