In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states that it will be "impossible to decelerate AI capabilities." Luke counters: "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." Before I read that dialogue, I had come up with three additional ideas on Heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety," but I doubt he's right that it's impossible.
How do you prove something is impossible? You might prove that a specific METHOD of reaching the goal does not work, but that doesn't mean there isn't another method. You might prove that all the methods you know about don't work, but that doesn't prove there isn't some other option you don't see. "I don't see an option, therefore it's impossible" is just an appeal to ignorance. It's a common one, but it's incorrect reasoning regardless. Think about it: can you think of a way to prove that a method that does work isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"? We can say "I don't see it, I don't see it, I don't see it!" all day long.
I say: "Then Look!"
How often do we push past this feeling and keep thinking of ideas that might work? For many, the answer is "never" or "only if it's needed." The sense that something is impossible is subjective and fallible. If we have no way of proving something is impossible, yet believe it to be impossible anyway, that is a belief. What distinguishes it from bias?
I think there's a common fear of wasting your entire life on something that is, in fact, impossible. That fear is valid, but it misses the obvious: as soon as you think of a plan to do the impossible, you'll be able to guess whether it will work. The hard part is THINKING of a plan to do the impossible. I'm suggesting that if we put our heads together, we can think of a plan that turns an impossible thing into a possible one. Not only that, I think we're capable of doing this on a worthwhile topic: an idea that will not only benefit humanity, but is good enough that the time, effort, and risk required to accomplish it are worth it.
Here's how I am going to proceed:
Step 1: Come up with a bunch of impossible project ideas.
Step 2: Figure out which one appeals to the most people.
Step 3: Invent the methodology by which we are going to accomplish said project.
Step 4: Improve the method as needed until we're convinced it's likely to work.
Step 5: Get the project done.
Impossible Project Ideas
- Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster. Luke's ideas (starting with "Persuade key AGI researchers of the importance of safety"). My ideas.
- Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles to treating it.
- Syntax/static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities.
- Rational Agreement Software: If rationalists should ideally always agree, why not build an organized information resource designed to get us all to agree? It would track the arguments for and against ideas in a way that lets each piece be logically verified and challenged; present the entire collection of arguments in an organized form, with nothing repeated and no useless information included; and be editable by anybody, like a wiki, so that the most rational outcome ends up displayed prominently at the top. This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts. Also, Gwern mentions in a post about critical thinking that argument maps increase critical-thinking skills.
- Discover unrecognized bias: This is especially hard since we'd be using our biased brains to try to detect it. We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
- Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
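To make the "Syntax/static Analysis Checker for Laws" idea above a bit more concrete: the simplest first pass might just collect every defined term in a statute and flag terms that are defined in more than one way. Everything below (the regex, the sample statute, the function names) is a hypothetical sketch of that one check, not a proposed design:

```python
import re

def extract_definitions(text):
    """Collect every '"Term" means ...' style definition in a block of law text.

    Returns a dict mapping each defined term to the set of distinct
    definitions found for it.
    """
    definitions = {}
    # Very rough pattern: "Term" means <everything up to the next period>.
    for term, body in re.findall(r'"([^"]+)" means ([^.]+)\.', text):
        definitions.setdefault(term.lower(), set()).add(body.strip().lower())
    return definitions

def find_conflicts(text):
    """Return only the terms defined more than one way -- candidate inconsistencies."""
    return {term: defs
            for term, defs in extract_definitions(text).items()
            if len(defs) > 1}

statute = '''
Sec. 1. "Vehicle" means any motorized conveyance.
Sec. 7. "Vehicle" means any conveyance, motorized or not.
Sec. 9. "Operator" means the person in control of a vehicle.
'''

print(find_conflicts(statute))  # flags "vehicle" as inconsistently defined
```

A real checker would obviously need a proper parser for legal drafting conventions rather than a regex, and further passes for cross-references and logical conflicts between rules, but the shape of the tool is the same: extract structure, then look for contradictions in it.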
Add your own ideas below (one idea per comment, so we can vote them up and down), make sure to describe your vision, then I'll list them here.
Figure out which one appeals to the most people.
Assuming each idea is put into a separate comment, we can vote them up or down. If they begin with the word "Idea," I'll be able to find them and add them to the list. Obviously, if your idea gets enough attention, at some point it will make sense to create a new discussion for it.
Mmmm. Okay, this looks like a really good one. We need a title for it so I can add it to the list. "Make Social Sciences Rigorous" might work... but I think people are already trying to be rigorous, and "more rigorous" is kind of vague. We need a nice, solid, concrete goal. Maybe there's a stricter, more specific term than "rigorous"... "logically consistent" or... hmmm... what specific goals would you say best express this vision?
I also feel a need to clarify the term "social sciences." You give examples like the number of unknowns in economics and foreign policy. Those feel like two separate problems, and in a way they are. What you're saying here is: "The way to solve all these problems in all these diverse areas is to make the social sciences more rigorous." That, I can believe. However, I don't think it would be the entire solution. When it comes to anything political, large masses of people are also involved in the decision-making process. They may choose the most rational, most scientifically valid option... or they might not. You might counter with: "If we understood why they make decisions that are against their own best interests, we could wake them up to what's going on." Is that what you're envisioning?
Would you spell out the whole line of reasoning?
P.S. I redid a lot of the original post; any suggestions?
The goal is vague because I don't know how to get started with it.
I'm not quite sure what you're saying with the rest of your comment. I understand that economics and foreign policy are basically two different areas. However, the policies of the two fields interact quite a lot, and both disciplines use many of the same tools, such as game theory and statistical analysis. I would perhaps even argue that IR studies would be improved overall if they were widely conceived of as a subdiscipline of economics. They also share many of the same problems.
For example...