All of Meme Marine's Comments + Replies

The reason for agnosticism is that they are no more likely to be on one side than the other. As a result, without evidence you don't know who is influencing you. I don't really think this class of Pascal's Wager attack is very logical for this reason - an attack is supposed to influence someone's behavior, but without special pleading this one can't do that. Non-existent beings have no leverage whatsoever, and any rational agent would understand this - even humans do. Even religious beliefs aren't completely evidenceless; the type of evidence ex... (read more)

More importantly, I think it simply isn't logical to allow yourself to be Pascal-mugged, because in the absence of evidence it's entirely possible that going along with it would produce just as much anti-reward as it might gain you. It rather boggles me that this line of reasoning has been taken so seriously.
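
To make that symmetry concrete, here is a minimal sketch in Python with made-up numbers (my own illustration, not anything from the original comment): if, absent evidence, the promised reward and an equally large anti-reward are treated as equally likely, the expected value of complying is zero, so the mugging gives no reason to act.

    # Hypothetical illustration: expected value of complying with a Pascal's mugging
    # when there is no evidence about the mugger's true incentives.
    promised_reward = 10**9     # arbitrary huge payoff if the mugger rewards compliance
    possible_penalty = -10**9   # equally conceivable punishment for complying
    p_reward = 0.5              # symmetric prior: no evidence either way
    p_penalty = 0.5

    expected_value_of_complying = p_reward * promised_reward + p_penalty * possible_penalty
    print(expected_value_of_complying)  # 0 -- complying is not favored over refusing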

2David Matolcsi
I think that pleading total agnosticism towards the simulators' goals is not enough. I write "one common interest of all possible simulators is for us to cede power to an AI whose job is to figure out the distribution of values of possible simulators as best as it can, then serve those values." So I think you need a better reason to guard against being influenced than "I can't know what they want, everything and its opposite is equally likely", because the action proposed above is pretty clearly more favored by the simulators than not doing it. Btw, I don't actually want to fully "guard against being influenced by the simulators", I would in fact like to make deals with them, but reasonable deals where we get our fair share of value, instead of being stupidly tricked like the Oracle and ceding all our value for one observable turning out positively. I might later write a post about what kind of deals I would actually support.

Kudos to you for actually trying to solve the problem, but I must remind you that the history of symbolic AI is pretty much nothing but failure after failure; what do you intend to do differently, and how do you intend to overcome the challenges that halted progress in this area for the past ~40 years?

1Lorec
This is one of those subject areas that'd be unfortunately bad to get into publicly. If you or any other individual wants to grill me on this, feel free to DM me or contact me by any of the above methods and I will take disclosure case by case.

Yes, I agree that the US military is one example of a particularly well-aligned institution. I think my point about the alignment problem being analogous to military coup risk is still valid and that similar principles could be used to explore the AI alignment problem; military members control weaponry that no civil agency can match or defeat, in most countries.

All military organizations are structured around the principle of their leaders being able to give orders to people subservient to them. War is a mass of coordination problems, and getting soldiers to do what you want is the primary one among them. I mean to say that high-ranking generals could order such a coup, not that every service member would spontaneously decide to perform one. This can and does happen, so I think your blanket statement on the impossibility of juntas is void.

I mean to say that high-ranking generals could order such a coup

Yes, and by "any given faction or person in the U.S. military" I mean to say that high ranking generals inside the United States cannot form a coup. They literally cannot successfully give the order to storm the capitol. Their inferiors, understanding that:

  • The order is illegal
  • The order would have to be followed by the rest of their division in order to have a chance of success
  • The order would be almost guaranteed to fail in its broader objective even if they manage to seize the FBI headq
... (read more)

I am unsurprised but disappointed to read the same Catastrophe arguments rehashed here, based on an outdated Bostromian paradigm of AGI. This is the main section I disagree with.

The underlying principle beneath these hypothetical scenarios is grounded in what we can observe around us: powerful entities control weaker ones, and weaker ones can fight back only to the degree that the more powerful entity isn’t all that powerful after all. 

I do not think this is obvious or true at all. Nation-states are often controlled by a small group of people or even ... (read more)

If it really wanted to, there would be nothing at all stopping the US military from launching a coup on its civilian government.

There are enormous hurdles preventing the U.S. military from overthrowing the civilian government.

The confusion in your statement comes from lumping all the members of the armed forces together as "the U.S. military". Principally, a coup is an act of coordination. Any given faction or person in the U.S. military would have a difficult time organizing the forces necessary without being stopped by civilian or military law enfor... (read more)

9Thane Ruthenis
Strong-upvoted, this is precisely the kind of feedback that seems helpful for making the document better.

No message is intuitively obvious; the inferential distance between the AI safety community and the general public is wide. Even if many people do broadly dislike AI, they will tend to think that apocalyptic predictions of the future, especially ones that don't have as much hard evidence to back them as climate change (which is already very divisive!), belong in the same pile as the rest of them. I am sure many people will be convinced, especially if they were already predisposed to it, but such a radical message will alienate many potential supporters.... (read more)

I am sorry for the tone I had to take, but I don't know how to be any clearer - when people start telling me they're going to "break the Overton window" and bypass politics, this is nothing but crazy talk. This strategy will ruin any chances of success you may have had. I also question the efficacy of a Pause AI policy in the first place - one argument against it is that some countries may defect, which could lead to worse outcomes in the long term.

Why does MIRI believe that an "AI Pause" would contribute anything of substance to the goal of protecting the human race? It seems to me that an AI pause would:

  • Drive capabilities research further underground, especially in military contexts
  • Force safety researchers to operate on weaker models, which could hamper their ability to conduct effective research
  • Create a hardware overhang which would significantly increase the chance of a sudden catastrophic jump in capability that we are not prepared to handle
  • Create widespread backlash against the AI Safety commun
... (read more)
2CronoDAS
I don't think people laugh at the "nuclear war = doomsday" people.

There's a dramatic difference between this message and the standard fanatic message: a big chunk of it is both true, and intuitively so.

The idea that genuine smarter-than-humans-in-every-way AGI is dangerous is quite intuitive. How many people would say that, if we were visited by a more capable alien species, it would be totally safe for us?

The reason people don't intuitively see AI as dangerous is that they imagine it won't become fully agentic and genuinely outclass humans in all relevant ways. Convincing them otherwise is a complex argument, but cont... (read more)

7RobertM
This comment doesn't seem to be responding to the contents of the post at all, nor does it seem to understand very basic elements of the relevant worldview it's trying to argue against (i.e. "which are the countries you would probably least want to be in control of AGI"; no, it doesn't matter which country ends up building an ASI, because the end result is the same). It also tries to leverage arguments that depend on assumptions not shared by MIRI (such as that research on stronger models is likely to produce enough useful output to avert x-risk, or that x-risk is necessarily downstream of LLMs).

I think one big mistake the AI safety movement is currently making is not paying attention to the concerns of the wider population about AI right now. People do not believe that a misaligned AGI will kill them, but are worried about job displacement or the possibility of tyrannical actors using AGI to consolidate power. They're worried about AI impersonation and the proliferation of misinformation or just plain shoddy computer generated content.

Much like the difference between more local environmental movements and the movement to stop climate change, focu... (read more)

Even so, one of the most common objections I hear is simply "it sounds like weird sci-fi stuff" and then people dismiss the idea as totally impossible. Honestly, this really seems to be how people react to it!

2trevor
My thinking about this is that most people usually ask the question "how weird does something have to be until it's not true anymore", or less likely to be true, and don't really realize that particle physics already demonstrated long ago that there just isn't a limit at all. I was like this for an embarrassingly long time; lightcones and Grabby Aliens, of course that was real, just look at it. But philosophy? Consciousness ethics? Nah, that's a bunch of bunk, or at least someone else's problem.
  • "Guided bullets" exist; see DARPA's EXACTO program.
  • Assuming the "sniper drone" uses something like .50 BMG, you won't be able to fit enough of a payload into the bullet to act as a smoke grenade. You can't fit a "sensor blinding round" into it.
  • Being able to fly up 1000m and dodge incoming fire would add a lot of cost to a drone. You would be entering into the territory of larger UAVs. The same goes for missile launching drones.
  • Adding the required range would also be expensive. Current small consumer drones have a range of about 8 miles (DJI Mavic) so takin
... (read more)
1RussellThor
Firstly, some context: Missile vs gun; Radio comms and protection against jamming.

For your points:
  1. Guided bullets - yes, good. Unsure whether they can be made cheap yet, but if they can, of course such a system would use them.
  2. Chaff etc. - yes, probably correct; however, it seems this is not needed for missiles to destroy current guns.
  3. Fly to 1000m - yes it would; however, for the sniper drone we are comparing the cost to an actual soldier. I have in mind something like https://newatlas.com/drones/huntress-turbojet-drone/ for heavier drones. Other sniper drones could be electric with a very short flight time, carried by the Huntress or a logistics drone.
  4. Relay drones - the idea is that most of them fly over territory that has been secured - think a drone with flapping wings like a bird circling at 1000m - if you shoot it down with a big gun you give away your position. Also, such drones will be doing constant surveillance of territory.
  5. Anti-armor only - yes; however, infantry holed up in a building can't stop the invasion, it can route around them.
  6. Flak guns - yes, guns can take down drones economically; however, it then becomes missiles vs flak gun.
  7. Aircraft - yes, I overstated a bit - for the initial invasion conducted with stockpiled materiel, they can't easily stop it. However, taking out the aircraft is very important for the drone army. The drones can take out the airbases, so it could be a race between the fast aircraft trying to bomb the logistics before the drones reach the airfields. Most countries are ~1000 kilometers or less in length, which is within range of a cheap Cessna-type logistics drone before they even do mesh-network fuel drops to extend the range. Such low, slow, cheap aircraft would be protected by MANPAD-carrying drones, or just equipped with them. Fighter jets would be forced to shoot expensive missiles to destroy them, rather than get in close with the cannon etc. Even if the fighters can do 1,500-2,000 kilometers, conventional forces could still