Meme Marine
Meme Marine has not written any posts yet.

The reason for agnosticism is that a hypothetical being is no more likely to be on one side than the other. As a result, absent evidence, you don't know who is influencing you. I don't think this class of Pascal's Wager attack is very logical for this reason - an attack is supposed to influence someone's behavior, but without special pleading, this one can't. Non-existent beings have no leverage whatsoever, and any rational agent would understand this - even humans do. Even religious beliefs aren't completely evidenceless; the kind of evidence offered just doesn't stand up to scientific scrutiny.
To give an example: What if that AI was... (read more)
I think, more importantly, it simply isn't rational to allow yourself to be Pascal-mugged, because in the absence of evidence, it's entirely possible that going along with it would produce just as much anti-reward as it might gain you. It rather boggles me that this line of reasoning has been taken so seriously.
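To spell that symmetry out as a minimal sketch (the symbols p, V, and c are illustrative placeholders of mine, assuming a symmetric, evidence-free prior): let p be the probability that the mugger rewards compliance with value V, equal by symmetry to the probability that it punishes compliance with -V, and let c > 0 be the small but real cost of complying. Then

\[ \mathbb{E}[U(\text{comply})] = pV + p(-V) - c = -c < 0 = \mathbb{E}[U(\text{refuse})], \]

so complying is strictly dominated: the speculative payoffs cancel, and only the certain cost remains.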
Kudos to you for actually trying to solve the problem, but I must remind you that the history of symbolic AI is pretty much nothing but failure after failure; what do you intend to do differently, and how do you intend to overcome the challenges that halted progress in this area for the past ~40 years?
Yes, I agree that the US military is one example of a particularly well-aligned institution. I think my point that the alignment problem is analogous to military coup risk is still valid, and that similar principles could be used to explore the AI alignment problem: in most countries, military members control weaponry that no civilian agency can match or defeat.
All military organizations are structured around the principle of their leaders being able to give orders to those subservient to them. War poses massive coordination problems, and getting soldiers to do what you want is chief among them. I mean to say that high-ranking generals could stage such a coup, not that every service member would spontaneously decide to perform one. This can and does happen, so I think your blanket statement on the impossibility of juntas is void.
I am unsurprised but disappointed to read the same Catastrophe arguments rehashed here, based on an outdated Bostromian paradigm of AGI. This is the main section I disagree with.
"The underlying principle beneath these hypothetical scenarios is grounded in what we can observe around us: powerful entities control weaker ones, and weaker ones can fight back only to the degree that the more powerful entity isn't all that powerful after all."
I do not think this is obvious or true at all. Nation-states are often controlled by a small group of people, or even a single person, physiologically no different from any other human being. If it really wanted to, there would be nothing... (read 934 more words →)
No message is intuitively obvious; the inferential distance between the AI safety community and the general public is wide. Even if many people do broadly dislike AI, they will tend to think that apocalyptic predictions of the future - especially ones with less hard evidence behind them than climate change, which is already very divisive! - belong in the same pile as the rest. I am sure many people will be convinced, especially if they were already predisposed to it, but such a radical message will alienate many potential supporters.
I think the suggestion that contact with non-human intelligence is inherently dangerous is not actually intuitive to most people. A... (read 495 more words →)
I am sorry for the tone I had to take, but I don't know how to be any clearer - when people start telling me they're going to "break the Overton window" and bypass politics, this is nothing but crazy talk. This strategy will ruin any chances of success you may have had. I also question the efficacy of a Pause AI policy in the first place - one argument against it is that some countries may defect, which could lead to worse outcomes in the long term.
Why does MIRI believe that an "AI Pause" would contribute anything of substance to the goal of protecting the human race? It seems to me that an AI pause would:
I suppose we are now seeing the first stages of "Semiotic Warfare" - the art of achieving political goals through the manipulation of meaning itself. I think it's a good idea, but I would advise against calling it "poison," because that framing implies you are attempting to injure the model. This is not the goal, and I think such framing would set up an adversarial dynamic.