I have been hacked, and some (selected) files have been deliberately deleted - this file/post was among them. I found suspicious software on my computer and made screenshots - these screenshots were deleted together with several files from my book “2028 - Hacker-AI and Cyberwar 2.0”. I was able to recover most of it from my backup.

So far, I have hesitated to post sensational warnings about smartphones being used in malware-based cyberwars designed to decapitate governments. Still, I wrote this paper out of concern that this could happen – and cheap marketing talk does not and will not change that.

I don’t know who hacked me, but I assume they got this post. It is spicy (and scary). It is published here to protect my family and myself. I have beefed up my security. Certainly, these measures are no match for what I believe attackers could do. Therefore, I have put measures in place to ensure that my research will survive and be published even before it officially appears as a book.

I changed my passwords, but I am under no illusion that this is enough to protect this publication. I have asked friends to save and print this post on their local systems (and not to sign into this site while doing so). I will certainly repost it if an impersonator deletes (or modifies) it with my credentials. (LessWrong doesn’t have 2FA – and I am not suggesting it add it, because I assume it would be insufficient anyway.)

 

Hacker-AI is AI used in hacking to speed up the creation of new malware that is easier to use and more targeted. It could become an advanced super-hacker tool (gaining sysadmin access on all IT devices), able to steal crypto-keys or any other secret it needs and to use every available feature on the devices it visits. As discussed in “Hacker-AI and Digital Ghosts – Pre-ASI” and “Hacker-AI – Does it already exist?”, Hacker-AI helps malware avoid detection (as a digital ghost) and make itself irremovable on visited devices.

In the paper/post “Hacker-AI and Cyberwar 2.0+”, I hypothesized that Hacker-AI is an attractive cyber weapon for states waging cyberwar because it facilitates the following actions or capabilities:

  1. Surveillance (audio, video, and usage) of smartphones or other IT devices - at scale (every system)
  2. Selective denial of access for targeted people via malware on their devices
  3. Direct threats against targeted people via AI bots on their devices
  4. Real-time deep-fakes and their use in redefining facts/truth
  5. Reduction of costly war consequences via pre-war intelligence and preparation
  6. Misdirection about who is the culprit behind cyber activities

An effective defense against the above capabilities is only achievable by technically restricting Hacker-AI, as suggested in the tech proposal posted in “Improved Security to Prevent Hacker-AI and Digital Ghosts”. However, deploying sufficient defenses will take time, during which an assailant could start waging Cyberwar 2.0.

How could governments and their people prepare with non-technical means for Hacker-AI and Cyberwar 2.0+ before sufficient technical capabilities provide protection?

(1) Situations

In the post “Safe Development of Hacker-AI Countermeasures – What if we are too late?”, I introduced six Threat-Levels (TL) depending on the defender’s knowledge of the assumed or proven existence, capabilities, or actions of Hacker-AI as it relates to the development/deployment of technical countermeasures:

  • TL-0 means no current threat from Hacker-AI.
  • In TL-1, Hacker-AI has potentially been developed but has not been detected yet.
  • TL-2 means that everyone involved in developing/deploying countermeasures has been warned that Hacker-AI has tried to attack the security development effort covertly.
  • Sub-level TL-2X means people in the development effort were directly attacked.
  • TL-3 would show that an adversary has used Hacker-AI in a successful Cyberwar 2.0+ campaign.
  • TL-4 means that the adversary’s Hacker-AI is making the development of countermeasures impossible.

Hacker-AI is feasible, but it is unknown whether it already exists. Therefore, the current threat level is below TL-2, but this could change quickly if even one nation uses the time to work on Hacker-AI features and turns it into a deployed cyber weapon for a Cyberwar 2.0+; we would then be at TL-3.

We are not predicting future events, political developments, or intentions. We describe a scenario that requires urgent attention because of its attractiveness to adversaries and its catastrophic outcome for the targets.

We assume that nations like PROC are preparing and then waging a Cyberwar 2.0 on Taiwan. Alternatively, Russia could attack Ukraine (or another neighbor) in 3 or 4 years with the cyberwar capabilities mentioned here. It seems that Russia and PROC have so far invested more in propaganda-based, espionage, or damage-creating cyberwar weapons than in offensive, malware-generating Hacker-AI and Cyberwar 2.0+ capabilities.

Additionally, the US and its allies could have already developed parts of the Hacker-AI technology, but because of legal considerations, they are using it sparingly and in a targeted way, and are not deploying it offensively.

Because Taiwan (ROC) has a high density of smartphones/IT devices, it is almost an ideal candidate for this type of warfare. Therefore, the most likely first example of Cyberwar 2.0+ is the annexation of Taiwan by PROC. In this baseline scenario, we assume that PROC has the technical skills to use cyber-attack tools to replace (decapitate) ROC’s leadership, incl. the governmental bureaucracy. In the aftermath, PROC could then create and operate an AI-based surveillance apparatus over 23 million inhabitants to fortify its gains.

In our baseline scenario, neither the targeted country (ROC) nor seemingly unaffected countries (the USA, NATO countries) are technically prepared. We have two distinct situations: (A) the target country is directly surveilled and attacked in Cyberwar 2.0+, and (B) not-directly-affected countries (bystanders) respond/adapt to the fact that Cyberwar 2.0+ is now an existential threat for every country.

Only the USA, NATO countries, and a few other countries have the engineering capacity to contribute to the development/production of countermeasures. It is unlikely that Taiwan, as the cyberwar target, could participate.

(2) Cyberwar 2.0+

At a high level, Cyberwar 2.0+ is deniable, and its operations are very hard to detect. Its most likely outcome is the replacement of the government and the establishment of an AI-based surveillance system. The war operations consist of three distinct Cyberwar Phases (CWP): (I) Pre-, (II) Actual-, and (III) Post-war, defined by different activities:

  • CWP-I (Pre-war): the targeted country is surveilled intensely and covertly, primarily via its citizens’ smartphones and computers. A comprehensive data model of its leaders, businesses, citizens, resources, motivations, assumed pressure points, vulnerabilities, strengths/skills, security, processes, geography, infrastructure, available computers/smartphones/software or hardware, etc., is likely being generated by the assailant. This model allows the assailant to simulate the cyberwar and the post-war period. It prepares him to use his resources, i.e., to know when and what to do under different contingencies.
    From this model/simulation, the assailant derives a comprehensive, detailed action plan to replace the existing government with a new puppet regime, including actions to make the new government operational by effectively taking full control. The model/simulation will help to optimize the assailant’s (automated) responses. The assailant may also decide to gather additional intelligence on his future (business) vulnerabilities via additional espionage aimed at lowering the pain from sanctions. However, CWP-I would give only additional urgency and focus to what is already being done.
    Because of the undetectable, ghost-like features of the malware/spyware used, the comprehensive surveillance operation could be done entirely covertly. If defenders unexpectedly have special (dedicated) hardware to detect ghost-like malware, or if the attacker was careless in training its Hacker-AI (i.e., the malware is detected by software), governments may gain some additional time within this phase to prepare – but defenders’ last-minute countermeasures would be detected via surveillance and accounted for via updates to the model/simulation.
    CWP-I ends when the assailant’s leadership believes it has a good-enough data model for its operation, with victory as a high-probability outcome. This conclusion could be confirmed via detailed (realistic/dry-run) simulations. CWP-I surveillance will continue in the follow-up phases.
  • CWP-II (Actual-war). No textbook case can help us define when a (real) cyberwar starts or ends. So far, cyberwar actions have annoyed or scared citizens in targeted countries, or these actions were part of a kinetic/destructive war. No country has been defeated by cyberwar alone.
    We define the beginning of a cyberwar as the moment when the assailing country, via several coordinated events, infringes on significant sovereign rights on the target country’s territory. Espionage or putting malware on users’ smartphones or IT devices is therefore below the threshold of war. Examples of acts of war are giving fake orders (about significant activities) while pretending to be a government authority, having citizens arrested (by local police) based on pretenses/fake evidence, or intimidating/coercing people (in governmental roles) into becoming traitors.

    Additionally, manipulating infrastructure services (e.g., power, water, communication/internet, CCTV surveillance, money supply, financial or eCommerce transactions, logistics, transportation, healthcare, etc.) or security-related resources (police, military, weapons, etc.) could all constitute acts of war.

    The primary goal of Cyberwar 2.0 is the digital decapitation of a country’s government and society. In the course of this war, mid-level clerks could be intimidated into collaborating with the attacker. Also, communications or publications of higher-level institutions or leaders would be disrupted or meaningfully manipulated. With deep-fake orders, key people in security could become unwitting collaborators, arresting decision-makers, influencers, and other security personnel under pretenses. The military could be ordered to stand down or be isolated from what is happening.
    Cybercriminals could be given cyber tools to create additional confusion, diversion, or misdirection about cyber events. News media and social media are either distracted or ordered to report normalcy.
    Justifying a new government would probably require some political theater or fabricated disasters, which could be delivered in various forms depending on circumstances or opportunities. Also conceivable is a physical (violent) escalation in which someone in the inner circle or security forces is turned into an assassin, e.g., coerced into using explosives to eliminate the government’s leadership.
    The cyberwar ends with the establishment of a new puppet government controlled by the assailant. This new leadership will probably replace the bureaucratic and security leadership with intimidated (lower-ranked) collaborators from the target population. Additionally, the previous security services are either dispersed or arrested.
    To achieve this outcome, no (foreign) soldiers have to enter the target country before being officially invited. A government’s legitimacy is difficult to determine from the outside. Foreign countries must accept a nation’s sovereign decisions if no proof of foreign intervention or involvement can be provided.
  • CWP-III (Post-war) is designed to fortify the assailant’s gains and to make the change permanent, i.e., irreversible. In this phase, surveillance continues. Suspected resistance fighters or saboteurs will likely be arrested and placed in re-education camps. Also, misdirection is used to shift the blame to others. Fake news is used to create a narrative that helps calm down a possible worldwide shockwave of panic.
    This phase aims to reduce possible damage that could affect business continuity and reduce the value of the spoils of war. However, if there is no violence and the puppet government is accepted by its citizens, CWP-III measures such as re-education camps could be postponed, as the surveillance from CWP-I is still active.
    Affected countries (targeted and bystanders) must change their operations as soon as they become aware of cyberwar 2.0+ activities or if they see a chance to reverse the initial victory of the attacker.
  • CUCA (“Country is Under Cyber-Attack”) must be declared by the primarily targeted country/government as an important announcement to trigger prepared emergency regulations. This announcement must be made authoritatively via multiple publications and communication channels so that every citizen knows the country is under siege. The CUCA announcement must be protected against unauthorized misuse and premature cancellation.
    The previous government should also be able to signal to its fellow citizens that the country is about to be liberated via the distribution and deployment of security hardware. This security hardware would help people retake control of some of their IT equipment from permanently occupying Hacker-AI malware. It is assumed that software tools trying to do the same would be rejected by the malware.

    Not-directly affected countries should declare a special state of emergency to activate prepared rules and regulations supporting the country’s cyberwar preparation efforts.

On the assailant’s side, people involved in this war as operators won’t dare to talk, as they would know about the surveillance capabilities. Any sign of dissent could probably be identified ahead of the operation.

The biggest impact of Cyberwar 2.0+ is on people in the targeted country; their freedom is taken permanently. Even if citizens are later allowed to travel outside their country, the government would know whether these people have left enough “collateral” back home (i.e., their family, etc.) to guarantee their return.

If a country were taken over almost overnight, everyone involved in national security would ask: how can any country, including the USA or alliances of nations, protect its sovereignty?

Additionally, deniability and misdirection are essential to avoid direct retaliation. However, the outcome speaks for itself: a government was likely replaced in a cyberwar. An uncontrolled, fear-triggered escalation to military or nuclear retaliation (without solid proof) seems unlikely; what if the regime change was just a coup, an internal power struggle? Therefore, the biggest impact this event could have is that nations massively mobilize their technical talent to develop technical protection against Hacker-AI as soon as possible. However, by then we are (for sure) too late. The problem of being too late and its mitigation was discussed in the post “Safe Development of Hacker-AI Countermeasures – What if we are too late?”.

(3) What is detectable in a Cyberwar 2.0+?

Technically unprepared defenders will not detect Cyberwar 2.0 activities. Detection happens only if the assailant intentionally steps out of the shadows, e.g., when it interacts with people, which means it has directly threatened many people via AI bots. However, the malware could probably prevent evidence of this from being preserved. Threats serving an assailant’s agenda could also be delivered via deep fakes and blamed on an internal political power struggle. We must assume that Hacker-AI’s malware/digital ghosts remain undetected during all cyberwar phases or that they always use misdirection to blame others. The only exception would be if many people within the attacker’s camp dared to speak out as whistleblowers despite the personal risks and dangers to them.

The government’s or society’s decapitation could start as isolated technical problems – due to the suppression of certain information, it could take several days until the full scope of these disruptions becomes apparent to larger audiences. Even then, the assailant could find credible ways to blame others. During a time of confusion, the assailant could use the uncertainty to arrest or intimidate people in key positions within the bureaucracy, security apparatus, or political class and destabilize the existing order further. The attackers’ carefully planned approach is likely more successful than any uncoordinated attempt to resist determined actions.

Technically unsophisticated victims are unlikely to recognize Hacker-AI activity; other explanations for cyber events seem more plausible. Cyberwar 2.0 (CWP-II) could already have started without anyone in the targeted country detecting it. Systematic surveillance of most people via their electronic devices (smartphones and PCs) could give attackers an advantage that defenders cannot catch up with.

Malware could surveil phone calls and other smartphone activities, locations, or proximity, or grab résumés from users’ email. People’s phone data reveal a person’s role, status, motivation, and potential pressure points. Audio could be transcribed on-device, and surveillance data could be covertly aggregated into small, inconspicuous data packages uploaded to thousands of servers outside both the target’s and the assailant’s country. From these data, the assailant could automatically derive detailed plans of action for covertly manipulating institutions. To win the cyberwar, the attacker must identify key people, incl. possible replacements, and create surveillance bubbles around them.

Cyberwar 2.0+ assumes that most people (even in relevant positions) can be compelled to comply with assailants’ demands via direct intimidation by AI bots or real-time deep-fakes in communication or publications. If done inconspicuously, intimidation would not leave any traces, except for possible reports about it.

The targeted country does not need to be invaded by soldiers. Many people in a society could be turned into collaborators, even traitors – with little persuasion. A machine-generated voice could deliver massive/ruthless threats (e.g., against the well-being of a family), where humans would potentially fail to be credible or consistent enough. Threats are delivered to individuals via malware/AI bots. Targeted persons are prevented from extracting digital or physical evidence or from having other witnesses to these occurrences. Everyone targeted as a collaborator could subsequently be observed. Follow-up actions could be automatically initiated via AI, drones, or deep-fake calls with the voices of relatives reporting strange or even scary events. It must be expected that most targeted (unprepared) people would stay silent and comply.

The problem for governments facing this kind of adversary is that finding nothing could mean either that there is nothing or that there is something they cannot detect. In Cyberwar 2.0, the new line of defense runs through the populace. Defenders will receive plenty of data traces containing useless noise or data meant to misdirect the defenders’ attention or conclusions. Without conceptually addressing this critical deficit/blindness, there is little hope that meaningful actions or resistance could alter the outcome of Cyberwar 2.0.

(4) What are the Preparation Goals/Measures for a Cyberwar 2.0+ Target?

If the assailants’ main goal is the government’s and society’s decapitation, then defenders must establish measures to counter cyberwar 2.0+ activities by keeping the government in place and operable as long as possible. If possible, the conflict’s cost must be significantly increased for the assailant. Unfortunately, both goals are very difficult to achieve, likely impossible without dedicated or extensive preparation.

Preparing for Cyberwar 2.0 by using digital or internet-based means to report suspicious deep-fakes or intimidation by AI bots is a serious mistake. Digital channels will be suppressed or manipulated by being flooded with false or manipulated reports. Assuming that defenders could somehow work around the attacker’s capabilities is most likely wrong. We must accept (a) covert surveillance via mobile phones, (b) denial of service for critical people/organizations, (c) intimidation via AI bots, (d) deep fakes, (e) comprehensive attacker preparation/planning via simulation, and (f) misdirection about who the attacker is.

Governments must proactively prepare rules to remain in charge during CWP-II and to make their illegal replacement by a puppet government as difficult and time-consuming as possible. Unfortunately, the probability that a legitimate government could survive Cyberwar 2.0 is slim. It could already be considered a victory if it could publicly announce CUCA (“Country is Under Cyber-Attack”) and warn the world community about what happened.

Still, we propose that governments and citizens must prepare measures to achieve many goals within the following categories:

  1. Information/Intelligence 
    • Governments need reliable info on threats/demanded tasks from intimidated people asap
  2. Preservation of structures and organizational missions
    • Preventing government’s decapitation by maintaining (reduced) command and control
    • Preservation of existing bureaucratic/security structures and hierarchies
    • Increased organizational resilience against external influence or personal intimidation
  3. Protection against painful economic disruptions/damages
    • Reduction of economic disruption for defenders
    • Prepared methods to slow down detrimental decisions and accelerate beneficial ones
  4. Protected (unaltered) access to or communication with citizens
    • Dependable announcement that a comprehensive cyberwar has started (CUCA)
    • Establishing (reliable) methods of authorized information flow to all citizens
  5. Maintaining capability for reliable actions during and after CWP-II
    • Preparing a command/control backup (i.e., an underground) for retaking governance later
  6. Protection of people
    • Preventing arrests of innocent bureaucrats, security personnel, or leadership – unless done by people who have first-hand knowledge or irrefutable evidence of treason
    • Protection of people who have given information despite threats

Keeping most measures secret is a waste of effort because the adversary will gain this information anyway. However, where specific people are involved, electronic traces must be avoided proactively. The strength of the preparation should come from making it public (open source) so that people in different positions can contribute their detailed know-how to make it better.

1. Information/intelligence gathering

The government needs reliable information, as soon as possible (within 24 hours or faster), on what the assailant demanded in intimidating calls to people. We must give informants a safe method to report these demands despite vicious threats against them and their families.

  • Routinely, e.g., every 4-6 hours, dedicated people in each organization would collect envelopes with (paper/checklist-type) forms in which informants can anonymously report demands by assailants. Victims of threats are not encouraged to disregard these demands – their protection is more important. As a rule, everyone in video-cam-free zones must fill out these forms by hand.
  • Paper-based, standardized forms/reports are locally aggregated (reducing them to a smaller size, i.e., dropping empty forms) and provided as paper-based, hand-filled (summary) reports to the next aggregation level/office. Finally, they are collected/aggregated in different, independent main information/intelligence offices and reported to the nation’s political/military leadership so that it can decide whether to declare that the country is under cyber-attack (CUCA).
    Stationed analysts do threat-level assessments without IT support, using paper wall charts or erasable blackboards only. Aggregation happens by regularly analyzing pre-sorted forms into summary forms so that higher-level aggregation/intelligence offices need less effort to create threat-level assessments quickly (incl. handwritten details on unexpected demands).
  • These analysts need certain character traits that help them work as a team over an extended time together. These professionals would create threat-level assessments multiple times per day.
    They would work (without IT, phones, or advanced entertainment) separately, even isolated from their families for some time, and then rotate out for the same amount of time. This schedule would give assailants less chance to compromise them and misuse them for operational goals.
  • The locations of these offices should be protected against electronic surveillance, but their offices can’t be kept secret; they should be close to the government’s leadership.
    In-person meetings of the heads of each office with the government’s political and military advisors should also be held at regular intervals so that no one can infer the threat assessment from the frequency of the meetings or from the people invited.
  • Everything done by the information/intelligence offices is paper-based and handwritten. Any printing of information they use or generate is done outside – supervised by humans who know what they expect to get. Only very old copy machines would be allowed, and only as an exception.

2. Preservation/continuity of governance

The assailant’s main mission is to decapitate the existing government – violently or via cyber-means by isolating the leadership or certain bureaucratic layers from each other. During this time, the risk is that the assailant will use fake orders or deep fake audio/video calls to create new structures with new or compromised/intimidated leadership. The existing governmental/bureaucratic/security structures and hierarchies and their organizational missions must be preserved or operated in pre-determined modes. 

Preventing the government’s decapitation means that command and control over the country’s institutions must be maintained, though potentially with reduced intensity.

Because it is unlikely that technically unprepared countries can detect CWP-I activities, only demands made by AI bots within CWP-II could be reported. A CUCA (“Country is Under Cyber-Attack”) announcement should only be issued if there is sufficient evidence that decapitation is the adversary’s goal.

Governance must then change significantly after CUCA is confirmed or declared. Meetings should be held and orders given in person only. Notes and orders are handwritten (or produced on an old typewriter).

The government’s overall goals are to maintain command and control and to have resilient/stable organizations. The focus of besieged governments is to keep the lights on and not to make (unnecessary) changes. Unlike in other wars, in Cyberwar 2.0 every decision, order, or change of laws/rules could be fake, and its authenticity must be questioned because of Hacker-AI, i.e., malware from Hacker-AI could manipulate digital representations of events to its advantage. A freeze on modifications to rules/laws is a significant limitation on a government’s sovereignty, but it is necessary due to the nature of the Cyberwar 2.0 waged against the targeted country.

Here are a few suggestions for concrete measures:

  • Proactively defining/establishing simplification and prioritization: a freeze on major decisions (triggered by CUCA) must not be undermined as a matter of principle. The bureaucracy should not accept changes or decisions because the picture it receives could be distorted. Instead, it could operate in a mode in which it is allowed (at its discretion) to cut through previous red tape.
  • After CUCA is declared, some critical (security-related) administrative processes must be turned into paper-/form-based processes (done by hand) so that malware has less impact on them. If these changes are not planned, thought through, and decided ahead of CUCA, attackers would likely take advantage of the confusion of trial and error.
  • Organizations must be resilient against undue external influence or personal intimidation; i.e., external issues should not impact the service we expect from organizations. For example, multiple persons should always be prepared to take over any task if required. Additionally, organizational changes, particularly in key positions, should happen transparently (overtly) and be cross-checked against intelligence extracted from the gathered data about assailants’ threatening demands.
  • Attackers could stress-test whether there is sufficient control over most governmental organizations. After CUCA, organizations should be prepared to self-regulate/control their operations and handle problems from internal/external influence for many months without external command/control/oversight. What happened during the cyberwar crisis can be reviewed and adjudicated later, and all participants should be aware of that.

The performance of governmental organizations after the CUCA declaration will likely have no impact on the government’s decapitation. Regime change is more likely accelerated by events that have nothing to do with operational activities. The theater leading to a new government controlled by the assailant will be judged against established (political) conventions, not by how poorly the legitimate executive continued its governance.

Realistically, the old government will be replaced quickly by a new puppet regime; cyberwar activities will then stop. Preparation for the continuation of governance is probably a waste of effort. In Cyberwar 2.0, the attacker will predictably win. The most important goal should be the announcement of CUCA to warn the world community of the cyberwar. If the government can provide evidence for its claim, it has probably scored an important victory before being defeated.

3. Protection against economic damages/disruption

Economic consequences for the assailant will come from sanctions, a less productive workforce, or sabotage. Increasing the assailant’s costs without increasing the personal risks of the people involved in these acts is extremely difficult and potentially unachievable.

Limiting painful economic consequences from disruptions or damage caused by assailants is largely outside the control of governments in CWP-II or CWP-III. The government should still try to reduce economic disruption for the defender’s population. Government analysts should identify methods to slow down decisions detrimental to its citizens while accelerating beneficial ones.

Suggesting concrete proposals is outside the author’s competence.

4. Protection of communication with citizens

The government must stay in touch with its citizens. Also, as consumers of information, people must be sure that they receive unaltered messages that the government has authorized. People must trust the message, the medium (i.e., printed or delivered verbally at an event), and the messenger, whom the receiver should (at least) know. Digital communication channels must be distrusted. The following goals apply to phases CWP-II and CWP-III:

  1. There must be a dependable announcement to citizens that the country is in a comprehensive cyberwar (CUCA). After this declaration, citizens are informed and prepared for the possibility that services or messages are either compromised (i.e., helping the assailant) or deep-faked. It should be publicly announced that all publications, incl. video/TV, and all non-in-person audio/video communication could be faked and used for propaganda or disinformation purposes.
  2. Tasks within civil defense measures are assigned (preferably) to teams of trained citizens or to newly trained (trusted) volunteers. Spontaneously formed (autonomous) teams could potentially help maintain and improve society’s living conditions when the government’s more centralized command and control is failing.
  3. The legitimate government has reliable methods of authorized information flow down to all citizens based on word of mouth and redundant trusted messengers.

We acknowledge that we do not have a comprehensive plan on how the above objectives can be accomplished reliably and sustainably (under surveillance). However, a few suggestions should be made here:

  • New methods or processes using prepared publications and trained volunteers should be developed on an ongoing basis and established in advance. All communication measures should be independent of IT technologies and created with minimal data traces.
  • Local (info) events should be cell-phone-free. Only cell phones with the battery/power supply removed can be trusted. Otherwise, they are surveillance devices, even when manually switched off. Wrapping cell phones in tin foil reduces their connectivity, but the microphone could still record sound.
  • Phone jammers are insufficient. Instead, EM-detectors could help locate devices based on their emitted network activity, incl. Wi-Fi and Bluetooth (see the sketch below). However, malware could deactivate these network activities while keeping the microphone or video camera recording. All mobile-device users must be told to stay conscious of having these devices with them at all times.
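To illustrate the kind of radio sweep meant above: the following is a minimal sketch, assuming the open-source Python library bleak (pip install bleak) and a laptop with a Bluetooth adapter. It only lists devices that are actively advertising over Bluetooth LE; as noted above, it would miss any device whose radios have been silenced by malware, so it complements rather than replaces battery removal or dedicated EM-detectors.

```python
import asyncio
from bleak import BleakScanner  # third-party BLE scanning library: pip install bleak

async def sweep(seconds: float = 10.0) -> None:
    # Passively listen for Bluetooth LE advertisements and list the devices heard.
    # A stronger (less negative) RSSI roughly indicates a closer device.
    found = await BleakScanner.discover(timeout=seconds, return_adv=True)
    for address, (device, adv) in found.items():
        print(f"{address}  rssi={adv.rssi:>4} dBm  name={device.name or '<unknown>'}")
    print(f"{len(found)} BLE device(s) heard in {seconds:.0f} s")

if __name__ == "__main__":
    asyncio.run(sweep())
```

Running such a sweep before a meeting gives, at best, a list of nearby advertising devices; an absence of hits is not evidence that the room is clean.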

5. Maintaining the capability for reliable actions

In CWP-II, governments are under assault; command and control and the continuity of governance were already discussed under item 2 (preservation/continuity of governance).

The cyberwar will likely be lost, and a puppet regime will take control. In CWP-III, surveillance would continue and be enhanced by public surveillance measures if some people try to avoid being surveilled by their smartphones or personal devices.

Still, the old government should not give up on helping their fellow citizens to regain their freedom and stop the intrusive AI-based surveillance.

  • Proactively preparing policies for CWP-III by establishing an (informal) command/control backup (i.e., an underground) could help retake governance later or mobilize people to start an uprising.

Dedicated experts should prepare conceptual plans for which key positions should be retaken/occupied by sympathizers and which key technical components should be deactivated or sabotaged to favorably change the result of civil uprisings or to help the resistance network.

Even if it is unlikely that there is a way back from total AI-controlled surveillance, no stone should be left unturned in studying how to reverse it.

  • Small teams/independent cells of inconspicuous people who know each other well could pledge allegiance to their constitution and preserve faith in the old order. They remain silent while preparing actions for when the signal for the uprising comes. These groups are trained to receive and send covert messages under surveillance conditions.
  • The uprising could only be successful if Hacker-AI-based malware is stopped on enough IT devices. Although software updates could have stopped the initial malware leading to Cyberwar 2.0, once persistent malware has occupied the hardware, it is too late for software fixes. Only security hardware components retrofitted into devices could make a difference.

These security-hardware retrofits can be miniaturized into very small components, i.e., as small as network plug connectors (about 1 cm³). They would need to be smuggled into the country and distributed to people who want to regain control over their devices.

Existing smartphones are likely not retrofittable. We would need to destroy them as potential spying devices and use either simple burner phones or new smartphones with security components.

The idea of liberating a cyber-occupied country requires: 

(a)  fitting IT equipment with security-hardware retrofits on all devices at around the same time, so that people doing so would not become targets for the puppet regime’s security,

(b)  removing power from all (non-retrofittable) mobile or IoT devices and 

(c)  having a plan for switching off or removing adversarial control over public surveillance/infrastructure IT systems via (dormant underground/resistance) teams focused on assigned tasks.

However, the occupier or new regime could have changed many systems within CWP-III; returning to the old order/pre-occupation might be impossible.

6. Protection of people

Citizens of the target country are in danger for three reasons: 

  1. The new regime could arrest someone for being a member of the political leadership, for being politically opposed to PROC, for being a key part of the government’s bureaucracy, or for being a member of the government’s security services (military, intelligence services, or police).
  2. Anyone who spoke about being contacted by the assailant’s AI bots or refused to comply with their demands despite their threats could be in danger. These threats might be real, and machines do not forget; they could be carried out via drones almost automatically.
  3. Once the assailant controls security and the justice system, many more people could be arrested because they fit a profile (i.e., potential saboteur) and be jailed in a re-education camp.

Events related to (1) and (2) happen during the actual war (i.e., CWP-II). Events related to (3) are the aftermath (CWP-III) and are outside the control of the legitimate (but replaced) government.

  • Everyone on the list mentioned in (1) should have a personal escape plan (with their family) and access to escape resources provided by the legitimate government before and after CUCA is announced. The announcement of CUCA could be interpreted by many citizens exposed under reason (1) as a signal that they should leave the country with their families as soon as possible.
    Some of them may have a second passport that would give them diplomatic support from other governments. They may also have financial resources outside the country. 
    The targeted country (e.g., Taiwan) could proactively negotiate with neighboring countries to handle predictable refugee issues via financial agreements or transactions.
  • Preventing arrests of innocent leaders, bureaucrats, or security personnel (reason 1) during CWP-II falls under the legitimate government’s purview. Strict rules should prevent normal security personnel from arresting certain categories of people without specially authorized officers present. Arrests must be accompanied by at least two higher-ups from a special investigative department. Alternatively, two or more investigators with first-hand knowledge of treasonous behavior must be present. At least one must give a written testimonial about the case, which is then provided to a dedicated aggregating/intelligence office – which has the authority to override (at least temporarily) arrests or even court decisions based on the quality of its reports/intelligence.
    If people qualified or authorized to give testimonials are missing from an arrest, independently of CUCA, and the arrestees can prove that they belong to a protected category, then they must be released immediately – this can be demanded by a single legitimately protesting police officer.
    All officers and executives of the special investigative department are listed in printed binders with images/personal data, which police officers consult before agreeing to execute an arrest.
    The release of these people could give them and their families a chance to leave the country immediately, before CUCA is announced. The philosophy is based on the “no man left behind” ethos of the US military: everyone gives all they can until it makes no sense to continue.
  • Everyone who thinks an assailant’s AI bot contacted them on their smartphone or IT device should safely report this event via handwritten forms filled out in video-free zones. A form is inserted into the envelope even if nothing reportable happened.
    Informants are not instructed to ignore or defy the demanded task, even if carrying it out would violate their loyalty to the organization. Informants are kept safe during and after reporting.
    Still, everyone at risk of being contacted by AI bots or deep fakes should be trained/informed about reacting and reporting incidents (with details) as soon as possible without creating suspicion.
    The training for dealing with threats happens in video-free and cell-phone-free events.
  • People at risk should be trained to use common sense about when and how to signal their coercive situation non-verbally, i.e., when they believe they should be taken off a case related to a demand from the AI bot. Conversations should be rehearsed for handing the task to others or allowing others to do it instead. Disrupting a step within an assailant’s much larger plan could be helpful but dangerous.

Helping many people to escape a country affected by cyberwar 2.0 or after CUCA is announced is considered an important long-term contribution and potentially a good investment into a better future for that country and the people left behind.

7. Generally suggested methods/rules or behavior

Being under constant (covert) surveillance via smartphones or other IT devices (audio, video, or usage) is a new quality of surveillance. Switching them off is often not enough, as electronic devices can merely appear to be switched off. People will deal with this situation differently. Some will surrender and accept the new reality, while others will give up on smartphones or IoT devices, keep electronic devices disconnected from the power supply, or remove their batteries. The novel 1984 by George Orwell illustrates the risk of (actively) avoiding surveillance: it can make the people doing so look more suspicious. Still, personal methods and organizational policies for dealing with mobile devices could be implemented already pre-cyberwar.

  • Using less sophisticated burner phones instead of smartphones. Switching between these phones should be supported by making SIM cards easier to exchange between phones.
  • Organizations should offer more smartphone boxes within their buildings. These need to be sound-dampened so that recording the surrounding sound is useless. Meetings are mandatorily held without smartphones.
  • All offices should have unmistakably marked cell phone-free and video-free zones
  • Pre-cyberwar, some people could analyze their homes for potential IoT spy devices. They can start protecting their privacy immediately by removing such devices. Taking this step after CUCA is declared is potentially a more dangerous move.
    Not all people will be subject to constant surveillance, but they cannot know that (for sure). After CWP-I, the attacker will likely narrow the surveillance focus to a few million citizens. Still, low-ranking clerks or operators could be on the list of potentially usable assets. The attacker could use regular people as tools. 
    The cyberwar frontline would run invisibly through the entire society, and exposing or announcing opposition to the likely winner of the cyberwar is a dangerous decision. Compliance, and potentially quiet signaling to the outside, is the most we should expect from people coerced into collaboration.
  • Organizations should give coerced people a quiet method of dealing with being a collaborator. If people report and even show that they are coerced, they become double agents. 
    People in organizations must know, via prior education, the personal risk they accept if they expose themselves to others as a coerced collaborator. People should be trained via publications, education, and exercises on how to send or spot these signals.
  • Additionally, organizations should have a culture of (enough) transparency to detect suspicious (potentially treasonous) decisions/actions quickly.
  • Providing advice on how to deal with intimidation within a cyberwar is essential. Whether this education on “tips” should happen publicly via TV or the Internet should be studied in more detail later.

However, organizations should have trained cyber-security professionals educated in this topic. 

  • As a rule and policy, all decisions made or orders given under duress are null and void. Their outcomes must be reverted immediately, or soon after coercion has been detected and confirmed.
  • Subordinates have the responsibility, even the duty, to decline orders when they have doubts (based on non-verbal signals or on context). However, at least one peer or colleague must be consulted (within a safe zone) and must accept or tolerate the declining of the specific order to make the decision unassailable. The compromised person’s name should not be mentioned in these discussions.

(5) Preparations for not-directly targeted countries

The most significant difference between targeted and not-directly targeted countries is that non-targeted countries are probably not subject to total smartphone surveillance. The required computation backend would be too large to generate comprehensive data models of all people. Still, Hacker-AI operators might have done other reconnaissance missions to determine who is important enough to surveil more intensively. They may even surveil these individuals regularly or continuously.

How intrusive Hacker-AI operatives are within not-directly targeted countries is difficult to predict. They might try to penetrate defense systems and make military logistics partly or fully inoperable if they know they can’t be detected. Also, they might try to understand and then deactivate several critical key components of the (nuclear) retaliation system without making these changes detectable. I cannot claim to know whether Hacker-AI’s malware can penetrate hardened military systems. However, if these systems use the same architectural principles as regular systems – i.e., von Neumann architecture, virtual address space, direct memory access (DMA), etc. – and have no physical separation of security and regular tasks or no whitelisting of all apps before they enter RAM, then it remains to be seen whether malware can do it or whether these systems are good enough to resist.
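To make the whitelisting idea concrete, here is a minimal, illustrative Python sketch of hash-based allowlisting before a binary is permitted to load. It is only a software approximation of the concept; the hardware-based proposals referenced above would enforce an equivalent check outside the reach of the main OS, precisely because a purely software check could itself be subverted by Hacker-AI. The file path and the hash entry are placeholders.

```python
import hashlib
import sys

# Placeholder allowlist of SHA-256 digests of approved executables.
# In a hardware-enforced design, this table would live outside the main OS.
APPROVED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # example entry
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_allowlisted(path: str) -> bool:
    """Return True only if the file's digest appears on the allowlist."""
    return sha256_of(path) in APPROVED_SHA256

if __name__ == "__main__":
    target = sys.argv[1]
    if is_allowlisted(target):
        print(f"{target}: on the allowlist, would be permitted to load")
    else:
        sys.exit(f"{target}: not on the allowlist, refusing to load")
```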

The goals of not-directly targeted countries after the confirmation that Cyberwar 2.0+ with Hacker-AI exists and is a viable form of warfare are probably twofold: 

(1) What can a country do immediately to prevent being another victim of Hacker-AI and Cyberwar 2.0+? 

(2) Creating and protecting a safe environment in which Hacker-AI countermeasures can be developed, manufactured, distributed, and deployed.

Regarding (1): Nothing could prevent or protect a country from being the next target. However, we would still suggest that the already proposed methods of guaranteeing continuity of government for targeted countries should be used. Additionally, smartphone use should be reduced until new smartphones with security hardware are manufactured and widely deployed. 

Although it might be too late, some computer systems should be taken offline immediately and kept offline until security hardware solutions are available to protect them. Also, although it is not sufficient, old software (from persistent storage media like CDs) should be reinstalled if there is a chance that the system has been compromised.

Based on cultural or societal consensus, countries will respond to the threat of Cyberwar 2.0 and potential total AI-based surveillance in different ways. They all must hope that technical security tools will soon lead out of this problem. It is conceivable that countries will use their war mobilization acts to create tools that help them adapt their bureaucracy, economy, and citizens to the new normal. However, that requires new tools that must be developed and ultimately deployed within step (2).

Regarding (2): I refer to the post “Safe Development of Hacker-AI Countermeasures – What if we are too late?” for details. Every single piece of preparation done in the absence of Hacker-AI would be of tremendous help. The most important goal is to safeguard engineering skills, manufacturing capacities, and services, including all steps toward deployment, against nefarious malware interference in every way imaginable.

(6) Discussion

War preparation aims to deny the adversary his goals. For Cyberwar 2.0+, the assailant would try to achieve a rapid regime change and low follow-up costs from the war and its aftermath. Denying these goals in a conventional war is done by destruction and sanctions from the world community. However, cyberwar is a remote data operation with surveillance, intimidation, and misdirection designed to decapitate a government and its society and replace it with coerced puppets recruited from within the targeted population.

It is very difficult to fight a war in which the covert frontlines run tracelessly through one’s own population. Cyberwar activities are deniable and easily blamed on others. The coercion of people to accomplish operational goals that cannot be achieved otherwise could be the only detectable event in which an assailant steps out of the shadows. Physical evidence of cyber attacks will likely not exist. Governments could create a paper trail, from anonymous handwritten reports to aggregation at more centralized information/intelligence offices, from which trends or patterns could be derived.

These papers should be brought as diplomatic mail to other countries, where they should be stored and studied further. Only via diplomatic couriers, potentially involving other countries, would the world receive some physical evidence of what led to the announcement of a cyberwar (CUCA). This outcome could already be considered an important victory under otherwise futile conditions.

(7) Conclusion

Too late means what it says: too late. Preparing for situations in which we are presumably too late does not necessarily produce different outcomes. For targeted countries that are not technically prepared for a Cyberwar 2.0+, preparation would, at most, provide the world community with a signal that changes (like a regime change) within the targeted country were triggered by a confirmed Cyberwar 2.0+.

It would be a significant success if many people targeted for arrest could be rescued before being placed in re-education camps.

A confirmed Cyberwar 2.0 would show that waging such a war is a (seemingly) risk-free decision. It could send shockwaves through the world. No country will be safe until technical means make surveillance via smartphones and IT devices much more difficult or even infeasible.
