Preparing for infrastructure compromise
Getting ready for the new cold war
Many of us grew up during the Cold War. We spent a lot of time worrying about what we would do if our respective politicians lost their minds and pressed the nuclear button, sending us all diving under our desks or into bomb shelters. We’ve continued to worry about the nuclear abilities of other nations, particularly North Korea, but by and large, preparing local communities for war went away with the switch from Civil Defense to Emergency Management. Now we crawl under our desks for earthquakes (which might actually protect us) and head for tornado shelters instead of fallout shelters.
Now we face the prospect of a new type of conflict with Russia and her allies. No longer a cold war and not to the level of active warfighting in our homeland, but something in the middle. Maybe a “warm war.”
Over the past few years, Russia has claimed the ability to hack anything that is hackable: power plants, satellites, healthcare facilities, and any other piece of infrastructure that utilizes computer processing. Both Colonial Pipeline and JBS (the meat supplier) fell victim to Russian cybercriminals last year, and DHS has warned that Russia has an entire cyber arsenal just waiting to be used. We’ve seen Russia’s ability to influence people in the United States via disinformation and misinformation campaigns using social media, both directly and through contracted third parties. As I write this, Facebook/Meta is trying to crack down on foreign influence related to the trucker convoy protest in Canada and the looming threat of one in the United States. There is very clear evidence that this effort is being catalyzed by parties outside the United States (or Canada), whose only vested interest is to encourage strife in those countries (they also probably get paid decently to do it).
In light of that, let’s talk about planning for the possibility of attacks on our critical infrastructure. We have used an all-hazards approach to planning and preparedness for many years, and since 9/11 we have focused heavily on critical infrastructure protection. Mission areas and lifelines align us with this type of planning, but as often happens, we don’t get down into its deep details at the local level. We had the same issue with the COVID-19 pandemic: lots of pandemic plans, but most of them, at every level, suffered from a failure of imagination. They didn’t scale well, they didn’t account for the way people actually behaved in a pandemic, and they didn’t account for the supply chain issues we continue to experience. Odds are that most of those planning efforts had somebody bring up true worst-case scenarios, which were put in the “that will never happen” bucket and never spoken of again.
How do we avoid doing that in planning for widespread infrastructure failure? Most of us have plans to deal with a power outage at the local level. Most of us do not have a plan that works if that outage covers a wide area and lasts a long time. Somebody probably mentioned it in the planning process, but it was discarded as too hypothetical. Take those outliers seriously. Encourage people to bring them forward, talk through them, walk your current plans against them, and acknowledge where those plans break down. Then amend your plan to account for it, even if that means acknowledging that it’s outside your control and all you can do is support your community with whatever you have on hand.
Local governments and small businesses also tend to do poor continuity planning. Even if they have a COOP plan, it tends to lack breadth and depth, and it makes assumptions about critical infrastructure restoration that are probably wrong in a scenario like this. They also often lack true interdependency lists, as people often don’t fully understand how distinct types of critical infrastructure depend on each other. A recent conversation I had with a local emergency manager revealed that their organization didn’t believe any of their systems used satellite services, or relied on any that did. They didn’t consider that mobile devices rely on GPS satellites for some functions or that the global Tsunami Warning System uses satellites to relay data. They may not have considered that some teleworking employees rely on satellite Internet to connect to work. Weather and space weather forecasting rely on satellites, as do a host of other services that we use every day. Modern 911 systems use satellites to get location data from callers; without GPS, 911 would still work, but locating mobile callers would fall back to what we had in the 1990s. Even the unmanned aerial systems (UAS/drones) that many agencies use don’t work, or don’t work well, without GPS access.
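An interdependency list is really a directed graph: an edge from A to B means B depends on A, and a failure walks outward along those edges. The sketch below is a toy model of that idea; every service name and dependency link in it is an illustrative assumption, not an authoritative map of any real community's infrastructure.

```python
# Toy interdependency map. An entry "power_grid": ["internet", ...] means
# those services depend on the power grid. All names/links are assumptions
# for illustration only.
from collections import deque

DEPENDENTS = {
    "gps_satellites": ["911_location", "drone_ops"],
    "power_grid": ["water_treatment", "internet", "fuel_pumps"],
    "internet": ["voip_phones", "cad_mobile", "telework"],
}

def cascade(failed_service: str) -> set:
    """Return every service knocked out, directly or indirectly,
    when the given service fails (breadth-first walk of the graph)."""
    down, queue = set(), deque([failed_service])
    while queue:
        svc = queue.popleft()
        for dep in DEPENDENTS.get(svc, []):
            if dep not in down:
                down.add(dep)
                queue.append(dep)
    return down

print(sorted(cascade("power_grid")))
# ['cad_mobile', 'fuel_pumps', 'internet', 'telework', 'voip_phones', 'water_treatment']
```

Even a rough graph like this makes the second-order losses visible: losing the grid takes out the Internet, and losing the Internet takes out VoIP, mobile CAD, and telework, none of which appear on a naive "what needs power" list.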
Consider also the dependencies you have on reliable high-speed Internet services. What does your organization do that you rely upon the Internet for? Do you have a Voice over IP (VoIP) phone system that only works when you have network and Internet access? Do you have remote employees who can only do their work if both they and your organization have high-speed access? Do your public safety agencies lose all or some of their mobile access to Computer-Aided Dispatch (CAD) if you lose Internet? Does CAD lose access to some of the systems it needs if you lose Internet access (hint: it does)? Do the clocks in your EOC sync to an Internet-based time service? If your answer to any of these is “I don’t know,” you need to get with all of your users and figure out what your actual dependencies are. There’s a tendency to rely upon IT to know all of this, but they might not, and they might not know your priorities for restoration.
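One way to answer the "I don't know" problem is to keep the audit as structured data rather than a memo: record, for each system, what it needs and its restoration priority, then query it per scenario. A minimal sketch, in which the systems and priorities are made-up assumptions for illustration:

```python
# Hypothetical dependency audit: each system records whether it needs
# Internet access and its restoration priority (1 = most urgent).
# Entries are illustrative assumptions, not a real inventory.
SYSTEMS = {
    "voip_phones":    {"needs_internet": True,  "priority": 1},
    "cad_mobile":     {"needs_internet": True,  "priority": 1},
    "eoc_clocks":     {"needs_internet": True,  "priority": 3},
    "radio_dispatch": {"needs_internet": False, "priority": 1},
}

def internet_outage_impact(systems: dict) -> list:
    """Systems lost in an Internet outage, highest priority first."""
    hit = [(v["priority"], name) for name, v in systems.items()
           if v["needs_internet"]]
    return [name for _, name in sorted(hit)]

print(internet_outage_impact(SYSTEMS))
# ['cad_mobile', 'voip_phones', 'eoc_clocks']
```

The point isn't the code; it's that a table like this forces each owner to answer the dependency question explicitly and gives IT the restoration priorities they otherwise have to guess at.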
It goes without saying that power is essential and has been named as a potential target for US adversaries. We’ve all dealt with small or even large power outages, but there have been very few massive-scale power outages that lasted a significant length of time. How much fuel do you have for your generators that power things like water and sewer systems? Can you get more fuel in a timely manner? If you can’t, what systems do you lose, and what does that mean for your organization and your community? What is your actual plan if your entire community loses power for an extended length of time? There are a lot of lessons that can be learned from the Northeast blackout of 2003, but even that only lasted a few days (or less, depending on where you were).
Beyond your organization, what do some of these losses mean for the remaining critical infrastructure and your community as a whole? At what point do your hospitals become unviable? At what point does your garbage service quit collecting? At what point does your community begin to run out of critical supplies? Our communities have shown exemplary resilience in the last two years, but could they do it again so soon? At what point do our communities begin to break down and cease to function?
As you start to work your way through lifelines and critical infrastructure in detail, you’ll start to paint the picture of how all of our systems are connected and how a loss of one impacts the others. This is the place to start for in-depth continuity planning for your organization. In discussion and tabletop exercises, push your plans until they break, then fix them. Then push them again! Document the failure points so you can set expectations both internally and externally. Planning for a non-theoretical event often affords emergency managers the buy-in and support they need from senior officials to improve or create plans in a short time. The current threats to critical infrastructure may be just such an opportunity.