Say Hello To Server Islanding
Network Admins need to take a step back and reconsider physical barriers to “isolate and island” server systems. Although cloud computing has been heralded as the solution for all our server woes, it still has many of the same issues that plagued us when we had racks of pancaked pizza boxes in our utility closets. Our blind trust in large cloud service providers has seen many businesses hacked back to the stone age, enterprise-wide, because cloud services have such a massive attack surface. Hospitals are brought to a standstill, banks are robbed, utilities are broken into, and thousands upon thousands of dollars are spent on cybersecurity. Many systems see enormous advantages from cloud computing, no doubt, but there are still countless others that we’ve shoehorned into the cloud that would have been better left on-site. These include systems that are critical to local operations, like building management control systems, automation and SCADA, critical power, and many others.
Until the next revolution in digital identity (hint: BLOCKCHAIN) brings confidence and assurance to “who” and “what” we’re connected to, we’re going to need to consider using air gaps to protect our servers. “The Server” is the core of the network, and we need to protect it with physical barriers in much the same way physical-world islands are protected. What I’m suggesting is that we rethink having our critical server systems online to the wider world at all. To clarify, I’m not suggesting these server systems have no connection to other systems (they’d have only limited use if they couldn’t communicate with other devices); what I am suggesting is that we focus on making the standalone server as self-reliant and secure as possible, so that we are not dependent on remote resources. Natural islands are protected by barriers that cannot be defeated without showing up in person, and we should look to the same model: barriers that cannot currently be defeated without local physical access.
I’m sure I’ll be flooded with hate mail over this from angry Network Admins. From day one, we’ve been taught how to make systems communicate, how to troubleshoot them, and how to protect them with software supplied by hardware manufacturers. Servers are designed to be massive repositories for data, places to collaborate on projects, and therefore by their very nature are the most interconnected devices we use. You can isolate with firewalls, intrusion detection systems, virus and malware protection, virtual LANs, and a host of other software methods, but they all have one thing in common… they require vastly more resources to maintain than a physical barrier.
What’s the alternative?
Now is the time to consider local servers again. Server Islanding is the concept of separating a self-sufficient server system and deliberately creating a physical barrier between it and the outside world. Physical isolation of servers and networks, also called “Air-Gapping,” isn’t a new concept. Governments, militaries, and even paranoid weirdos out in the woods have seen the benefits of isolation and used it to their advantage, but commercially, it’s rarely implemented, because, quite frankly, it’s been a pain in the ass to make work with gear that’s not independent enough to handle it. So, if the major manufacturers of the world won’t give us what we need (hint: they don’t like server islanding because the money is in services), then you need to either find a niche provider that will or build it yourself. It’s only the future of human civilization that depends on it, so no biggie.
Here are the three major elements of Server Islanding…
Robust – Islanded Servers need to be robust. Our servers have evolved to become pancaked pizza boxes of computational power that can crunch data constantly, yet faceplant when the AC fails. How often does that happen? Ask the UPS business, which makes billions keeping the AC running for datacenters. There are many paths toward robustness, including going fanless, using wide-temperature components, and encasing the systems in enclosures that prevent physical access to the system’s inputs. Creating islands isn’t as hard as it was in the past, since we now have low-TDP processors and much more resilient flash storage that can be combined with industrial-grade power supplies to create systems that are exponentially more reliable than the small-business-grade junk we’re used to. These systems don’t need to cost an arm and a leg and absolutely don’t need to be able to take a 50-caliber shot to the head, but they DO need to be able to survive a damned sprinkler going off.
Self-Reliant – Islanded Servers need to be Self-Reliant. What does this mean? It means they need to be independent systems in their own right: full “Compute-Display-Storage” nodes. There are currently many deployments that separate compute and storage, but many of the perceived advantages completely fail in spaces without a full-time IT presence. These systems have many exposed points of failure (cabling, switching, etc.), and they are exponentially more difficult to make redundant when compute and storage are separated. We need servers that would make Henry David Thoreau proud.
Practical – Let me put this simply. We need practical systems that allow a technician to walk a completely non-technical person through troubleshooting the islanding system over the phone. When you’re Islanded, tech support could be far away. An Islanded System needs to be easy enough for operations personnel to use and control. This is one of the most limiting factors of Server Islanding. It’s really tough. A slick control interface just isn’t good enough. Operators need to be able to seriously get under the hood of the system, with ease, in times of need. Ideally, every component should be modular and obvious to someone untrained in IT. This also includes the ability to interact with the system visually. Long ago, the evolution of server systems into racks meant we did away with the display. The display is expensive from a real-estate standpoint, and we did away with it many years ago, not realizing we’d ever need a non-remote window back into our servers. KVMs are a good stopgap, but nowhere near as easy to troubleshoot as a built-in display.
What are the practical uses for Islanded Servers?
Industrial Control Systems – Keeping these systems isolated is critical. Not only should these systems that control our electrical utilities, water and wastewater, and critical services be placed in air-gapped networks, they also need to be separated from one another. Although the entire network (all vendors) may be air-gapped, the reality is that each vendor can be at the mercy of another if they’re consolidated in the same rack. One switch issue, one misplaced cable, and now 10 vendors are all being screamed at by OT because the entire control system is messed up. If these systems could be practically isolated from one another, hosted locally on their own islanded servers, then set to store and forward data to connected historians, data aggregators, etc., we could have a much more reliable network for our utilities. It’s a big paradigm shift to think of things this way, and you do give up some real estate, but the advantages could be enormous.
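The store-and-forward idea above is worth making concrete. Here is a minimal sketch of a local outbox buffer, assuming a hypothetical SQLite-backed queue on the islanded server and a caller-supplied `send` function standing in for whatever link eventually reaches the historian — none of these names come from a real product; they are illustrative only.

```python
import json
import sqlite3
import time


class StoreAndForwardBuffer:
    """Hypothetical sketch: buffer readings locally, forward when a link is up.

    The islanded server always writes locally first, so operations continue
    (and no data is lost, only delayed) while the upstream link is down.
    """

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox "
            "(id INTEGER PRIMARY KEY, ts REAL, payload TEXT)")

    def record(self, reading: dict) -> None:
        # The island is the source of truth: commit locally before anything else.
        self.db.execute(
            "INSERT INTO outbox (ts, payload) VALUES (?, ?)",
            (time.time(), json.dumps(reading)))
        self.db.commit()

    def flush(self, send) -> int:
        """Try to forward queued readings in order; stop at the first failure."""
        forwarded = 0
        rows = self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            if not send(json.loads(payload)):  # link down: retry next flush
                break
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            forwarded += 1
        self.db.commit()
        return forwarded
```

In use, the local control system calls `record()` continuously; a periodic `flush()` drains the backlog whenever the historian is reachable, and silently gives up (keeping everything queued) whenever it isn’t.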
Remote Locations – Remote locations are always nightmares for IT, but they really don’t need to be. With more robust equipment, say cutting the average component failure rate in half, and providing the ability for remote IT to coach local resources through troubleshooting issues, local servers could return. We can remove our dependence on cloud servers in remote locations, and instead use local systems that can store and forward information like financial reports, batched payment information, etc., yet still run the business or facility when the connection to the cloud is down.
Franchises – As the IoT world embraces sensors, we realize this data needs to be aggregated, recorded, and consolidated into actionable and reportable systems. In food and beverage, aggregating the temperatures of refrigeration units, alarm systems, inventory systems, and financial systems locally can be an enormous task. Franchises typically depend on point-of-sale systems that require cloud-based connections, but internet access can be flaky. Having a truly independent local server that could allow a business to function locally without internet access could mean thousands of dollars in savings.
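To show what “actionable locally, without the cloud” might look like, here is a small sketch that rolls raw refrigeration-unit temperatures up into a summary and flags alarms on the islanded server itself. The unit names, sample values, and the `ALARM_MAX_C` threshold are all made up for illustration; a real site would configure per-unit limits.

```python
from statistics import mean

# Hypothetical food-safety threshold (degrees Celsius) for illustration only.
ALARM_MAX_C = 4.0


def summarize(readings: dict) -> dict:
    """Roll raw unit temperatures up into a local, actionable summary.

    `readings` maps unit name -> list of Celsius samples. Everything here
    runs on the local server: no internet round-trip is needed to learn
    that a cooler is warming up.
    """
    summary = {}
    for unit, temps in readings.items():
        summary[unit] = {
            "min": min(temps),
            "max": max(temps),
            "avg": round(mean(temps), 2),
            "alarm": max(temps) > ALARM_MAX_C,  # flagged on-site, instantly
        }
    return summary
```

The same pattern extends to inventory counts or batched sales totals: aggregate and act locally, then forward the condensed report upstream whenever connectivity happens to be available.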
Gordon Triplett founded Aertight Systems, Inc. in 2004 and is the President of the company. Gordon grew up as a child of the 1980’s, disassembling and reassembling many of the 1st major manufactured computers at home. After college, Gordon worked for one of the 1st CLECs in the area, Allegiance Telecom, before being recruited to work for WorldCom, then on to Kesem Technology in Washington DC.