From Pet Systems to Cattle Farming – What Happened to the Data Center?

There is something to be said for craftsmanship. It’s personal, it’s artistic, and it can be incredibly effective in achieving one’s goals. Mass production, on the other hand, is effective in different ways: through speed, efficiency, and cost savings.

The story of data centers is one of moving from a cottage industry – where each machine was a pet project, maintained with great care – to mass production, with large server farms in which the individual units are completely disposable.

In this article, we take a look at how data centers have changed shape over the decades, at what that means for data center workloads and the people who run them – who have now lost their beloved pet systems – and at the cybersecurity implications of the new data center landscape.

Pet systems with a big purpose

For any system administrator who started their career before the advent of virtualization and other cloud and automation technologies, systems were finely crafted pieces of hardware, cared for with the affection usually reserved for a pet.

The story starts with the emergence of computer rooms in the 1940s, where large machines wired together by hand with miles of cable were what one could only call a labor of love. These computer rooms housed the steam engines of the computing age, soon replaced by more sophisticated equipment thanks to the silicon revolution. As for security? A big lock on the door was enough.

Mainframes, the forerunners of today’s data centers, were also finely engineered solutions, with a single machine taking up an entire room and requiring ongoing, expert craftsmanship to keep it running. That craftsmanship involved hardware skills as well as coding skills, with mainframe operators often coding on the fly to keep their workloads running.

From a security perspective, mainframes were relatively easy to manage. This was well before the dawn of the internet age, and these carefully tended pet systems presented a reasonably limited risk of breach. The first computer viruses appeared in the 1970s, but they posed little threat to mainframe operations.

Pre-engineered computing power with unique management requirements

Enter the 1990s and the rise of the data center. Individual, mass-produced machines offered far more affordable off-the-shelf computing power than centralized mainframes. A data center was simply a collection of these computers, all networked together. Later in the decade, the data center was also connected to the internet.

Although the individual machines required minimal physical maintenance, the software handling the workloads on those machines required ongoing care. The data center of the 1990s was still largely composed of pet systems, and that held for each machine, every one of them an exercise in server management know-how.

From manual software updates to running backups and maintaining the network, IT admins had their work cut out for them – if not in physically maintaining machines, then certainly in managing the software that supported their workloads.

It was also the era that first exposed enterprise workloads to external security threats. With data centers now connected to the internet, there was suddenly a door through which attackers could enter. This put the IT administrator’s familiar systems at risk – risk of data theft, risk of misuse of equipment, and more.

Security thus became a major concern. Firewalls, threat detection, and regular patching of vulnerabilities were the kinds of security tools IT administrators had to adopt to protect their pet systems at the turn of the millennium.

Server farms – mass produced, mass managed

The 2000s saw a major change in the way workloads were handled in the data center, with efficiency and flexibility at the heart of that change. Given the huge demand for IT workloads, solutions such as virtualization – and, further down the line, containerization – quickly gained traction.

By loosening the tight link between hardware and operating system, virtualization made workloads relatively independent of the machines running them. The net result was a wide range of benefits. Load balancing, for example, ensures that demanding workloads always have access to computing power without requiring excessive financial investment in hardware. High availability, in turn, is designed to eliminate downtime.

As for the individual machines, they became entirely disposable. The technologies used in modern data centers mean that any single machine is virtually meaningless – just a cog in a much larger operation.

Machines no longer had charming individual names; they simply became instances – the web server service, for example, is no longer provided by the incredibly powerful “Aldebaran” server, but by a fleet running from “webserver-001” to “webserver-032”. Technical teams could no longer afford to tune each one as precisely as before, but the sheer numbers involved and the efficiency gained through virtualization meant that the overall computing power in the room still surpassed what the pet systems could deliver.

Limited room for craftsmanship

More recently, container technologies like Docker and Kubernetes have taken this process even further. You no longer need to dedicate an entire system to a given task; you only need the basic infrastructure provided by a container to run a service or application. It is faster and more efficient still to have countless containers underpinning a service than to have specific systems dedicated to each task.
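
To make the idea concrete, here is a minimal sketch of how such a fleet might be declared – the names, image, and replica count are illustrative assumptions, not details from the article. A Kubernetes Deployment along these lines asks for dozens of identical, interchangeable replicas rather than one carefully tended machine:

```yaml
# Hypothetical sketch: a Kubernetes Deployment requesting 32 identical,
# disposable web server replicas – cattle rather than pets.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 32                  # scale the count up or down; no single instance matters
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:1.25     # assumed example image, not specified in the article
          ports:
            - containerPort: 80
```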

Deploying a new system no longer requires manually installing an operating system or a labor-intensive service configuration and deployment process. Everything now resides in “recipe” files – simple text documents that describe how a system should behave – used by tools like Ansible, Puppet, or Chef.
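
As a rough illustration of what such a recipe can look like – the host group and package names here are assumptions, not details from the article – a minimal Ansible playbook might install and start a web server across an entire group of machines in one pass:

```yaml
# Illustrative Ansible "recipe": apply the same web server configuration
# to every host in a (hypothetical) "webservers" inventory group.
- name: Configure interchangeable web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install the nginx package
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```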

IT administrators could still include tweaks or optimizations in these deployments, but since no single server is unique anymore, and so many of them support each service, it rarely makes sense to spend the effort. Administrators who need more performance can simply reuse the recipe to launch a few more systems.

While a few core services – identity management servers, say, or other systems storing critical information – would still remain pets, the majority were now considered cattle. Of course, you didn’t want any of them to fail, but if one did, it could quickly be replaced by another, equally mundane system performing the same task.

Factor in that workloads increasingly run on rented computing resources in large cloud facilities, and it’s clear that the days of the server as pet are over. Now it’s mass production – taken almost to an extreme. Is that a good thing?

Mass production is good – but there are new risks

The flexibility and efficiency that mass production brings are good things. In computing, little is lost by no longer having to hand-craft and individually maintain each environment. It’s a much easier and faster way to bring workloads online and ensure they stay online.

But there are a number of security implications. While security could be carefully engineered into a pet system, cattle environments require a slightly different approach – and they certainly still demand a strong focus on security. Because cattle systems are generated from the same recipe files, any intrinsic flaw in the base images they are built from is deployed at scale. That translates directly into a larger attack surface when a vulnerability surfaces, because there are simply far more possible targets. At that point, it doesn’t matter that you can spin up a new system in minutes or even seconds – doing it across thousands of servers at once means your workloads are affected no matter how quickly you move, and that hits your bottom line.

To a large extent, automation is now the answer to security in server farms. Think of tools like automated penetration testing and automated live patching. These tools provide tighter security against an equally automated threat and reduce the administrative overhead of managing these systems.
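
As a simple sketch of what fleet-wide patch automation can look like – this is ordinary package patching rather than the live patching mentioned above, and the inventory group name is an assumption – an Ansible playbook can roll pending updates across the whole farm in controlled batches:

```yaml
# Illustrative sketch: push pending package updates to every host in a
# (hypothetical) "cattle" inventory group, a batch at a time.
- name: Apply pending updates across the server farm
  hosts: cattle
  become: true
  serial: "25%"                 # patch the fleet in rolling batches of 25%
  tasks:
    - name: Upgrade all packages to the latest available versions
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
```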

A changed IT landscape

The evolution of the computing environment has changed both the architecture of the data center and the approach of the people who operate it. It’s simply not possible to rely on old practices and expect the best results – and that’s a tough challenge, because it demands considerable effort from system administrators and other IT practitioners. It’s a big mindset shift, and it takes a conscious effort to change the way you think about system administration, but some underlying principles, like security, still apply. Since the number of vulnerabilities shows no sign of decreasing – quite the contrary, in fact – security will continue to matter for the foreseeable future, regardless of other evolutionary changes affecting your data center.

Rather than resist it, IT admins should accept that their pet systems are now, for all intents and purposes, gone – replaced by mass production. That also means accepting that the security challenges are still there, just in a different form.

To keep server workloads running efficiently, IT admins now rely on a new set of tools and on tailored methods built around automating tasks that can no longer be done manually. Likewise, when it comes to server farm security operations, IT administrators should consider patch automation tools such as TuxCare’s KernelCare Enterprise and see how they fit into that new toolkit.

