A question we hear a lot when talking about the hosts that make up a network or system is whether those hosts are “cattle” or “pets”. But what does this really mean? What makes a host a “pet”, what makes it “cattle”, and why is everyone trying to have cattle? Aren’t we all supposed to be eating less meat?
Okay, so it has nothing to do with meat, or farming, or cats and dogs. I am also sure, as I explain it, many farmers would disagree with the analogy on many levels.
However, like it or not, it does seem to have become a term that is ingrained in IT today. Let’s start with some definitions.
For a host to be considered cattle, it must be deployable and rebuildable. An example would be a system created using something like Terraform and configured by something like Puppet that contains no specific data. To use another buzzword, it should be ephemeral.
This means it is replaceable and deployable automatically. Ideally, it is part of a group or, even better, a load-balanced or auto-scaling group, so an instance can disappear and be replaced with no interruption of service. It can appear to provide extra resources when load is high and then disappear again.
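To make that behaviour concrete, here is a minimal Python sketch of the idea: a toy auto-scaling group whose instances are nothing but an immutable image plus an ID, so a lost instance is never repaired, only replaced. All class and variable names are illustrative assumptions, not any real cloud API.

```python
import uuid

class AutoScalingGroup:
    """Toy model of an auto-scaling group of 'cattle' hosts.

    Instances hold no state, so a lost instance is simply replaced by
    launching a fresh one from the same image. Purely illustrative;
    not a real cloud provider API.
    """

    def __init__(self, image: str, desired: int):
        self.image = image        # immutable machine image to launch from
        self.desired = desired    # target instance count
        self.instances = {self._launch() for _ in range(desired)}

    def _launch(self) -> str:
        # Every instance is identical and disposable: an image plus an id.
        return f"{self.image}-{uuid.uuid4().hex[:8]}"

    def instance_lost(self, instance: str) -> None:
        # No repair, no logging in by hand: discard and replace.
        self.instances.discard(instance)
        self.reconcile()

    def scale(self, desired: int) -> None:
        # Load spike? Raise the desired capacity and reconcile.
        self.desired = desired
        self.reconcile()

    def reconcile(self) -> None:
        # Converge the actual count toward desired, launching or retiring.
        while len(self.instances) < self.desired:
            self.instances.add(self._launch())
        while len(self.instances) > self.desired:
            self.instances.pop()

asg = AutoScalingGroup(image="web-v1", desired=3)
asg.instance_lost(next(iter(asg.instances)))  # an instance vanishes...
assert len(asg.instances) == 3                # ...capacity is unchanged
asg.scale(5)                                  # scale out under load
assert len(asg.instances) == 5
```

The key property is that `reconcile` knows nothing about individual instances; it only cares that the right number of identical copies exist.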
There is no need to back it up: it can be recreated automatically, and it holds no data. Why back it up when you can just recreate it any time you need to?
There is no need for anyone to log in by hand and do anything, so you don’t even need to be able to log in to the operating system – the automation can destroy and recreate it to a state where it can deliver its function and service.
There is no need to patch the deployed system; you can just deploy a new version of the system with a later operating system. Perhaps even use a blue/green deployment methodology to run a rolling update. No outage, no downtime.
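A blue/green switchover can be sketched in a few lines of Python. The idea: stand up a full replacement fleet (“green”) on the new image alongside the live one (“blue”), and only cut traffic over once every green instance passes its health check. Function and parameter names here are hypothetical, purely to illustrate the pattern; a real deployment would use your load balancer or deployment tooling.

```python
def blue_green_deploy(serving, new_image, healthy=lambda inst: True):
    """Blue/green switchover sketch.

    Launch a complete 'green' fleet from new_image next to the live
    'blue' fleet, then move traffic only if every green instance is
    healthy. Illustrative only: instances are just strings here.
    """
    green = [f"{new_image}-{i}" for i in range(len(serving))]
    if all(healthy(inst) for inst in green):
        return green      # traffic now points at green; blue is retired
    return serving        # green failed its checks; blue keeps serving

blue = ["web-v1-0", "web-v1-1", "web-v1-2"]
live = blue_green_deploy(blue, "web-v2")
assert live == ["web-v2-0", "web-v2-1", "web-v2-2"]

# A failed health check leaves the old fleet serving, untouched:
live = blue_green_deploy(blue, "web-v3", healthy=lambda inst: False)
assert live == blue
```

Because the old fleet keeps serving until the new one is proven healthy, the cutover itself is the only moving part: no outage, no downtime, and an instant rollback path.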
A single host, or even a group of hosts, can disappear, and the workload just keeps on running with no interruption of service.
This all sounds wonderful; it is a brave new world. However, is it achievable? Before we address that, let’s think about what makes a host a pet.
A pet host requires attention and tending. It has most likely been touched by hand to make it perform its functions. The existence of this specific, prepared host is a requirement of a system – if it is missing, things don’t work. To recreate it, some manual processes or configuration might be required. If it is restarted, it may need a human to log into it and change something or start something. If it were to vanish, a system or workload would fail and an outage would occur.
A pet host should be backed up: recovering it from a backup will take less effort and be quicker than creating it again. A pet might also hold some data that changes. To update it, you need to patch the operating system and then test the application in lower environments.
For the last 20 years, most systems have been created using hosts that are pets – they were all special, and they all had to be there and running. The needs of these systems would be known and documented, and system admins would install, configure and start things as required. Monitoring checks ran to make sure every special process was up and delivering what was required.
This traditional architecture has worked for a long time, and if you have a legacy system from the last 5-10 years, it is most likely delivered using hosts that are pets. However, more recent systems are likely delivered using cattle, and if you are building a new system, you should aim for hosts that are cattle. So yes, cattle are the nirvana: it is much more cost-effective and scalable to have cattle.
While we should all aspire to have cattle, there is only so far you can go to turn pet hosts into cattle hosts. To a degree, pets can have their creation and configuration automated, making them programmatically rebuildable. Best practice today is to do this using tools like Puppet, Ansible or Terraform. OSS Group have done this for many customers, reducing risk and time to recovery and improving stability and security. We have even moved traditional pets as a “lift and shift” to the cloud, using AWS as a virtual data centre.
But this does not mean these systems can achieve cattle status. They can become well-trained, low-maintenance pets, but they are still pets and must be treated as such.
Often, to make your system hosts truly into cattle, you will have to redesign or re-engineer the workloads: the applications, services and processes running on these systems. They need to be designed using modern concepts of scalability and microservices. Becoming “cloud native” involves significant changes in architecture.
I encourage you to ask yourself this: is my network made up of pets or cattle? Be careful to identify what your workloads are, and be complete in your analysis of those workloads and the hosts that run them, so you don’t give yourself the illusion that you have ephemeral cattle when you actually have pets that need special care and attention.
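One way to keep that audit honest is to turn the definitions above into an explicit checklist. The sketch below encodes the criteria from this article as a small Python function; the criteria names are my own shorthand, not an industry standard.

```python
def classify_host(*, automated_build: bool, holds_state: bool,
                  needs_manual_steps: bool, outage_if_lost: bool) -> str:
    """Rough pet-vs-cattle checklist, drawn from the definitions above.

    A host only counts as cattle if it can be rebuilt automatically,
    holds no unique state, needs no hands-on care, and can vanish
    without causing an outage. Criteria names are illustrative.
    """
    if automated_build and not (holds_state or needs_manual_steps
                                or outage_if_lost):
        return "cattle"
    return "pet"

# A Terraform-built, Puppet-configured, stateless web node:
assert classify_host(automated_build=True, holds_state=False,
                     needs_manual_steps=False, outage_if_lost=False) == "cattle"

# An automated build alone is not enough -- a host that holds data and
# would cause an outage is a well-trained pet, not cattle:
assert classify_host(automated_build=True, holds_state=True,
                     needs_manual_steps=False, outage_if_lost=True) == "pet"
```

The second case is the trap described earlier: automating a pet’s build makes it a low-maintenance pet, but it does not make it cattle.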