Making Containers Work
In the previous articles we looked at what containers are and how you might approach adopting them. In this article we take a high-level look at what it takes to make the adoption of containers work within a company.
The first thing to consider is that your approach to containers needs to be different from your approach to virtual or physical machines. You work with containers as if they are throwaway items; the content matters, but the vessel hosting that content does not. You do not log in to a container to make changes or patch it. Instead, you adopt a CI/CD strategy that rebuilds the base container image with the prerequisite changes, deploys a new version of the service and, once testing is complete, rolls the old version over to the new one.
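As a sketch of this rebuild-and-roll approach, assuming a GitLab-CI-style pipeline, a Kubernetes deployment target and entirely hypothetical image and registry names, the flow might look something like:

```yaml
# Illustrative pipeline: every change rebuilds the image from scratch;
# running containers are never patched in place.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    # Rebuild the base image with the prerequisite changes baked in
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHORT_SHA

test-image:
  stage: test
  script:
    # Exercise the freshly built image, not a long-lived environment
    - docker run --rm registry.example.com/myapp:$CI_COMMIT_SHORT_SHA ./run-tests.sh

deploy:
  stage: deploy
  script:
    # Roll the old version over to the new one once tests have passed
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHORT_SHA
```

The key point is that the only route to a changed container is a new image version; the pipeline details will vary with your tooling.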
This can be achieved for both traditional/COTS and cloud native application types, with the CI/CD process adjusted to allow for the differences. One major difference is that traditional/COTS applications generally provide a binary to the process rather than source code, so that binary needs to be included in your container image build. Cloud native applications, by contrast, require a process, either S2I (source-to-image) or the CI/CD pipeline itself, that builds and tests the code and then deploys it to the container.
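To make the difference concrete, here are two illustrative Dockerfiles; all paths, base images and names are hypothetical. For a traditional/COTS application, the vendor-supplied binary is simply copied into the image during the build:

```dockerfile
# COTS: the vendor provides a binary, so the build copies it in
FROM registry.example.com/base/runtime:latest
COPY vendor-app.bin /opt/vendor-app/
ENTRYPOINT ["/opt/vendor-app/vendor-app.bin"]
```

For a cloud native application, the build and test of the code happens as part of producing the image, here sketched as a multi-stage build (S2I achieves a similar result by different means):

```dockerfile
# Cloud native: build and test the code, then ship only the artefact
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go test ./... && go build -o /out/app

FROM registry.example.com/base/minimal:latest
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

In both cases the output is an immutable image; only the inputs to the build differ.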
Not all applications you deploy to containers will be cloud native in the sense that they are atomic or stateless. Most applications require some form of persistence, and the most likely way to satisfy this is by handing responsibility for storing or managing that state to a backing service (database, message queue, cache, etc.). Within a container platform there are generally two ways to deal with this. The first is to broker to a backing service: these services are usually hosted off platform, and the platform manages the requests to access and utilise them. The other is to host the backing services in close quarters to the applications that depend upon them; in this case the backing services you deploy and maintain require consistent and reliable access to a storage subsystem, so that treating containers as immutable does not mean losing data.
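For the on-platform option, the usual mechanism on Kubernetes is a persistent volume claim: the backing service mounts a volume that outlives any individual container. A minimal sketch, with hypothetical names and an assumed storage class, might be:

```yaml
# Illustrative claim for a database's data directory; the volume
# survives container replacement, so the containers stay immutable
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # assumed to exist on your platform
  resources:
    requests:
      storage: 20Gi
```

The database pod then mounts this claim; when the pod is rebuilt or rescheduled, the data remains on the underlying storage subsystem.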
As you treat applications as immutable and they become more distributed, the ability to see the actual state of your services in real time, and to be alerted to prolonged situations where the actual state does not match the defined state, becomes critical. Monitoring that adapts to constant change through discovery is a must, and instrumenting your applications to emit signals that support detailed tracing is highly desirable.
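As an example of discovery-driven monitoring, a Prometheus scrape job can be pointed at the platform's API rather than at a fixed list of hosts, so targets are found as containers come and go. This sketch assumes the common `prometheus.io/scrape` pod annotation convention:

```yaml
# Discover pods dynamically; keep only those that opt in via annotation
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

No target list needs updating when services are rebuilt or rescheduled; the monitoring adapts with the platform.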
Disabling root privileges in a container is security 101, but you need to take a wider, holistic view across the platform. Ensure that users of the platform have only sufficient access to perform their tasks (the principle of least privilege) and cannot inadvertently introduce back doors into other areas. You need to consider your ingress strategy: automation introduces tremendous power, but you need to maintain a healthy respect for, and detailed understanding of, what happens. Consider how external parties access your exposed services and how you best deal with aspects such as DDoS, API throttling and intra-application component trust. An emerging way to deal with some of this is the adoption of a service mesh. Service meshes incorporate components such as reverse proxies, service discovery and API management, while also dealing with and adapting to the dynamic nature of a container platform.
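Both the container-level and platform-level controls can be expressed declaratively on Kubernetes. The fragments below are illustrative only (names and namespace are hypothetical): the first enforces a non-root container, the second grants a team read-only access within its own namespace in line with least privilege:

```yaml
# Container hardening: refuse to run as root or escalate privileges
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:1.0
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
---
# Least privilege: a namespace-scoped role limited to read verbs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
```

Binding users to narrowly scoped roles like this, rather than to cluster-wide admin roles, is what keeps one team's access from becoming a back door into another's.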
Some, or indeed most, of the above might just seem like good corporate IT hygiene, and where you consume a container platform from a provider, most if not all of it should be catered for within the service. Some of these aspects may seem daunting at first, but all things considered, the business benefits should outweigh the learning curve.
In my final post on the specifics of containers, we will look at some of those business benefits.