CIO – Otto Berkes – 4/26/17
[Ed. note: Author is Otto Berkes, CTO at CA Technologies]
There’s no doubt about it: containers are hot, and for good reason. Their ability to speed development and deployment is making them a popular choice among DevOps teams the world over. Without wading into the container vs. VM debate (answer: they both have their uses), perhaps the most important thing about containers is that they make architecture “snackable.” Because containers share the host operating system’s kernel rather than each carrying a full OS, container-based architectures can be assembled from smaller, lighter-weight components while achieving greater modularity and robustness. If this sounds like the fulfillment of the elusive SOA vision, it’s because it is.
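As a rough illustration of that “snackable” composability, a tool such as Docker Compose lets each component run in its own small container while sharing the host kernel, with the pieces declared and wired together in one file. This is a minimal hypothetical sketch; the service names and images are illustrative assumptions, not anything prescribed by the article:

```yaml
# Hypothetical docker-compose.yml: two small services, each in its own
# container, composed into one application. Names and images are
# placeholders chosen for illustration.
version: "3"
services:
  web:                         # stateless front end, independently deployable
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - api
  api:                         # small back-end service, scaled separately
    image: my-org/api:latest   # placeholder image name
    environment:
      - LOG_LEVEL=info
```

Each service can be built, deployed, and scaled on its own, which is exactly the modularity that monolithic deployments make difficult.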
Unfortunately, “better” doesn’t always mean “simpler.” Moving to a container-based approach creates new challenges and requires a shift in thinking. For starters, you will need to invest in a new workflow, along with a new set of skills and tools. This is made more difficult by the fact that the tools and processes for building container-based architectures are still immature, and it can be hard to know which technologies to bet on. How will you orchestrate and manage your containers? How will you secure and monitor them? How will you ensure that your container-based architecture is performing as designed?
In fact, how will you go about designing your new container-based architecture given all the variables in play? Your approach to architecture will need to evolve to accommodate the distributed nature of container-based systems. Individual container-based components may be well-defined and cleanly partitioned, but the various teams building, deploying, and operating the different pieces will still have to understand how everything fits together. Like a container-based system itself, your architectural know-how will need to become more distributed and better specified.