DevOps engineers use CI/CD pipelines (e.g., checkout -> build -> unit test -> package -> integration test -> other kinds of tests -> deliver/deploy) to run a battery of automated tests when code is checked in, validating code correctness and evaluating that a software solution is functionally sound and behaving well across various testing stages (e.g., smoke, unit, integration) and deployment environments (e.g., dev, QA, production).
The use of these automation pipelines is one of the cornerstones of a successful DevOps transformation, enabling DevOps engineers to work more efficiently and predictably while delivering solutions that are of higher quality and functionally sounder – a boon for DevOps engineers and customers alike. As DevOps-based teams achieve and sustain velocity, many code changes across many teams will be checked in.
With all this change, can we also know whether a solution’s architecture stays true to its aims and constraints? Historically, architects might review a solution for compliance with architecture “building codes” – a codification of architectural concerns and constraints software solutions should adhere to. Such an approach faces some challenges in that 1) it is a manual process; 2) different solutions may have different architectural concerns and constraints that are relevant – in this case, one size does not fit all. How can this approach be improved? Can automation pipelines be applied to architectural concerns? The answer is yes! As we’ll see, Enterprise Architects can benefit from the same DevOps best practices and techniques that help DevOps engineers deliver better code and functionally sound software solutions.
In this article we’ll take a brief look at how Enterprise Architects can include automated architecture fitness functions in CI/CD pipelines to have visibility into whether an architecture remains congruous with its goals and constraints as teams deliver software changes over time.
These techniques can be applied to the mainframe as well as to any other platform where CI/CD pipelines are utilized.
Fitness Functions Defined
Architecture fitness functions give Enterprise Architects the same kind of actionable visibility into architectural concerns when code changes are pushed through an automation pipeline instrumented with such functions.
So just what is an “architecture fitness function”?
“architecture fitness function” =df a function that provides an objective integrity assessment of some architectural characteristics (Building Evolutionary Architectures, Ford, Parsons, and Kua)
In plain terms, these functions (tests) help ensure that a software solution continues to conform with important architecture goals (often expressed as the “ilities”) as code changes are introduced over time.
There are many “ilities” (e.g., maintainability, serviceability, scalability, traceability, interoperability, and observability, to name a few), and relevant ones can vary from software solution to software solution depending on business or technology constraints, architectural aims, and any number of other factors.
To illustrate this, let’s say the software solution is a microservices implementation operated at scale, where its service topology and interaction patterns may be dynamic and complex. Given these factors, observability is an architectural goal of key relevance, as this characteristic makes it easier to evaluate the health and operational correctness of the solution (without it, gaining such actionable insights becomes very hard, if not impossible). For the sake of this scenario, assume that “observability” is defined as follows:
“observability” =df 1) logs are aggregated; 2) distributed tracing can be toggled on and distributed trace spans are forwarded to a trace collector; 3) service health can be obtained through a well-known URI.
An automation pipeline can include architecture fitness functions to test for all these cases that together define what observability means in this scenario. If any of them fail, the architecture observability goal is not achieved. Architecture fitness functions like other automation pipeline tests should be instrumented to capture relevant data and use telemetry to communicate these actionable details regarding any failure so that the issue can be evaluated and addressed.
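As a sketch of what such an instrumented fitness function might look like, here is a minimal Python version that aggregates the three checks from the scenario’s definition of observability and reports exactly which ones failed. The check names, stubbed probes, and result structure are hypothetical illustrations, not any real tool’s API.

```python
# Hypothetical sketch of an observability fitness function. Each check
# stands in for a real probe (log-aggregator query, trace-collector span
# lookup, GET on the well-known health URI).

def observability_fitness(checks):
    """Run each named check; return (passed, failures) so the pipeline
    can emit actionable telemetry about exactly which goal failed."""
    failures = [name for name, check in checks.items() if not check()]
    return (len(failures) == 0, failures)

checks = {
    "logs_aggregated": lambda: True,
    "tracing_toggleable": lambda: True,
    "health_uri_responds": lambda: False,  # simulate a failing health endpoint
}

passed, failures = observability_fitness(checks)
print(passed, failures)  # -> False ['health_uri_responds']
```

In a real pipeline, the lambdas would be replaced with calls against the running solution, and the failure list would be forwarded to the team’s telemetry/alerting channel.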
Classification of architecture fitness functions
Architecture fitness functions fall into one of several classifications as outlined below.
Atomic / Holistic
An atomic architecture fitness function tests one architectural consideration in isolation. A canonical example is a circular package dependency test that can help detect and defend against cases where Java packages import each other. Circular dependencies can pull in packages that are not actually required, leading to application bloat and making refactoring harder.
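To make the atomic case concrete, here is a toy Python illustration of what a circular-dependency check does under the hood: walk a package import graph and report any cycle. The package names and graph are invented; tools such as ArchUnit perform this kind of analysis against real bytecode.

```python
# Toy circular-dependency detector over a hypothetical import graph.

def find_cycle(deps, start, path=None):
    """Depth-first search; return the dependency cycle if one exists."""
    path = path or [start]
    for nxt in deps.get(start, []):
        if nxt in path:
            return path + [nxt]  # revisited a package already on the path
        cycle = find_cycle(deps, nxt, path + [nxt])
        if cycle:
            return cycle
    return None

deps = {
    "com.app.orders": ["com.app.billing"],
    "com.app.billing": ["com.app.orders"],  # circular!
    "com.app.util": [],
}

# The fitness function fails the build if any package sits on a cycle.
assert find_cycle(deps, "com.app.orders") == [
    "com.app.orders", "com.app.billing", "com.app.orders"
]
assert find_cycle(deps, "com.app.util") is None
```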
A holistic architecture fitness function test exercises a combination of architectural aspects. A good example of such a function is what Gremlin is doing to bring chaos engineering to CI/CD pipelines.
Gremlin is taking Netflix’s Chaos Monkey concept, used to randomly inject server failures into production systems, and doing a shift-left of chaos engineering practices to incorporate them into a CI/CD automation pipeline. While Chaos Monkey’s focus is on server failure injection, Gremlin can do that and more. For example, Gremlin can stage network attacks and related architecture fitness functions could be applied in a pipeline to ensure that a service layer remained resilient and available.
Triggered / Continual
A triggered architecture fitness function responds to some pipeline event such as when a pipeline runs unit tests against code changes. For example, an architecture fitness function could be defined and implemented to invoke a tool such as ArchUnit to enforce code structure as part of unit testing the changes.
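As a rough sketch of the kind of structural rule a tool like ArchUnit enforces (in the spirit of its layered-architecture checks), the following Python fragment validates observed dependencies against an allowed layering. The layer names, allowed map, and observed dependency list are all hypothetical.

```python
# Sketch of a triggered structural check: fail the build when code in one
# layer depends on a layer it shouldn't. Layer rules are illustrative.

ALLOWED = {                      # which layer may depend on which
    "controller": {"service"},
    "service": {"repository"},
    "repository": set(),
}

def layering_violations(observed_deps):
    """Return every observed dependency the layering rules forbid."""
    return [(src, dst) for src, dst in observed_deps
            if dst not in ALLOWED.get(src, set())]

observed = [("controller", "service"),
            ("service", "repository"),
            ("controller", "repository")]  # skips the service layer

print(layering_violations(observed))  # -> [('controller', 'repository')]
```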
A continual architecture fitness function doesn’t respond to a specific event but validates code changes against relevant architecture goals over time. For example, an API service layer might have performance fitness functions that test whether APIs under load demonstrate reasonable latency and acceptable error rates.
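A continual performance fitness function of that sort might reduce a window of API call samples to two numbers and compare them to thresholds. The thresholds, sample shape, and 95th-percentile method below are illustrative assumptions.

```python
# Sketch of a continual performance fitness function: check p95 latency
# and error rate for a window of API calls against (assumed) thresholds.

def p95(values):
    ordered = sorted(values)
    return ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank style

def performance_fitness(samples, max_p95_ms=250, max_error_rate=0.01):
    latencies = [s["latency_ms"] for s in samples]
    errors = sum(1 for s in samples if s["status"] >= 500)
    return p95(latencies) <= max_p95_ms and errors / len(samples) <= max_error_rate

samples = [{"latency_ms": 40 + i, "status": 200} for i in range(100)]
assert performance_fitness(samples)       # healthy window passes

samples[0]["status"] = 503
samples[1]["status"] = 503
assert not performance_fitness(samples)   # 2% error rate exceeds 1% budget
```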
Static / Dynamic
A static architecture fitness function would be something like checking that the code’s cyclomatic complexity is not too high (given some threshold). Either the test passes (< the threshold) or it doesn’t.
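A rough static check of this kind can be sketched with Python’s standard ast module: approximate cyclomatic complexity as one plus the number of branching constructs, then apply a hard threshold. Real analyzers are more precise; this only illustrates the binary pass/fail shape of a static fitness function.

```python
# Approximate cyclomatic complexity: 1 + count of branch points.
import ast

BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def complexity(source):
    return 1 + sum(isinstance(node, BRANCHES)
                   for node in ast.walk(ast.parse(source)))

SRC = """
def f(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""

score = complexity(SRC)
print(score)        # -> 4 (base 1 + two ifs + one for)
assert score <= 10  # static threshold: either it passes or it doesn't
```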
A dynamic architecture fitness function will accept a range of results as being OK depending on the context wherein the function is executed. For example, depending on scaling factors, some slower performance measurements might be acceptable at higher scale whereas they wouldn’t be at lower scale.
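The dynamic case can be sketched by deriving the threshold from context instead of hard-coding it, so the same measurement passes at high scale and fails at low scale. The scaling model (10 ms of extra allowed latency per 100 concurrent users) is invented for illustration.

```python
# Sketch of a dynamic fitness function: the acceptable latency threshold
# depends on the current scale of the system under test.

def latency_threshold_ms(concurrent_users, base_ms=100):
    # assumed model: allow 10 ms extra per 100 concurrent users
    return base_ms + concurrent_users // 100 * 10

def dynamic_latency_fitness(measured_ms, concurrent_users):
    return measured_ms <= latency_threshold_ms(concurrent_users)

assert dynamic_latency_fitness(140, concurrent_users=5000)     # OK at scale
assert not dynamic_latency_fitness(140, concurrent_users=100)  # too slow when quiet
```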
Automated / Manual
Pretty straightforward distinction here. Automating architecture fitness functions as part of your CI/CD pipeline is the recommended best practice (although, pragmatically, and depending on your specific circumstances, you may have some tests that are performed manually).
Intentional / Emergent
Some architecture goals are known at the beginning of an architectural effort or in the early going; these are considered intentional goals. Architecture goals that emerge over time as more is learned are considered emergent goals. Architecture fitness functions can be classified accordingly.
There may be cases in which your software solution runs in a domain-specific context. Let’s take federal government installations. In such cases, a crucial architectural goal would be appropriate securability given requirements specific to this domain. For example, an architecture fitness function could be defined to make sure any new code introduced around handling authentication or authorization concerns conforms to the correct use of PIV/CAC for Multi-Factor Authentication.
The mainframe is another domain in which architectures must unfailingly prove to be secure and resilient. The architecture for software solutions that modernize and open up the mainframe – e.g., Zowe’s API Mediation Layer (APIML) that exposes APIs to z/OS REST API services – must have architecture goals of securability, availability, and resilience. Given that, a Zowe CI/CD pipeline could utilize architecture fitness functions to test that these goals are met. For example, a Zowe APIML architecture fitness function could verify that API calls from a client through the APIML gateway to some backend service continue to succeed even when random API gateway instances are shut down. For this scenario, assume there are four distinct APIML gateways behind a z/OS Dynamic Virtual IP Address (DVIPA).
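That resilience scenario can be simulated in miniature: four gateway instances behind one address, two of them killed at random, and a fitness assertion that client calls still succeed via the survivors. The Gateway class and failover logic below are invented stand-ins for illustration, not Zowe or DVIPA APIs.

```python
# Simplified simulation of the gateway-resilience fitness function.
import random

class Gateway:
    def __init__(self, name):
        self.name, self.up = name, True
    def call(self, path):
        if not self.up:
            raise ConnectionError(self.name)
        return 200  # pretend the backend z/OS service answered

def call_via_dvipa(gateways, path):
    """Mimic DVIPA-style failover: try instances until one answers."""
    for gw in gateways:
        try:
            return gw.call(path)
        except ConnectionError:
            continue
    raise RuntimeError("all gateway instances down")

gateways = [Gateway(f"gw{i}") for i in range(4)]
for victim in random.sample(gateways, 2):  # chaos step: kill 2 of 4
    victim.up = False

# Fitness assertion: calls keep succeeding despite the failures.
assert all(call_via_dvipa(gateways, "/api/v1/jobs") == 200 for _ in range(10))
```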
I hope this blog has provided some insight into the power of pairing CI/CD pipelines with architecture fitness functions.
I used observability as the architecture goal to show what an architecture fitness function is about. I pointed out that there were many “ilities” and those of relevance may be different from software solution to software solution depending on various constraints, aims and other factors. That said, I would claim that there are two architectural goals that are essential when thinking about architecting, building, deploying and operating software solutions: observability and evolvability.
Be on the lookout for forthcoming blogs where I write about observability-driven development (important DevOps trend applicable to CI/CD pipelines and beyond), evolutionary architectures (an approach that leads to architectures that are built for change from inception), and additional articles where I explore architecture fitness functions further.