Leverage the IoT Adage “Build Once, Use Many” to Scale IT Operations
When approaching disruptions like those currently being felt within a rapidly evolving yet highly mature and specialized IT landscape, completeness of vision is essential to maintaining progress amid ever-changing paradigms. Creating a process and crafting a practice ensures that hard-won lessons are not learned in vain. By leveraging the IoT adage “Build Once, Use Many,” effective organizations find methods to scale out operations quickly, allowing for much tighter roll-outs and smoother ongoing operations.
Comprehensive Coverage and Framework Is Critical
The ability to deliver repeatable results across different technologies and doctrines is part of the advantage CA Unified Infrastructure Management (CA UIM) delivers. Forming a framework for ingesting and configuring new pieces of technology is the cornerstone of this approach, and for CA UIM it’s elementary. Every device is quantified, from the thinnest container or bare-metal hypervisor to the most massive Z-series implementation. In many ways, CA UIM does for monitoring what .NET did for the programming world: it unifies many disparate pieces into a semi-homogenous source of CA UIM data that can be carved into consumable morsels of relevant information. Effectively laying the groundwork not only positions the effective organizations I’ve worked with to scale, it gives them the ability to flex when the business needs to shift 180° overnight, as it inevitably will. For example, when a large financial services customer I worked with received word that their entire VBlock implementation was being sunset in favor of this new technology called “OpenStack” (which “wasn’t really virtualization but something bigger”), they found success in CA UIM’s ability to support the platforms needed. The initial core deployment made these massive shifts possible. If your structure isn’t sound, even the best polish won’t outshine instability, whether you’re extracting deep insights or responding with automated actions to what used to be nuisance alarms.
Planning & Strategy Are Critical
Data by itself is basically valueless; a metric in a table in a vacuum does no one much good. This is reflected in the fact that organizations continue to build entire datacenters dedicated to the purpose of storage. Data by itself, as I’ve said, is valueless, BUT when we add context in the form of other complementary data points, both within the specific technology stack and across other facets of the Software Delivery Chain, useful insights can be discovered and unlocked. Since obtaining the data is the preliminary step to delivering anything, strategizing about how to scale those efforts quickly and effectively is paramount. Failing to plan is planning for failure, and this is as absolutely true in the context of CA UIM as it is with any other solution on the market. Personal biases aside, if you lack the vision and initiative to begin a conversation about improving the health and welfare of critical services, no solution known will automagically solve your problems forever. Building a process and scaling it for durable longevity will provide a framework with which to approach any situation. Understanding the constructs and mechanics that can be leveraged within CA UIM creates a level of certainty in an uncertain world and can go a long way toward eliciting better interactions overall, as will be highlighted during later phases of this blog series. Remember that the technology won’t make up for the value of asking the “right questions” and learning to figure out what you don’t know. Herein lies the beauty of agile: when done properly, those lessons are learned quickly and improvement is continuous!
To CA UIM, a device can be anything; devices are how the technology understands things in the real world. Within that ubiquitous nature, however, are key differentiators useful for divining your path to success. From bare-metal virtualization and containers to physical servers and even mainframes, from the datacenter to the cloud, from server to network switch, CA UIM quantifies entities as devices. These devices are ingested in one of three ways:
- Network discovery (ICMP/SNMP/SSH/WMI),
- Probe-based detection (Hypervisors/Profiles), and
- Robot self-identification.
Configuring the priority order of these methods to your advantage will allow you to deliver consistent results across environments. CA UIM is usually smart enough to synchronize all components of a device once it has established the master record, but successful groups pay attention to this early, and doing so prevents many issues later.
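To make the priority idea concrete, here is a minimal Python sketch of reconciling the same device seen through multiple ingestion paths. The source names, priority order, and record shape are illustrative assumptions for this example, not CA UIM’s actual internals: the point is simply that when one device arrives via several paths, one authoritative master record should win.

```python
# Illustrative priority: lower number = more authoritative source.
# (Names and ordering are assumptions for this sketch, not CA UIM internals.)
PRIORITY = {"robot": 0, "probe": 1, "network": 2}

def reconcile(records):
    """Collapse duplicate device records (keyed here by IP) into one
    master record per device, keeping the most authoritative source."""
    devices = {}
    for rec in records:
        best = devices.get(rec["ip"])
        if best is None or PRIORITY[rec["source"]] < PRIORITY[best["source"]]:
            devices[rec["ip"]] = rec
    return devices

records = [
    {"ip": "10.0.0.5", "source": "network", "name": "sw-edge-01"},
    {"ip": "10.0.0.9", "source": "network", "name": "unknown-host"},
    {"ip": "10.0.0.9", "source": "robot",   "name": "app-db-01"},
]
masters = reconcile(records)
print(masters["10.0.0.9"]["name"])  # the robot-sourced record wins: app-db-01
```

Whatever the real mechanism in your deployment, deciding this ordering up front is what keeps a single device from appearing as several half-populated records later.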
“Agent Optional” means you don’t need a UIM agent (a Robot) installed on every device, giving you both the flexibility and the options to deliver multiple levels of monitoring in different shapes and forms. The most successful service providers and organizations I’ve worked with leverage Service Catalogs with tiered offerings: premium-level monitoring when needed, and consistent, competent basic monitoring always. Additionally, thanks to CA UIM licensing, you can gain multi-perspective monitoring without incurring extra license costs or system overhead in most cases.

Whenever possible, Robot-onboard strategies will give the richest options, from system metrics and log analysis to integration with CA UIM automation. Deploying Robots is accomplished through UMP Push, Manual Pull, and Silent Automation methods. Finding the right blend of deployment methods is your opportunity to assess resources and optimize your efforts when Robots are the way forward.

Robots can’t go everywhere, though. From ESX hosts to network devices to containers and cloud instances, and I have no qualms saying this, there are places Robots just can’t go. And there are many other places they shouldn’t, which is why the agentless path offers both standalone and complementary benefits. Leveraging protocols like SNMP, RESTful web services, and other proprietary API/CLI methods allows for passive, central collection of MANY essential metrics. Furthermore, employing the templating features within agentless probes can make auto-scaling much easier. Back to Robots, though: with the advent of Monitoring Configuration Services (and some awesome things coming soon therein), the “set it and forget it” approach to scaling is finally within reach for Agent-onboard strategies too.
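As a feel for what agentless, REST-based collection looks like, here is a small Python sketch that flattens a device’s JSON payload into metric samples. The payload shape, host name, and field names are all assumptions for illustration; real endpoints and response formats vary by device and probe.

```python
import json

# Hypothetical REST response from a network device; the structure and
# field names below are invented for this sketch, not a real device API.
sample_payload = json.dumps({
    "host": "core-sw-01",
    "interfaces": [
        {"name": "eth0", "in_octets": 123456, "out_octets": 654321},
        {"name": "eth1", "in_octets": 42, "out_octets": 7},
    ],
})

def to_metrics(payload):
    """Flatten a device payload into (target, metric, value) samples
    ready to hand off to a collection pipeline."""
    data = json.loads(payload)
    samples = []
    for iface in data["interfaces"]:
        target = f'{data["host"]}/{iface["name"]}'
        samples.append((target, "in_octets", iface["in_octets"]))
        samples.append((target, "out_octets", iface["out_octets"]))
    return samples

for sample in to_metrics(sample_payload):
    print(sample)
```

The appeal of this pattern is that the collector runs centrally: adding a new device means pointing at another endpoint, not installing anything on the target.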
Whatever individual or combined strategies you choose, always be cognizant of architecture. The proper foundation allows you to build quickly for the long haul, yet just as with any construction, poor footings make for weak houses. Some technologies require more overhead, some present a greater array of metrics, and some are extremely specific in how they manifest; in the end, keeping tenancy and resource considerations at the top of the list will benefit your infrastructure planning discussions. Consider using additional hubs to create unique origins, and don’t forget that robots can define custom origins as well as User Tags (all of which are usable in USM).
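A quick sketch of why origins matter for tenancy: once each robot carries an origin, per-tenant views fall out of a simple grouping. The robot records, origin values, and tag names below are hypothetical examples, not values CA UIM produces.

```python
# Hypothetical robot inventory; "origin" and "user_tag_1" values are
# invented for illustration of per-tenant grouping.
robots = [
    {"name": "web-01", "origin": "tenant-acme",   "user_tag_1": "prod"},
    {"name": "web-02", "origin": "tenant-acme",   "user_tag_1": "dev"},
    {"name": "db-01",  "origin": "tenant-globex", "user_tag_1": "prod"},
]

def by_origin(robots):
    """Group robot names by their origin, one bucket per tenant."""
    groups = {}
    for robot in robots:
        groups.setdefault(robot["origin"], []).append(robot["name"])
    return groups

print(by_origin(robots))
```

Deciding these origin and tag conventions before rollout, rather than retrofitting them, is what keeps multi-tenant views clean as the estate grows.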
Ultimately, flexible durability is what any strategy should strive to maintain. One major financial services organization I worked with migrated over 10K instances from vBlock to OpenStack (by way of management edict) and, thanks to their resourceful nature, was able to deliver seamless coverage of performance and health. Though they worked directly with the CA UIM team to help develop a holistic OpenStack probe, while that probe was being developed they employed a prioritized Agent-onboard strategy to deliver on commitments. We’ll revisit this case again, but suffice it to say that automation was key: from agent deployment to probe distribution to configuration delivery, this organization was able to shift gears with flawless agility and accuracy. Always remember that CA UIM is optimized for automation in many ways, and if that’s a strength in your Software Factory, work it.