Best Practices to Succeed in your AIOps Strategy
Six tips to help you roll out your analytics solution
AIOps is more than just the latest buzzword; it is becoming an essential piece of successful digital transformation initiatives. But to succeed, you must have the proper strategy in place.
In this blog, you will find suggested best practices to help you succeed in your AIOps journey by mitigating challenges and simplifying adoption. These best practices focus on two key challenges for modern operations: the vast amount of data being collected, and the need to be agile and proactive. They can be implemented in any analytics platform and have been proven to help our customers.
1: Tag your Data
Metrics, logs, inventory, topology… Terabytes of data will flow into your Data Lake, so tagging your data is critical to getting value out of it.
Tagging eases browsing, searching and visualizing data across your distributed analytics repository, so be sure to tag your data as it is ingested into the platform.
Any connector, API or event forwarding utility should facilitate this task. It is usually much more efficient to tag the data at the time of ingestion than to do it at a later stage when the data is at rest in the Data Lake.
A good collection of tags can be:
- Domain (e.g.: netops, application, infra…)
- Geo (EMEA, US, APJ, country code)
- Source product or application
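The practice above can be sketched in a few lines. This is a minimal, platform-agnostic example of enriching records at ingestion time; the record fields and tag names are illustrative, not tied to any specific connector or product.

```python
def tag_record(record, domain, geo, source):
    """Return a copy of the record enriched with a standard set of tags."""
    tagged = dict(record)
    tagged["tags"] = {"domain": domain, "geo": geo, "source": source}
    return tagged

# Example: a network event tagged as it enters the pipeline.
event = {"message": "link down on eth0", "severity": "critical"}
tagged = tag_record(event, domain="netops", geo="EMEA", source="router-syslog")
```

Because the tags are attached once, in one place, every downstream dashboard and query can rely on them being present and consistently named.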
2: Secure your Data
Always choose secure connectors to transfer data in and out of your Analytics platform. For instance, any log flowing into the solution must be sent over TLS (e.g. syslog over TLS) or HTTPS (API endpoints). Data should be secured not only in transit but also at rest; tools like dm-crypt can encrypt your data at rest and are recommended by open source vendors such as Elastic.
Choose what you store: regulations such as the European GDPR will fine organizations that do not store and process data lawfully, transparently, and for a specific purpose. This means that a ‘store-all’ policy can lead to legal issues. We strongly recommend using pseudonymization techniques to replace personal data before storing it in the Data Lake, providing mechanisms to erase personal data, and setting a smart, configurable retention policy. Remember that personal data can include a range of information such as IP addresses, cookie IDs and even mobile device IDs.
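One common pseudonymization technique is a keyed hash: the personal identifier is replaced by a token before it reaches the Data Lake, but the same input always yields the same token, so events can still be correlated. A minimal sketch, assuming the key is managed outside the Data Lake (e.g. in a secrets manager):

```python
import hashlib
import hmac

# Assumption: in production this key lives in a secrets manager and is rotated.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(value: str) -> str:
    """Replace a personal identifier with a keyed HMAC-SHA256 token.

    Without the key the mapping cannot be reversed, yet identical inputs
    map to identical tokens, preserving the ability to correlate events.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"src_ip": "203.0.113.7", "action": "login"}
record["src_ip"] = pseudonymize(record["src_ip"])  # raw IP never stored
```

Note that erasure requests can then be honoured by destroying the key, which renders all derived tokens meaningless.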
3: Leverage APIs
It is the age of automation. Manual steps and integrations are expensive and time consuming, so always choose a solution that lets the user configure and manage the application equally well via GUI or API. Remediation and proactive actions are a big pillar of AIOps, and CI/CD pipelines rely heavily on APIs, so use APIs whenever possible to configure the solution.
The set of APIs provided by any Analytics solution must be open, flexible and well-documented. Get familiar with them as APIs are fundamental for ETL operations and integrations.
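As an illustration of API-driven configuration, the sketch below builds the URL and JSON body for creating a service through a REST API. The endpoint, payload shape and field names are hypothetical; consult your vendor's API documentation for the real contract.

```python
import json

# Hypothetical base URL -- replace with your platform's documented endpoint.
BASE_URL = "https://analytics.example.com/api/v1"

def build_service_request(name, tier, tags):
    """Build the URL and JSON body for creating a service via a REST API.

    Keeping request construction in a function makes the same definition
    usable from a CI/CD pipeline, a CLI tool, or an ad-hoc script.
    """
    url = f"{BASE_URL}/services"
    body = json.dumps({"name": name, "tier": tier, "tags": tags})
    return url, body

url, body = build_service_request(
    "emea-payments", tier="gold", tags=["domain:application", "geo:EMEA"]
)
# The actual HTTP call (e.g. urllib.request or curl) would POST `body` to `url`.
```

Defining services as code like this also means every configuration change is reviewable and repeatable, rather than a one-off click in a GUI.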
4: Define a Naming Convention for your Services
Services are logical groupings of the business that deliver functionality with a purpose; put another way, they are a subset of the infrastructure made up of Configuration Items (CIs) and their relationships. It is critical to define (or inherit) the organization's service model when building or bringing Services into the Analytics solution.
Always use a pragmatic approach to naming your Services: prepend a tag to classify services by Geo, Business Unit, Customer, Market Segment or IT function. This convention will be tremendously helpful when the list of modeled services grows and you need to quickly filter by Service Name.
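A tiny helper can enforce the convention so names stay filter-friendly. The prefix order (geo, business unit, function, descriptor) is one possible choice, not a standard:

```python
def service_name(geo, business_unit, function, descriptor):
    """Compose a service name with a fixed, filter-friendly prefix order."""
    return "-".join(part.lower() for part in (geo, business_unit, function, descriptor))

name = service_name("EMEA", "Retail", "Payments", "Checkout")
# -> "emea-retail-payments-checkout"
```

With a fixed order, a filter such as `emea-*` or `*-payments-*` reliably narrows the service list, no matter how many services get modeled.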
5: Build a Hierarchy of Services
The rise of tiny, highly-specialized microservices stresses the importance of building a hierarchy of smaller components. The value of this strategy can be huge when we need to pinpoint the culprit of a Service degradation in a Hybrid environment. This “micro-service” modelling should be supported by any Service-centric analytics tool, so users can benefit from the automatic discovery of services, mappings and the supporting infrastructure. To get a clear picture of Service Health, it is advisable to develop a service structure comprising both standalone CIs and supporting services.
6: Leverage Proactive Analysis
AIOps has come to help us with Big Data processing. According to Gartner, AIOps platforms are designed to enhance IT operations with a set of features that apply Machine Learning to Big Data – reactive monitoring is no longer enough. To be successful, we strongly recommend that users understand and leverage the analytics features that will help them be proactive.
These capabilities are:
- Anomaly detection: When a metric starts showing a behaviour that is out of the known values, it is time for action. Machine Learning is doing the heavy-lifting here to watch millions of metrics and pinpoint issues before they happen.
- Prediction insights: As the demand for and revenue of your IT service grow, supporting resources come under pressure. Prediction insight features will let you know when your service will start to degrade due to lack of resources, preventing potential loss in revenue.
Keep in mind that remediation of these two scenarios can be automated via an API (e.g. spin-up additional resources if we are going to run into a resource constraint situation).
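To make the idea concrete, here is a deliberately simple stand-in for the anomaly detection a real AIOps platform would perform with Machine Learning: a z-score check flags a metric sample that strays too far from its recent history, and the flag is where an automated remediation call (e.g. spinning up resources via an API) would hook in. The metric values and threshold are illustrative.

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it deviates more than `threshold` standard
    deviations from the mean of `history`.

    A minimal stand-in for the ML-driven anomaly detection an AIOps
    platform provides; real platforms learn seasonality and trends too.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Recent CPU utilization samples (%) for a service.
cpu_history = [41, 43, 40, 42, 44, 41, 43, 42]

if is_anomalous(cpu_history, 90):
    # This is where an automated remediation would fire, e.g. an API
    # call to the orchestrator to spin up additional resources.
    print("anomaly detected: triggering remediation")
```

The key point is the last branch: because detection is programmatic, the response can be too, closing the loop from proactive analysis to automated action.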
It is crucial to stay informed on the direction and new functionality in your solutions – a great way to do this is to attend roadmap sessions from your solution vendor. Attending a roadmap session is an opportunity for you to get a clear view of what’s coming down the road, including details on upcoming product functionality, features and architectural enhancements. It is also a great way to get acquainted with product management and SME teams.
For information on timing of these sessions, make sure you are following our CA Communities.
To learn more about our AIOPs platform visit: www.ca.com/aiops