Each layer of technology in the data centre is becoming progressively more complex to control and manage. The average server environment now has thousands of configuration parameters (e.g. the Windows OS has 1,500+, IBM WebSphere Application Server 16,000+ and Oracle WebLogic 60,000+). The growing interdependence and complexity of interaction between applications also make it increasingly difficult to manage and control business services.
IT change is very much a fact of life: it takes place at every level of the application and infrastructure stack, and it impacts pretty much every part of the business! To meet these development challenges, businesses have adopted agile development processes to accelerate application release schedules. By employing practices such as continuous integration and continuous builds, they are able to generate hundreds of production changes each day. For example, eBay is estimated to make around 35,000 changes per year!
Industry analyst firm Forrester has stated that, “If you can’t manage today’s complexity, you stand no chance managing tomorrow’s. With each passing day, the problem of complexity gets worse. More complex systems present more elements to manage and more data, so growing complexity exacerbates an already difficult problem. Time is now the enemy because complexity is growing exponentially and inexorably.”
The tools we use to manage IT infrastructure have been around for many years but are only capable of measuring what has already happened. Nor are they designed to deal with the complexity and dynamics of modern IT technologies. IT operations teams need to be able to automate the collection and analysis of vast quantities of data down to the finest resolution, and to highlight any changes, in order to unify the various operations silos. None of the traditional tools are up to this ‘Big Data’ problem!
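To make the data problem concrete, here is a minimal sketch of the kind of automated change detection such tooling must perform at scale, flagging any metric sample that deviates sharply from its recent baseline. It is purely illustrative: the function name, window size and threshold are hypothetical, and no vendor product is implied.

```python
from statistics import mean, stdev

def flag_changes(samples, window=20, threshold=3.0):
    """Flag indices whose z-score against a trailing window exceeds threshold."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # A sample far outside the recent baseline is a candidate 'change'
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady, slightly noisy latency series with a sudden shift at index 30
series = [100.0 + (i % 5) for i in range(30)] + [250.0] * 10
print(flag_changes(series))
```

A real ITOA pipeline applies this idea continuously across millions of metric and log streams rather than one list in memory, but the principle is the same: establish a baseline, then surface deviations automatically.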
Big data for operations is still a relatively new paradigm. Gartner has defined the sector as “IT Operations Analytics” – one that can enable smarter and faster decision-making in a dynamic IT environment, with the objective of delivering better services to your customers. Forrester Research defines IT analytics as “The use of mathematical algorithms and other innovations to extract meaningful information from the sea of raw data collected by management and monitoring technologies.”
Despite its relative youth, a lot has already moved on, and here are a few interesting findings:
Where to use ITOA?
IT Operations Analytics (ITOA), also known as Advanced Operational Analytics or IT Data Analytics, encapsulates technologies that are primarily used to discover complex patterns in high volumes of ‘noisy’ IT system availability and performance data. Gartner has outlined five core applications for ITOA:
Root Cause Analysis: the models, structures and pattern descriptions of the IT infrastructure or application stack being monitored can help users pinpoint fine-grained and previously unknown root causes of overall system behavior pathologies.

Proactive Control of Service Performance and Availability: predicts future system states and the impact of those states on performance.

Problem Assignment: determines how problems may be resolved or, at least, directs the results of inferences to the most appropriate individuals or communities in the enterprise for problem resolution.
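The “Proactive Control” application boils down to forecasting future system states from current trends. As a deliberately naive illustration (a least-squares straight-line fit, not what any ITOA product actually ships), this sketch extrapolates a growing disk-usage metric two days ahead; the function name and figures are hypothetical.

```python
def linear_forecast(samples, horizon):
    """Fit y = slope*t + intercept by least squares, extrapolate horizon steps ahead."""
    n = len(samples)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(samples) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, samples))
             / sum((t - t_mean) ** 2 for t in ts))
    intercept = y_mean - slope * t_mean
    # Predicted value `horizon` steps past the last observed sample
    return slope * (n - 1 + horizon) + intercept

# Disk usage (%) sampled hourly, growing at roughly 0.5% per hour
usage = [60.0 + 0.5 * h for h in range(24)]
print(linear_forecast(usage, horizon=48))  # 95.5
```

Predicting that the trend reaches ~95% utilisation within 48 hours lets the operations team act before the outage, rather than measure it afterwards; production systems use far richer models (seasonality, confidence bands), but the intent is the same.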