Dynatrace today announced it has added a data lakehouse, dubbed Grail, to the Dynatrace Software Intelligence Platform to make it simpler to aggregate all the observability data an IT team collects.
Bob Wambach, vice president of product marketing for Dynatrace, said Grail will initially drive analytics of log data but in time will also be applied to additional types of application development, security and business intelligence data.
The goal is to streamline processes such as data indexing and rehydration across a unified observability platform that eliminates the need to maintain multiple disparate repositories, he added.
Grail, at its core, is a causational data lakehouse based on a massively parallel processing (MPP) analytics engine that will initially be made available on the Amazon Web Services (AWS) cloud. IT teams can use the Dynatrace Query Language (DQL) to query data stored in the data lakehouse. Additionally, the Davis artificial intelligence (AI) engine that is already embedded within the Dynatrace Software Intelligence Platform will be able to apply machine learning algorithms to the data stored in Grail.
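For readers unfamiliar with DQL, queries are written as pipelines of commands chained with the `|` operator. A minimal sketch of a log query against Grail might look like the following (the field names here are illustrative, not drawn from the announcement):

```
fetch logs
| filter loglevel == "ERROR"
| summarize errors = count(), by: { host.name }
| sort errors desc
```

Each stage narrows or reshapes the data: `fetch` pulls records from the lakehouse, `filter` discards non-matching rows and `summarize` aggregates what remains, which is how Dynatrace positions Grail as queryable without a separate indexing step.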
Data lakehouses have recently emerged as an approach to enable organizations to apply the structure of a data warehouse platform to massive amounts of data that previously would have been stored in a data lake. In effect, a data lakehouse combines the attributes of both types of platforms for centralizing the management of data. One of the biggest issues with any approach to observability is the massive amount of data that winds up being stored to allow DevOps teams to track issues over time.
Of course, it’s up to each individual IT team to determine how much data to retain for analytics, but the more data they save, the higher their cloud storage costs will climb. Grail provides a purpose-built means of consolidating data silos in a way that reduces the total cost of observability, noted Wambach.
In general, because of the need to embed a data lakehouse capability, most observability platforms will be consumed as software-as-a-service (SaaS) applications. That means the number of IT teams likely to build and maintain their own observability platform will be limited. IT teams are also now struggling to manage increasingly complex IT environments made up of both cloud-native applications built using microservices and legacy monolithic applications. In fact, a global survey of 1,303 CIOs and senior IT practitioners commissioned by Dynatrace found 71% of respondents believed the explosion of data produced by cloud-native technology stacks is beyond the ability of IT staffs to manage without help from an AI platform.
It’s not clear how quickly IT teams will migrate from legacy monitoring tools designed to track specific metrics to observability platforms that promise to simplify root-cause analysis by making it easier to launch queries against a massive pool of data. However, as organizations become more dependent on software, it’s apparent that tolerance for any kind of disruption is steadily declining; it’s now more a question of when rather than if a different approach to managing IT will be required.