This blog post is the second and final part of our article about an interview with Dritan Bitincka, co-founder of Cribl, an Observability Infrastructure vendor. The interview proceeded on three tracks: (a) Mr. Bitincka’s Journey to Cribl, (b) Industry Changes, and (c) The Future of Observability and Cribl. This part of the article covers the last two tracks.
Industry Changes. We shifted gears to the changes he has seen in the industry. Dritan said that, top of mind, he is amazed at how fast observability tooling and solutions are being adopted. Part of the reason, he explained, is that now that most solutions are cloud-based, organizations can validate their value very quickly. Before the cloud became so popular, he said, evaluations took months, with salespeople running bake-offs of solutions and customizing each on-premises environment. Now, with cloud-based, standardized systems, it takes days or weeks, allowing decisions to be made right away. The basic message is that with cloud-based systems, customers can self-serve and self-evaluate. The second message was that the volume of data stored in observability systems is growing about 30% annually. But if storage were free and infinite, his customers tell him they would store 5-10x more. This desire to store more is a big part of what Cribl is betting on in the Observability Infrastructure market. Third, Dritan sees many observability tools companies getting funded; the space is hot, he says (we agree, and AI applications will only push this further).
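To put those growth figures in perspective, here is a quick back-of-the-envelope projection (the 100 TB starting volume is an illustrative assumption, not a figure from the interview):

```python
def project_volume(initial_tb: float, growth_rate: float, years: int) -> float:
    """Project data volume under steady compound annual growth."""
    return initial_tb * (1 + growth_rate) ** years

# 100 TB today, growing 30% per year, over five years:
base = project_volume(100.0, 0.30, 5)
print(f"After 5 years at 30%/yr: {base:.0f} TB")   # ~371 TB
# The "if storage were free" multiplier Dritan's customers describe:
print(f"With a 5x multiplier:  {5 * base:.0f} TB")
print(f"With a 10x multiplier: {10 * base:.0f} TB")
```

Even before the 5-10x multiplier, 30% annual growth nearly quadruples data volume in five years, which is why storage economics loom so large in this market.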
I asked a bit more about the cloud, and we discussed the cloud version of LogStream, which reached general availability in October. In this model, Cribl offers LogStream as a service for its customers. Mr. Bitincka said interest in this service has been phenomenal, with a ton of proof-of-concept trials underway, and particular interest from organizations that don’t have operations teams.
The Future of Observability and Cribl. We then shifted to where the observability market is headed over the next 3-5 years. Dritan said that his customers want to instrument as much data as possible, including directly from applications as they run. He explained that Cribl offers AppScope as open source, allowing users to collect performance data from all sorts of applications. Today, he said, only roughly 20% of applications at organizations are fully instrumented with Application Performance Management (APM), because most programs are closed-source and difficult to instrument. His view is that application instrumentation tooling will emerge that goes well beyond the simple agents that reside on devices throughout an organization’s computing, networking, and security systems. Peering this deeply into applications will cause a deluge of data, which customers can use LogStream to handle, control, and route. Dritan calls the destination where all this new data will reside an Observability Lake. Once all this data is saved to long-term storage, he says, all the teams in the organization can access it self-service, potentially forever. A significant advantage of this approach is that, in contrast to a database system where you must know ahead of time how to structure the data, teams can investigate incidents and data that were not expected and replay those events against recent or very old data.

I was interested to learn how AppScope works. Dritan explained that AppScope is a black-box instrumentation technology that sits between the operating system and the application. It sees all the interactions between the application and the filesystem, the CPU, the network, etc. It captures all the metadata associated with this traffic and forwards it downstream.
He also explained that it doesn’t matter what language the program is written in, whether Ruby, C, Java, or anything else, because AppScope is just intercepting syscalls. I got the sense from Mr. Bitincka that Cribl is betting that AppScope will challenge the agent-based approach that is so common these days.
I challenged Mr. Bitincka and asked about Cribl’s plans for tackling new observability infrastructure challenges. What I learned was that the company plans to enhance both LogStream and AppScope further. LogStream currently has 60 integrations with other systems; that number will grow, along with more protocol and device support, even including IoT systems. The company’s goal, according to Dritan, is to help customers with the “data generation” phase: how to unearth more data. Customers agree, with Cribl getting many more inbound requests from prospects as they drive toward pervasive and ubiquitous instrumentation. So, the company’s next focus is to generate data at large scale and in a standard way. He said the company will double down on AppScope and thereby develop a universal edge collection system, the idea being to remove the headaches customers face in collecting, processing, and managing observability data. Our take on this strategy is that Cribl will start competing with some more traditional observability vendors who have developed their own agents that reside on computing systems. But if Dritan is right that his customers use as many as a dozen tools, potentially all with different agents, then Cribl’s single-collection strategy could prove valuable. This new data collection capability would allow for simplified data collection and consolidated Observability Lake storage, letting customers use all the analytical tools they want.
Recently, we had the opportunity to speak with Dritan Bitincka, co-founder of Cribl, an Observability Infrastructure vendor. All three of Cribl’s co-founders were employees at Splunk, a leading observability vendor. It was exciting to hear how Dritan’s experience at Splunk led him and his co-founders to seek a new place in the observability market’s value chain. We also discussed the industry’s future. The interview proceeded on three tracks: (a) Mr. Bitincka’s Journey to Cribl, (b) Industry Changes, and (c) The Future of Observability and Cribl. A week after this post, we will publish the second part, covering Industry Changes and The Future of Observability.
Dritan Bitincka’s Journey to Cribl. Dritan is the VP of products, and I noticed that he was very active in posting blog articles about the company’s first product, LogStream, in 2018 and 2019. By the time 2020 came along, Dritan’s posts were occasional (my favorite is here because it explains how simple it is to connect LogStream to Azure Sentinel), and he’s only posted once in 2021. My takeaway, which Dritan confirmed, was that he has been very busy expanding his team and building new products, the typical next phase of a startup in growth mode.
I asked Mr. Bitincka what it was like moving from Splunk to becoming a co-founder at Cribl and found his response both interesting and informative. First, Dritan said that when he was at Splunk, he saw the observability market through the lens of Splunk only. What became clear, though, is that many Splunk customers were using as many as a dozen other tools besides Splunk to perform observability. This insight helped drive Cribl’s current products: LogStream integrates with many “sources” and many “destinations,” with Splunk being just one of them. His second response was that Amazon S3 is one of the biggest of the dozen data destinations that customers use, and that customers are building analytical solutions on top of S3. He is increasingly seeing his customers adopt S3 instead of local storage. He explained that in the past, organizations would place their data into Splunk, Elasticsearch, or other analytics solutions, keep it there for 90 days, and then send it to archive. These systems tend to be costly, explains Mr. Bitincka, and hence the data must be archived. But sending the data to an archive means those organizations cannot use analytical tools on it. S3, while not as responsive as local storage down in the millisecond range, is now quick enough, explains Dritan. Since S3 is far more affordable, the economics favor using it for both current and old data and making the data accessible for periods much greater than 90 days. And third, I asked Dritan to elaborate on the trends between local storage and S3 (or other cloud object storage), and what I learned was that the Cribl team is getting many more customer requests for S3, or object storage in general.
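The economics Dritan describes can be sketched with a toy calculation. Both per-GB prices below are assumptions made for the sketch, not vendor quotes:

```python
# Illustrative retention economics; both per-GB prices are assumptions.
HOT_PER_GB_MONTH = 0.25      # assumed cost of a hot analytics tier
OBJECT_PER_GB_MONTH = 0.023  # assumed S3-class object storage price

def monthly_cost(gb: float, months: int, per_gb_month: float) -> float:
    """Total storage cost for keeping `gb` of data for `months` months."""
    return gb * months * per_gb_month

data_gb = 10_000  # 10 TB of observability data
# 90 days hot then archived (no longer queryable) vs. a year in object storage:
hot_90_days = monthly_cost(data_gb, 3, HOT_PER_GB_MONTH)
object_1_year = monthly_cost(data_gb, 12, OBJECT_PER_GB_MONTH)
print(f"3 months in the hot tier:  ${hot_90_days:,.0f}")
print(f"12 months in object store: ${object_1_year:,.0f}")
```

Under these assumed prices, object storage holds the data four times as long for roughly a third of the cost, and the data stays queryable, which is the economic argument for retention well beyond 90 days.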
More specifically, customers are “reading” the S3 data, using it for functions such as “data replay.” Customer requests for object-based storage and for more object-reading activity give Mr. Bitincka confidence that his customers will deploy in the cloud.
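To make “data replay” concrete, here is a minimal sketch (the event shape is hypothetical; in practice the events would be read back from object storage such as S3): select archived events in a time window and re-emit them to an analytics destination:

```python
from datetime import datetime

# Hypothetical event shape, standing in for objects read from an archive bucket.
archived_events = [
    {"ts": "2021-06-01T12:00:00+00:00", "msg": "login failed"},
    {"ts": "2021-06-01T12:05:00+00:00", "msg": "login ok"},
    {"ts": "2021-06-02T09:00:00+00:00", "msg": "login failed"},
]

def parse_ts(s: str) -> datetime:
    """Parse an ISO 8601 timestamp with offset."""
    return datetime.fromisoformat(s)

def replay(events, start, end):
    """Select archived events inside [start, end) and re-emit them."""
    for event in events:
        if start <= parse_ts(event["ts"]) < end:
            yield event   # in practice: send to an analytics destination

selected = replay(archived_events,
                  parse_ts("2021-06-01T00:00:00+00:00"),
                  parse_ts("2021-06-02T00:00:00+00:00"))
print([e["msg"] for e in selected])   # only the June 1 events
```

The point of replay is that an old incident window can be pulled back out of cheap storage and re-examined with today’s tools, without having decided up front how to structure or index the data.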
Mr. Bitincka’s background is in deploying multi-terabyte distributed systems, so I asked him to explain the challenges in deploying these kinds of large-scale systems. I enjoyed this discussion because it shows that as the growth of the observability industry has soared, it has brought new headaches. Dritan explained that he deployed Splunk at roughly 150 customers during his years there. In that time, he learned that managing the systems effectively became increasingly difficult as they got larger, and customers often needed external tools like Chef or Puppet to handle configurations. The problem is that scaling these systems to higher capacities with third-party tools became a large drain on administrative and development operations professionals. In Cribl’s system, version-controlled deployment and configuration authoring capabilities are built in, making it easier for customers to deploy, maintain, and add capacity. Additionally, he said Cribl’s products have built-in health monitoring and native cloud tooling to handle user and machine roles.