NetApp INSIGHT: Driving Innovation for Customer Success
The Six Five Media was at NetApp Insight 2024! Host Daniel Newman sat down with NetApp CEO George Kurian and Hoseb Dermanilian, Senior Director and Global Head of AI Sales & GTM, for a discussion on how NetApp is spearheading innovation in data management and storage, ensuring customer success in the rapidly evolving tech landscape.
Their discussion covers:
- The alignment of NetApp’s strategic direction with the evolving needs of data management and storage
- NetApp’s competitive differentiators in the data management and storage market
- The role of partnerships and alliances in bolstering NetApp’s market position
- Emerging trends in data infrastructure and the influence of AI and machine learning technologies
- NetApp’s approach to harnessing the power of AI and GenAI, detailing their AI-ready solutions and strategies for businesses
Learn more at NetApp, dive deeper into NetApp AI solutions, and connect with NetApp experts.
Transcript
Daniel Newman: Hey everyone, The Six Five is On the Road. We are here at NetApp Insight 2024 in beautiful Las Vegas, my second home. Very excited to be here, big event, lots going on, a ton of innovation being talked about here at the event. And I’m very excited for the first time to sit down with two guests from NetApp to have a conversation about this week’s event and all that is going on here. I don’t think it requires much introduction for either of you, I’ve gotten to know you both very well, but we have CEO George Kurian.
George Kurian: Good afternoon.
Daniel Newman: And we have the Senior Director and Global Head of AI Sales and Go-to-Market, the AI chief, you do a lot of things here, Hoseb Dermanilian. Appreciate you both spending some time with me, welcome to the show. How are you guys doing?
Hoseb Dermanilian: Good, thanks for having us.
Daniel Newman: All right, I like to start with the big picture, George, and look, I’ve had my phone blow up a little bit since your keynote. I’ve had a number of the analysts on my team and I’ve been tracking the socials. People are really excited about what you have going on here. We know AI has been super disruptive, it’s opened the door for a lot of growth and companies that have the data have a lot of opportunity. I’d love to get a little bit from you on the strategic direction of the company in this current landscape of data management, storage, and AI. Give me the overview.
George Kurian: Yeah, thanks for having us, Daniel. I think that we have been in the intelligent data infrastructure business for a long time. We are the unquestioned leader in unified data storage, unifying data of all types, unifying it across all of the places that you have your data and that’s really important as you look at tools like AI which can address a wide scope of data to bring differentiated insight. And we combine that unified data storage with intelligent services so that you can protect your data, you can stay within the guardrails of compliance and privacy, you can apply ways to optimize the data so that you can best utilize it. And then as you saw, we work with the industry to bring solutions, people like NVIDIA, and AWS, and Microsoft, and Google and others.
Daniel Newman: You’ve had a really great showing and it was really a strong indicator of intent and market support that you had all three cloud providers here.
George Kurian: Thank you.
Daniel Newman: I think I should say all three of the largest cloud providers here, the ones that roll off everybody's tongue, and that is not an easy get these days. We know how competitive things are, but we also know when they all show up, it's something very important, and I think it has a lot to do, Hoseb, with what's going on with data infrastructure. There are a lot of trends, and of course you can't really do a pod anymore and not talk about AI, not talk about ML. But talk a little bit about what you're seeing in those trends as it relates to what NetApp is doing, and what do businesses and your customers really need to be aware of?
Hoseb Dermanilian: Yeah, thanks for having us, Daniel. We've been doing AI for the past six years, we haven't just started because gen AI became a thing, right? And we have seen the evolution from our customer base as well. Six years ago, our first customer was in healthcare, where they were building a pod with DGX-1s from NVIDIA and us to train neural networks, and computer vision was a big thing. As you probably know, deep learning was a big thing at that time as well. And as we evolved, as gen AI became more profound, we started seeing two other types of customers pop up as well. One is those who are building their own large language model clusters, basically building and training these models, and these are probably the Fortune top 50, top 20, as you can tell.
And then the wider enterprise are the ones who are tapping into the power of the hyperscalers to use these large language models, the open source ones, and the tools like Vertex and SageMaker, but at the same time they need to bring their own data to make these models more meaningful for their own businesses. And then on top of all this, we have also seen customers who are modernizing their data lakes nowadays because they're getting that data ready, whether they are going to train their models on their own or feed these LLMs in the cloud. And this is where we've been seeing more customers coming up at the enterprise level, and this is why Insight has been heavily focused on how we bring AI to the enterprise customers.
Daniel Newman: Yeah, I like that you pointed out a couple of things that have been very important in my analysis, and one is that the idea that AI is this new thing is problematic. Most enterprises have spent at least 10 years really focused on ML, whether it's been attrition management or better understanding customer success; AI and ML have been there. Now of course generative tools have changed the game, and of course I also think the access, right? How applications have made it really accessible.
Another thing you said, though, that I think is really worth noting is that your data is the key. For the first 18 months after the ChatGPT explosion, everyone was really focused on this kind of open data. Companies have come to recognize, at least in the last six months, that it's not that data, that's the table stakes, right, George? I mean, that's what everybody has, so it's what you have, and it's making that data visible to your applications so you can do something special. And my take was that's where your differentiation really starts to stand out in what you're doing. George, I'd love for you to share a little bit about that, because right now I feel like with this important inflection, differentiation is going to be the key, and making data easy for these companies to access at scale is the next step.
George Kurian: Exactly. I think that, as you said, Daniel, we are firm believers in the fact that when everybody has really sophisticated pre-trained foundation models and the ability to use that technology with a very easy user interface, natural language, your differentiation becomes about your unique data and your domain knowledge. How do you use that data plus the large language models to understand how you can improve your business, right? And we are in the third era of the digitized data journey. The first stage was I just want to digitize my data, get it into computerized form. The second was I want to maintain a longitudinal history of the data. And the third phase is really about I want to unify all my data so that I have a wide scope in addition to long history. We've always been the leaders in helping clients unify their data; our data storage capabilities unify data across time, and type, and on-premises, and all the clouds so that you can have one integrated view of all of your data.
The second is that we have super high performance infrastructure that can serve the wide ranging needs of training environments and inferencing environments. And the third differentiator is we have built into our infrastructure intelligence about the data that resides on the infrastructure. So, knowledge of the metadata, knowledge of the data, what changes are happening to it, and now we are linking those data capabilities into the ecosystems for modern applications, whether it’s cloud-native applications or AI applications. And so unified data storage, the ability to support a wide range of use cases on a single common architecture, intelligence about the data and data services and then the integration with the ecosystems that matter.
Daniel Newman: As I listened to what you presented this week, I am increasingly seeing a sort of simplification in the removal of some of these abstractions. You know that in the long run you're going to have this multimodal interface. And I keep saying a lot of these apps we've come to use face a sad future, which is also an exciting future for innovators: a lot of the software we've used is either going to need to be greatly innovated upon or it's going to be disrupted. And it's going to be the ability to expose all the data to app developers where they can then use a gen AI-like interface, and I've actually seen this, by the way, in some companies in the labs, and they can abstract a user experience for you, a UX, right in front of you from the data.
I mean, that kind of brings me to you. It's like you've seen the advancements, you're hearing the innovation, you guys have addressed all the data types, not just file: file, object, and of course you've got vector now, which is super important in this AI world. Talk a little bit about the specific NetApp solutions that George was pointing to that your customers can look at to build their AI future.
Hoseb Dermanilian: Yeah, absolutely. And again, it's going to depend on what each customer is doing. So if you are a customer who wants to build their own models on premises, for example, we will be certifying our ONTAP platform to be connected with NVIDIA and their SuperPODs, so that will be a solution they can leverage. We have it today with our other solutions; we even have one of the world's largest industrial large language models running on NetApp and NVIDIA today. So for those customers who are building their, let's say, factories, we'll have that solution for them.
However, for the enterprise that will be leveraging the cloud, that data needs to be vectorized, and this is what we have announced this week. It's not just the vectorization, we're calling it inferencing in the box, but also all the things related to the data that needs to go into that inferencing: the metadata cataloging, querying the data. We saw in the day-one keynote how you can ask the data what type of information needs to be contained in the data you are querying, so that you can then feed the inferencing the specific types of data you are asking for.
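For readers who want a concrete picture of the pattern Hoseb is describing, here is a minimal, generic sketch of "vectorize the data, catalog its metadata, then query a specific slice before inferencing." This is not NetApp's implementation; the embedding library, the sample catalog, and the metadata fields below are assumptions chosen purely for illustration.

```python
# Generic sketch: vectorize documents, keep a metadata catalog, filter + query before inferencing.
# The documents, metadata fields, and model choice are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# A tiny "catalog": each document carries metadata alongside its text.
catalog = [
    {"text": "Q3 revenue grew 12% year over year.", "department": "finance", "year": 2024},
    {"text": "The new storage release adds vector capabilities.", "department": "engineering", "year": 2024},
    {"text": "Customer churn dropped after the support revamp.", "department": "sales", "year": 2023},
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([doc["text"] for doc in catalog], normalize_embeddings=True)

def query(question, department=None, top_k=2):
    """Filter by catalog metadata first, then rank the remaining documents by vector similarity."""
    idx = [i for i, d in enumerate(catalog) if department is None or d["department"] == department]
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = embeddings[idx] @ q  # cosine similarity, since embeddings are normalized
    ranked = sorted(zip(idx, scores), key=lambda pair: pair[1], reverse=True)[:top_k]
    return [(catalog[i]["text"], float(s)) for i, s in ranked]

# Ask only for finance data; the retrieved snippets would then be fed to an LLM for inferencing.
print(query("How did revenue do this year?", department="finance"))
```

The point of the sketch is the ordering: metadata cataloging narrows the query to the data you actually asked for, and only that slice is vectorized, retrieved, and handed to the inferencing step.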
And all this, Daniel, the goodness of NetApp is that we have a commitment to our customers to avoid silos. We are building all this on the strong foundation of ONTAP, which brings all the capabilities that we've been serving our customers with across different workloads, SAP, Oracle, VMware, over the past 30 years, now to the AI workload. Because the last thing we want to do to our customers is make them build another infrastructure silo, whether it's in the cloud. It's the same with our first-party services: it's the same ONTAP software running in the three hyperscalers and on-premises. So that is literally the cornerstone of what we're offering this week, building on top of that strong foundation.
Daniel Newman: And you store so much data, and as I'm thinking about this, like I said, the way consumers are using ChatGPT, for instance, eventually we're going to be able to do that for enterprise apps with what you're building at NetApp, right?
Hoseb Dermanilian: That’s right.
Daniel Newman: Where you can remove some of the databases, remove some of the warehouses, and they could just talk because you’re going to have the metadata, you’re going to have the different data types, and it’s going to be cataloged in a way that an application can have a coherent conversation with it and then provide what the customer’s looking for. I mean, am I getting that right?
George Kurian: Absolutely. I think that what we see are two important trends. One of them we talked about yesterday, which was that when cloud started, cloud and on-premises were two separate silos, and it really made it very hard to build something that used the best of cloud and had the capabilities of on-prem. We bridged that. Today we see much the same with AI, right? AI systems sit out here, they don't really have the data; the data sits on data systems, and so we're going to bridge that divide. The second is if you look at the AI stack today, it's an emerging stack and it's highly fragmented, and so it's really complex to build an application that can talk natively to the data.
And what we are doing is working to simplify the ecosystem so that you can present the data in whichever way the application wants. Traditional applications might want a POSIX style interface with files and blocks, new cloud native applications might want to use an object interface or a key value interface, and then AI applications might want to use a vectorized interface, and we can do all of them. And what we see, for example, with the data lakes that are being built out is most clients want to unify data across a couple of those types. They’ve got data in files and they want to read it in object, we can do that.
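To make George's point about presenting the same data through different interfaces concrete, here is a generic sketch of reading one dataset both through a POSIX-style file path (for example, an NFS mount) and through an S3-style object endpoint. The mount path, bucket, endpoint, and credentials are hypothetical placeholders, not a specific NetApp configuration; it simply assumes the storage layer exposes one copy of the data over both protocols.

```python
# Sketch: the same dataset reached two ways, POSIX file access and S3-style object access.
# Paths, endpoint, bucket, and credentials are hypothetical placeholders.
import boto3

# 1) POSIX-style access over a file protocol (e.g., an NFS mount on the host).
with open("/mnt/datasets/customers/2024.csv", "rb") as f:
    bytes_via_file = f.read()

# 2) Object-style access to the same data over an S3-compatible endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)
obj = s3.get_object(Bucket="datasets", Key="customers/2024.csv")
bytes_via_object = obj["Body"].read()

# If one copy of the data is exposed over both protocols, the two reads return
# identical bytes without the dataset ever being duplicated.
print(bytes_via_file == bytes_via_object)
```

A traditional application would use the first path, a cloud-native or AI pipeline the second, while the underlying data stays in one place.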
Daniel Newman: And let’s be clear, there’s not a bigger problem right now to actually realize the potential of AI than data fabric.
George Kurian: That’s correct.
Daniel Newman: The availability of your data, especially your own data and of course the public data, and making that all secure, you have to deal with that, right? You've got to deal with firewalling what data should be exposed to an app and what shouldn't. This is a hard problem though, and that's why for many years the big data thing, I mean, how long did we talk about big data? This has become kind of the moment for big data, and now gen AI is just making it go so much faster. George, as we talk about all these partnerships and alliances, you kind of heard me, and I wasn't meaning to swoon, I was genuinely impressed. I can only think of two times in the recent era where I've seen a keynote where all three major clouds have shown up, and so this was a big moment. You've been very big on partnership, you've been very big on alliances. Talk a little bit more about how you're thinking about that beyond just everyone showing up for this particular event.
George Kurian: Yeah, listen, real innovation to solve customer problems comes from multiple people working together, especially in emerging areas like AI. We have been working with the cloud providers for more than 10 years, and it is a deep partnership that involves co-engineering solutions as well as working together with customers and supporting them in a unified manner, and we have been able to do that across all three major cloud providers, right? Amazon, Microsoft, and Google. I think it's an indication of two things. One is our capability set and our quiet self-confidence in that, and the other is the humility and the willingness that we have to be a good partner to these major cloud providers. And we work with others in the industry like NVIDIA and MLOps vendors, because I think what customers really want is our foundational data capabilities, but they want them integrated into these platforms. So what we see these alliances and ecosystems doing is bringing multiplicative benefits, their strengths together with our strengths, delivering a really compelling value proposition for the customer.
Daniel Newman: Yeah, that’s really important. And of course, building for what these IT leaders and business leaders need now, AI’s kind of created this new construct because it used to be about kind of servicing the business. Now AI is about servicing the customer and creating productivity and growth.
George Kurian: That’s right.
Daniel Newman: We did recently talk to over 650 CIOs with over $3 billion in spend between them, and what they're all telling us is they're also looking for simplification. This is a hard problem; they're trying to figure out which SIs can help them integrate, they're figuring out which consultants and which vendors can provide the services they need. AI created a whole new set of complexities. Remember, we were all just moving to hybrid and multi a couple of years ago. We were still having a debate, do we call it multi-cloud or hybrid? And then AI came and it just accelerated things so much. And Hoseb, you've heard a little bit from me throughout the conversation about what I think the problems are and what I think the opportunity is that you're creating, but NetApp has the opportunity here to really provide customers the simplicity that I'm talking about. Share a little bit about, first of all, are these the same challenges you're seeing as you face these customers? And two, how does NetApp give these customers what they need to deliver on the potential of AI?
Hoseb Dermanilian: Yeah, definitely. I think, again, when we started this, we were thinking the biggest problem of AI is how fast you can feed these GPUs with data. That turned out to be the simpler problem. The biggest problem is the data management. Daniel, you wouldn't believe how many times we walk into a customer where the data for AI is being copied all over the place, because the developers and the data scientists need their own version. Now, the problem you get into with that is the cost and all the security issues that come with it, and then obviously it's not simple, it's complicated. So with our features, we can now get that down to a single master copy while at the same time different users have their own version, and that simplifies it. And with what George calls the integration, we can even do this down at the MLOps level. They don't even need to know what's happening at the storage level; that's the biggest value add.
The second thing is that when data is scattered all over the place and shared between departments, one of the biggest challenges is that building and training these models to be as accurate as possible requires the data to be shared. The problem there is that data scientists and developers are waiting days, and weeks, and sometimes months for departments to decide to share this data with each other. We've even seen people copying the data between different continents. So when we come in with our solutions, we can get that access down to a matter of seconds, whether it's from a Jupyter Notebook, whether it's Domino Data Lab, an MLOps platform, you name it. That simplifies it as well.
So we are using our features and our technology to help with that data access, data efficiency, and security at the same time, which simplifies the process a lot for these customers. Now, we're not saying we're solving all the problems in the world related to AI, but at least we're trying to do our best from a data standpoint to remove that barrier so that customers can get access to that data and leverage the power of it. Those are the biggest challenges that we're seeing nowadays.
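The "single master copy, private working copy per user" workflow Hoseb describes above can be sketched generically. The client class, its methods, and the volume names below are hypothetical placeholders standing in for whatever storage API or toolkit performs snapshot-based clones; this is not NetApp's actual API, just an illustration of the shape of the workflow.

```python
# Hypothetical sketch: one golden dataset volume, with a fast, space-efficient
# clone handed to each data scientist. `StorageClient`, its methods, and the
# volume names are illustrative placeholders, not a real NetApp API.

class StorageClient:
    """Stand-in for a storage management API that supports snapshot-based clones."""

    def create_snapshot(self, volume: str, snapshot: str) -> None:
        print(f"snapshot {snapshot} of {volume} created")

    def clone_from_snapshot(self, volume: str, snapshot: str, clone_name: str) -> str:
        # A copy-on-write clone shares blocks with the source, so it appears in
        # seconds and consumes extra space only for the user's own changes.
        print(f"clone {clone_name} created from {volume}@{snapshot}")
        return f"/mnt/{clone_name}"


def provision_workspace(client: StorageClient, user: str) -> str:
    """Give a data scientist a private, writable view of the golden dataset."""
    client.create_snapshot(volume="training_data_gold", snapshot="baseline")
    return client.clone_from_snapshot(
        volume="training_data_gold",
        snapshot="baseline",
        clone_name=f"training_data_{user}",
    )


mount_path = provision_workspace(StorageClient(), user="alice")
# Point the notebook or MLOps pipeline at `mount_path`; the golden copy stays untouched.
```

The design point is that each user's copy is logical rather than physical, which is what removes the cost, sprawl, and wait time of copying datasets between teams.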
Daniel Newman: Well, it seems like a really significant opportunity to fix. As we wrap up here, George, I'd love to ask you a final question for our audience. You had a great turnout here; everybody I've talked to, like I said, found it very compelling, and that's not always the case. I'm a professional event attender, I attend hundreds of these in a year, and people were impressed beyond what I would say is normal, and I think it's because you are doing something that is really shaking up what people are trying to accomplish in their data space. What do you want our audience to walk away with from this conversation and from NetApp Insight 2024?
George Kurian: I think the most important takeaway is that we have always believed, and it's even more apparent now, that those organizations that have a good handle on their data, where it is, how to unify it, how to catalog it, how to standardize it and keep it in high-quality form, are going to be at a much greater advantage in using the modern data analytics tools. Whether it's advanced clustering analysis or generative AI, you are much better positioned, and NetApp, together with our partners, can help you achieve that objective.
Daniel Newman: Yeah, that’s a great way to wrap this up. And by the way, doesn’t that problem sound a lot like the big data problem of the past decade?
George Kurian: Totally.
Daniel Newman: The companies that got it right early that made those investments, they really are reaping the benefits right now.
George Kurian: That’s exactly right.
Daniel Newman: Well, I want to thank you both, George and Hoseb.
George Kurian: Thanks, Daniel.
Daniel Newman: Thanks so much for spending time, and thank you for joining us here. It's always fun to sit down with new guests, first-timers here on The Six Five. We're going to have to have these guys back, they were great. Thanks so much for joining, and thanks for following NetApp Insight through The Six Five. Hope you'll catch all of our coverage of the event, and we hope to have you back soon. Hit that subscribe button. Got to go, talk to you later, bye-bye.