AI Readiness: Infrastructure’s Strategic Impact
Explore the foundational role of infrastructure in AI’s evolution and how to harness infrastructure for AI success and propel business innovation.
Learn how Dell Technologies’ ecosystem-first approach supports customer needs.
Transcript
Keith Townsend:
Welcome to the Cloud Infrastructure track of Six Five Media Summit. I love these things. I love having conversations with industry experts. The theme of this year’s summit or virtual event is AI Unleashed and what better guest to have on than Varun Chhabra of Dell Technologies. Varun, welcome to the program.
Varun Chhabra:
Thanks so much, Keith. It’s nice to be talking to you again.
Keith Townsend:
You know what? We’ve talked across multiple platforms, across multiple events, and I’ve got to tell you, at Dell Technologies World, Michael called it at the start an AI conference. And I think if, before the conference, folks had not known what Dell had been doing in the industry around AI, they would have challenged him on that statement, saying that Dell is a hardware company and doesn’t have much to do with AI. I’ve gone back and forth with folks online on this, but what I witnessed during Dell Technologies World is what I’ve witnessed throughout the larger industry. Customers are having the most difficult time understanding the different components of the AI journey, the AI factory. What are you seeing with customers?
Varun Chhabra:
It’s a great point, Keith. First of all, yes, Michael did start by calling the event Dell Tech World: AI Edition. And certainly, as you probably saw at the event, AI was a big, big theme, probably the theme of the event. Look, it’s really driven by what we’re seeing with customers, and you touched upon a few of these things. All customers we talk to see the opportunity with AI, but where they need help differs from customer to customer. It’s a continuum of a journey, and different customers are at different points along it. At a very high level, most people can see, hey, there are a lot of different use cases. They’re looking for help on where to get started first: what use cases will give us the best bang for the buck? Is it largely productivity, or are there other gains as well? Productivity is usually where people focus. So that’s another question that comes up.
There are certain customers that have kicked the tires; they have POCs that they’ve tried, and now they’re looking to scale into production. And what they’re now starting to see is a lot of interest in the POC, a lot of good success, but there’s a difference between the level of accuracy that’s okay within a POC versus what you expect when you take it to production. How do you limit bias? How do you make sure your answers are trustworthy? What’s the exposure the company has from a legal and compliance perspective when it takes these gen AI models and moves them into production to power its customer experiences, its end user experiences, its partner and supply chain experiences?
So that’s another question we get. And then there’s also the age-old question of, hey, I’ve got stuff all over the place. My data’s growing in the public cloud. My data’s growing in my data centers. My data is growing, especially at the edge. It’s increasing exponentially. How do I bring all of this together? What is the right data preparation strategy? What’s good data? What’s bad data? Do I train my models from scratch? Do I tune them? What’s going on with retrieval-augmented generation? And then once you have all of these questions answered, how do I look at the stack when it comes time to implement it? How do I think about the infrastructure layer? What software parts do I use? Do I go open source? Do I look at specific vendor solutions? Do I design open from the ground up, or do I go with a vertically integrated stack? How do I think about professional services? How do I align my team members? There’s just so much to work out.
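[Editor's note: for readers unfamiliar with the retrieval-augmented generation (RAG) option mentioned above, here is a minimal, generic sketch of the pattern. It is not a Dell-specific implementation; the corpus, the word-overlap scoring, and the prompt format are all illustrative stand-ins. The idea is that instead of retraining or tuning a model on private data, you retrieve relevant documents at query time and prepend them to the prompt.]

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query.
    Real systems use vector embeddings; word overlap keeps this self-contained."""
    return sorted(corpus,
                  key=lambda doc: len(tokens(query) & tokens(doc)),
                  reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context to the question before it would be
    sent to a language model (the model call itself is omitted here)."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy private-knowledge corpus the base model was never trained on.
corpus = [
    "Our return policy allows refunds within 30 days.",
    "Shipping is free on orders over 50 dollars.",
    "Support is available by phone and chat.",
]
print(build_prompt("What is the return policy?", corpus))
```

The design choice RAG represents is exactly the trade-off raised above: the model stays frozen, and freshness and specificity come from the retrieval layer, which is why the data preparation and data placement questions matter so much.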
So while the promise and the potential are very easy for people to see, and they are seeing them, what we’re being asked most often at Dell Technologies by customers is: we see the promised land, help us get there, help us go through this forest and navigate all these obstacles. And every customer has their own set of obstacles they’re working through.
Keith Townsend:
I like to say that AI demos are easy, AI projects are really hard. I don’t think I’ve seen this in the industry for my entire career. We’ve talked about technology across several roles that you’ve had in Dell Technologies, one of those roles was telling Dell’s hybrid cloud story, and I don’t think there’s a technology that’s more geared towards hybrid cloud or multi-cloud than AI. Can you talk to me about what you’re seeing in the industry? I know what I’m seeing is that customers are saying this is not an either/or conversation, this is an and conversation. So what are you seeing and how is Dell helping customers along that journey?
Varun Chhabra:
Absolutely. So what’s happening there, Keith, is that I think it really starts with the data. You’ve got to follow the data. And what’s happening with customers, as I was saying, is that the data is all over the place. And what customers have a choice to do once they’ve got their data strategy figured out, which is not trivial, but let’s say they understand which data sources they want to use for powering or augmenting their models and their gen AI applications, let’s say they figured that out. What they’ve got to then figure out is, hey, do I bring my AI to the data or do I bring my data to the AI? And what I mean by that is how do you think about the placement of data versus where your compute intensive infrastructure is located to be able to run the data processing capabilities that you need to make your AI more effective, more impactful, more accurate, and more specific to your business and your insights?
And our perspective on this is it’s not really cost-effective to actually move your data to one central location and have that be where you process all your data. That’s just not feasible, given how distributed everybody’s data is, not just across the three locations that I mentioned, but even within multi-cloud or public cloud, for example, there’s just so many different locations. As we’ve talked about in the past, there’s so many different public clouds that a single enterprise could be using, or different applications and different data stores. Edge locations are more distributed than ever, and many customers have separate data centers or CSPs they work with.
So our belief is that you’ve actually got to bring AI to your data, rather than the other way around. And I think that’s where what you mentioned around hybrid comes to fruition: you really need a technology approach that allows you to use the data where it is, without having to do unnatural or costly things to move it, and to get your best AI insights on that data in that location, with the fastest time, lowest latency, and highest possible accuracy.
Keith Townsend:
So Varun, I’m going to save you some hero numbers, some brags, some humble brags. Dell EMC, world’s largest storage company. Dell fights for number one and two in the OEM data center server space. Dell is the preferred partner of NVIDIA to sell GPUs. That’s all great, but at the end of the day, how is Dell differentiating itself from its competitors? I’m hearing some of the same messaging coming out of some of the world’s largest competitors to Dell.
Varun Chhabra:
So, great question. Let me answer that in two parts. First, let’s level set on what our strategy is. I talked about data, bringing AI to your data, et cetera. How do we actually make that real? It’s a little trite sometimes to call these things easy buttons, because there is no easy button in IT, but the number one thing customers are asking us is how to make, as I was saying, the promise real. And the way to do it is to simplify the complexity. You’ve got to help customers work through some of the choices they’ve got to make at the various forks in the road.
And to do that, as we talked about a lot at Dell Tech World, we have this notion of a Dell AI Factory. An AI factory, for us, is something that brings together all the various components that a customer or organization needs to make AI real, and the foundation of the Dell AI Factory is AI infrastructure. So think about your servers, networking gear, storage, data protection, PCs. All of these, in our opinion, are the foundation of what makes an AI application or AI workload work. But the infrastructure by itself is not enough. So we think about the ecosystem, the work that you’ve been doing and we’ve been doing; you just mentioned it. We’ve put in a lot of work over the last year with the AI ecosystem across the board, forging strong partnerships to make sure that our infrastructure works really well with the software layer, the models, and the ISV applications that customers are going to want to run on it.
At Dell Tech World, we featured our collaboration with NVIDIA heavily. In fact, we’ve got a very specific flavor of the Dell AI Factory called the Dell AI Factory with NVIDIA that combines NVIDIA networking, NVIDIA GPUs, and our infrastructure with NVIDIA AI Enterprise software and our professional services. So NVIDIA is a huge partner for us. We’re obviously working with AMD and Intel to bring their GPUs to market as well. And we’re not stopping there; we’re doing a lot of work in the model and ISV ecosystem as well. We had Hugging Face come on stage and announce the Dell Enterprise Hub with Hugging Face, where customers can get the most popular open source models, specifically vetted for Dell infrastructure. They can choose what Dell infrastructure they have, and then it’ll tell them what open source models they can use on that infrastructure. Then we make it easy for people to deploy things. We’re working with Meta on Llama 3, et cetera. There’s just so much happening in the multi-cloud space with Red Hat, with Azure, et cetera.
So the ecosystem is the second part. Infrastructure, ecosystem, and then it’s all about professional services. Professional services play a massive, massive role in this, whether delivered by Dell or by our partner ecosystem. We’re finding they’re so important in helping customers ask the right questions, align their strategy, and think about business use cases before they go down the path of, well, let me go order a bunch of gear or do POCs. Customers need to think through their AI strategy end-to-end before they actually go down this path. So this combination of infrastructure, ecosystem integrations, and services is really what we think of as the Dell AI Factory. This is a really, really important part of what we’re delivering. This is how we make things simple for our customers.
Now, let me answer your question directly. What’s different about what we’re doing with the AI Factory? To me, Keith, it comes down to five things. First of all, we believe we have the broadest AI portfolio in the industry. Nobody else has everything from desktops, to data center solutions spanning servers, storage, networking, compute, and data protection, to multi-cloud solutions, where our storage software provides a consistent data platform running not only in customers’ data centers but also in different public clouds, so you can run AI on it, use the same workflows, get the same performance, et cetera. The broadest AI portfolio.
Second: at the end of the day, a huge component of AI is scale and performance, and what vendor solutions can do to really help customers get the best performance for their AI workloads. So the co-engineered solutions that we have, for example with NVIDIA, deliver leading AI density, leading throughput, and leading energy efficiency. We announced a lot of these solutions at Dell Tech World: 2.5 times the energy efficiency of previous platforms, the most dense racks, 72 NVIDIA GPUs in a single rack, incredible throughput from our storage platform, our scale, et cetera.
So leading AI performance is the second differentiator. The third one, going back to reducing the complexity: ultimately, it’s not just about speeds and feeds, it’s about how fast we can get customers to see value from AI. So the third differentiator for us is really accelerated time to value. Not only do we have all these incredible piece parts like compute, storage, and networking, we’ve done a lot of work to pull these together into cohesive, full-stack, turnkey solutions alongside our professional services. And we’ve now done software automation, we showed some of that at Dell Tech World, that pulls together use cases like gen AI digital assistants or retrieval-augmented generation platforms and helps deliver those to customers 85% faster than if they were to do it themselves. So, software-driven automation and integration with the ecosystem at a deep level. Accelerated time to value is a huge part of it.
Fourth is cost. You and I have never had a single conversation in all these years where we have not talked about cost and how important a factor it is for IT stakeholders. And we really have best-in-class TCO. Customers are making choices about where they want to deploy these workloads, and I agree with you that this is always going to be a largely hybrid deployment. But when it comes to inferencing over the long term, our research indicates it’s actually 75% more cost-effective to do inferencing on-prem with Dell than doing it just in the public cloud.
And then finally, the fifth differentiator is all about trust and minimizing risk. So we work closely with partners like Hugging Face and NVIDIA to make sure that we have built-in data security and data privacy, and we’ve got software tools, whether our own or tools that partners like NVIDIA and Hugging Face have, that we are integrating to maximize the accuracy of the models customers are using, as well as their appropriateness, reducing bias, making sure IP is protected, et cetera.
So all of these things together are really what differentiates us. It’s really those five things I talked about: the broadest AI portfolio; the ability to deliver leading AI performance, density, and throughput; accelerated time to value, 85% faster than doing it yourself; best-in-class TCO, 75% cheaper for inferencing than doing it just in the public cloud; and fifth, maximizing trust and minimizing risk for your business as you take gen AI to your customers and your users. I know that was a bit of a long answer, but hopefully I was able to address what you were asking.
Keith Townsend:
No, that is very helpful. And for viewers, these five areas are where you want to put Dell to the test. I had a great conversation with Jeff Boudreau, the person in charge of making this happen within Dell, about Dell adopting this approach and these technologies. I encourage you to have a conversation with Dell’s team and ask them about your AI journey. To my guest, Varun Chhabra, Senior Vice President, Infrastructure Solutions Group and Telecom Marketing at Dell Technologies, thank you for joining us on the Six Five Summit. Thanks, Varun.
Varun Chhabra:
Thank you for having me, Keith.