Unlocking Enterprise AI Through the Power of Open Ecosystems

Enterprises are rapidly embracing artificial intelligence to enhance productivity, foster innovation, and enrich the experiences of employees and customers. As organizations navigate the era of AI, the demand for scalable, flexible systems that integrate seamlessly with existing infrastructure is paramount. Intel is at the forefront of this transformation, collaborating with industry partners to cultivate an open AI ecosystem and aiming to deliver competitive price-performance and tangible outcomes with security and privacy.

Join us to explore how Intel’s initiatives are shaping the future of enterprise AI adoption.

Transcript

Daniel Newman:
Hey everyone. Welcome back to The Six Five Summit, Daniel Newman here, CEO of The Futurum Group. Very excited. This summit is rocking and rolling. We’re here as part of the cloud infrastructure track and I couldn’t be more excited about this next conversation.

I am joined by Anil Nanduri. Anil is the Vice President and Head of Intel's AI Acceleration Office and General Manager of the Data Center AI category, and he's part of the Intel Sales and Marketing Group. Now that is a long title, so I should get some points, Anil, just for reading it correctly. But you have a big job at Intel, you have a big job. You're handling AI acceleration, you're handling consumption, you're driving go-to-market as well as marketing objectives for the company. Let's dive right in. I mean, it's a red-hot topic. Just say hi to everybody, welcome everybody; what's on your mind these days, Anil, as it pertains to AI?

Anil Nanduri:
AI is a super exciting topic, Daniel, and gen AI is transforming every industry you can think of. Customers want to adopt gen AI, but they want to do it in a way that is cost-effective and scalable and actually solves their business problems. So that's where all the focus is.

From an Intel standpoint, I mean, AI is ingrained into everything we do. We are the only company, if you want to think of it from the perspective of bringing AI compute to PCs, to the edge, to the network, all the way to the data center, that has a full portfolio of technologies, products, and silicon capabilities. But even more interesting, if you think of this whole consumption of AI compute, we are looking at an exploding trajectory of silicon that needs to be manufactured as well. And with Intel's foundry services and capabilities, we can also enable customers who want to build their own silicon using our fabs. So if you think of it from that portfolio view, we are able to address every part of the AI TAM, both as a foundry and as a product company, even though we may not be participating in all of them individually.

Daniel Newman:
Yeah, you move really quick and I appreciate that. We talked to Sachin and Justin at the recent Vision Conference and you really did make an impression on me when it came to bringing AI everywhere. And look, AI is in its earliest days, and I love the sports analogies: are we in the pregame or the first quarter? Are we in the first inning, or are we on the first tee? You can pick your sport. And to some extent you're seeing some parts of the AI market moving a little quicker than others, some products selling a little faster than others.

But I want to double click on what you just said, because I think one of the things I get excited about, and I know sometimes the market, everyone wants to say, "This company's the winner," I keep saying it hasn't been decided yet. There are early winners, we've maybe got runs in the first inning if we keep with the sports analogy, but you guys are really focused on everywhere. So you said a couple of things there: you talked about product portfolio, from devices and PCs to the edge to the cloud to the data center. You talked about fabrication and foundry; you are not just designing chips for compute, for programmability, for ASICs, for hardened AI, as well as, you know, GPUs down the road. Talk a little bit about that bringing-AI-everywhere focus. Double click on what you started to tell me, give me that whole portfolio look as to how you're approaching this.

Anil Nanduri:
Yeah, so if you think about our PCs, we've actually led off the bat with our AI PC, with our Core Ultra processors that launched last year. In fact, we have shipped over 5 million units, and we're going to ship over 40 million units this year. Just think about it: this is built-in AI compute in your PC. And we are quickly following up with our next-generation laptop processor, Lunar Lake, which will bring in even more AI compute. Now, these have on-chip AI capabilities where you can keep your data with you; users can have their local AI agent, or AI assistant if you want to think about it that way, able to do tasks on the PC and enhance productivity in enterprises. So there's clearly a lot of work going on there.

On the edge platform, again, this is where a lot of the inferencing is going to happen. One of the things you have to think about with AI is follow the data, right? And if you think about the data itself, a lot of it is created at the edge, and it's very hard to move it all over to the cloud, so you're going to need a lot of AI compute at the edge itself. And for that we have our Tiber Edge platform, which comes fully validated with the silicon, the platforms, and the software stack, so you can actually deploy it for edge use cases where there's a lot of need, both from an inferencing standpoint and from a data ingestion standpoint.

The third thing is the data center itself. If you think about where AI is today, like you said, it's very early innings, and there's a lot of rush toward creating new models. All this compute demand that has gone up exponentially is about the race to build the largest models and the race to build the most useful models, and there's a lot of model development innovation going on, which requires a lot of compute. But the real thing that needs to be unlocked is: how am I going to use these models, and use them at scale? Which is where inferencing at scale starts to come in. And that's going to be a long process: bringing in that business-outcome thinking, asking how I actually solve the business problem and make it more productive and more capable, whether it's in the context of retrieving data, reasoning, or generating new content.

And to connect all of these, you need a network that is open and scalable, and this is where the Ethernet-based approach comes in: a standards-based approach where you can actually have a choice of multiple vendors with interoperable standards. Ultra Ethernet is an open standard that's been set up; Intel is a founding member, and there are big-name partners on the founding-member list. The idea is to build a network that can actually scale for this future of AI.

Daniel Newman:
Yeah, Anil, when I hear you, I basically think there's a couple of different ways we can cut this. But for the audience out there that's weighing and making decisions about the market and basically saying, "Well, what's Intel doing?": Intel's long been the leader in PCs, long been the leader in data center compute, and now we're in this reset era with a new kind of PC. We've had some conversations here, and we'll talk about AI PCs in a whole other session, but you've definitely shown leadership: early to market, Lunar Lake got pulled forward, some very compelling designs, and of course new competition that will be weighed as these get deployed. And then the data center itself. I often talk, Anil, about where most inferencing still happens; our own Future of Intelligence data actually showed that it happens on CPUs. Now, that doesn't mean there's not a massive ramp of spend on GPUs, or that there's not a need for accelerated parallel compute, but a lot of people forget this. There's still a really, really significant opportunity there.

And then there's the network, right? You've got to actually connect edge to cloud to the device, and then you need to have all the things tied together; you just talked about Ultra Ethernet. There's a big debate, I call it the Apple/Android debate, about what the network for AI is going to look like. And by the way, it's going to be huge, because we're talking about up to a $400 billion TAM just for AI compute in the next few years.

I want to have you touch on the open-versus-closed question you just alluded to. I heard you talk about closed versus open, we talked about Ultra Ethernet; this is going to be huge. You've got companies that are basically coming out and saying, "We do it all. We're going to offer you the whole stack. You've got to buy it all from us. You can't move from compute to compute, you can't move to open connectivity." And some people like that. And then there's another side that's saying lots of vendors, lots of diversity. Where does Intel stand? Where do you see things falling?

Anil Nanduri:
So it's actually a really good question, and I think you've got to step back in history and look, especially in the data center ecosystem, at what the real innovation engine was. And if you think about the innovation engines in the data center, they're built on very, very open frameworks. Start from the operating system itself: Linux is the one with the largest foothold in the data center. Look at the containerized world we live in: Kubernetes is built on a standard layer where multiple vendors can play. Look at virtualization: again, very standardized mechanics for how you scale virtualization into the data center ecosystem.

As you get into the AI domain now, you start to look at how AI is getting leveraged in the data center. PyTorch is probably the most common abstraction point. If you ask any AI developer, most likely they're coding in Python and running on a PyTorch framework. And if you ask them what's under the hood, they're probably not even going to know. So you get into these abstraction layers that developers can broadly access. And in the AI world, in the gen AI world especially with transformer architectures, the dependency on something like CUDA is a lot smaller than in HPC, or high-performance computing, where they optimize every layer of the software stack to get the best out of the hardware. That's been more of an HPC DNA. In the AI DNA, it's been more about time to value, time to results, and time to scale. And so they're looking at it from a standards-based level, like PyTorch.
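To make that abstraction point concrete, here is a minimal PyTorch sketch. The model code targets a generic `device` handle, so the same script runs on a CPU or on any accelerator backend registered with PyTorch; the CUDA-or-CPU fallback and the layer sizes here are illustrative choices, not anything from the conversation:

```python
import torch
import torch.nn as nn

# Pick whichever backend is available; the model code below never needs
# to know which vendor's silicon sits beneath this device handle.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).to(device)

x = torch.randn(32, 128, device=device)
logits = model(x)  # identical call on CPU, GPU, or another registered backend
print(logits.shape)  # torch.Size([32, 10])
```

Accelerator vendors ship PyTorch integrations that slot in beneath this same device abstraction, which is why, as Anil notes, most developers never have to look under the hood.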

Then you start to look at how you build on these frameworks. And when you look at models, there are proprietary models and there are a lot of open source and openly available models. Now, as AI models get deployed, customers want to be able to trust them. They want to know what training corpus was used and how the weights and biases were arrived at. And so customers, especially as you get into enterprise adoption, are going to start looking at this and saying, "Hey, can I trust the data that I'm going to be training with, and can I train on a model whose outcomes I can trust?" That transparency is huge. Security is huge.

So the way we look at it philosophically from an Intel standpoint is: how do we integrate this into an ecosystem where trust, openness, and compliance have always been a rich part of the data center's history? That's been our DNA as well: a standards-based approach. And I see this as no different. As AI begins to scale, customers are going to look for frameworks where they actually have these options, but in an open, standardized way. Now, it doesn't mean everything is open source, but it should clearly be modular and plug-and-play.

Daniel Newman:
So Anil, I could probably drive this interview to an hour or more just having you answer all the questions. Unfortunately, we only have about 15 minutes, but I have one more question I really want to hit on in the few minutes we have left. I alluded early on to the market conditions, to the perspective that some companies have already won. Pat would never suggest that; Pat Gelsinger absolutely sees the path to becoming an AI company, and I believe it, I've been on the record saying this.

I've been challenged, though; some of the other analysts in the market will challenge things that I've said. Media at times challenge things I've said. Your competitors are challenging some of this. They're saying there's not enough validation, that you're not winning enough customers, maybe in the PC, maybe in the data center. They're saying new architectures, new designs, new companies are going to win. Tell everybody about the momentum you're getting in AI, the wins you're achieving, because I think the metrics suggest there's a lot of progress being made and that Intel is well on its way to becoming an established leader in the AI space.

Anil Nanduri:
Daniel, really good question. And before we jump into the customer traction, we have to segment the market as well, right? We haven't talked much about enterprises, and this is a very important segment. The reason it matters is that AI today has been primarily created on openly, publicly available data that's web-sourced and things like that. But we all know, and you especially would know this very well, that more than 80% of the data, both structured and unstructured, is sitting behind enterprise walls.

And so when you think about what enterprises want: they want data compliance and data security, they want to be able to trust the models, as we spoke about earlier, they want to deploy at scale, and they want it to be accessible, which means they want vendor choice. And last, and most importantly, they want to do all of this at an affordable price point.

But most importantly, even if you do all this, they want an easy button. It needs to just work. They want a turnkey solution, whether it's an inferencing solution for knowledge discovery, a chatbot, or creating generative applications. They want the easy button to go do this, right? So for this part of the work, we have actually worked with the Linux Foundation, and there's the Open Platform for Enterprise AI, which was kicked off a month or so ago, to bring a structured, standards-based approach to enterprise deployments at scale.

The second part is what you do when you try to bring AI into the enterprise. Think about this: 80% of that data has traditionally been hosted on Xeon servers, Xeon clusters, Xeon infrastructure, and x86 CPU-based clusters, because that's most efficient for database management. Gen AI, as you saw in this whole first wave you talked about, is all being created on accelerators, on GPUs. And so you're creating all these models that run pretty efficiently on GPUs, especially the large language models.

And so when customers are trying to connect these two, they're trying to figure out: what's the best way to take data that is best hosted on Xeon, but still get the benefit of gen AI? What's the best way to bring these two worlds together in a very cost-effective and scalable way? Follow the data: why would you redo the whole data architecture, and the years and decades spent getting data regulatory compliance done, just to move it all over to the other side of the spectrum? So there are new capabilities like RAG, retrieval-augmented generation, and other techniques that are now being deployed.
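For readers who haven't seen the pattern, here is a minimal sketch of the RAG idea Anil is describing: answers are grounded in documents retrieved from an in-house corpus, so the data stays where it already lives. Everything in this sketch is a hypothetical placeholder (the toy corpus, the hash-based embedding, and the stubbed generation step), not Intel's implementation:

```python
import numpy as np

# Toy in-memory "enterprise corpus"; in practice this would live in a
# vector database colocated with the existing data infrastructure.
documents = [
    "Quarterly revenue grew on data center demand.",
    "The new HR policy takes effect on January 1.",
    "Gaudi 3 ships as an eight-card baseboard.",
]

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash each token into a 64-dim bag-of-words
    vector. A real system would call a learned embedding model."""
    v = np.zeros(64)
    for tok in text.lower().split():
        v[hash(tok) % 64] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    scores = doc_vectors @ embed(query)  # unit vectors, so dot = cosine
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def answer(query: str) -> str:
    """Augment the prompt with retrieved context before generation.
    The generative-model call itself is stubbed out here."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(answer("How does Gaudi 3 ship?"))
```

A production system would swap in a learned embedding model, a vector store next to the existing data estate, and a real generative model behind the prompt; the point of the pattern is that the enterprise data never has to be moved or retrained into the model.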

Now, we at Intel are trying to help accelerate that. When you look at our Xeon capabilities, we have a rich history on the data side of things. Then there's Xeon plus an accelerator, with Xeon as the head node, and then Xeon with our Gaudi. We are trying to provide choice for our customers across all three and bring the best and most effective way to solve their use cases: things that run well on Xeon, leave them there; things that run well on accelerators, keep them there; and find a way to provide the compute our customers need on both fronts.

So Gaudi is the accelerator that competes with the GPUs in this space. And a lot has been asked about it, which goes back to the proof points you were getting at. So first let's look at the performance. It shows up in MLPerf. We just recently announced some of the performance numbers against the H100: it delivers 40% better time-to-train versus the H100, the leading GPU available in the market today. But the most important thing is perf per dollar. It offers 2.3x perf per dollar for inferencing throughput. This is what people care about: how many tokens can I process per second, and at what cost? What does it take to deploy it? And this-

Daniel Newman:
And power of course.

Anil Nanduri:
And power, of course, yes. And being an accelerator, it's very power efficient as well: over 2x power efficiency for some of the critical workloads, which you can go look at. Now, what is most important, like I said, is the value.

And we're actually going to be much more front-footed about sharing that publicly. At Computex, you'll see we're going to disclose pricing guidance so customers can actually model their AI investments. This has been a big problem: customers don't know how much compute they need or how much it's going to cost. So we're going to share it publicly. In fact, a Gaudi 3, which comes in a UBB form factor with eight cards on it, is going to be at $125,000. And a Gaudi 2, which is already in market today, is at $65,000. Supermicro is basically selling a server today on Gaudi 2 at $90K. It's an amazing value. It just allows customers to understand what kind of compute it takes, what perf per dollar you can get, and, for those who really want a holistic view, perf per dollar per watt.
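To show how a customer might model that, here is a small sketch of the perf-per-dollar and perf-per-dollar-per-watt math Anil describes. The baseboard prices are the ones quoted above; the throughput and power figures are made-up placeholders for illustration, not measured numbers:

```python
# Model accelerator value the way described above: throughput per dollar,
# and throughput per dollar per watt for the holistic view. Prices come
# from the quoted 8-card baseboard guidance; tokens/s and watts are
# hypothetical placeholders only.

systems = {
    "Gaudi 2 (8-card UBB)": {"tokens_per_s": 10_000, "price_usd": 65_000, "watts": 4_800},
    "Gaudi 3 (8-card UBB)": {"tokens_per_s": 20_000, "price_usd": 125_000, "watts": 7_200},
}

for name, s in systems.items():
    perf_per_dollar = s["tokens_per_s"] / s["price_usd"]
    perf_per_dollar_per_watt = perf_per_dollar / s["watts"]
    print(f"{name}: {perf_per_dollar:.3f} tokens/s per $; "
          f"{perf_per_dollar_per_watt:.2e} tokens/s per $ per W")
```

With published list prices, a buyer only needs their own measured throughput and power draw to fill in a comparison like this, which is the investment-modeling problem the pricing disclosure is meant to solve.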

Now, we've had strong momentum with customers. With Gaudi 3, all the OEMs and ODMs you'll see at Computex are enabled, and systems will be available in volume by Q3 this year. As far as end customers, we've worked with partners like Bosch, Airtel, Naver, one of the largest cloud service providers in Korea, and IFF, International Flavors & Fragrances. Who would've thought there are actually protein AI models to generate food tastes, perfumes, and other kinds of things? We've had significant traction with customer deployments. We have our developer cloud where customers can come try Gaudi and then decide how they want to deploy; that's been overwhelmingly successful. So again, it's part of a long journey. We are at the early phase of it, going from Gaudi 2 to Gaudi 3; this is our third generation of the Gaudi architecture. And we're super excited about what customers are saying about its performance as well as its capabilities.

Daniel Newman:
Anil, I wish I could keep going. I appreciate you breaking that down; I think those cost and value propositions are going to be very important, and different architectures for different cases are going to be significant. People need to remember that a lot of AI can be done on a CPU, and of course accelerators will be answering a lot of the power challenges we're going to have. We are running out of power, so there are places where GPUs are absolutely the right architecture, and that will be determined by the different workloads and use cases. Anil, I'm going to be following up with you, we're going to need to keep talking about this, this is not over, but congratulations on the progress. Work to be done.

Anil Nanduri:
Thank you, yeah.

Daniel Newman:
And I look forward to sitting back down and having this conversation in a year.

Anil Nanduri:
Yeah, thank you. And it’s been a fun discussion and really appreciate your time.

Daniel Newman:
All right, everybody, thank you so much for joining us here for this Six Five Summit session. That was a great one. But stay with us because we’ve got so much more. I’m going to send it back to you in the studio.
