Solidigm, CoreWeave, and Supermicro: Powering Next-Gen AI Cloud Solutions – Six Five Media at NVIDIA GTC
Greg Matson, Jacob Yundt, and Vik Malyala discuss how Solidigm SSDs and Supermicro servers boost CoreWeave's cloud solutions, offering scalable and efficient AI computing power.
How do we actually deliver value through AI? 🤔 At a packed NVIDIA #GTC25, the focus was laser-sharp: it’s not magic; it’s about the token economy and enhancing cloud solutions for AI workloads.
Six Five Media’s Keith Townsend hosts a discussion with industry leaders Greg Matson, SVP, Head of Marketing & Products at Solidigm, Jacob Yundt, Director of Compute Architecture at CoreWeave, and Vik Malyala, President & Managing Director EMEA and SVP Technology & AI at Supermicro as they discuss data, hardware, and infrastructure driving the value in the AI ecosystem.
Key takeaways include:
🔹AI’s Data Demand: High-capacity, efficient storage is paramount. Modern AI systems thrive on massive datasets, necessitating solutions that can deliver this data to GPUs rapidly and reliably.
🔹Optimized Hardware Ecosystems: The architecture underpinning AI workloads must be fine-tuned for peak performance. This includes everything from design and development to manufacturing and integration, ensuring seamless operation.
🔹Powering Efficiency: As AI scales, power consumption becomes a critical bottleneck. Solutions that maximize performance per watt are essential, enabling greater compute density and reducing operational costs.
🔹Simplified Deployment: Ultimately, the ability to deploy and scale AI infrastructure quickly and easily is what unlocks widespread adoption. This requires solutions that abstract away complexity and deliver resources in a consumable manner.
Learn more at Solidigm.
Watch the full video above, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Keith Townsend: All right. Leather jacket. The only thing missing from the keynote stage was a cowboy hat, so I brought it to fill the system requirement. Speaking of system requirements, this is Six Five On The Road. We’re here at an extremely active San Jose Convention Center. Jensen said that we’re going to have to grow San Jose for the next event. I don’t know if you can do that; 25,000 people are here for this AI event, the AI event of, I think, the year. I have Vik from Supermicro, Greg from Solidigm, and Jacob from CoreWeave. We’re going to start the conversation right off. I talk to every one of the major cloud providers, and Jacob, we’re going to start with you. None of them tell me what equipment is running in their data centers. We have Solidigm 122-terabyte drives, we have Supermicro’s AI servers, and we have your services. Why is this relationship so important to highlight?
Jacob Yundt: This is great. Supermicro has been an incredible partner, and so has Solidigm. Supermicro is super collaborative: anytime we need to work together on future products, or anytime we need to make changes to something, they’re truly the best. And the same with Solidigm. The engineering support we get from them is fantastic, and the product itself is just great. We can talk about high-cap drives and how we wouldn’t be able to scale our business without their offerings, because we just can’t do that without super-high-cap QLC drives.
Keith Townsend: So speaking of super high capacity drives, the drive race is on. There’s a couple of 122 terabyte drives on the market. Vik, talk to me about the importance of the Solidigm drive, especially when it comes to AI workloads.
Greg Matson: First is, you know, AI wants data, right?
Keith Townsend: Right.
Greg Matson: And it wants capacity, high capacity, delivered efficiently and reliably, and that’s what our 122-terabyte drives do. We’ve been the leading provider of high-cap drives for a few years now: first to 60 terabytes, first to 122. We partner with CoreWeave to understand their workloads, so we can tune our drives and make them the most reliable and performant for their infrastructure. But we also partner with Supermicro to deliver the drives to the customer. Without the chassis, without the servers, our drives do nothing.
Keith Townsend: You know, we’re at NVIDIA GTC, and getting the cards, the servers, and the high-cap drives to customers like CoreWeave has got to be the challenge of the day. Talk to me, where’s the magic? How can Supermicro do what other vendors seemingly can’t?
Vik Malyala: I mean, one good thing is the close engagement, right? Everyone can claim that, but I think we are honestly the only company that is totally vertically integrated: we design, develop, manufacture, integrate, ship, and service everything in house. And in every one of those aspects, we need to work very closely with the customer as well as the partner. With someone like Solidigm, that means having a roadmap view and looking at what kind of platforms we need to develop to support it, in order to bring efficiency, performance, and scale. Those three things are absolutely important to keep the AI infrastructure busy, because the GPUs are the most expensive piece of the puzzle; we want them absolutely never idle at any point in time. That’s the focus. There’s another angle, which is making sure the software that runs on it is supported, for example what CoreWeave uses. So we work with the likes of Vector, Vast, and DDN to certify the platforms, so that when Jacob and team start looking at what storage they need to take, they don’t have to do any guesswork. It’s fully validated; they’re ready to roll with it. And on the AI infrastructure side, it boils down to getting them the most performant infrastructure, right in time, first to market. If you saw Jensen’s keynote, CoreWeave is smack in the center of it, because they are bringing the absolute best technology before anyone else. How do we do that? We need to know the pace at which they’re operating, and we need to be prepared to support them in the right way. Being able to do everything in house, and working closely with both of them, gives us this unique capability to support them better than anyone else in the market.
Keith Townsend: So Jacob, let’s talk about the results. Tokens in, tokens out. Jensen has this theory around the token economy: this ability to demand of your providers, of your systems, the performance and capacity that you need. How is CoreWeave answering that call?
Jacob Yundt: That’s a good question. I’m a hardware guy; I like to talk about the hardware. One of the key differentiators for CoreWeave is that we are a bare-metal cloud provider. We don’t have the overhead and performance hit of a hypervisor; we give you direct access to the metal. In addition to that, we use BlueField DPUs to offload as much as we can to the DPU itself, because we want the resources on the server, the CPU and GPU time, to be available just for customers. We want to make sure they’re not spending a bunch of resources shuffling around network traffic. We also use the latest and greatest InfiniBand, NDR 400Gb, for ultra-high-bandwidth, ultra-low-latency communication between the GPUs. But it’s really about making sure we have a super fast, super scalable, very fast-to-deploy solution for our customers.
Keith Townsend: So Greg, none of this matters. If we can’t save data to a device and get data out, why 122 terabyte drive? Why does that matter to the AI capabilities of an organization?
Greg Matson: Well, there are a few different reasons, right? First is just sheer capacity and data locality to the GPUs. And by having the data sitting in a solid state drive versus, say, a hard drive, you get…
Keith Townsend: Wait, do they even do hard drives in AI?
Greg Matson: No one who’s building modern AI data centers does. But there’s a big legacy footprint of hard-drive-based data centers out there, and anyone facing challenges and constraints from both performance and power knows it. You heard Jensen say yesterday that power is now the limiter for data centers.
Keith Townsend: The more power I save, the more GPUs I can buy.
Greg Matson: Absolutely. Compared to a hard-drive-based storage solution, high-capacity solid state drives can save as much as 80% of the storage-related power in the system. So they’re really essential to getting the data to the GPUs.
Keith Townsend: So talk to me about that efficiency. Either one of you can take this.
Vik Malyala: There are a few ways to look at it, right? Just to add to what Greg is saying: if you take a 122-terabyte drive in a typical system, we want to get the best performance out of everything. Take an Intel or AMD processor. Most of these drives are PCIe Gen5 x4, and we want to give them maximum bandwidth without anything in the PCIe path in between. So 24 drives times 4 lanes is 96 PCIe lanes, and that still leaves enough throughput on the I/O side, whether it’s 400 gig or even 800 gig on the networking, to bring in all the data. That’s one. The second thing is that with this kind of drive, with 24 drives in a single enclosure, we can get about 3 petabytes of storage per enclosure. If you add up these numbers for a given capacity, you are reducing the physical footprint, you are improving efficiency with respect to power consumption, and, more importantly, you are not compromising on performance. What used to take several racks we can compress into a smaller number of racks: less power, more performance, better efficiency, better cost. Ultimately, rule number one is that we want to keep the infrastructure busy doing what it’s supposed to do without compromising anything, but at the same time we want to bring the economics to a point where it becomes affordable and efficient. That’s what we are trying to do in this equation, and it’s important especially given the scale at which companies like CoreWeave are operating and the amounts of data they are adding on a regular basis for whatever the customer demand is.
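Vik’s back-of-the-envelope numbers can be sanity-checked in a few lines. The sketch below is not from the conversation; the per-lane bandwidth figure is an assumption based on PCIe Gen5 signaling (~32 GT/s per lane, roughly 3.94 GB/s after encoding overhead), and it simply reproduces the 96-lane and ~3 PB figures he cites.

```python
# Back-of-the-envelope check of the enclosure math discussed above.
# Assumption (not from the transcript): a PCIe Gen5 lane delivers
# roughly 3.94 GB/s per direction after 128b/130b encoding overhead.

DRIVES_PER_ENCLOSURE = 24
CAPACITY_TB = 122        # per-drive capacity (Solidigm high-cap QLC class)
LANES_PER_DRIVE = 4      # PCIe Gen5 x4 per drive
GB_S_PER_LANE = 3.94     # approximate usable GB/s per Gen5 lane

total_capacity_pb = DRIVES_PER_ENCLOSURE * CAPACITY_TB / 1000
total_lanes = DRIVES_PER_ENCLOSURE * LANES_PER_DRIVE
aggregate_bw_gb_s = total_lanes * GB_S_PER_LANE

# A 400Gb or 800Gb network link moves roughly 50 or 100 GB/s, so the
# drives' aggregate bandwidth comfortably exceeds the network side.
print(f"{total_capacity_pb:.2f} PB per enclosure")  # ~2.93 PB
print(f"{total_lanes} PCIe lanes")                  # 96
print(f"~{aggregate_bw_gb_s:.0f} GB/s aggregate drive bandwidth")
```

This is why "about 3 petabytes per enclosure" and "96 PCIe lanes" line up: both fall directly out of 24 drives at 122 TB and x4 lanes each.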
Jacob Yundt: I’d like to echo Vik’s statements. I was literally just having a discussion with one of our storage PMs about a new customer requirement: tens of petabytes, oodles of storage. And we’re discussing where to put it, how many racks, where we’re going to locate this data, and power efficiency is crucial to that discussion. If we were to use non-high-cap drives, we would need a bajillion racks and a bajillion servers, and that is power we can’t use for GPUs, power we can’t give to the customer. Those are GPUs that we just can’t provide to them. So that power efficiency is super crucial to really scaling storage.
Keith Townsend: So I’m a reformed CTO. I understand the low-level storage. If I have the biggest, fastest, most efficient drives, packaged in a hardware platform that delivers the GPU performance I need, I get the power story and those portions of this. What’s missing in the story? Jacob, I’ll pick on you a little bit. It’s packaging this up in a way that I can consume. Tell me about this efficiency story. How can you do it better than I can do it? And one, I don’t want to do it, because I can’t keep pace with the investment. What’s the CoreWeave story?
Jacob Yundt: That’s a good question. So I would say that the CoreWeave story is that our stack is just designed from the ground up to be laser focused on delivering as many GPUs online as fast as possible, as healthy as possible, as efficiently as possible, and making it easier for our customers to consume. And so part of that is that we don’t necessarily have the baggage that some of the legacy hyperscalers have. Our entire stack is just essentially laser optimized for this. So for your experience, you’re going to have the best experience because we are just super focused on making sure that we have as many GPUs online as possible. They’re efficient, they’re healthy, and that, you know, you have a great customer experience.
Keith Townsend: All right, this is not magic. In the token economy, you need to get the most data to the thing that processes tokens the fastest, the GPUs, in a system that can be delivered to you in a way that you can consume. Great story from these three. We are going to continue our coverage throughout this show, really driving home this idea of what the token economy is, how you get to it, and how, at the end of the day, you get business value out of your vendor relationships. Make sure to stay tuned for more coverage of Six Five On The Road. I’m your host, Keith Townsend. Talk to you in the next episode.