Liquid-Cooled SSD: Solidigm and NVIDIA’s Innovative Solutions – Six Five Media at NVIDIA GTC
Avi Shetty, Sr. Director at Solidigm, joins Keith Townsend to discuss the innovative liquid-cooled SSD solution developed in collaboration with NVIDIA, marking a significant advancement in high-performance computing.
Liquid-Cooled Storage? 🤔 NVIDIA #GTC25 isn’t just about GPUs; it’s about the entire ecosystem that powers AI.
Host Keith Townsend is joined by Avi Shetty, Sr. Director – AI Market Enablement & Partnerships at Solidigm to discuss the collaboration between Solidigm and NVIDIA to overcome a critical challenge: as AI workloads and compute demands surge, so does the heat. 🥵The solution? Liquid-cooled SSDs.
Key takeaways include:
🔹Keeping Up with Compute: Next-gen servers demand innovative cooling. Solidigm’s 9.5mm E1.S SSDs with liquid cooling directly address the thermal challenges of high-performance AI.
🔹Cool Under Pressure = Peak Performance: Liquid cooling maintains SSD temperatures, preventing performance throttling and ensuring GPUs get the data they need, fast.
🔹Data Center-Level Impact: This tech isn’t just about the drives. Liquid cooling reduces the need for aggressive air cooling, shrinking server footprints, and lowering overall TCO.
🔹Density and Efficiency: We’re talking about packing massive storage (like 122TBs!) into minimal space, which translates to power and cost savings.
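The density takeaway is easy to sanity-check with back-of-the-envelope math. Below is a minimal sketch: the 122 TB drive capacity comes from the episode, but the bay count per 1U and the rack size are illustrative assumptions, not Solidigm specifications.

```python
# Back-of-the-envelope density math for high-capacity E1.S storage.
# DRIVE_TB comes from the interview; BAYS_PER_1U and RACK_UNITS are
# assumed illustrative values, not vendor specs.
DRIVE_TB = 122          # capacity per SSD (from the episode)
BAYS_PER_1U = 24        # assumed E1.S bays across a 1U front panel
RACK_UNITS = 42         # assumed usable rack units

per_server_pb = DRIVE_TB * BAYS_PER_1U / 1000
per_rack_pb = per_server_pb * RACK_UNITS
print(f"{per_server_pb:.2f} PB per 1U server")
print(f"{per_rack_pb:.1f} PB per rack")
```

Under these assumptions a single 1U server holds roughly 3 PB, which is the kind of footprint shrinkage the episode is pointing at.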
Learn more at Solidigm.
Watch the full video above, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Keith Townsend: All right, somebody hold my 122-terabyte SSD, because my good friend Avi has promised that he's going to blow my mind. But we'll hold out judgment, because I don't know, Avi, I don't know if you get much cooler than a 122-terabyte drive. You're watching Six Five On The Road. We're in the Solidigm booth. 25,000 people, and they might as well all be in this booth. A 25,000-person conference in a small town. GTC has outgrown San Jose. Jensen said on stage yesterday that we need to grow San Jose. Maybe we can find a lower floating point or something and shrink it some other way. Welcome to Six Five On The Road. Welcome.
Avi Shetty: Thank you. Thank you, Keith. Great to be here, and what an amazing time to be at the conference. AI is everywhere. Jensen's keynote yesterday was just mind-boggling. The ecosystem is buzzing, and we are lucky and proud to be partnering with Nvidia on a new technology, which we'll talk about today. Yep.
Keith Townsend: So last time we met, we were on top of a truck. We were on top of an AI truck, holding a 122-terabyte QLC drive that you've promised, and proven to me, performs at a level that allows some of the world's largest cloud providers to provide AI services to the world. And you said, "Keith, I promise you that we have something cooler in store." What is that something cooler?
Avi Shetty: All right. As things get faster, there's more compute. You heard Jensen say more compute is going to be driven over the next multiple generations of servers. Guess what? With more compute and more performance, things get hotter, and they need to be cooled down. So you have to actively look at cooling solutions. What we are unveiling this week at GTC is this: today's AI servers have liquid cooling. Liquid cooling is not a new concept; it has been available on GPUs and CPUs. But liquid cooling extended to storage? Now, for the first time, with the introduction of our 9.5mm E1.S Solidigm PS1010 SSD. A quick visual demonstration: here you have the liquid coolant.
Keith Townsend: Before you give me a demo, Avi, I don't see where you put the tubes.
Avi Shetty: Oh, there are no tubes. You connect it to the cold plate, and the cold plate touches the backside of the SSD. We have an internal mechanism and thermal management designed as part of our solution. So both sides get cooled, and as a result you get active cooling support across the full SSD.
Keith Townsend: All right, you gotta show me this. Okay, this is getting cool. Please show me.
Avi Shetty: Yeah, so what's actually happening here is the liquid coolant comes and touches the cold plate. Pull one out, Keith. Pull one out. Yeah, go ahead, pull one out.
Keith Townsend: Wait, the last time I pulled a drive out of production...
Avi Shetty: No, no, no, pull one out. We're fine. Pull one out. Yeah, pull one out. There you go. So what you see is a spring-loaded mechanism, where the coolant comes and touches the cold plate, and the cold plate touches the backside of the SSD. We have internal thermal material, and that's our IP, Solidigm IP. Not all E1.S drives have the same thermal connection. Most of the E1.S drives in the market today only have the backside being cooled, while the front side continues to run hot. But an SSD has components on both sides, so you need active cooling on both sides of the SSD. And that's why today you have fans, which are air-cooled solutions. In our E1.S, what you see is essentially a cold plate kit. So Solidigm had to not just invent and work on...
Keith Townsend: Go ahead.
Avi Shetty: No, not just work on the SSD, we actually had to partner and design a cold plate technology to take advantage of it. Because storage is one component in the server that needs serviceability. It needs hot-pluggability. GPUs and CPUs are soldered down; you don't take them out. But SSDs, you want them to be pluggable, hot-pluggable. And as a result, we had to design the cold plate to meet this requirement.
Keith Townsend: So right here on this server, you have it sitting outside, which is not what the final solution is going to be. Where in the server is this going to be? If I need to call into my colo and get remote hands to replace an SSD, where in the chassis do you expect this to sit?
Avi Shetty: It'll be front and center. In typical AI servers, or any servers, you'll always have the SSD components in the front, and we want to maintain that same continuity, even in a liquid-cooled setting. And that's where the innovation and the solution work had to happen, because you want serviceability, manageability, and the ability to hot-plug an SSD. So what you see here is a concept prototype, but in actual AI deployments you'll see the liquid cooling coming in and touching the cold plate, and the front panel will have the storage solutions in it.
Keith Townsend: So it's important to understand the whole point of this: as equipment heats up, no matter what the technology, you have to throttle back performance to manage the heat profile.
Avi Shetty: Correct.
Keith Townsend: So what are the expected results of being able to liquid cool your SSDs?
Avi Shetty: Yeah. What we are demoing here is exactly that. Right. Every time you have more compute, you have more heat generated, and the whole point is to keep your GPU fed with data. You want your SSDs to perform at the highest bandwidth, even in a constrained power environment. As you heat up, typically your drive throttles. But with liquid cooling, what we're able to show here is that we can maintain an SSD temperature much lower than with fans, with no throttling of performance. It's running at full peak performance, even under an extreme workload.
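The throttling behavior Avi describes can be sketched as a toy model: firmware steps bandwidth down as the controller crosses temperature thresholds, so a cooler drive sustains peak throughput. Every number here, the thresholds, the peak bandwidth, and the throttle factors, is an illustrative assumption, not a measurement of any Solidigm product.

```python
# Toy model of SSD thermal throttling. All thresholds and bandwidth
# figures are made-up illustrative values, not vendor data.
def effective_bandwidth_gbps(temp_c: float, peak_gbps: float = 14.0) -> float:
    """Return sustained bandwidth after assumed firmware throttling."""
    if temp_c < 70:             # below the assumed throttle threshold
        return peak_gbps
    if temp_c < 85:             # light throttling region
        return peak_gbps * 0.6
    return peak_gbps * 0.2      # heavy throttling to protect the NAND

# An air-cooled drive running hot vs. a liquid-cooled drive staying cool:
print(effective_bandwidth_gbps(82))   # hot drive: throttled
print(effective_bandwidth_gbps(45))   # cool drive: full peak bandwidth
```

The point of the sketch is the shape, not the numbers: keep the drive below the first threshold and the GPU sees full bandwidth the whole time.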
Keith Townsend: So on the show floor, there are a lot of liquid cooling systems that you can plumb in next to your system. From a physical cabling perspective, how is this impacting me from a cooling perspective? One, I'm losing fans.
Avi Shetty: Yep. Which is no more fans.
Keith Townsend: Huge advantage.
Avi Shetty: Yep.
Keith Townsend: And now I don't even know if I can tell I'm inside a data center if I don't hear all the fans. But two, from a liquid cooling management system perspective, what's the overhead?
Avi Shetty: Yeah. So there are benefits across the ecosystem. I'll start with the SSD first: SSDs perform better, so you'll have maximum bandwidth. At the server level, what you see is no more fans, which allows more real estate for GPU deployments, or shrinkage, with your servers becoming smaller and more efficient. And at the infrastructure level, you don't have to maintain your air-cooling HVAC systems at a much lower temperature, because everything is being cooled by the liquid. So you have savings and TCO benefits across the pipeline.
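The fan-removal lever can be roughed out with a toy calculation. Every figure below, fan wattage per server, fleet size, and energy price, is an assumed illustrative value, not vendor data; the sketch just shows how the savings compound across a fleet.

```python
# Rough sketch of one TCO lever discussed above: removing storage-bay fans.
# All inputs are assumed illustrative values, not measured or quoted figures.
FAN_WATTS_PER_SERVER = 120   # assumed power drawn by storage-bay fans
SERVERS = 1000               # assumed fleet size
HOURS_PER_YEAR = 8760
USD_PER_KWH = 0.10           # assumed energy price

fan_kwh = FAN_WATTS_PER_SERVER * SERVERS * HOURS_PER_YEAR / 1000
print(f"Fan energy avoided: {fan_kwh:,.0f} kWh/year")
print(f"Energy cost avoided: ${fan_kwh * USD_PER_KWH:,.0f}/year")
```

Even before counting the relaxed HVAC setpoints Avi mentions, eliminating the fans alone is a measurable line item at fleet scale under these assumptions.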
Keith Townsend: So talk to me about this relationship with Nvidia, because I'm going to infer something from what Jensen said on stage: the less money you spend on X, the more you can spend on GPUs.
Avi Shetty: Yes.
Keith Townsend: Cooling, power. The less I spend on cooling my storage components, and the less I spend on energy, the more GPUs I can buy. So talk to me about this special relationship you have with Nvidia.
Avi Shetty: Yeah. With Nvidia, we have a full portfolio of storage solutions for AI. We talked about our E1.S solution, which is more on the server, right next to the GPU. I've been in the storage industry long enough; there are only four requirements coming to us: low latency, high bandwidth, high density, and lower cost. You heard Jensen talk about three of those yesterday, and he kind of articulated the fourth one as well. Whenever he talks about more compute, what that translates to for us in the storage world is: I need a lower-latency, higher-bandwidth SSD. So what we have here is a Gen5 SSD, the highest-performing SSD in our portfolio and one of the best real-world-performing Gen5 SSDs.
Keith Townsend: So obviously this is going to sit as close to the GPU as possible, because one of the consistent messages from the practitioners I've talked to is that they walked away saying GTC25 is about IO.
Avi Shetty: Yep.
Keith Townsend: Getting as much data into the GPU as quickly as possible to reduce the overall latency and create more efficient use of the GPU. So this goes right next to the GPU. Yes, but this is typically not where I'm going to put my big production data.
Avi Shetty: Yes. For that, Keith, we have our 122-terabyte SSD.
Keith Townsend: He does this to me every time. He brings these things here and he never lets me take them home. But I digress. Go ahead.
Avi Shetty: Yes. So while we talk about low latency and high bandwidth close to the GPU, you still have the storage servers we've talked about. You heard Jensen yesterday talk about training and inferencing. Inferencing is just kicking off, and we've heard from analysts that data generated on inference will outpace training data by a factor of three, which means you need more storage, and you need efficient storage. And that's where our high-density QLC SSDs come in. We've had amazing success with the QLC product we announced last year. We have customers building AI data centers from the ground up who are deploying 122-terabyte drives into their AI servers for their storage needs, and going forward, direct attach with our E1.S solution.
Keith Townsend: So we have a couple of minutes left, so I'm going to let you do the victory lap around the show floor. Give me some hero numbers about the efficiency of packing 122-terabyte SSDs into a couple of U's.
Avi Shetty: Let's not say two U's. I'll give you one U. One U.
Keith Townsend: Challenge accepted.
Avi Shetty: One U, 24 bays, 24 petabytes in one single U.
Keith Townsend: So I know we're at an AI conference, and density is not a concern when it comes to how much rack space I have. Density is a concern when it comes to power.
Avi Shetty: Correct.
Keith Townsend: What's the net effect of putting these cool, small form factor drives right next to my GPU in a 1U chassis? What's the net effect for customers who need a lot of data for their...
Avi Shetty: AI data? They have the data available to them instantaneously. For all your training needs and your inferencing needs, you have your E1.S SSDs connected directly to the GPU using Nvidia's GDS technology, which is GPUDirect Storage, and over the network you have storage servers where all your density needs are fulfilled. So from an overall AI portfolio standpoint, we recommend direct-attached, low-latency solutions, and for all your network-attached needs you have efficiency improvements in terms of overall TCO. You'll have the slides; you can look it up on storage4ai.com, which is a Solidigm webpage that talks about rack reduction, TCO benefits, as well as performance and scalability. The other key thing, Keith, is scalability. We've talked about scale-up; we can also scale out, right? Data is only going to grow, and you're not going to keep building new AI infrastructure. So you need solutions which are ready for today's environment, but also, come the future, let you scale up using the same footprint. We're not going to stop making high-density SSDs. We've got Nvidia telling us to make more high-density SSDs and more low-latency SSDs as well. So that's the message coming from Nvidia and platform partners to us, and we're going to be driving that into the Solidigm roadmap.
Keith Townsend: You know what? I have this new thing: optimize for value versus vanity. I'm not going to lie to you. I want a 122TB enterprise SSD. I have no use case for it, but I want one. I come from a world where the idea of having non-moving parts, big drives that provide data as fast as possible to GPUs, CPUs, and memory, just makes my job as an enterprise architect easier. And Avi, I'm going to put in another bid for it: I want a 122-terabyte drive, even though I can only practically use 2 terabytes. We've had a blast talking to Solidigm over the past couple of days, understanding where they're at in the ecosystem, and how these high-density drives, these liquid-cooled drives, fit in your AI factory. Stay tuned for more coverage from Six Five on the show floor of GTC25. We're going to squeeze into even smaller places, just like these drives. Talk to you next episode.
Avi Shetty: Thank you.
Keith Townsend: Hold, please. Happy with that? All right, I'm going to keep you there. Did we miss anything?
Avi Shetty: No, I think we got it.
Keith Townsend: Both look at me. 3, 2, 1. And then we'll get one standing up, right there. All right, perfect.
Avi Shetty: I missed. I missed talking about this, which we're actually giving away. This is a liquid-cooled ice cube.
Keith Townsend: Oh, that is. No. A whiskey ball. Whiskey ball, whiskey ball. Get that 122... do something. Yeah, there we go. Hold the drives. There we go. All right. One, two, three. I'm taking a couple. Awesome. Thank you, guys.
Avi Shetty: That’s for you. If you use these.
Keith Townsend: This is a very practical gift.
Avi Shetty: Goes with, goes with a liquid.
Keith Townsend: Cool. Yeah, cool. My whiskey.