More Wide Area Networks: A New Vista for Sustainability Success - A Deeper Dive - Six Five On The Road
800G Ethernet is here and key players are working to make it sustainable. Join The Futurum Group's CEO and Chief Analyst Daniel Newman as he sits down with AE Natarajan, EVP & Chief Development Officer at Juniper Networks, as they explore Juniper's latest WAN advancements, including 800GE leadership, intelligent traffic control, and the company's ambitious Net Zero 2040 commitment!
As AI and data demands surge, how will networks keep pace sustainably? ♻️
Key takeaways include:
🔹From Connectivity to Intelligence: Networks are evolving beyond simple data transfer, transforming into dynamic platforms capable of understanding application needs and prioritizing traffic for optimal performance.
🔹Powering the AI Explosion: As AI demands skyrocket, energy efficiency becomes paramount. Juniper is committed to Net Zero by 2040, developing power-conscious solutions from silicon to systems.
🔹The 800G Revolution: The emergence of 800 Gigabit Ethernet is crucial for connecting massive AI data centers, enabling the high-bandwidth, low-latency communication essential for AI applications.
🔹AI-Driven Networks: Juniper Networks is leveraging AI, via Mist Paragon, to drive network automation and optimization, reducing complexity and enhancing efficiency.
Learn more at Juniper Networks. For more information, see these resources:
- More Wide Area Networks: A New Vista for Sustainability Success
- Are Wide Area Networks Ready to Drive Sustainability Success
- Unlock sustainable transformation: Valuable insights from the 2025 STL Partners + Juniper Networks research brief
- Silicon, Systems, and Operations: A practical framework for sustainable technology adoption in telecoms
Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Daniel Newman: Hey everyone, welcome back to The Six Five On The Road. We are here today for an exciting conversation talking about wide area networks, sustainability, and AI. We are diving in with a first time guest not from a first time company. We've got Juniper Networks joining us. We've got AE. AE, welcome to the show. This is your first time. Excited to have you here. How are you doing today?
AE Natarajan: Oh, wonderful. It's exciting to be here and exciting to participate in this conversation. It is all about AI. All of this excitement that is going on in the network and the inflection points, it's really up there.
Daniel Newman: Yeah. I really believe you have a really important role to play, AE, Juniper, the business you're in. We know that there's a lot of excitement about silicon and compute right now, and a lot of people tend to forget that all this data has to be routed, even when we spend a lot of time talking about scale-up infrastructure and optical. A lot of stuff has to go beyond these data centers. It moves not just within data centers but from data center to data center, from data center to branch to edge. There's so much going on, and with all of this there are a lot of these tangential conversations. You've got energy and consumption and sustainability. You've got the question of where AI comes in, not just in terms of all the compute being used for AI, but how AI actually helps run the network more efficiently. So these are a lot of the things that are going on. Really glad to have you on the show. Give the audience just a quick rundown. You're EVP and Chief Development Officer. What does that remit have you doing day in and day out at Juniper?
AE Natarajan: Oh, okay. It's an exciting job. I'll give you a quick preview of what I did. I was in the networking field for the longest time. I thought, okay, networking is just building fatter pipes, making things cheaper, faster, better. But then I went on to build an application, and the application happened to be video conferencing, the very kind we're using now. Only to realize that the network became such an important piece of making that application win big. If the network does not give the guarantees of jitter, throughput, latency, delay, any of those metrics in a proper way, we wouldn't be able to have this virtual video conferencing and recording be really good, because there would be choppy voices, choppy faces, all sorts of things. That brought me back to a moment of saying, hey, the innovation inside of networks needs to be there. It brought me to Juniper in a very exciting role to build that innovation into our products. AI native products, which are essential these days. Not just AI with compute and storage, which delivers the training models and inference models that we talk about. How do you use it in every walk of life? The first part of it is how you take AI and AI models to help with what we build: network routers, switches and connectivity devices that enable us to provide those guarantees. Today's network carries critical applications. If the network goes down, you probably have 911 calls not happening. It is as critical as that, right? So with that in mind, we really strive to innovate for this modern era. But the advent of AI has exponentially increased the changes and demands on the network today.
So that is the excitement of why I'm here at Juniper: we drive these innovations with our own silicon and with systems and solutions that are really future looking and enable us to adapt to these changes much faster and much more easily, so that we can build it out.
Daniel Newman: That's great. So let's geek out a little bit and talk about some of your products, as well as the chipsets that you build. I don't think a lot of people fully give you credit that you guys are in the chip development space with custom chips. You've got ACX, you've got MX, you've got PTX on the routers, and then you've built the Express 5 and Trio 6 ASICs. Talk a little bit about how these developments, these products that you're creating at Juniper, are enabling sustainable, energy-efficient designs and bringing the performance required for these next-generation networks.
AE Natarajan: So fundamentally, you know, you look at these networks like we mentioned, right? When there was an Internet boom, connectivity became very important. We wouldn't have survived the pandemic without the network. We were educating our kids, ordering our food, getting our entertainment, everything, over the network, with all its changing patterns. What really drives us is that we start with silicon. Silicon is a fundamental piece of what we build here at Juniper. You mentioned the Express and the Trio, and I want to tell you why we build two different variations of the silicon. Trio is a silicon that gives you a flexible pipeline, which means it's very programmable, and it gives you capabilities to adapt and adjust to any new environments and applications that people use. Trio powers what we call the MX product line, and this has been one of the forerunners for Juniper. The MX product line recently has been the clear winner as the on-ramp to the AI clusters of every major hyperscaler and cloud provider when they want to onboard their customers into their AI clusters. These are the devices that enable it, because they have the programmability and the capabilities. The second silicon we build is the Express silicon, which gives you the speeds and feeds. God, when we thought 100 gig would be enough, it wasn't enough. 200 gig, 400 gig, now 800 gig, and tomorrow we're going to talk about 1.6T pipes. And these pipes are getting filled faster and faster with AI really coming into play. So performance is really important. Express drives that performance, and we build the PTX product line with that performance and scalability.
While doing both the Express and the Trio silicon with our MX and PTX platforms, we also take commercial Broadcom silicon and build out the ACX products, which are beneficial to customers because networks have different needs, so you need different devices in different places. That drives us to offer a full portfolio with AI native capabilities across the portfolio. So that is important for us. You touched upon another aspect, which is sustainability, or power reduction. We constantly strive to reduce power, starting with our own silicon. Every time we improve the silicon, if it delivers 2x the performance, it is 2x the performance with half the power. Why is it important? The cost of power is skyrocketing, and the cost of energy makes it essential for us to have a sustainable way to build and grow these networks. So we build it with our silicon, we build it with our systems, we build it with our software, and we do phenomenal things like green traffic engineering to enable people to reduce power consumption in networks, even to the extent of 70-plus percent.
Daniel Newman: So let me layer this in. You've also done a very good job of being among the leaders in 800 gig, and of course 800GE tech. How does that connect to everything that you just talked about from a hardware standpoint, and then of course with AI and intelligent traffic? How do you tie that all together? I know the ambition within Juniper. I've spent time with Rami, I've spent time with Manoj, now I'm spending time with you. I know you want to be the standard in sustainable high performance networking. Are you seeing all this come together and get you to where you want to go?
AE Natarajan: Absolutely, absolutely. The 800 gig couldn't have been more perfectly timed. We lead and pave the way with performance with our 800 gig products. We're the first to actually deliver 800 gig routers. That is becoming a lot more important because if you really take the AI clusters that people are trying to build out, they need a seamless way to connect these huge clusters of GPUs. When people were first talking about 1K GPUs and 16K GPUs, now they're talking about 200K, 250K and some of the things that are.
Daniel Newman: A million GPUs right?
AE Natarajan: Huge. And guess what? These GPUs need to talk to each other, to train models or to infer things. And when you need to connect these GPUs together, you really need the network to be there. The network has to be seamless and provide complete reliability, with no bottlenecks in the messages and data delivered between the GPUs, so that they can orchestrate and do really well. 800 gig has become a powerhouse for connecting AI data centers, the AI data center interconnects. We also do phenomenal innovation on top of that in the WAN, where we use coherent optics, where the router itself drives these optics and saves you power, energy and complexity in the network, removing intermediary devices like repeaters that you don't need anymore. Large content providers are thinking of spanning the entire United States with these kinds of optics. We call them coherent ZR and ZR+ optics, which are also part of our portfolio. It's amazing to be there at the right time and really enable this AI transformation that's happening in the industry.
Daniel Newman: Yeah, and we're going to talk about AI here a little bit, but I think it's really important for everyone out in the audience to understand that you're really talking about where copper and light meet. There was a bit of over-rotation, I think, in the market that everything was going to go light and optical. And we know within the interconnects, within the clusters, there's some extent of that in the rack. But when you get outside the rack, and especially as you move beyond maybe the biggest hyperscalers, and even within them, you're seeing a lot of standardization on Ethernet. I think that bodes really well for what you're doing. But then of course you do need optical in the right places for certain types of connectivity. The fact that you're addressing all of it matters, because some people think Juniper and think a little bit more about traditional networking, but you're really attacking all the vectors to address AI. So let's talk about AI a little bit more. You're enabling AI, from what you just talked about, but you're also using AI, with Mist and Paragon, to drive more efficiency out of the network. Again, this brings sustainable outcomes, it lowers energy utilization. Talk a little bit about how you guys are thinking about your AI innovations.
AE Natarajan: Actually, the way we started this was that Juniper was one of the first to talk about what we call self-driving networks. And self-driving networks wouldn't have been possible without using AI and AI models built natively into our routers and switches to enable people to operationalize their network more easily. The first thing about trying to do this is to get visibility and observability into the boxes you deploy: collecting a rich set of telemetry, with the ability to view that telemetry not just from one device but across the network, and to actually use it for inference, build advanced heuristics, build models to do this. I'll give you a simple example of what we did. About six, seven years ago, one of my key engineers came to me and said traffic engineering is actually the most complex thing you can do in a network. That means the ability to load balance traffic across all the network paths. This is no different than the freeways here in the Bay Area. You use Waze or something, and it redirects you based on traffic patterns. The complexity of that required us to build AI models and to train them. And when we trained those AI models, what we discovered was something interesting. The links were running at only 44% utilization on average, but there were peaks and then there were valleys. The valleys made us think a little bit harder and say, hey, could I shut off this link and save power? Could I shut off this node? Should I rebalance the routes? Should I rebalance the traffic? Should I get better SLAs? All of those things were the genesis of how AI was built into it. We also have a recent innovation where we have built a capability known as an LLM connector that allows somebody to go into our router and actually use GenAI capabilities with their own LLM and query the router and say, hey, how do you want to do this?
And it was funny because when we first demonstrated this innovation in front of customers, it was an Italian customer, and we started typing in Italian. The answers came out in Italian. People just loved it. We did the same thing in Japanese. And now, guess what? I don't need translators or translations. People can work in their local languages, and they can understand and comprehend the complexities of what we do in a very simple way. That's built into our AI native products that enable this AI for networking. Right? And with that, and with the performance, we also do the networking for AI.
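The green traffic engineering idea AE describes, finding the valleys in link utilization and powering down links whose traffic can be absorbed elsewhere, can be sketched in a few lines. This is a hypothetical illustration, not Juniper's actual algorithm: the function name, the even-rebalancing assumption, and the 80% headroom threshold are all invented for the example.

```python
# Hypothetical sketch of "green traffic engineering": given average link
# utilizations, pick underutilized links whose traffic can be rerouted so
# those links can be powered down. Simplifying assumption: displaced load
# spreads evenly across the links that stay up.

def links_to_sleep(utilization, headroom=0.8):
    """Return links that could be powered off without overloading the rest.

    utilization: dict mapping link id -> average utilization (0.0-1.0)
    headroom: maximum utilization allowed on each remaining link
    """
    total_load = sum(utilization.values())
    # Least-used links first: they are the best candidates for sleeping.
    candidates = sorted(utilization, key=utilization.get)
    asleep = []
    for link in candidates:
        remaining = [l for l in utilization if l not in asleep and l != link]
        if not remaining:
            break  # never turn off the last link
        # Check the rebalanced load against the headroom cap.
        load_per_link = total_load / len(remaining)
        if load_per_link <= headroom:
            asleep.append(link)
    return asleep
```

With four links averaging 0.44, 0.10, 0.05 and 0.44 utilization, the sketch parks the two lightly loaded links and keeps the survivors under the headroom cap, mirroring the peaks-and-valleys observation in the conversation. A production system would of course also weigh topology, failure domains and SLAs before draining anything.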
Daniel Newman: And there's a lot of networking to be done, a lot of routing to be done, AE. One of the things I think is very interesting that Juniper has been working on is its universal routing platform. As networking continues to evolve, what we route and how we route and where we route needs more flexibility. We often work within a lot of constraints, but just like how data needs to be more malleable for AI, you're seeing all these new storage solutions coming out, because we don't want just file and block. We need, basically, a system that can read all different types of data and access it. Otherwise it just puts more strain on everything, on the network and on compute. So you're trying to solve this for routing. Talk a little bit about that.
AE Natarajan: Yeah. So this is interesting because we have been foremost in the innovation cycle around the control plane and admin plane in networks. What we mean by that is the ability to drive how the network gets built, and how the routers talk to each other to discover how the network is being built and the links in the network. Not only the links, but the capacity of the links, the throughput of the links, the current data flowing through the links, and the ability to provide feedback on those links in terms of delay, throughput, jitter and other metrics. We actively put elements into it and we constantly keep innovating on top of it. These innovations are spread across every one of the devices we build. Whether it is the MX, the PTX, the ACX, the QFX or even the EX switches, all of them carry this in a very seamless way, and they all react to it and build the total solutions needed for today's network. Whether it is enterprises, large service providers or cloud providers, all of them are able to use these elements to drive that performance. One of the embodiments of this is clearly our MX304. This recent platform that we launched has become the de facto way to on-ramp customers onto an AI network or AI cluster, which is amazing. Every major hyperscaler uses this platform to onboard their customers into their AI clusters, to do inferencing and other elements where they can query and do all of that. That's amazing. And this platform is rocking, and it's really catering to the needs of the people.
And by the way, this platform is built with the Trio silicon I talked to you about, which gives you the flexibility and the ability to program and change it for any growing and changing needs. Today we talk about something; tomorrow somebody comes and says, hey, I need these filters, I need these changes, I need these different kinds of things put in the packet. We can do that, and it makes it a lot easier and simpler for us to do.
Daniel Newman: Yeah, well like I said, I think fungibility is going to be very important as network standards continue to evolve, as we see ourselves flowing back and forth between high speed copper and high performing optics. And of course we've got so much change, and it's happening so fast. I mean, we're on annual cycles now, one year cycles, and I really don't envy the capex management that these hyperscalers are dealing with right now. But of course, I am absolutely thrilled with how exciting and fast this is, and how much money is being spent to build out this AI future, AE. Let's wrap up talking about something positive, something I think is really encouraging. I know that during this shift we've had a pretty big change in the landscape, a political shift, and this is not a political show, but we've seen a bit of a change in things like sustainability and how much it's been in focus. I think most of the tech industry agrees on its importance, not just for the more altruistic reasons, but also because, if we want to keep innovating, we need to be thoughtful about how we apply energy to solve problems. Actually, the tech industry has always been very thoughtful about this. But as we build out AI, AE, we see exponential growth. How do we keep committed to these zero emission goals, especially the hyperscalers? How do we stay committed when all of a sudden we've increased energy consumption by exponential amounts in just a year? And the truth is, they really can't. But you can, and you have basically doubled down. Juniper is saying no, we're sticking to it. We're staying with our Net Zero 2040 commitment. I'd love to hear a little bit more from you about how you've been able to do this, and why you feel it's important to stay committed to this net zero by 2040 goal that you put out there.
AE Natarajan: So the reason why this is important is because if you really look at how energy costs have actually gone up and up, it is even more significant in the rest of the world, and the United States has started seeing it too. You see your energy bills go up. And if you go back and talk to anybody who is building AI clusters, everybody thinks that those guys are paying huge bills to a certain silicon company here in the Valley. But they are also paying a much larger bill, a recurring bill, year over year, to the power companies. And if you dissect that budget, this bill is much bigger. Our ability to tell our customers, when they deploy Juniper technology, that they will have power efficiency to the finest extent as they grow and scale up their network, and don't necessarily have to burn more and more power, is important for us. The value and the commitment to make that happen requires us to drive it through every aspect of what we do, starting with our silicon, which is the most power efficient. Tomorrow we're going to be talking about space technology, because we have to launch these routers into space, into low earth orbit devices and other things. Guess what? There are no power generators up there. You need to be extremely power efficient in space. And guess what, our chips are among the most efficient ones capable of being launched into space. That is the reason why we continuously commit ourselves to building these kinds of chips, these kinds of devices and solutions, and continue to drive the power efficiency commitments that are important for us all the way through. If not today, then tomorrow it is going to happen. And you're absolutely right, everybody is power hungry with AI clusters. People are consuming more power.
If everybody pays attention to it, in the longer term, it'll become a lot easier for us to build much more scalable networks and we want to drive that innovation.
Daniel Newman: Well, I want to thank you so much for taking some time with me here on the Six Five. It was great to sort of get the story and really tie some themes together. I think sometimes we want to look at sustainability, AI, you know, we want to look at networking. We kind of look at these things as all these kinds of silos. And I think the one thing we're learning, especially as we move from this first era of AI and ML to this generative era to really this agentic future is increasingly, you know, it's, it's outcome driven. And the outcome driven includes kind of the incorporation of we got to manage the power situation. We've got to get data from point A to point B. High fidelity, low latency. You know, we need enough compute power to get this done. We of course need software that overarches all this stuff and gives whether it's flexibility in routing, you know, where your engineers can design routing, or it's the ISVs being able to code on top of this stuff and be able to get selected solutions that work to people. It sounds like Juniper's doing a lot. I know there's a lot going on there. You're in the middle of a pretty big deal. Crossing my fingers you're able to get that over the line. But it was just great to spend some time with you. Love to have you back on the show. Let's talk again soon, eh?
AE Natarajan: Oh, thank you. Thank you very much. Really excited to be on the show and I really enjoy it because like you said, it's all coming together and if you don't see it as a full picture, you're missing the plot. Right. So we're, we're glad we're seeing the full picture and we're driving towards all that. Thank you.
Daniel Newman: Appreciate it very much, and I appreciate everyone out there for tuning in to this episode of Six Five On The Road. Great conversation here. Hit that subscribe button and join us for all of our content. We have a lot of great content here on The Six Five. But for this episode, it's time to say goodbye. I'll see you all later.