AI & Us: Insights, Predictions, and Strategies for 2025

Are you an #AI skeptic? Find out why Dell Technologies and MIT Media Lab are optimistic about the future of AI. Host David Nicholson is joined by Dell’s CTO & CAIO, John Roese, and MIT Media Lab’s Director, Dr. Dava Newman, for Part 1 of Dell’s new series: AI & Us. They discuss predictions and preparations for AI and the importance of human-centered design, transparency, and trust.

📺 Tune in for:

  • The evolution of AI technologies and their expected impact on society and industry by 2025
  • Insights into the most exciting areas of AI research and development at Dell and MIT
  • Why this moment is not just about speed, but a qualitative shift in how we integrate technology into our lives
  • How the quest for AGI will continue alongside practical applications of AI that improve existing processes
  • Why Gen AI is just the beginning: get ready for Gen Bio, large nature models, and AI-powered breakthroughs in healthcare and beyond
  • Why governance is crucial, and how a collaborative approach built on public-private partnerships can ensure AI is ethical, responsible, and benefits everyone
  • AI for good: AI’s potential to address complex global issues like climate change and to revolutionize healthcare and education

Learn more at Dell.

Watch the video below at Six Five Media and be sure to subscribe to our YouTube channel so you never miss an episode.

Transcript

David Nicholson: Welcome to AI and Us, a series where we explore our future together in the age of artificial intelligence. In this episode, we’ll be making predictions about how AI will shape that future in the coming year and beyond. I’m Dave Nicholson with The Futurum Group, and I am joined by Dava Newman, Director of the MIT Media Lab and Apollo Program Professor of Astronautics. Welcome, Dava. And John Roese, Global Chief Technology Officer and Chief AI Officer from Dell Technologies. Welcome, John. So John, when we talk about AI, do you see this as a revolution or an evolution? Is this a movie we’ve been to before, or is this something that is meaningfully different?

John Roese: Machines are doing more work for us; the revolutionary part is this cycle. The ability to move cognitive work into the machine layer is empowered by every technology innovation we have made over the last hundred years: the internet, the internet protocol, advanced computing, advanced data services. And because of that, it is happening at a speed that we have never seen before. It is not only gigantically impactful, with potentially 30% of the world’s work suddenly getting done in a different way, but it’s also happening in a matter of years, not decades, which is very different than any other cycle we’ve ever been in. So yeah, this is just a logical progression of productivity improvement on a global scale. It’s just happening faster and in more interesting and novel ways than we’ve ever seen before in the last, I don’t know, 10,000 years of humanity. So yeah, it’s pretty interesting.

David Nicholson: Dava, as someone who regularly thinks outside the boundaries of humanity and even our solar system, do you think that this is something different? Is this just a quantitative difference or is there a true qualitative difference in AI and what’s to come?

Dr. Dava Newman: Thank you, Dave, and it’s so wonderful to join you, John. Well, AI advancement is certainly built on increased computational power, but we are seeing a profound shift that goes way beyond the quantitative increase. Maybe the way to look at this shift is that it’s about re-imagining how AI integrates with and enhances our human capabilities. At the Media Lab, we say we’re helping people re-imagine and redesign their lives, and that’s where I think we’re at now. And also addressing really complex global issues. We live in this chaos and messiness and complexity. What’s happening now is AI moving beyond narrow tasks and becoming adaptive, responsive systems capable of supporting critical human decisions. So if we take a look at this multifaceted, interlinked complexity, whether it be climate change, healthcare, even living in our communities, it’s not just processing information faster. Now we can simulate potential outcomes, looking into those far futures with greater fidelity, and it comes with all the messiness of our real-world complexities.

That’s the promise in workforce development, too: training for all the really important skills, but with personalized support, helping close some of the learning gaps by really appreciating everyone’s diversity and different backgrounds. So taking a look at this moment in AI, I do think it’s fundamentally a qualitative transformation. That’s exciting. That’s the hope, that’s the optimism. Here at the Media Lab we want to make sure to ask the really hard questions as well. We have to ensure that AI is ethical, responsible, trustable. So that’s a true difference, a true difference from just going faster. We have to walk it back, and they’re really human questions: it’s about humans interacting with AI, augmenting that human capability.

David Nicholson: So let’s actually get into the predictions for the future. John, let’s start with you. And if we can look at this from both short-term and longer-term horizons, that would be great. Starting with the short term: is it all going to be fantastic? Is there going to be some pain along the way? What are your thoughts?

John Roese: Yeah, today the way we experience AI is as a big black box of magic somewhere else that we interact with through an app, and it’s pretty powerful as we think about it today. But that’s not really what it’s going to be long term. The first dimension is that it’s very likely most AI interactions aren’t necessarily going to involve us, as AI systems become able to interact with our physical world and with our other machine systems. So that clearly is going to happen.

The second instantiation of AI, though, is to break up the big black box of magic. There’s an emerging technology called agentic AI, or agent technologies. What it really means, and it’s an overused term, is this: imagine a world where, instead of having one big monolithic AI, you have many AIs, still often based on large language models or small language models, but each of them tuned and optimized to do a task. Maybe one’s very good at writing software, one’s very good at analyzing financial data, one’s very good at interacting with robots. And the idea is that as you start to instantiate AI as agents, and those agents have specialized skills and can work with each other, two very interesting things happen that solve two of the big problems in AI today. The first is transparency. Trying to see inside of a big black box of magic is very hard. But if you look at a sequence of tasks done by a distributed pool of agents, even if you can’t look inside each agent, if the collective work always ends up producing bad software, you can go find the agent that writes the software and change it; you know where the problem is.

The second is that you’re incorporating a human into a machine system to make sure that ethics are in fact maintained. So this idea that AI will always be a black box of magic that only spits out text on an app, no, it’s going to be much more pervasive in every dimension of our physical world. It also is going to be very, very adaptive and specialized, and it will appear more as digital skills than simply some random technology. And when we actually have that outcome, the interweaving of humans and machines starts to become very real. That is actually the outcome we’re shooting for. Nobody is trying to move everything to the machines. We’re just trying to make humanity better. And it turns out, it always works better when humans do what they’re good at and machines do what they’re good at. But it works even better when they do it together, they speak the same language, and they work together as a system. That is, in fact, what is happening right now, and it will probably characterize our industry for the next several years as people learn the new buzzword: agentic.
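To make the agentic pattern John describes a bit more concrete, here is a minimal sketch in Python: many small, specialized agents composed into a pipeline whose trace makes the collective work auditable. The agent names, the skill stubs, and the trace log are hypothetical illustrations of the pattern, not any vendor’s actual implementation.

```python
# Sketch of the agentic pattern: specialized agents instead of one
# monolithic model, with a trace that shows which agent produced what.
# All names and "skills" here are hypothetical stand-ins.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str                     # e.g. "code_writer", "financial_analyst"
    skill: Callable[[str], str]   # stands in for a tuned LLM/SLM call

@dataclass
class Pipeline:
    agents: list[Agent]
    trace: list[tuple[str, str]] = field(default_factory=list)

    def run(self, task: str) -> str:
        # Each agent transforms the work product of the previous one.
        result = task
        for agent in self.agents:
            result = agent.skill(result)
            # Record who did what: if the collective output is bad,
            # the trace points at the specialized agent to fix.
            self.trace.append((agent.name, result))
        return result

# Hypothetical usage with stub skills standing in for tuned models.
pipeline = Pipeline(agents=[
    Agent("spec_writer", lambda t: f"spec({t})"),
    Agent("code_writer", lambda t: f"code({t})"),
    Agent("reviewer",    lambda t: f"review({t})"),  # human-in-the-loop hook
])
print(pipeline.run("build a billing report"))
print(pipeline.trace)  # inspect per-agent outputs for accountability
```

The design choice this illustrates is the transparency argument: even treating each agent as opaque, the per-step trace localizes failures to a single, replaceable component.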

David Nicholson: So Dava, throw some logs on this conversational fire. If I asked you for your top three predictions, what would they be? But I’m frankly more interested in the ones that you are the most passionate about. Then part three of my nine-part question is how do we have to start thinking about things like governance? How do they fit in with your predictions? But start out, what are you thinking? You heard what John had to say, what do you say?

Dr. Dava Newman: I’d like to riff on what John said. First of all, I couldn’t agree more, and then I’ll give the top three. But just imagine. That’s what we like to do: imagine that future where it’s not just our LLMs, no, no, but these large nature models. Imagine what we could ingest. Not just gen AI; get ready, gen bio is coming. I mean, that’s super exciting. Think of these as models representing evolution. These would be new model ecosystems, drawing on massive interdisciplinary data sets. So you’re consuming all the earth science, ecology, our climate studies, and the real building blocks of life, literally the biology. That helps us simulate, look into the future: what are some environmental outcomes? What if we could do that in real time with actionable insights? So this is just the prep. I’m so excited, but again, it’s going to be human-centered.

We have to take action. So what are those insights that help us make decisions? We’re the decision-makers, the people at the table, but with these really powerful models coming in the future, from the sustainable decisions we want to make to urban planning, you name it. So that’s my one to riff off of what John was saying: let’s look into that future with liquid AI, let’s look at large nature models. It won’t be just LLMs; with LLMs, we’re at the infancy. That’s basic, that’s early, but in the not-too-distant future it’s really going to ramp up, and that’s super exciting to me. Again, the building blocks of life. So that makes me think about what comes after gen AI: right on its heels is gen bio. What’s that? If you haven’t heard of it before, it’s generative bio, ’cause that’s again going back to healthcare.

Right now we have a pretty static approach. It’s going to become dynamic: data-rich modeling, personalization, hopefully really accelerating breakthroughs when we can combine AI and biology. Wouldn’t you love to think about totally new drugs, new pathways, personalized medicine? We’re just at the beginning of this preventative healthcare revolution when we combine AI and biology. And for number three, I’m going to go back to learning and education, which really demands AI collaboration. It’s not replacing anyone. It’s really empowering all of our students. And what about lifelong learning and skill adaptability?

So if we look at this, it’s the people plus all of our incredible new AI tools; again, it’s many, it’s multiple. Then we think about moving from static tools to really interactive ones, back to real time. What data can you serve up in real time? And it’s not just one person. As an engineer, I believe in teamwork; getting people to the Moon and Mars, it’s always teamwork. So we take the best of everyone, and now AI is just a partner, a partner around the table that helps our teamwork. Thinking again about that future of knowledge, that future of education: if we get it right and it’s trusted, accessible, and equitable for all, I think that’d be a pretty nice result for society.

David Nicholson: It’s one thing to be concerned from a governance perspective about what the next word is that the model predicts, leading to things like misinformation and disinformation. But Dava, when you talk about the amazing opportunity that AI represents for us in the fields of bio and health sciences, governance starts getting even a little scarier, doesn’t it? I want to hear your thoughts on that, because you brought up those two subjects, and then we’ll let John chime in as well. But Dava, from a governance perspective, what do we have to do to make sure this genie doesn’t completely get out of the bottle?

Dr. Dava Newman: Yeah, thank you for the question. It’s really important. We’re behind. We don’t have the standards in place. We don’t have the policies in place today. So let’s convene, let’s get around the table, and we have to take a look at these as global issues. It’s not one size fits all, but we can look at best practices, and we can all work with each other. I do believe that we need some guardrails. We should always be asking those fundamental questions: Is it trusted? Is it responsible? As researchers, let’s run through these scenarios, but let’s not let these things, in my opinion, out into the wild until we really know that they’re very robust.

And let’s spend a lot of time on the foundational models, too. We’ve talked about big complexity, but foundational models that are simpler can be very precise. We know everything that goes into the training of those models and their predictive capabilities; I think that’s important as well. So it’s not one size fits all: there are some of these very, very large models, and then some really high-precision foundational models. That’s also where I think we have the opportunity to hold ourselves accountable, put in the right standards, and put in the right policies about how these should be designed and how they should be deployed.

David Nicholson: Yeah. John, you’re in a unique position because you straddle the divide between academia and industry. But for the purposes of this conversation, I definitely want to double-click on your industry expertise. What is the private sector’s responsibility in all this? And in general terms, what are your thoughts on governance?

John Roese: Yeah, we have to realize AI is a very big continuum, and trying to regulate all of it under one regulatory framework is like saying healthcare regulations are going to govern transportation. That is literally what we’re attempting to do. Those are very different environments, and fundamentally, you have to think about AI not as a single domain: you may need best practices for enterprise use, best practices for things that reach our children and influence our populations, or that touch the political spectrum, or that touch regulated environments. It’s going to be more complex. And our fear is that oversimplification, trying to move fast just to feel good, is creating this chaos of 700 conflicting regulations, and quite frankly, none of them actually reflective of what the real technology is. So it’s a bad situation, but it is solvable, and it will be solved with public-private partnership. And my recommendation to most of the policymakers is: you cannot do this in government. You have to do it with industry.

Because if you are not at the front of this technology as it’s being built and deployed and learning from it at the same time you’re trying to regulate it, you will miss something dramatic. And so that’s why, quite frankly, doing the work in the open, sharing it and making it accessible to our policymakers is critical to make sure that we actually have effective AI regulation going forward. But the bottom line is we will never regulate this technology. We can only regulate what we want it to do to our society and what positive or negative impacts it could potentially have.

David Nicholson: Dava, are you generally hopeful about all of this? That with all of what John just went through, the very, very real considerations that we have in front of us, are you hopeful that we’re going to be able to figure this out and that artificial intelligence on balance is going to uplift humanity?

Dr. Dava Newman: I am, because we’re in charge, and that means the people. We all have to believe in alliances. John mentioned public-private partnerships; we have academia, where we’re doing the research, and we work hand in hand with our industry partners. We all have to be at the table. You want those policymakers at the table with you as well, showing the simulations, showing the scenarios, and not getting caught up in the near term. Look further out when we’re talking about these things. If we get it right, what does it look like when we get it right? And I want to go right back to the transparency and the trustworthiness. You have to show people what you’re doing, run those scenarios, and always have the researchers at the table, have industry, have some government decision makers, so that we can all work together. It really requires that type of approach rather than being in our silos.

I think that alliance is really important. We want to get it right. I am hopeful, but we have to be, again, very upfront about asking the right questions, not just unleashing the technology. Who’s it for? Who’s it helping? We can run those case studies. Is it equitable? Is it working for all? What did we miss? Okay, what other training do we need? How can we make it better? Always checking ourselves. It is very dynamic. It’s moving fast, so there’s no one answer. It’s complex; let’s admit that. And as a systems engineer, I actually love to work in the complexity and think about that network and what we can get right. So let’s not be overwhelmed: working together is the only way I see going forward with some of the regulations, some of the standards. It’s exciting because it really will change, I think, life and work as we know it. But let’s keep an eye on making sure that it works for everyone.

David Nicholson: Whenever we’re talking about issues facing humanity, the subjects of climate and sustainability inevitably come up. What’s the connection with AI, Dava, from your perspective?

Dr. Dava Newman: We have a huge opportunity. Take a look at climate and all of our climate models: those are huge, enormous, holistic, worldwide models, and they use a lot of energy. So what we talk about is the dual challenge of energy and climate, right? The world needs fast, cheap, affordable energy; that’s just factual. So what are we going to do to hit our climate goals? In comes AI, especially when we put in the physics: physics-informed neural nets. How do we downscale? What we have the capability to do now is bridge between AI, our machine learning tools, and the climate models, and make them much, much faster, a thousand times faster. That’s a lot less energy, which we all need. And with these results, what I call satellite imagery of the future, we’re getting so much data; it’s eyes on Earth from space. More than 50% of all of our climate variables, our vital signs, are now measured from space.

So with just that, we can look at that data and show people not just, holistically, what’s happening on Earth with our climate vital signs, carbon dioxide emissions, methane emissions, but zoom right down to your zip code. That’s what you care about: where you work, where your plans are, where you live; you care about your family and friends and your neighbors. Serving up that data is what our new machine learning tools, again built on a physics-informed basis and algorithms, can really do, putting climate in your pocket. I call it the climate pocket. And this is where we give humans agency: what are your actions? Armed with all that knowledge, all that data, what can you do about it? So we’re empowering people. We need the story to be compassionate, compelling; we all love and care about Earth, right? Earth’s going to be just fine. It’s about us, the humans, and what decisions we make.
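For readers curious what “putting in the physics” means in practice, here is a toy sketch of a physics-informed neural network (PINN) in Python with PyTorch. The equation (du/dx = -u with u(0) = 1), network size, and loss weighting are illustrative assumptions; real climate emulators apply the same idea, a physics-residual term in the training loss, at vastly larger scale.

```python
# Toy physics-informed neural net (PINN): the loss penalizes violations
# of a known physical law, so the model needs far less labeled data than
# a pure black box. The ODE du/dx = -u, u(0) = 1 is a stand-in example.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)  # collocation points in [0, 1]
    u = net(x)
    # Autograd supplies du/dx, so the physics residual needs no labels.
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()        # enforce du/dx = -u
    bc_loss = (net(torch.zeros(1, 1)) - 1.0) ** 2   # enforce u(0) = 1
    loss = physics_loss + bc_loss.squeeze()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained net should approximate the true solution u(x) = exp(-x).
print(net(torch.tensor([[0.5]])).item())  # expect roughly 0.6065
```

The speedups Dava mentions come from this kind of trade: once a physics-constrained surrogate is trained, evaluating it is orders of magnitude cheaper than re-running a full numerical climate model.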

David Nicholson: John, some final thoughts. I homed in on the “bad, but solvable” comment you made, because I think it’s brilliant, and it’s actually the kind of adults-in-the-room perspective that I think we need to move forward. But what are your final thoughts on that? Are you overall hopeful?

John Roese: I do, I really believe, and I have a unique position: I am implementing this in one of the largest global multinational companies in the world. So it is not theoretical. We have put these technologies into production. We have used them very specifically against processes and functions in our company, whether it be software development, or how we service our customers, how we sell, how we build products. And we have seen material impacts. We have seen it bend the curve, not just in terms of economics, but in terms of customer satisfaction and quality of experience. And this is very early. So what I would consider to be version one of this modern gen AI cycle has already yielded significant impact in a positive way.

It has also given us clarity about what our company looks like in the next five years and what humans’ role in this company will be. And as we wrestle through these questions, we’re getting to a conclusion: at the end of it, I am incredibly hopeful, because all I see is a future in which humanity is healthier, happier, more productive. We wrestle to the ground some of the great challenges that we’ve had for decades, like antibiotic resistance, personalized healthcare, and curing cancers of various flavors. And you just cannot help being optimistic about what that means. At the same time, back to being pragmatic: it will be an interesting journey. It’s happening in the same kind of cycle as the industrial revolution, but in one hundredth of the timeframe. And that will be bumpy, and it will require us to be patient and to be careful. But the bottom line is, as we get through the knothole to the other end, every time we put these technologies into production in a trustworthy, intelligent way, things get better, and they get better collectively, not just for a few. So I am very optimistic, but like all technology, we have to be diligent. We have to realize it works in service of humanity, and it’s on us to guide it.

David Nicholson: Well, folks, if we have people like Dava and John responsible for AI, I think AI is in good hands, and it’s good to know that those are human hands. It’s important to understand that we are at the very beginning stages of this journey. This really is about AI and Us; it’s not about AI controlling us. For Dava Newman and John Roese, I am Dave Nicholson with The Futurum Group. Thanks so much for tuning into this conversation. I hope you’ve found it thought-provoking.
