What’s New with Azure AI at Ignite 2024

🔥 New Azure AI announcements coming in hot from #MSIgnite

Catch the latest updates in this episode of Six Five On The Road at Microsoft Ignite. Hosts Patrick Moorhead and Daniel Newman are joined by Microsoft’s Asha Sharma and Sarah Bird for a conversation on new developments in Azure AI and their impact on building generative AI applications.

Tune in as they cover 👇

- The Azure AI ecosystem for developers
- The evolution of generative and multimodal AI over the past year
- Azure’s solutions to customer needs and challenges building #GenAI apps
- Key announcements from Ignite, including Azure AI Foundry
- Strategies for building consumer trust in AI usage and the future of AI agents

Learn more at Microsoft Azure.

Watch the video below at Six Five Media at Microsoft Ignite, and be sure to subscribe to our YouTube channel so you never miss an episode.

Transcript

Patrick Moorhead: The Six Five is On the Road here at Microsoft Ignite 2024. We are in Chicago and we have been talking enterprise AI for the entire week. And Daniel, what’s incredible to me is the multiple modalities that Microsoft brings to the table for enterprise AI. Whether you’re a user, an IT professional, or a developer, if you want to consume it differently, if you want to consume it as IaaS, PaaS, or SaaS, they pretty much have everything for you.

Daniel Newman: Yeah, the week has been very interesting, Pat. The amount of launches, disclosures, new technologies, and products has been palpable, and of course it’s all tied together around a blanket of security and responsibility. We had the chance, you and I, to spend some time with Satya, and it was really good to get the lens from him, taking it from left to right. And then of course, for everybody out there, we had many Microsoft executives in here; Scott Guthrie and others have come and joined us to talk across the portfolio: hardware, software, platforms, developers, security, responsibility. And so it’s been a really big week, and for everyone out there, I think it’s a lot to consume, so hopefully we can help break that down.

Patrick Moorhead: For sure, and Azure AI was a big one. And one of the things that Microsoft always reinforced was, okay, there’s all this great stuff you can do, and here’s how you can do it. But it’s buttressed by not only security, but also governance and responsible AI. They’ll say, well, of course, for enterprise that’s top of mind, but if you’re delivering consumer services, that’s important to you too. It’s pretty much important to all of these companies.

Daniel Newman: You had me at buttressed.

Patrick Moorhead: Okay, there we go. That’s a real word. With that said, I’m pleased to introduce Asha and Sarah, great to see you, first time on The Six Five. I know it’s been a huge week. Thank you for spending time with us.

Asha Sharma: Thanks for having us.

Sarah Bird: Thanks for having us.

Daniel Newman: It’s great to have both of you. I’ve followed and watched both of your work over the last few years. Generative AI has risen into the consciousness of the whole world in two years. It’s changed everything from economies and markets to, of course, our tech sector. Asha, I’d love to get that first take from you. I mean, two years ago, ChatGPT, Microsoft made this big partnership announcement, exploded onto the scene. Now we’ve got multimodal, we’ve got just so much change and innovation. Talk about how you see its evolution over the last year and your customers. How are you seeing them evolve?

Asha Sharma: I mean, it’s been a wild year.

Sarah Bird: Couple of years.

Asha Sharma: Yeah, this year it feels like 10 years, but it’s been tremendous. I mean, if you think about it, we were releasing a model every six weeks not too long ago, and now there’s a new one coming out almost every hour. There are more than 1,800 models on our platform alone, and we’re seeing the rise of open-source models, task-based models, industry models. And I think what we’re seeing is the intelligence starting to mirror more of human intelligence, so different modalities are extremely popular right now. And I think we’re going to continue to see that. We’re also seeing the rise of agents everywhere. So instead of these simple PoCs, our customers want to automate all of their workflows. And so more and more, I think, applications are going to start to become really dynamic, and they’re going to learn and iterate over time. And so we’re seeing lots of customers move in that direction. I think last season was the season of experimentation and prototyping, and now this is the season of production and scale and figuring out how to do that responsibly, which we spend a lot of time on.

Sarah Bird: Yeah, I think we’ve also really seen the rise of the importance of trust in these systems. Microsoft has been working in responsible AI for a very long time now, and we’ve been talking about why this is important, but really in the last year we’ve seen the rest of the world fully understand that. And so now when we talk to enterprise organizations, one of the first things they’re asking is exactly this: what are the risks I need to address here? How do I address those effectively? How do I govern my systems? And that’s, of course, increasing even more this year as AI has started to become regulated and we’re seeing laws appear around the world. We’re focused a lot on the EU AI Act coming into force this year, and so there’s lots more interest and questions from customers about how they get AI to be regulation ready.

Patrick Moorhead: So Asha, I’m a recovering product person. I did that.

Asha Sharma: I’m sorry.

Patrick Moorhead: I did that for 20 years before I created my analyst firm. But I’m thinking through the rate of change and how you would put a product strategy against this, something that might have to have legs for five years. I’m just wondering, how are you evolving? What’s your strategy to keep up with the change, deliver the innovation, and stay within, I would say, the box of responsible AI?

Asha Sharma: I mean, I think everyone’s figuring it out as we go. We are in that boat as well. Consumption on our platform for the Azure OpenAI Service has doubled in the last six months alone. And so when you start to see numbers and usage like that, it’s a fun challenge. The way that we think about product shape and product strategy is a few-fold. One is, I don’t think about it in three to five years. I think about it in seasons, and seasons are marked by secular changes. And the secular change we’re in the middle of is that every developer is going to be an AI developer. Every organization is going to want to customize their application to get a set of outcomes: the value, the cost, et cetera. Every organization is going to need to start thinking about trust by default and not as an afterthought. And because we feel this new season, that’s why we introduced a new platform this week called the Foundry. And the idea is that, hey, everybody is moving in this direction. How do we have the building blocks for everybody? How do we make sure it’s modular? How do we make sure it’s open so that as things change, we’re not boxed in, our customers aren’t boxed in, and they can learn and grow?

Sarah Bird: And on the trust side, one of the things that’s so great about doing this as Microsoft is that we ship maybe more AI than any other organization on the planet. And so we get to learn from our own experiences also on what’s working and what’s not working and that allows us to have a really tight iteration cycle of trying out new innovations in this space. And so while it feels like we’re changing really quickly and we’re constantly experimenting, everything starts with our AI principles, which haven’t changed. And so we’ve always been grounded in what we’re trying to achieve with the AI system. And then we’re just regularly experimenting with how and what is the latest technique for that.

And actually one of the coolest things about generative AI technology is that it’s actually been a huge breakthrough for how we can do responsible AI. It’s a really important tool. For example, our Azure AI Content Safety, which is the safety system we put around all of the generative AI models that we ship, that’s built on the latest generative AI technology and it would not have been possible to do that a couple of years ago. And so we experiment with the new techniques, but the foundation is still the same.

Patrick Moorhead: So it sounds like it’s about putting together the right architecture, one that builds in flexibility on the things that are going to fundamentally change over time. And that’s a lot harder than just putting something out there that might have legs for a year and then, oh, let’s rebuild it when something new happens.

Sarah Bird: But we’re building on the foundation we already have. So for example, for a long time we’ve been talking about zero trust architectures. That’s something that still applies now and we just adapt it for what does it mean in the world of AI, but we’re not starting from scratch on any of this.

Daniel Newman: So Asha, let’s pivot to the developers a little more deeply, a couple of product people here. You guys want to talk about that.

Patrick Moorhead: I’m a former product person.

Daniel Newman: Oh, come on. But y’all had a big slate of announcements, and some of them were very centric to developers. Do you want to share a little bit about the announcements that developers should be really excited about? Maybe a little more on Azure AI Foundry, because that seems like a really big opportunity and something developers need to be paying attention to if they’re not already.

Asha Sharma: Yeah, so we envision a world where every application becomes an agentic application. And building that, hosting it, running it, scaling it, and securing it, that’s what Foundry is for. So Foundry is modular. It’s flexible. You can bring your own data, you can bring your own tools. We want developers to use what is most familiar to them and what is most helpful, so that’s a big thing that we announced. We announced partnerships with Weights & Biases, Statsig, Gretel, and a number of others in the ecosystem. I think the second thing worth knowing is the SDK. We’re starting to think a lot about how we make it super simple to get set up when you’re building your application, so that as the models change, as the technology changes, you just have one simple place to go and grab all these libraries.

The third thing that we talked a lot about is just meeting developers where they are. We have the world’s largest and most loved IDEs, and why make developers go somewhere else when they’re in the flow of development? And so we announced a number of tools around that. Specifically, I’m very excited about RAG coming into GitHub. The model is only as good as its memory, and that’s what we’ve seen RAG provide in the world. And then we made a number of announcements around safety that I think will be very important for developers too.
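For developers curious what that SDK flow looks like in practice, here is a minimal sketch in Python, assuming the azure-ai-projects preview package that shipped alongside Foundry; the connection string and deployment name are placeholders, and exact names may differ between preview releases.

```python
# Minimal sketch: one client surface for a Foundry project.
# Assumes the azure-ai-projects preview package; names may vary by release.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Placeholder connection string, copied from the Foundry project overview page.
project = AIProjectClient.from_connection_string(
    conn_str="<your-project-connection-string>",
    credential=DefaultAzureCredential(),
)

# The same inference client keeps working as deployed models change underneath.
chat = project.inference.get_chat_completions_client()
response = chat.complete(
    model="gpt-4o",  # hypothetical deployment name in your project
    messages=[{"role": "user", "content": "Summarize yesterday's support tickets."}],
)
print(response.choices[0].message.content)
```

The design point Asha describes is visible here: the model is just a parameter, so swapping models or libraries does not mean rewriting the integration.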

Sarah Bird: One of the things we want to do with Foundry is make sure that when developers are adopting AI, they’re starting from the safety and security best practices by default. So for example, Azure AI Content Safety, which I mentioned, is integrated by default around models, so they already have that best practice of a safety system built in, but then they can configure it in the way that makes sense for their application. And here at Ignite, we launched a couple of new things that I’m really excited about. A really important part of developers being successful and really able to use this technology is the ability to evaluate it. And so we’ve been investing a lot in tools to make it easy to evaluate applications for quality, but also for risk and safety. And so one of the things we announced here is bringing that capability to images as well.

So not just being able to test text; as Asha said, multimodal is a huge area where we’re seeing growing adoption, and so we’re bringing our evaluation suites there. The other thing, I mentioned regulation earlier, is that people are really focused on how they govern their AI systems. And so we announced AI reports, which bring forward some of the key things you need to document about your AI system so that you can use it. And we have great partnerships with Credo AI and Saidot that we just announced, to integrate with the governance pieces we’re developing in the Foundry, so that they can have those great compliance workflows and you can take those even to different clouds.
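As a rough illustration of the kind of safety system Sarah describes sitting around model output, here is a minimal sketch that calls Azure AI Content Safety directly from Python; the endpoint and key are placeholders, and screening each response this way is just one possible integration point.

```python
# Minimal sketch: screen a model's text output with Azure AI Content Safety.
# Endpoint and key are placeholders for a Content Safety resource.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a candidate model response before showing it to the user.
result = client.analyze_text(AnalyzeTextOptions(text="candidate model output"))
for analysis in result.categories_analysis:
    # Severity 0 means no harm detected; higher values mean higher severity.
    print(analysis.category, analysis.severity)
```

In Foundry this screening is wired in around hosted models by default, which is the “best practice by default” point above; calling the service directly like this is how you would apply the same guardrail to content coming from anywhere else.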

Patrick Moorhead: So Sarah, I’ve chronicled multiple inflection points in the industry, and I think we all have. I mean, there was social, local, mobile, there was the web, there was e-commerce. And in every one of these shifts there were a lot of variables that went into adoption, but there was the what can it do for me, how much does it cost, and can I trust it? I remember when e-commerce first came up, literally like 95% of people did not trust putting their credit card in on the web. And I think here with AI, it’s very clear. I mean, I talk to enterprises all the time, and the second or third thing they might talk about, after the what can it do for me, is trust. And for consumers, I saw some numbers thrown around that 72% of consumers were actually interested in a company’s policy on this. How can companies increase the level of trust with their constituents?

Sarah Bird: So obviously one of the things is to build with the best practices, and Foundry is a great start in helping you do that. But another element is for an organization to provide the right level of transparency. Microsoft, for example, does this at multiple levels. Our Responsible AI Standard is something that we actually published several years ago so people can see the standard we’re holding ourselves to, but then we also want people to understand how we put that into practice. And so just this year we published our first transparency report, which talks about how Microsoft does our responsible AI practice along many different dimensions. And another thing we’ve been doing for a long time is explaining how the specific technology works.

And so we, of course, have security commitments and privacy commitments and all of those things we make public. But another thing we do is for the AI models we develop, we put out a transparency note that tells you what the model does well, what you should think about when picking a use case, what are the different risks that you might need to address so that we’re transparent about how we built it and what we know about it so you can use it successfully. And then, of course, you want to design the user experience to bring those users in and help build trust. And so we have something called the HAX Toolkit that puts those user experience best practices out so people can adopt them as well. And so you have to think about this multi-layered approach of how you’re building trust and having the right level of transparency.

Patrick Moorhead: That’s good.

Daniel Newman: It’s really interesting. We’re in this super speedway right now of how fast you go and what kind of car you put on the track. I love that, because you can go as fast as you want, but at the same time, risk and responsibility are the gating factor. And it seems that there’s a continuum, not just within tech, but also geographically driven. We see sovereignty issues across different parts of the world. In the US, I think we’re a go, go, go part of the world: we’ll figure it out, like, oh, we broke the rule, let’s pull it back. Whereas in other parts, they’re like, we’re not going to try. And so you guys have to enable every part of the world, which is this interesting construct, because everyone wants to do it, but how fast we go is really to be determined.

Sarah Bird: People often bring up this trade-off between how quickly you move and responsibility, and I mentioned that Microsoft has been doing our responsible AI practice for more than eight years now. And so we’ve been working for years to make sure that we can go fast and that we can innovate, because it’s a bad situation when you have to feel like it’s a trade-off. So we don’t think of AI innovation as separate from responsible AI innovation, because it’s also part of just building a quality product. Users don’t want a product that’s doing something they don’t expect or that has errors. It’s just how you need to build AI. And so a lot of our innovation has been about removing that trade-off, having it really be this is just the way you build AI, so you’re not thinking about whether you want to take more risk or not, because we’re going to make it such that you can innovate and have appropriate risk at the same time.

Daniel Newman: Yeah, that’s definitely the goal.

Asha Sharma: I was going to say, I think at the end of the day there is no business without trust. There’s no AI without trust. And I think it’s the most important feature, not a constraint. I think it drives growth. I think it drives the ability to meet customer needs. And so I very much agree with what Sarah said. I mean, Sarah’s team builds all of our evaluation capabilities. They’re helping customers just build the best-quality application, which includes safety. It includes security, it includes all those things. And when you think about it that way, gosh, it is the thing that will drive the next frontier, not hold it back.

Daniel Newman: And it’s definitely a differentiator for Microsoft because Pat and I track a lot of companies doing AI, and I don’t think that’s universal. I’m just being candid. There’s a lot of companies that are going as fast as they can and then looking at, oh, how do we fix this now that it’s broken?

Asha Sharma: And Sarah’s org is in it, they are building the products. They’re not a separate-

Daniel Newman: Monitoring.

Asha Sharma: …compliance team that’s coming down. No, they are reinventing their own processes, reinventing their own products every single day. And so I think that mentality matters: if you put it on the side, it will be a side thing. If it is a core part of the product and everything we build in the platform, then it will be that.

Daniel Newman: Let’s-

Sarah Bird: Can I?

Daniel Newman: Go ahead.

Sarah Bird: This is also one of the things I feel very strongly about: Microsoft moving quickly in this space, because we want to set the standard to say this is just what’s expected. And the people that are out there first are setting the standard. And so it’s important to have industry standards and regulations and things that hold everyone to the same bar, but we also want an existence proof of what’s possible, that this doesn’t need to be a trade-off, and to make sure that customers in the world demand that of tech organizations.

Daniel Newman: So let’s wrap this thing up talking about agents. Agent and agentic have come into the… We talked about AI rising into the consciousness; in the last three to six months, all of a sudden, agents are the thing. They’re the future of apps. Enterprise apps are going to go away, agents are going to replace them. There are different theories. I’m not saying I agree with all of this, but I do have a thesis that, done well, you guys are meeting the moment where RPA always let people down, and the intelligence of AI can come together to actually do what automation has long been intended to do. So this is evolving quickly. Asha, I’d love to get your take: how quickly does this go, what are the capabilities of agents that you’re most excited about, and can you share how you see this changing the app ecosystem?

Asha Sharma: I mean, I think that it will move quickly. I think we’re already seeing it. I think we’ll start to see agents come closer to the models themselves. I think we’ll start to see agents completing microtasks almost wherever it makes sense. I don’t think about them as this crazy world of agents out there that’s really scary that you hear about. I just think about all the things that we do that could be automated so you can spend time doing other things. And so I think everything from how we write the applications themselves to how we run the applications to what applications do for customers will change. But I personally believe the app construct will stay. I just think they will become more intelligent. They will learn with you. They will improve, and they will do things on your behalf or assist you in ways that I think they can’t today.

Sarah Bird: But this is only going to be possible if we get the trust part right because if you are having a system take actions on your behalf, then you really need to trust it. And of course, with the systems taking actions in the real world, not just printing text on a screen, then the impact of an error can be a lot higher. And so this is a space where we’re spending a lot of time and extending the guardrails and the foundation we already have, the evaluation systems, but looking at how we adapt this to the agent use cases we’re seeing today, and of course, where we want to go with the technology.

Daniel Newman: Well, hopefully the agents can keep me from continuing to hallucinate off of the plan. I love this. Sarah and Asha, I want to thank you both so much for joining us here on The Six Five. Great conversation. I think we found a couple of hot buttons too.

Patrick Moorhead: Yeah, for sure.

Daniel Newman: It sounds like we’re both very, very excited about everything that’s going on, and we’re excited to hopefully have you back again soon.

Asha Sharma: Yeah, it was great being here.

Sarah Bird: Thanks for having us.

Asha Sharma: Thank you.

Daniel Newman: And thank you all for being part of The Six Five. We are On the Road here in Chicago at Microsoft Ignite 2024. Hit that subscribe button, be part of our community. Join us for all of our coverage here at the event, and of course, all of our content on The Six Five. But for Patrick and myself, it’s time to say goodbye.
