Today’s AI is Tomorrow’s Security Threat: Is Your Executive Team Ready?
How does AI complicate or advance malicious intent that could potentially cripple a company, and why does the executive team need to be prepared and accountable? Wayne Jackson, CEO of Sonatype, discusses the importance of software supply chain security, highlighting the power of AI, when fueled by the right data, to protect and safeguard organizations as they drive software innovation.
Transcript
Daniel Newman:
Hey, everyone. Welcome back to the Six Five Summit. Daniel Newman here, CEO of the Futurum Group. I’m excited for this next conversation, part of our DevOps track. We’ve got Sonatype joining us, and its CEO, Wayne Jackson. Wayne’s a first timer, and I couldn’t be more excited to have you, Wayne, as part of the conversation. Welcome to the Six Five Summit.
Wayne Jackson:
Thanks, Daniel. Great to be with you.
Daniel Newman:
It is. It’s really exciting times. Look, Sonatype, the company has a huge pedigree, very respected, very well known in the communities it serves. Six Five Summit, we go chips to SaaS. We’ve got executives across everything from semiconductors to automotive and so much more. Give me the quick 60 seconds on Sonatype for those that maybe aren’t familiar with all the work you’re doing.
Wayne Jackson:
I’m happy to. We’re very much focused on innovation by way of software. For those who aren’t aware, most software these days is assembled rather than being written, and most of the parts that are used in that assembly are open source. We think of all of that in a supply chain context, so helping organizations optimize their supply chains using the best projects, the best open source, and of course securing their supply chains to keep out emergent threats like malware.
Daniel Newman:
There are a lot of different problems that can be solved with software. I like how you used that assembly line analogy. Of course, anybody that’s following the GitHub growth, the Copilot innovations that have been discussed in the marketplace, you listen to even Nvidia CEO Jensen Huang talking about just how programming and code development are going to change, I imagine that you guys are identifying and seeing different new problems. What is the big problem that Sonatype is solving right now for its customers?
Wayne Jackson:
I think, probably a multidimensional problem. There are so many open source projects out there, tens of millions. We see literally a million to 2 million commits a week now. For developers, picking the highest quality projects, picking the sets of functionality that most align with their goals for innovation, is just not easy. Then you start to layer in the security dimension: now you’re asking folks who aren’t necessarily trained in making security decisions, and you’re entrusting them with making choices that directly relate to and affect the cyber hygiene of an organization. You may remember back to Struts, which was such an impactful event for companies like Equifax, and more recently Log4j, which was a global impact related to open source and open source vulnerabilities. Helping people avoid those kinds of scenarios is a huge responsibility. Then, as a more emergent phenomenon, we’re starting to see nefarious actors targeting open source ecosystems themselves and delivering malware into those ecosystems with the intent of actually compromising developers and development pipelines.
Daniel Newman:
Yeah, I’m glad you pointed that out. The session title is “Today’s AI is Tomorrow’s Security Threat.” I love that, because as I’m listening to you talk, I’m thinking to myself, there are these two parallel tracks. Develop critical applications as fast as you can, and then there’s the, “We’ve got to do that securely.” AI is accelerating things super fast. You can build apps really, really quickly. By the way, this was revolutionized even before these last few years of generative and speech-to-code and image-to-code, but even just with low-code and no-code, how fast app development got. But whose problem is this? The developers? Probably most of them would say that they’re not security. They’re focused on building the app. Is it the CISO? With problems of this size and security problems scaling as quickly as they are, whose job is it to solve this problem?
Wayne Jackson:
I think there’s a shared responsibility. The thing that’s become obvious to me is that the original design of application security and application-related hygiene, where you have a silo that is security and a silo that is innovation, and in between, there’s this thundercloud of dispute over who’s creating problems and who’s slowing down innovation… I think that model’s broken. Ultimately I think there’s a shared responsibility and hopefully the same kind of unification of function that we saw with QA. You may remember back in the old days when I was actually writing software, QA was a standalone function. Ultimately we had to see that function dissolve into the process of innovation, and I think you’re going to have to see the same thing happen with traditional AppSec. Now, what that means is that we’re going to have to get much better at delivering the ideals of the AppSec function into the innovation process itself, much in the way that we have with QA.
I think that’s a cultural shift, which is always a bit of a challenge, that has to happen. Companies like Sonatype, who are developing tooling and providing data and better insights for the development process, we have to get better at integrating into developer workflows and at helping security do the assurance work they need to do as developers are doing their thing. Because ultimately, developers are working because businesses need innovation. We can’t get in the way of that process, but as evidenced by some of these horrible events over the last few years, we can’t just turn a blind eye to the risks that are associated with vulnerable software.
Daniel Newman:
Now we enter AI. The whole event that we’re doing here, AI Unleashed. AI accelerates everything. AI drives faster innovation, faster adoption, a shorter diffusion of innovation into the market. It also creates… With every new technological wave, it creates just as many bad actors as good. Sometimes it feels like even more, given the speed of keeping up with and dealing with the bad actors. Like I said, there’s a lot of debate on the good and the bad of it, but when it comes to the security aspects of it, how much are you seeing AI complicating it? How much are we seeing more malicious intent? Has it been notable in your world? What are the potential risks? What are the downfalls for companies if they can’t figure out how to deal with this?
Wayne Jackson:
Well, I think most folks agree that one of the main challenges of especially generative AI in the context of cyber is that it makes developing malware easier. There are actual models out there on the market, on both the light net and the dark net, where people can acquire models to generate even very sophisticated polymorphic malware. That is going to make things a challenge. It’s just inevitable. But on the other hand, we’re getting better at leveraging models and AI to do malware detection. I mentioned the number of commits that we see, numbering a million to 2 million commit events per week. If we care about stopping the malware problem in open source, then we have to inspect every one of those commits, and doing that with humans would be practically impossible.
But fortunately, AI does make inspection and identification of suspicious behavior and illicit committer activity something that is possible. Insofar as AI is making malware more prolific, we’re also able to use AI to detect suspicious behavior and to stop the delivery of malware at the source, so it’s a bit of a double-edged sword. In terms of the use of models, one of the things that we’re also focused on is helping organizations get a handle on what models are being brought into their organizations. Because, as you know, they all have different uses of data, different copyright terms, and different implications in terms of the quality of output. As in the quote-unquote “old days,” when we were helping people get their hands around open source and creating transparency around the number of libraries in their infrastructure, we’re also trying to do the same thing for AI models.
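(For readers who want a concrete picture of what automated inspection at that scale can mean, here is a minimal, purely illustrative sketch: a toy heuristic that scores an open source release event on a few well-known red flags, such as a package name one edit away from a popular project, an install-time script, or a brand-new publishing account. The event fields, thresholds, and package list are hypothetical and are not Sonatype’s pipeline; real systems layer models and far richer signals on top of this kind of triage.)

```python
# Illustrative sketch only: a toy heuristic scorer for open source release events.
# Field names, thresholds, and the package list are hypothetical, not Sonatype's pipeline.
from dataclasses import dataclass

POPULAR_PACKAGES = {"requests", "lodash", "log4j-core", "numpy"}

@dataclass
class ReleaseEvent:
    name: str                 # package name as published
    maintainer_age_days: int  # age of the publishing account
    has_install_script: bool  # package runs arbitrary code at install time

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, used to spot near-miss (typosquatted) names."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def suspicion_score(event: ReleaseEvent) -> int:
    """Higher score = more reasons for a human or a model to take a closer look."""
    score = 0
    # Name is one edit away from a popular package, but is not that package.
    if any(0 < edit_distance(event.name, p) <= 1 for p in POPULAR_PACKAGES):
        score += 3
    if event.has_install_script:      # install-time code execution is a common malware vector
        score += 2
    if event.maintainer_age_days < 7:  # brand-new publisher
        score += 1
    return score

if __name__ == "__main__":
    event = ReleaseEvent(name="request", maintainer_age_days=2, has_install_script=True)
    print(event.name, "->", suspicion_score(event))  # request -> 6
```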
Daniel Newman:
When you look at that, a lot of AI, even in the way it’s fragmented or segmented in the market right now, is split between open and closed. That’s the way it’s being debated. There’s a large subset of AI that’s being developed in the open, and you’re seeing the rise of the Hugging Faces and others that are really trying to democratize it. Then you’re seeing other companies… The irony of OpenAI is that it’s actually not particularly designed to be open. It’s open for us to use. But having said that, is that the layer where a lot of the exposure and risk comes in, though? Because when it’s open, obviously the way it gets iterated upon, the way it gets developed and built upon, it also does create more risk, because of the way it’s managed. It’s cohorted, as opposed to being in its own little bubble. Wayne, what are you seeing there? Is that the big exposure point, open source, or is it something else?
Wayne Jackson:
It’s certainly a reasonable debate point, but I’m of the view that openness and transparency ultimately win. If you look at traditional cyber as an example, I think the days of security through obscurity are dying. The notion of making your software more secure by obscuring its flaws is ultimately a recipe for disaster. If you look at regulation now, the White House announced requirements for software bills of materials, as an example, basically an ingredients list of the things that are being delivered to the federal government. Even more aggressive regulations in the EU are all about transparency and democratizing what vendors are sharing and what consumers should know about the software that they’re using. I think the same thing is going to happen inevitably in AI, and I hope that what we’ve learned about openness in cyber will translate into an acceleration of the openness of AI models.
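(As a concrete illustration of that “ingredients list” idea, here is a minimal sketch that emits a tiny software bill of materials in the CycloneDX JSON style, one of the commonly used SBOM formats, with SPDX being another, trimmed to a few core fields. The components listed are examples only; in practice the list is generated automatically by build or scanning tooling.)

```python
# Minimal sketch of a software bill of materials in the CycloneDX JSON style.
# The component list is illustrative; real SBOMs are produced by build tooling.
import json

components = [
    {"type": "library", "name": "log4j-core", "version": "2.17.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"},
    {"type": "library", "name": "struts2-core", "version": "6.3.0",
     "purl": "pkg:maven/org.apache.struts/struts2-core@6.3.0"},
]

sbom = {
    "bomFormat": "CycloneDX",  # declares the SBOM format
    "specVersion": "1.5",
    "version": 1,
    "components": components,  # the "ingredients list" of the delivered software
}

print(json.dumps(sbom, indent=2))
```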
Daniel Newman:
By the way, I’m a huge open source fan. It’s simply the size and scale. What, about three quarters to 80% or so of all software is developed that way? So it’s a huge part. It creates risk because it’s also just the largest part. It was kind of like, remember the old Apple-Windows debate about how many vulnerabilities? Well, look at how many users. You always have to weigh the actual market adoption against the market size, and you go, “Of course there’s going to be some risk created there.” You said something that is really important, too, and it’s probably… I don’t want to hit you on regulation yet, but actually, before I do that, I want to go back a little bit and couple that with Sonatype’s market position. You guys have something around 70% of the Fortune 100. You work with a lot of these regulated companies, financial institutions, et cetera. Why do regulated industries find the work you do, what Sonatype does, to be so important to their DevOps and the continued development of their software?
Wayne Jackson:
I think in the regulated industries… As you know, finance is one of our biggest verticals. They have a massively important, critical mission. Beyond just being regulated, I think they instinctively care about the quality of their infrastructure, the quality of the software that they produce, and the integrity of their organizations generally. I’d like to think that we’re pretty well proven as a best-of-breed provider of what we do. The folks that naturally care the most about innovating in a very hygienic way are going to be biased towards making best-of-breed choices. That often doesn’t happen further down market, where there isn’t the bandwidth to make best-of-breed choices about everything that’s being used. But in the large organizations, again, in regulated industries like finance, there generally is the bandwidth and the interest to make those kinds of choices. You touched on the level of open source in typical software, but critical systems, the software that runs the world, that facilitates interbank transfers: most of those systems are mostly open source, and so the choices that are made with regard to that open source are literally critical to the world functioning.
Daniel Newman:
Listen, I’ve got about a minute with you left. Wayne, it’s been a lot of fun learning both about what Sonatype is doing and your perspective on the impact AI has on the market, and why executives are going to be chasing to keep up with the challenges in security. But for those regulated industries, the government stuff, what is a piece of advice or two that you would give as a best practice to stay up to par, given how fast things are moving?
Wayne Jackson:
I’ll lean on a gentleman named W. Edwards Deming, who, as you may know, was the transformative figure in supply chain automation and optimization for Toyota: he helped transform them from a textile manufacturer to the world’s leading automobile producer, or one of them anyway. He emphasized transparency, first and foremost. Be aware of what’s being integrated, how it’s being integrated, and so forth. Then picking the best suppliers and picking the highest quality parts from those suppliers, so in open source terms, picking the best projects. But maybe most importantly, it was empowering developers to make decisions in the natural course of what they do, just as Toyota does with the andon cord: empowering line workers to identify defects, to stop the assembly line if necessary, and to not just fix the defect but fix the issues that led to the possibility of a defect. I think embracing supply chain principles in software can have the same kind of transformative effect that we saw in traditional manufacturing, especially nowadays with AI accelerating innovation.
Daniel Newman:
Well, Wayne, I think you did a really nice job there of summing it up and bringing together the good old “people” in people, process, and technology. Wayne Jackson, CEO of Sonatype, thanks so much for being part of this year’s Six Five Summit. It’s going to be very exciting to watch Sonatype continue to innovate and advance, and of course, the impact that AI has on all of us and all of our businesses. Stay tuned, everybody. Stick with us for more coverage and content here at the Six Five Summit. Studio, sending it back to you.