AI & Us: Building an AI-Skilled Workforce: Overcoming Talent Gaps in GenAI Adoption
The future of work is here, and it’s powered by AI. But are we prepared?
As businesses navigate the complexities of AI implementation, talent scarcity, technical hurdles, and the need for interdisciplinary skills emerge as significant barriers.
In the second installment of Dell Technologies’ “AI & Us” series, Six Five Media host David Nicholson is joined by Cynthia Breazeal, Dean for Digital Learning at MIT, and Vivek Mohindra, SVP of Corporate Strategy at Dell Technologies, to share actionable strategies for proactively building an AI-skilled workforce.
Their conversation covers:
- How AI is transforming every aspect of society – creating a need to equip everyone, from K-12 students to business leaders, with the knowledge and skills to thrive in an AI-powered world
- Bridging the skills gap through collaboration between academia, industry, and governments
- How MIT Media Lab is developing innovative AI curriculum and tools, while Dell Technologies is focused on upskilling its workforce and providing responsible AI infrastructure
- The importance of diverse voices and perspectives in AI’s design and development to ensure that AI benefits everyone
- The need to create inclusive learning opportunities and pathways for underrepresented groups
- Ethical and responsible AI development, including data privacy, algorithmic bias, and the societal impact of AI
Learn more at Dell Technologies and MIT Media Lab.
Watch the video above, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Transcript
David Nicholson: Welcome to AI and Us, a series where we explore our future together in the age of artificial intelligence. In this episode, we’re looking at what we call the skilling challenge: this idea that there’s a gap between the skills we have today and the skills we’ll need tomorrow, actually later this afternoon, in order to fully take advantage of AI. I’ve got two amazing guests with very, very different perspectives on this subject, and I’d like to hear from both of them to find out what their thoughts are. First, we have Cynthia Breazeal, Dean for Digital Learning at MIT, and Vivek Mohindra, Senior Vice President of Corporate Strategy at Dell Technologies. Welcome to both of you.
Cynthia Breazeal: Thank you.
Vivek Mohindra: Glad to be here. Thank you.
David Nicholson: Great to have you here. Let’s set the table. There was a recent study by ESG that basically asked the question, “Hey folks, what’s up? We’ve got this amazing generative AI technology. What’s keeping you from realizing the pot of gold that’s at the end of the rainbow?” The number one thing that came back was this idea of a skills gap: that employees, members of organizations, just don’t have the skills necessary to take advantage of AI to the fullest.
Cynthia Breazeal: Yeah, so at MIT, I’m the Dean for Digital Learning. I’m also the Director of the RAISE initiative, and RAISE stands for Responsible AI for Social Empowerment and Education. Through that initiative, we believe AI is for everyone. Yes, we’re going to talk about the workforce today, but let’s face it, the AI genie is out of the bottle. If you’re using a digital technology, it’s affecting you, whether it’s your beliefs, your opinions, how you learn, how you find information, you name it. This goes beyond being digitally literate and a digital citizen. We now need to create an AI-literate world. So when you talk about what we’re trying to prepare people for, at the Media Lab, we believe that it’s where technology and people come together. I would say we want to prepare people for a future where AI and people together can do more, be more creative, and create more value than either AI alone or people alone. That’s what we really need to prepare people for.
And so what do we do about it? Again, the Media Lab is the home of constructionist learning, Seymour Papert, so learning by actually making and creating. We have been creating a whole host of curriculum materials starting in K-12, because that is one way you can reach generations, helping them learn about these technologies and how they work to demystify them, but also the responsible design and social implications of the technology, through open, free curriculum and tools. We could talk about the low floor, high ceiling, and wide walls of creativity. We are basically trying to create curriculum based on learning through making, learning through doing, but also, at MIT we talk a lot about computational action. Because you can now make, say, working mobile apps using AI with tools like MIT App Inventor, kids can actually make things that make a difference to themselves and to their community. That is super empowering, and that gives them a broad perspective. How do you create, how do you innovate, how do you create new value by harnessing AI in a responsible way? Being able to do that with a much broader segment of the population, a much more diverse and inclusive set, I think is really important for our future with AI.
David Nicholson: Vivek, it’s great to hear from Cynthia that she’s got the future covered, but what about those of us outside of academia who have to deal with yesterday? It’s great that we have generations coming up over time, and I know Dell is involved in education, of course, but what about the enterprise perspective, where you have adults? This is the classic teaching old dogs new tricks. Have we been through anything like this before? That’s the first part of the question. And how is your perspective maybe different when you have companies demanding ROI from AI, from everyone in the industry? What are your thoughts?
Vivek Mohindra: You asked what we are doing about it. First, we’re making sure that our team members and employees are well-versed in AI. We created a very simple four-module AI fundamentals training, which, by the way, we made optional. We did not make it mandatory. Lo and behold, pretty much the entire company took it, just to make sure that people understand the basics. The second thing we’ve done is create AI skilling courses specific to particular jobs, which are more… I would call them 201- and 301-level courses. If you want to use AI for content creation, if you want to use it for coding, we have created those particular tracks that people can pull from and train themselves on, on their own.
The third thing we are doing is we have created, in partnership with NVIDIA, a skills and certification program around AI, which we have taken to our customers, partners, and communities so they can also leverage our learnings. On the ROI, you’re exactly right. Companies are asking about ROI, and it all starts with use cases and data, making sure their models and how they’re implementing AI are the best and most economical possible, and then that the infrastructure is the most economical and responsible. I think Cynthia touched upon those points as well. All of that requires fundamentally new skills.
You touched upon this: the industry’s seen this before. This is not new. When PCs came about, people had similar sentiments around PCs. But look what’s happened to desktop publishing and productivity since that time. When spreadsheets came along, there were human calculators at companies adding up numbers. People were worried about that. Those jobs then migrated to financial planning jobs. Prompt engineers didn’t exist 18 months ago, and now that’s one of the hottest fields around, one which combines classic computer science training with more classic humanities-oriented training. This is a really exciting era. I think there’s a lot that will be different, but companies who are embracing it will see massive ROI. They’ll just have to go about it in a thoughtful way.
David Nicholson: Cynthia, Vivek mentioned spreadsheets as an example. I’ll freely admit that I am far from a power user of Excel. I would say that I probably know how to leverage 5% of its capabilities, yet it’s still an extremely powerful tool for me. Of course, there are power users who can probably leverage 30% of what’s available. When we talk about skills from an AI perspective, the skills required to fine-tune a large language model are very, very different from the skills necessary to just use the tool as an end user in the workforce or as a consumer. Are you focusing on both of those things at MIT? What are your thoughts on that spectrum of skill requirements?
Cynthia Breazeal: One of the things we’ve definitely learned is that when you learn through making, you get a much more visceral understanding of what it takes to actually make these systems work than from just watching some videos and doing problems. There’s a lot of intuition that you build when you’re actually trying to create something with these tools and technologies, so I think that’s important to appreciate.
But along with that, trying to lower the floor and give people the creative power to create things that are interesting and meaningful to them matters. Because in industry, teams of people create solutions, I think it’s important that all the people on a team have a shared understanding and vocabulary of AI. When I talk to folks in industry, I ask them, “So how much are your designers or these other kinds of people talking to your core technology people?” There’s still not enough conversation and collaboration around AI across these kinds of tasks and skills.
Part of it, I think, is trying to build a common enough understanding and vocabulary so that you can have much more effective collaboration when you’re trying to create innovative solutions with these technologies. There’s a lot of opportunity in the broad upskilling aspect of this for more intuitive tools and for making sure everyone on the team shares a common vocabulary so they can collaborate more effectively. I think there’s tremendous opportunity.
Vivek Mohindra: I was going to piggyback on that, Dave. Common vocabulary is a really important point that Cynthia talked about. This is why the four simple 15-minute Dell AI fundamentals modules we put in place established that across the whole company. It’s a very practical way for companies to go about establishing it. The lowering of the floor Cynthia talked about is really important for lots of companies and enterprises as well. You could lower the floor and empower a whole range of people across lots of economies to take advantage of coding with a lower threshold, and similarly content creation and a whole slew of other areas. Both of these points are really important, and I’m really glad that institutions like MIT are attacking this from one vantage point while we as companies are attacking it from the other. They’re all consistent, and they meet in the middle in some way.
David Nicholson: I’m going to toss this back to Cynthia. Do you have any thoughts on that? I mean, is that a taboo subject at MIT, the idea that maybe you’ll be graduating really, really capable AI folks who don’t necessarily have some of the computer science skills? What are your thoughts?
Cynthia Breazeal: Yeah, so I can tell you that MIT actually anticipated this. We created the Schwarzman College of Computing specifically around that idea. We were just getting overwhelmed by requests from our own students wanting to take computer science classes, so instead, let’s bring these tools and technologies across the schools, across disciplines, because that’s what you’re seeing. You’re seeing a lot of innovation and use of computation and AI in all these subjects: science, technology, humanities, arts, you name it. Let’s push it into all of those disciplines as well, so they can advance those tools, technologies, and practices within those disciplines.
I think that was just very responsive; you always want to skate to where the puck is going. Just appreciating that AI is transforming so many different industries and aspects of society, it’s, like we said, a really powerful tool that we want many, many more people to be able to use to their advantage, to create value, have higher ROI, and enable opportunity. I also agree with Vivek. I think we need to figure out a way to be far more inclusive of who can master these skills to get access to these jobs and opportunities, and I would say beyond the four-year institutions. Community colleges are a terrific place to consider. What are those practical skills that we can create and credential or certify against that are meaningful to industry, so that people can get into those jobs much faster without incurring as much debt, et cetera? Bringing the middle class into this wave I think is really important. It’s going to take a focused effort to say we want to innovate in these other segments of our population to make sure that this AI-powered future is inclusive and one of shared prosperity.
Vivek Mohindra: This is really what I find exciting about this technology from a company perspective, too. As you mentioned, Dave, I went to MIT for grad school, so I’m partial to MIT. I’ve always been very impressed with how MIT has consistently, over decades, demonstrated this ability to be very forward-leaning on these types of things. When open learning platforms came about, for example, I remember speaking with the previous president of MIT, Rafael Reif, who was on my thesis committee for a few years until my direction changed. I’ve known him well, although he retired this year. Even at that time, MIT was beginning to think through, okay, with these open learning platforms, how should a four-year degree experience change? Now with AI, I’m not surprised at all to hear what Cynthia is describing, that MIT, and I bet other institutions, are continuing to lead that way.
As are we as companies, rethinking, okay, what do we need in these different roles? We have always had a traditional definition of these job specifications, but what do we need now, recognizing that these new types of tools are available? That either somebody has learned them before they enter our workforce, or we can ramp them up pretty quickly and then allow them to do something very, very different? I think this whole notion will start emerging: it’s a very different workforce, a lowering of the barriers. What do you do? How do you use these tools to get much better outcomes?
David Nicholson: How do you balance the requirement for speed of innovation in AI with making sure that we are being responsible? From a society perspective, there are all sorts of different angles; privacy is just one. What are your thoughts there? And then Vivek, I’d like to get your thoughts on the same question, but Cynthia, start us out.
Cynthia Breazeal: Yeah, so I mean, again, I think a lot of it begins with having the right education and training, and the practices and tools to help ensure, as much as possible, responsible design that’s unlocking opportunity and minimizing potential harm. I can tell you that, again, when we think about K-12, starting as young as possible, MIT was, I think, the first educational institution to say that the way we need to build AI literacy is not just to teach about AI and how it works, but to dovetail with that the societal implications, both potentially positive and negative, and the responsible design of these technologies. So that no matter who you are growing up and whatever profession you end up taking, we all have that foundation of understanding how AI works in an appropriate way, having an informed voice in how we want it used in society, and of course preparing young people to feel they have the mindsets and the skill sets to shape the future with AI.
Starting in middle school, we created the first AI and ethics curriculum for middle schoolers. I can tell you, when you weave those two together, young people’s eyes light up, because the first assumption is: math? Code? It’s all neutral, right? We’re like, not so fast. Once you start to optimize for something, once you’re trying to train an algorithm to maximize a certain outcome, you have now encoded a value into that code. Values are not neutral. Whose values are those? What things are you trying to maximize? Who does that decision potentially preferentially benefit or harm? These are everyday design decisions that anybody who’s making an AI-powered solution has to contend with: you’re going to make a decision. You just want to be transparent and understand who your stakeholders are and what their values are. You may choose to design something a certain way, but you need to have a full understanding of why you’re doing it and how.
David Nicholson: I want to hear what you personally think, but I also want to hear what you think industry’s role is in this. You represent Dell, but for industry at large: does a company like Dell Technologies just need to be a sort of agnostic purveyor of the foundational gear for this and then let others manage the rest? Your personal thoughts, and then also, where does a company like Dell come into this equation?
Vivek Mohindra: Yeah, great question. I was going to piggyback off of what Cynthia said too, because I love the idea that K through 12 students are being steeped in these notions, so that as they enter the workforce, they’ll be ready. Look, I go back, Dave, to thinking about it from a fundamental first-principles perspective, right? First, just like I described earlier, companies are grappling with which use cases they should point this towards. Responsibility lies in there. You’ve got to make sure that you, as a company, are not enabling anything which may not be responsible. That obviously takes the form of not only training people, your employees, but also having governance in place. We were one of the first in the industry to appoint a chief AI officer. We got a real head start on this and we put very good governance in place, so that’s where it starts. Number two, data is the fuel. You’ve got to make sure, same principle with your own data, that you are using data that you are entitled to use, you’re using data in the way it was meant to be used, and you’re providing the ability for people to opt out of the data usage. All the things that we’ve heard about.
And then there are clearly processes and tools and technology that feed off of each other, all of which have an element of responsibility in them. For example, in the tools and technology domain, responsibility for us basically takes the form of making sure that whatever we are doing in product development is responsible, that our software development principles are responsible, and that they can be validated. There’s openness around how it’s all developed. We’ve been working with customers to make sure they understand the governance principles we put in place so they can apply the same. But at the heart of it, as a very practical matter, Dave, this really is a broad, shared responsibility. The trickiest thing here is balancing speed and innovation with responsibility, which is how you keyed your question. That really does require very strong governance and a moral compass for a company, and it also depends on how governments and regulations come into play, because I know governments are grappling with this. This is the number one topic when I speak with different government ministers all over the world. That is something that really needs to be balanced well.
Cynthia Breazeal: I’d like to add to that. I think we’ve implied it, but just to make it explicit: right now, AI is not a particularly diverse or inclusive discipline, and a lot of responsible AI is making sure we have people from very different lived experiences being the designers and makers of this technology, because they’re going to bring, again, the viewpoints of the broad base of users that we’re trying to support through these technologies. Diversity and inclusion are really important in how we reach and train those learners. Again, this is why I think K-12 and community college are so important. We’ve got to meet those learners where they are and, again, help create a path for them into these positions of opportunity.
David Nicholson: Cynthia, with that, I think that’s a great place to wrap this conversation. Hopefully folks have found this interesting. Diversity and inclusion: the name of this series is AI and Us, the inclusive us, and that’s critical to keep in mind. Frankly, this is not primarily a technology discussion; it will be studied by social scientists for decades to come. Again, I hope you found this conversation helpful. This truly is about AI and us. Thanks to Cynthia Breazeal and Vivek Mohindra. I’m Dave Nicholson. Thanks for joining us today. Keep an eye out for further installments in this series.