Generative AI isn’t coming; it’s already here. And if you’re a leader waiting to see how it plays out, you’re already behind. Companies using it effectively are cutting costs, boosting productivity, and even creating new revenue streams. But here’s the catch: generative AI doesn’t fix bad leadership. It amplifies it. If your team is disengaged, your processes are messy, or your culture is toxic, AI won’t save you. It will just make those problems faster and bigger.
Stop Treating AI Like a Shortcut
Too many leaders see generative AI as a way to do more with less. Write faster emails. Generate reports in seconds. Automate scheduling. That’s not strategy; that’s tinkering. The real opportunity isn’t in saving an hour a day. It’s in redesigning how work gets done. Take USAA. Instead of slapping AI on customer-facing chatbots, they used it internally to cut case resolution time by 27%. Why? Because they asked: What’s slowing down our people? Not: What can we automate to cut headcount? The result? Faster service, happier employees, and no customer backlash. McKinsey’s 2025 data shows that organizations treating AI as a transformation tool, redesigning 30% or more of their core workflows, are three times more likely to see real financial returns than those just automating tasks. If your AI use case doesn’t change how your team works, it’s not worth doing.

Build a Leadership Team, Not a Tech Team
You don’t need a CTO to lead AI adoption. You need a cross-functional team of leaders who understand both the business and the people. MIT Sloan’s research gives a clear blueprint: assemble your team within 30 days. Include HR, operations, legal, frontline managers, and one or two tech-savvy employees who aren’t in IT. Don’t wait for perfect data or a flawless plan. Start with one high-impact, low-risk use case, like drafting internal memos, summarizing meeting notes, or analyzing customer feedback. IBM’s approach is telling. They didn’t roll out AI training as a tech module. They made it part of leadership development. Their new online course, Leading in the Age of Gen AI, teaches managers how to use AI for difficult conversations, like giving feedback or addressing underperformance. The goal? To make leaders more human, not less.
People First, Tools Second
The biggest failure I’ve seen isn’t technical. It’s cultural. A director at a major retail chain rushed AI tools to their teams without explaining why. No training. No guardrails. Just: “Here, use this.” Within months, employee anxiety spiked 30%. People thought they were being replaced. That’s not AI’s fault. It’s leadership’s. Leaders who succeed are the ones who talk openly about AI. They say: “This tool won’t take your job. It’ll take the boring parts so you can focus on what matters: helping your team, solving real problems, and connecting with customers.” IBM’s data shows leaders who redirect AI-saved time toward human activities, like one-on-one check-ins, coaching, or strategic planning, see 37% higher team engagement. That’s not magic. That’s intentionality.

Governance Beats Bans
Some companies banned generative AI outright. Others let employees use whatever tool they wanted. Neither works. MIT Sloan found companies with clear AI governance policies outperformed those with bans by 3.2x in productivity. Why? Because rules create safety. When people know what’s allowed, they use tools responsibly. Start simple:
- What tasks can AI help with? (e.g., drafting, summarizing, brainstorming)
- What’s off-limits? (e.g., customer communications, legal docs, HR decisions)
- Who reviews AI outputs before they’re used?
- How do we protect sensitive data?
Don’t Chase the Hype: Focus on the Hard Stuff
The market is flooded with AI tools. Microsoft, Google, and Amazon control 81% of enterprise AI infrastructure. But tools aren’t the bottleneck. People are. McKinsey found 42% of organizations struggle with talent gaps. Not because no one knows how to code AI, but because no one knows how to lead with it. Frontline managers need 8-12 weeks of structured training. Executives need less time, but more courage. You have to model the behavior you want. If you’re using AI to avoid hard conversations, your team will notice. If you’re using it to free up time for coaching, they’ll feel it. Paradise Solutions’ 2025 guide says this: only pursue use cases that meet two criteria, high impact (at least a 20% efficiency gain) and feasibility (deliverable within 120 days). Most teams fail because they try to boil the ocean. Pick one. Nail it. Then move to the next.

What’s Next? Redefine Leadership
By December 2025, 61% of Fortune 500 companies have formal AI governance frameworks. That number was 29% just nine months ago. The pace is accelerating. The leaders who’ll thrive aren’t the ones who know the most about prompts or models. They’re the ones who know how to:
- Use AI to listen better
- Use AI to empower, not replace
- Use AI to make space for empathy, creativity, and judgment
8 Comments
Ashley Kuehnel
Love this post! I’ve seen so many teams panic when AI tools get rolled out, but the key is just talking to people like humans. At my company, we started with summarizing meeting notes (no big deal, right?) and suddenly people felt heard for the first time. No one got fired. Everyone got a little more breathing room. That’s the magic.
Also, typo: ‘formal’ not ‘formaly’ lol. I’m always messing up spelling but I mean it! 😊
adam smith
This article is overly complicated. AI is just a tool. Use it or don’t. Stop making it sound like a religion. I just want to write emails faster. That’s it.
Mongezi Mkhwanazi
Let me be perfectly clear: the notion that AI can be ‘humanized’ by leadership is a dangerous fantasy. You cannot inject empathy into a machine, nor can you magically transform toxic cultures through ‘intentionality’; that’s corporate jargon masquerading as wisdom. If your team is disengaged, it’s because you’ve failed as a leader, not because you didn’t use AI to ‘free up time for coaching.’
Moreover, the idea that governance policies outperform bans? Of course they do: people are sheep, and sheep need fences. But fences don’t fix the barn. The barn is rotten. And no amount of ‘structured training’ will fix that. The real problem? Leaders who think they can outsource moral responsibility to an algorithm. That’s not leadership. That’s cowardice.
And don’t get me started on IBM’s ‘Leading in the Age of Gen AI’ course: what a pathetic branding exercise. Training managers to use AI for ‘difficult conversations’? You’re not teaching leadership; you’re teaching avoidance. If you can’t give feedback without a bot holding your hand, you shouldn’t be managing people. Period.
Mark Nitka
Mongezi’s got a point, but he’s missing the forest for the trees. Yes, bad leadership can’t be fixed by AI, but AI can expose it. And that’s actually a good thing. I’ve seen managers cling to control because they’re terrified of being irrelevant. AI doesn’t replace them; it forces them to evolve. The ones who adapt? They become better leaders. The ones who don’t? They get left behind. And honestly? Good riddance.
Let’s stop pretending this is about tools. It’s about power. And if you’re not ready to give up some of it to empower your team? Then yeah, you’re the problem.
Kelley Nelson
While I find the general sentiment of this piece to be somewhat commendable, I must register my profound reservations regarding its lack of theoretical rigor. The conflation of ‘human-centered leadership’ with operational efficiency metrics (particularly the citation of McKinsey’s 2025 data) is, frankly, an epistemological muddle. One cannot derive ethical leadership from productivity percentages, nor can one justify cultural transformation via anecdotal case studies of USAA and IBM.
Moreover, the assertion that ‘AI amplifies leadership’ is a tautology of the most banal variety. One might as well claim that ‘fire amplifies architecture.’ It is true, but vacuous. What is required is not more ‘intentionality,’ but a robust philosophical framework for human-machine symbiosis, a framework conspicuously absent here.
And yet, the tone is… pleasant. One cannot help but admire the prose. It is, in a word, palatable.
Aryan Gupta
Let me tell you something they don’t want you to know: AI isn’t here to help you; it’s here to track you. Every time you use it to summarize a meeting, they’re logging your speech patterns. Every time you draft an email with it, they’re building a behavioral profile. IBM’s ‘leadership course’? It’s a front. The real goal is to replace human judgment with algorithmic compliance.
And don’t fall for the ‘people first’ nonsense. They’ve been doing this since the 1980s: say you’re empowering workers while quietly replacing them. Now they’ve just added AI as the new surveillance layer. You think your manager’s ‘one-on-one check-ins’ are meaningful? They’re just data points now. You’re being monitored. And if you think the EU AI Act is about protection? Please. It’s about control. The corporations wrote that law. They always do.
Fredda Freyer
There’s something deeply poetic about this whole AI moment. We’re not just adopting tools; we’re being forced to redefine what it means to lead, to care, to be present.
For centuries, leadership was about authority, hierarchy, control. Now? It’s about creating space. Space for people to think. To feel. To fail. To grow. AI doesn’t care if you’re nice or cruel. It just reflects you back. So if you’re a leader who’s been avoiding hard truths, AI won’t fix that; it’ll just make your avoidance louder, faster, more visible.
But here’s the quiet hope: the people who are using AI to listen more-to notice when someone’s quiet in a meeting, to read between the lines of customer feedback, to give feedback without fear-those are the ones who are leading. Not because they know the best prompts, but because they remember that leadership is a practice of presence.
And presence? That’s something no algorithm can replicate. Not even close.
Gareth Hobbs
Oh, brilliant. Another American tech guru telling the world how to lead, while ignoring the fact that the UK’s civil service has been using AI for decades without turning into a dystopia. We didn’t need ‘training courses’ or ‘governance frameworks’; we just used common sense. And guess what? We didn’t need to ‘redefine leadership.’ We just kept doing our jobs.
And don’t get me started on ‘IBM’s course’: they’re selling snake oil to managers who can’t even write a proper email. Meanwhile, real leaders in the NHS and HMRC are quietly automating paperwork and getting on with saving lives. No fanfare. No buzzwords. Just results.
And you think the EU AI Act is about safety? Ha! It’s protectionism. American tech giants are trying to dominate global standards, and you’re all falling for it. Wake up. This isn’t progress. It’s cultural imperialism dressed in Silicon Valley jargon.