Twenty-five years ago, Jay Bavisi founded EC-Council in the aftermath of 9/11 with a simple premise: if attackers understand systems deeply, defenders need to understand them just as well. That idea led to Certified Ethical Hacker (CEH), which went on to become one of the most recognized credentials in cybersecurity.
Bavisi thinks we're at a similar inflection point again, this time with AI.
The technology is moving fast. The workforce isn't. And just like the early days of software development, much of the attention is on what AI can do, not on how to deploy it safely, responsibly, or at scale.
"We're back in that era where building something feels cool," Bavisi told me. "In the early days of web development, security and governance were afterthoughts. We're doing the same thing again with AI: functionality first, use cases first, and only later asking what the risks are."
That's the gap EC-Council is trying to address with the largest expansion of its portfolio in 25 years: four new AI certifications and a revamped Certified CISO program.
The Skills Gap Isn't Hypothetical
The data behind this push isn't subtle. IDC estimates unmanaged AI risk could reach $5.5 trillion globally. Bain projects a 700,000-person AI and cybersecurity reskilling gap in the U.S. alone. The IMF and the World Economic Forum have both landed on the same conclusion: access to the technology isn't the constraint; people are.
I've spent the last couple of years talking with executives about AI, and the tone has shifted. Early on, nearly everyone insisted AI wasn't going to replace jobs. It became almost ritualistic. Understandable, sure, but not entirely honest.
Lately, the messaging has changed. Some roles will disappear. That's no longer controversial. The more accurate framing has always been: AI probably won't take your job, but someone who knows how to use AI better than you might. That's the real risk, and the real opportunity.
What EC-Council Is Actually Launching
The new certifications are built around a framework EC-Council calls ADG: Adopt, Defend, Govern. It's meant to give organizations a way to think about AI deliberately, rather than defaulting to "just buy a subscription and see what happens."
"It's not just about picking Claude or Gemini or GPT," Bavisi said. "Your data, your customer information, your business processes all get pulled in. You need guardrails."
The four certifications are role-specific:
- AI Essentials (AIE) is baseline AI fluency: practical, not theoretical.
- Certified AI Program Manager (C|AIPM) focuses on implementing AI programs with accountability and risk management.
- Certified Responsible AI Governance & Ethics Professional (C|RAGE) targets governance gaps, aligning with frameworks like the NIST AI RMF and ISO/IEC 42001.
- Certified Offensive AI Security Professional (COASP) teaches practitioners how to attack LLM systems so they understand how to defend them.
That last one feels especially on-brand. It's essentially the CEH mindset applied to AI: you can't defend what you don't understand.
Why This Isn't Academic
Bavisi shared a recent example that puts the urgency into perspective. EC-Council took part in a controlled test with a top-ten global insurance company, comparing traditional human-led pen testing against the AI-driven approach.
Across three rounds, the humans found five vulnerabilities in total. The AI found 37.
That's not an indictment of human skill. It's a reminder that AI doesn't get tired, doesn't forget, and doesn't operate within the same constraints. The job doesn't disappear, but the expectations around how it's done change dramatically.
The CISO Role Is Changing Too
Alongside the AI certifications, EC-Council updated its Certified CISO program to version 4. Security leaders are now responsible for systems that learn, adapt, and make decisions autonomously, but that's not what most CISOs trained for a decade ago.
The updated curriculum reflects that reality: less checklist security, more governance, risk ownership, and accountability in AI-driven environments.
Why This Matters
Certifications don't magically make someone an expert. I've collected enough of them over the years to know that. But they do matter. They open doors. They signal baseline competency. And right now, that signal carries more weight than usual.
"There are cloud engineers and GRC professionals everywhere asking the same question," Bavisi said. "How do you do governance and risk with AI? Until now, there haven't been real frameworks or real training programs."
AI isn't slowing down. The workforce has to catch up. EC-Council is betting that structured, role-based education, grounded in practical reality rather than hype, can help close that gap. Given what they did with CEH, it's a bet worth paying attention to.

