I've sat through enough board meetings to recognize when eyes start glazing over. Mention "regulatory landscape" and watch executives suddenly discover urgent emails on their phones. But this AI regulation stuff? It's actually worth paying attention to.
The numbers don't lie. AI adoption among Canadian banks jumped from 30% in 2019 to 50% in 2023, and it has only continued to rise through 2024 and 2025. U.S. banks are throwing cash at AI labs faster than they can print new credit cards. We're using AI for everything from fraud detection to figuring out why Mrs. Johnson keeps calling about those missing statements.
Regulators have noticed these new toys, and they're not thrilled with the "code first, ask questions later" approach some institutions have taken.
Within the U.S., one administration issued an executive order attempting to establish standards for AI safety, privacy, civil rights, and consumer protection, while its successor favors an approach focused on rapid innovation and reduced regulatory oversight.
Meanwhile, regulators insist their existing safety frameworks, e.g., GLBA (also known as the Financial Services Modernization Act of 1999), work quite nicely for generative AI. For example, U.S. banking agencies apply their model risk management guidance (the Federal Reserve's SR 11-7) to newer AI, and Canada's Office of the Superintendent of Financial Institutions (OSFI) is updating its Model Risk Management Guideline (E-23) to explicitly cover AI/ML models.
OSFI warns about a "surge in artificial intelligence/machine learning analytics" increasing model risks that could hit your bottom line through financial losses, operational disasters, or reputational damage.
Reducing compliance burdens in the short term places greater responsibility on institutions to self-regulate and ensure their AI systems are fair, transparent, and secure. Financial institutions must be proactive in establishing robust internal governance frameworks to manage AI-related risks effectively.
After wading through dozens of regulatory documents, I've spotted four recurring themes:
Regulators want an enterprise-wide framework governing AI models from development through deployment: clear policies, independent validation, and ongoing monitoring. OSFI, for example, expects "robust model risk management practices and oversight, including adequate controls."
What does this mean for your Monday morning? Inventory your AI models, assign clear ownership, and subject them to risk assessments. If your AI governance consists of "trust the data science team" or “Dave in IT says it works fine,” you're begging for regulatory attention, or possibly even worse.
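That inventory can start simple. Here's a minimal sketch of what a first-pass model registry might look like; the field names, policy window, and example records are my own assumptions, not any regulator's template:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory record -- fields are illustrative, not a regulator's template.
@dataclass
class ModelRecord:
    name: str
    owner: str            # a named individual, not "the data science team"
    risk_tier: str        # e.g. "high" for credit decisioning, "low" for internal tooling
    last_validated: date  # date of the last independent validation

def overdue_for_validation(models, as_of, max_age_days=365):
    """Return models whose last independent validation is older than the policy window."""
    return [m for m in models if (as_of - m.last_validated).days > max_age_days]

inventory = [
    ModelRecord("fraud-screen-v3", "A. Chen", "high", date(2024, 11, 2)),
    ModelRecord("marketing-propensity", "D. Singh", "low", date(2023, 6, 15)),
]
stale = overdue_for_validation(inventory, as_of=date(2025, 6, 1))
print([m.name for m in stale])  # the marketing model is past its validation window
```

Even a toy registry like this forces the two questions regulators keep asking: who owns each model, and when was it last independently checked.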
As they did with CISOs years ago in response to increased regulatory attention following cybersecurity breaches, many banks are creating Chief AI Officer (CAIO) roles. Someone at the executive level needs to own this stuff, because each bank is ultimately responsible for every decision its AI makes, whether approving credit or flagging suspicious transactions.
If you're considering becoming a CAIO, brace yourself for explaining to the board why that million-dollar AI keeps flagging Venmo payments marked "for drinks" as suspicious transactions.
[Graphic: what you want vs. what regulators want vs. what you'll end up with]
Talking to even the most powerful AI models available resembles, as I often put it, talking to a sociopathic sophomore: lots of knowledge and impressive answers, but mysterious reasoning and no ability to recognize (or admit) when they don't know something.
If your AI denies a loan and your explanation is "the computer decided," you've already failed. Documentation isn't just for the compliance team anymore.
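One lightweight pattern for doing better than "the computer decided" is to turn a model's per-feature contributions into plain-language adverse-action reasons. The sketch below uses an invented linear scoring model; the weights, baselines, and reason text are all hypothetical, and real systems would derive contributions from whatever model they actually run:

```python
# Sketch: deriving adverse-action reason codes from a linear credit score.
# All weights, baselines, and reason strings are invented for illustration.
WEIGHTS = {"credit_utilization": -2.0, "years_of_history": 0.8, "recent_delinquencies": -1.5}
BASELINE = {"credit_utilization": 0.30, "years_of_history": 7.0, "recent_delinquencies": 0.0}
REASONS = {
    "credit_utilization": "Proportion of revolving credit in use is too high",
    "years_of_history": "Length of credit history is too short",
    "recent_delinquencies": "Recent delinquency on one or more accounts",
}

def adverse_action_reasons(applicant, top_n=2):
    """Rank features by how far they pushed this applicant's score below baseline."""
    contributions = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    negative = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASONS[f] for _, f in negative[:top_n]]

applicant = {"credit_utilization": 0.85, "years_of_history": 2.0, "recent_delinquencies": 1.0}
print(adverse_action_reasons(applicant))
```

The point isn't this particular arithmetic; it's that every denial maps to documented, reviewable reasons instead of an opaque score.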
Regardless of political affiliation, several federal agencies have stated that emerging AI has the potential to produce outcomes that result in unlawful discrimination. And they have promised to hunt down such bias.
If your AI discriminates, regulators won't care how elegant your neural network is. Your AI needs to be fairer than humans, not worse. AI can potentially contain inherent bias, either because the underlying data set is biased or because the logic behind the decisioning is biased. Test proactively for disparate impact in credit, pricing, and marketing models. And document those tests thoroughly!
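A common first screen for disparate impact is the "four-fifths rule": if a group's selection rate falls below 80% of the most-favored group's rate, that's a red flag worth investigating. A minimal sketch, with purely illustrative counts:

```python
# Sketch of a disparate-impact screen using the four-fifths rule.
# A group's approval rate below 80% of the best-off group's rate is a red flag.
def adverse_impact_ratios(approvals):
    """approvals: {group: (approved_count, applicant_count)} -> {group: ratio vs. best group}."""
    rates = {g: approved / total for g, (approved, total) in approvals.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

outcomes = {"group_a": (480, 800), "group_b": (270, 600)}  # illustrative counts only
ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b approves at 75% of group_a's rate -> flagged
```

A ratio below 0.8 doesn't prove discrimination, and a ratio above it doesn't disprove it, but running (and documenting) this kind of test on every credit, pricing, and marketing model is exactly the paper trail regulators expect.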
Remember when tech teams and compliance departments lived in separate worlds? Those days are gone. Today's AI models might be technically brilliant while violating fair lending laws six ways from Sunday.
Your CAIO, CISO, CRO, and CCO need to start having lunch, and maybe even drinks, together. Many banks now run AI governance committees with people from risk, compliance, IT, and business lines. I've witnessed these meetings. They can start awkwardly, a bit like forcing the football team and chess club to collaborate on a project, but this uncomfortable mingling is now essential. Risk teams must update frameworks for AI. Compliance needs to translate regulations into practical guidance.
And someone must be able to explain all this to board members.
If you're wondering how to navigate this mess, here's one possible roadmap:
Regulators aren't against innovation. They just prefer innovation that won't crash the financial system or discriminate against people. Reasonable enough. The banks that will thrive aren't those with the fanciest AI; they're the ones balancing innovation, responsibility, and compliance without treating any of them as second-class.
The mandate is clear: embrace AI innovation with eyes wide open to regulatory reality. Dream big, document bigger.
Now if you'll excuse me, I need to help update our Virtual CAIO model governance policy templates and tools. Again. Feel free to reach out if you want some help updating yours too.
OBS Global helps financial institutions develop and strengthen AI governance programs that align with evolving regulatory expectations. Our experts work with your teams to build clear, actionable roadmaps that support responsible AI adoption—without slowing innovation. Looking for guidance? Message us directly or review our Financial Services resources.
About the Author