
I've sat through enough board meetings to recognize when eyes start glazing over. Mention "regulatory landscape" and watch executives suddenly discover urgent emails on their phones. But this AI regulation stuff? It's actually worth paying attention to.
The numbers don't lie. AI adoption among Canadian banks jumped from 30% in 2019 to 50% in 2023, and it has only continued to rise through 2024 and 2025. U.S. banks are throwing cash at AI labs faster than they can print new credit cards. We're using AI for everything from fraud detection to figuring out why Mrs. Johnson keeps calling about those missing statements.
Regulators have noticed these new toys, and they're not thrilled with the "code first, ask questions later" approach some institutions have taken.
The Regulatory Picture
Within the U.S., one administration issued an executive order that attempted to establish standards for AI safety, privacy, civil rights, and consumer protection, while the other favors rapid innovation and reduced regulatory oversight.
Meanwhile, regulators insist their existing safety frameworks, e.g., GLBA (also known as the Financial Services Modernization Act of 1999), work quite nicely for generative AI. For example, U.S. banking agencies apply their model risk management guidance (the Federal Reserve's SR 11-7) to newer AI, and Canada's Office of the Superintendent of Financial Institutions (OSFI) is updating its Model Risk Management Guideline (E-23) to explicitly cover AI/ML models.
OSFI warns about a "surge in artificial intelligence/machine learning analytics" increasing model risks that could hit your bottom line through financial losses, operational disasters, or reputational damage.
Reducing compliance burdens in the short term places greater responsibility on institutions to self-regulate and ensure their AI systems are fair, transparent, and secure. Financial institutions must be proactive in establishing robust internal governance frameworks to manage AI-related risks effectively.
What Regulators Actually Want
After wading through dozens of regulatory documents, I've spotted four recurring themes:
1. Get Your Governance Act Together
Regulators want an enterprise-wide framework governing AI models from development through deployment: clear policies, independent validation, and ongoing monitoring. OSFI, for example, expects "robust model risk management practices and oversight, including adequate controls."
What does this mean for your Monday morning? Inventory your AI models, assign clear ownership, and subject them to risk assessments. If your AI governance consists of "trust the data science team" or "Dave in IT says it works fine," you're begging for regulatory attention, or possibly even worse.
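What might one entry in that inventory look like? Here's a minimal sketch in Python; the ModelRecord fields and the example values are illustrative assumptions on my part, not a schema any regulator prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """One entry in a hypothetical enterprise AI model inventory."""
    name: str                    # e.g., "retail-credit-scoring-v3"
    owner: str                   # an accountable human, not "the data science team"
    business_use: str            # what decisions the model influences
    regulatory_scope: list[str]  # e.g., ["fair lending", "SR 11-7"]
    risk_tier: str               # high / medium / low, per your MRM framework
    last_validated: date         # independent validation, not self-attestation
    validation_findings: list[str] = field(default_factory=list)

# Example: the kind of entry that answers a regulator's first three questions.
record = ModelRecord(
    name="retail-credit-scoring-v3",
    owner="jane.doe@examplebank.com",
    business_use="Approve/decline consumer credit applications",
    regulatory_scope=["fair lending", "SR 11-7"],
    risk_tier="high",
    last_validated=date(2025, 1, 15),
)
```

Even something this simple beats the alternative, which is usually a shared spreadsheet nobody has opened since the last audit.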
2. Someone's Head Goes on the Block
As they did for CISOs years ago in response to increased regulatory attention following cybersecurity breaches, many banks are creating Chief AI Officer (CAIO) roles. Someone at the executive level needs to own this stuff, because each bank is ultimately responsible for every decision its AI makes, whether approving credit or flagging suspicious transactions.
If you're considering becoming a CAIO, brace yourself for explaining to the board why that million-dollar AI keeps flagging Venmo payments marked "for drinks" as suspicious transactions.
3. No More Black Boxes
What you want:
- Cheap, high-quality, and fast AI that magically boosts profits (why choose two, right?)
What regulators want:
- Clear explanations for every AI decision so citizens aren’t discriminated against.
What you'll end up with:
- Probably something uncomfortable between the two.
Even the most powerful AI models available can feel like talking to what I often call a sociopathic sophomore: lots of knowledge and impressive answers, but mysterious reasoning and no ability to recognize (or admit) when they don't know something.
If your AI denies a loan and your explanation is "the computer decided," you've already failed. Documentation isn't just for the compliance team anymore.
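To make that concrete: with an interpretable model, you can turn coefficients into per-applicant reason codes. Below is a minimal sketch using scikit-learn's logistic regression. The toy data, feature names, and two-reason cutoff are all invented for illustration; production adverse-action reasoning requires far more rigor than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: [debt_to_income, credit_utilization, years_of_history]
X = np.array([[0.2, 0.30, 10], [0.6, 0.90, 1], [0.3, 0.40, 7],
              [0.7, 0.80, 2], [0.1, 0.20, 15], [0.8, 0.95, 1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined
FEATURES = ["debt-to-income ratio", "credit utilization", "length of credit history"]

model = LogisticRegression().fit(X, y)

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Features that pushed this applicant's approval odds down the most,
    measured relative to the average applicant in the training data."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    worst = np.argsort(contributions)[:top_n]  # most negative contributions first
    return [FEATURES[i] for i in worst]

applicant = np.array([0.65, 0.85, 2.0])
if model.predict(applicant.reshape(1, -1))[0] == 0:
    print("Declined. Principal reasons:", reason_codes(applicant))
```

The point isn't this particular technique; it's that "why was this applicant declined?" should have an answer you can print, file, and defend.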
4. Fairness Isn't Optional
Regardless of political affiliation, several federal agencies have stated that emerging AI has the potential to produce outcomes that result in unlawful discrimination. And they have promised to hunt down such bias.
If your AI discriminates, regulators won't care how elegant your neural network is. Your AI needs to be fairer than humans, not worse. AI can potentially contain inherent bias, either because the underlying data set is biased or because the logic behind the decisioning is biased. Test proactively for disparate impact in credit, pricing, and marketing models. And document those tests thoroughly!
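As a starting point, many teams begin disparate impact testing with the "four-fifths rule," comparing selection rates across groups. Here's a minimal sketch with hypothetical numbers; real fairness testing goes well beyond this single ratio.

```python
def adverse_impact_ratio(approved_a: int, total_a: int,
                         approved_b: int, total_b: int) -> float:
    """Selection rate of the less-favored group divided by the favored group's.
    Values below 0.8 (the 'four-fifths rule') are a common red flag."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decision counts from a credit model's last quarter.
ratio = adverse_impact_ratio(approved_a=420, total_a=1000,   # group A: 42%
                             approved_b=310, total_b=1000)   # group B: 31%
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.74 -- below 0.8, so investigate
```

A failing ratio doesn't prove discrimination, and a passing one doesn't rule it out, but running (and documenting) checks like this is exactly the proactive testing regulators expect to see.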
Breaking Down Walls
Remember when tech teams and compliance departments lived in separate worlds? Those days are gone. Today's AI models might be technically brilliant while violating fair lending laws six ways from Sunday.
Your CAIO, CISO, CRO, and CCO need to start having lunch, and maybe even drinks together. Many banks now run AI governance committees with people from risk, compliance, IT, and business lines. I've witnessed these meetings. They can start awkwardly, a bit like forcing the football team and chess club to collaborate on a project, but this uncomfortable mingling is now essential. Risk teams must update frameworks for AI. Compliance needs to translate regulations into practical guidance.
And someone must be able to explain all this to board members.
Your Six-Step Survival Guide
If you're wondering how to navigate this mess, here's one possible roadmap:
- Take Inventory: List every single AI system in your organization. Yes, even that spreadsheet Dave swears is only using "machine learning." Map each system to its regulatory implications. Using AI for credit? Connect it to fair lending requirements.
- Increase Governance: Dust off your Model Risk Management framework and make sure it explicitly covers AI/ML. Independent validation, documentation, regular reviews — the works (see the drift-monitoring sketch after this list). And make sure someone can explain why your AI does what it does.
- Assign Accountability: Name a CAIO, another executive, or a committee responsible for AI oversight. Define the mandate clearly. Set up reporting channels for when things inevitably go sideways.
- Ensure People Talk to Each Other: Create regular forums where AI, security, compliance, and risk teams discuss upcoming projects and regulatory changes. Train your tech folks on elementary compliance and your compliance folks on basic AI concepts.
- Plan for Disasters: When (not if) your AI makes a headline-grabbing mistake or produces biased outcomes, what's your plan? How quickly can you pull the algorithm? How will you notify affected customers? What will you tell regulators? Figure this out now, not during the crisis.
- Stay Alert: The regulatory landscape for AI is evolving faster than explanations for crypto crashes. Assign someone to monitor developments and translate them into action items.
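On step two's "regular reviews": some of that monitoring can be automated. Below is a minimal sketch of a population stability index (PSI) check, a common way to flag when a model's inputs have drifted from what it was validated on. The bucket count, thresholds, and simulated data are conventional choices and invented numbers, not regulatory requirements.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               buckets: int = 10) -> float:
    """PSI between a feature's validation-time and current distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 investigate."""
    # Bucket edges come from the expected (validation-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range current values
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse buckets.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Simulated drift: applicant incomes at validation time vs. today.
rng = np.random.default_rng(7)
validation = rng.normal(60_000, 15_000, 10_000)
today = rng.normal(68_000, 15_000, 10_000)  # the distribution has shifted
psi = population_stability_index(validation, today)
print(f"PSI: {psi:.3f}")  # compare against the 0.1 / 0.25 rules of thumb
```

Wire a check like this into a scheduled job and route breaches to whoever you named in step three, and "ongoing monitoring" stops being a phrase in a policy document.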
The Bottom Line
Regulators aren't against innovation. They just prefer innovation that won't crash the financial system or discriminate against people. Reasonable enough. The banks that will thrive aren't those with the fanciest AI - they're the ones balancing innovation, responsibility, and compliance without treating any of them as second-class.
The mandate is clear: embrace AI innovation with eyes wide open to regulatory reality. Dream big, document bigger.
Now if you'll excuse me, I need to help update our Virtual CAIO model governance policy templates and tools. Again. Feel free to reach out if you want some help updating yours too.
OBS Global helps financial institutions develop and strengthen AI governance programs that align with evolving regulatory expectations. Our experts work with your teams to build clear, actionable roadmaps that support responsible AI adoption—without slowing innovation. Looking for guidance? Message us directly or review our Financial Services resources.
About the Author
Jordan Wiseman is a seasoned technology and security strategist with over 25 years of experience helping organizations navigate digital risk, compliance, and emerging technologies. As a Fellow at Online Business Systems and Senior Technical Advisor at XRSI, he focuses on responsible AI adoption, privacy, and ethical innovation. He speaks regularly on topics like AI governance, data protection, and cybersecurity resilience.