Everyone’s talking about AI models
Almost no one is talking about the required evidence layer underneath them
To all my banker and credit union friends: the regulators are coming, and they’ll be asking hard questions about your “cool AI.” They won’t think of it as innovation; they’ll think of it as risk:
“Can you show us how this AI was governed, validated, and monitored?”
“Can you reconstruct a particular AI‑influenced decision and prove nothing was altered after the fact?”
As an executive in a financial institution, the question only you can answer is: Are you ready?
Treasury’s AI report, the National Credit Union Administration’s AI resource hub, and the broader AI‑risk guidance all circle around the same idea: if AI touches lending, fraud, AML, or member communications, you’d better have logging, recordkeeping, and documentation that can stand up in an exam or investigation.
That’s what I mean when I talk about an “evidence layer” for AI:
You will be asked if you have a tamper‑evident trail of who did what, when, based on what data and policy, to which version of which model or workflow.
You will be asked if you have boundaries and signatures that make it obvious when something has been changed—or when something is missing.
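To make "tamper-evident trail" concrete: one common building block is a hash-chained, signed log, where each record binds the who/what/when/model-version details to the hash of the previous record, so altering or deleting any entry breaks every later link. The sketch below is illustrative only — the field names, the in-memory key, and the plain HMAC signature are stand-in assumptions, not how BetterSign or any particular product works (real systems use managed keys or HSMs, not a hard-coded secret).

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # illustrative only; production systems use managed keys/HSMs

def append_entry(chain, entry):
    """Append an entry that binds who/what/when/model-version to the
    hash of the previous record, then sign the resulting hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"entry": entry, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    record["sig"] = hmac.new(SECRET_KEY, record["hash"].encode(), "sha256").hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute every hash and signature; any edit, insertion,
    or missing record breaks the chain and returns False."""
    prev_hash = "0" * 64
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False  # a record was removed or reordered
        payload = json.dumps(
            {"entry": record["entry"], "prev_hash": record["prev_hash"]},
            sort_keys=True,
        ).encode()
        if record["hash"] != hashlib.sha256(payload).hexdigest():
            return False  # a record was altered after the fact
        expected_sig = hmac.new(SECRET_KEY, record["hash"].encode(), "sha256").hexdigest()
        if not hmac.compare_digest(record["sig"], expected_sig):
            return False  # signature doesn't match; record isn't trusted
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"who": "underwriter_42", "action": "approve_loan",
                   "when": "2024-06-01T14:03:00Z", "model": "credit-score-v3.1"})
append_entry(log, {"who": "system", "action": "adverse_action_notice",
                   "when": "2024-06-01T14:03:05Z", "model": "credit-score-v3.1"})

assert verify_chain(log)
log[0]["entry"]["who"] = "someone_else"  # tamper after the fact
assert not verify_chain(log)             # the chain now fails verification
```

The point of the sketch: "prove nothing was altered" stops being a story ("our logs are append-only, trust us") and becomes a check anyone can run.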
For most banks and credit unions today, that evidence layer is thin or non‑existent. What they have instead is:
AI pilots and vendor tools plugged into critical workflows.
Policies and PowerPoints that mention AI.
Screenshots, scattered logs, and tribal knowledge.
The gap that you, and almost everybody else, have is the ability to “prove it.” That gap is where exams, disputes, and breach investigations get ugly. That’s why I say AI isn’t just a model-risk problem; for banks and credit unions, it’s an evidence problem.
This is why my team has built BetterSign as a cryptographic evidence layer under AI‑touched approvals and decisions—not just another way to slap a signature on a PDF. It’s designed so that, when an examiner or a hostile expert witness asks, “How do you know this is what happened?”, you have a bounded, verifiable “packet of proof” instead of a story.
So here’s the question I’m asking every bank and credit union leader right now:
You have AI models. Do you have an “evidence layer” for them?
If your honest answer is “no” or “I’m not sure,” what do you think your examiner will say?
I’m pretty sure I know.