
Infrastructure

April 07, 2026

11 mins read

The Life of a Transaction: Why Most AI Trust is Built on Sand


What I Learned About AI Trust from Reconciling over 100 Billion Transactions

by Wole Olorunleke

Over a month ago, I sat on a panel at AI Everything MEA in Cairo, discussing trust, transparency, and accountability in AI with two investors and one of the sharpest tech journalists in the business. I was the only operator on stage, the person who builds these systems rather than someone who evaluates them as investment opportunities.

I want to discuss what AI trust actually looks like from the inside.

The Question That Changed the Conversation

The moderator, Mike Butcher, asked a version of a question I hear constantly: What should investors look for when evaluating AI companies? The panel of investors, Yehia Houry from Flat6Labs and Abdelrahman Hassan from Enza Capital, offered thoughtful insights on governance frameworks and founder credibility.

Then it was my turn. And I decided to start with a story about SMS charges and their impact on computing Monthly Active Users (MAU).

At a financial institution like ours, customers earn interest each month under conditions set by the Central Bank. At the start of each month, the system debits SMS notification charges for the period and posts interest income for that same period (if conditions are met). These are system-generated transactions — the customer did not do anything. They did not open the app. They did not transfer money. They did not make a purchase.

But if your definition of “monthly active user” is “any customer with at least one transaction,” those customers show up as active. Here’s what happens. Marketing reports a high Monthly Active User (MAU) to the board. Product looks at the same data and sees low engagement — average transaction values are depressed because genuinely active customers are mixed with essentially dormant ones. Finance sees a third picture entirely because they’re tracking revenue-generating activity, and neither transaction is revenue-generating.

Three teams. Same underlying data. Three different stories.

Varied MAU definitions (Image generated by Claude)

And here is the part that matters for AI: if you build churn prediction, credit scoring, or personalisation on top of this data, the AI inherits the confusion. It treats a dormant customer with an SMS debit as if they were transacting daily. Every downstream model is making decisions based on a definition nobody agreed on. That’s not a model problem. The model is doing exactly what you told it to. It’s a governance problem. And it starts with something as simple as: what does “active” actually mean?
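The fragmentation above can be made concrete with a small sketch. The transaction schema and the three definitions below are illustrative, not Moniepoint's actual ones; the point is only that the same rows yield three different "active user" counts.

```python
# Illustrative transaction records: a "system" flag marks
# system-generated entries like SMS debits and interest postings.
transactions = [
    {"customer": "A", "type": "transfer",   "system": False},
    {"customer": "B", "type": "sms_charge", "system": True},   # system debit
    {"customer": "B", "type": "interest",   "system": True},   # system credit
    {"customer": "C", "type": "purchase",   "system": False},
]

# Marketing: any customer with at least one transaction is "active".
mau_marketing = {t["customer"] for t in transactions}

# Product: only customer-initiated transactions count as activity.
mau_product = {t["customer"] for t in transactions if not t["system"]}

# Finance: only revenue-generating activity counts (hypothetically,
# only purchases generate fee revenue here).
mau_finance = {t["customer"] for t in transactions
               if not t["system"] and t["type"] == "purchase"}

print(len(mau_marketing), len(mau_product), len(mau_finance))  # 3 2 1
```

Customer B never opened the app, yet Marketing's definition counts them as active while Product's and Finance's do not. Any model trained on the first definition inherits that confusion.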

I could feel the room shift when I told that story. Not because it was dramatic, but because every founder in the audience had either experienced it or was currently living with it.

How Governance Got Built Into Our Architecture

Full chain of custody (Image generated by Claude)

At Moniepoint, we process over 100 billion transactions across multiple entities in Nigeria and the UK. Our reconciliation systems, our analytics platforms, our financial reporting, all of it depends on data being trustworthy. But we did not start with a governance strategy. We started with a constraint.

In Nigeria, most banks don’t have APIs capable of handling the transaction volume we process (I know this is a bit of a humblebrag, but this is our reality). So bank statements had to be uploaded manually into our reconciliation system. Immediately, that creates a trust problem: if the same person who uploads a statement also approves it, you have no verification. So we built a maker-checker system. The uploader cannot be the approver.
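The maker-checker principle is simple enough to sketch in a few lines. This is a minimal illustration of the control, not Moniepoint's implementation; the class and field names are hypothetical.

```python
class MakerCheckerError(Exception):
    """Raised when a maker tries to act as their own checker."""

class StatementUpload:
    """A minimal maker-checker gate: the person who uploads a bank
    statement can never be the person who approves it."""
    def __init__(self, statement_id, uploaded_by):
        self.statement_id = statement_id
        self.uploaded_by = uploaded_by   # the maker
        self.approved_by = None          # the checker, once approved

    def approve(self, approver):
        if approver == self.uploaded_by:
            raise MakerCheckerError("uploader cannot approve their own upload")
        self.approved_by = approver

upload = StatementUpload("STMT-2026-04", uploaded_by="maker")
try:
    upload.approve("maker")      # same person: rejected
except MakerCheckerError:
    pass
upload.approve("checker")        # a second pair of eyes: accepted
```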

That was not a compliance decision. It was an operational necessity. But it established a principle that carried through everything we built afterwards. Settlement reports, which translate the bank’s view of a transaction into a form comparable to our internal records, have the same level of traceability. When our system auto-reconciles a transaction, it records not only the match but also which pipeline made the decision and who built that pipeline. If a data engineer created the tagging logic and that logic categorised a transaction a certain way, you can trace it back. This outcome exists because of this pipeline, built by this person, using this logic.
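A reconciliation record with that level of traceability might look like the sketch below. The field names and values are hypothetical; what matters is that provenance travels with the match itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MatchRecord:
    """One auto-reconciled match with its provenance: not just the
    result, but which pipeline decided, who built it, and what
    logic ran. Field names are illustrative."""
    internal_txn_id: str
    bank_statement_ref: str
    pipeline: str          # the pipeline that made the decision
    pipeline_author: str   # the engineer who built that pipeline
    rule: str              # the tagging/matching logic applied
    matched_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

match = MatchRecord(
    internal_txn_id="TXN-001",
    bank_statement_ref="STMT-2026-04/113",
    pipeline="settlement-recon-v3",
    pipeline_author="data-eng.ada",
    rule="exact amount + reference match",
)
```

Given a record like this, the question "why was this transaction categorised that way?" always has an answer: this pipeline, this author, this rule.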

Every transaction at Moniepoint has a full chain of custody. From the moment someone pays at a terminal, through settlement, through reconciliation, and onward into our ERP for financial reporting and analytics. I think of it as the life of a transaction. Like a note in a RioPY composition — from the moment it is struck to the second it resolves, every vibration is intentional. Remove one stage, and the harmony collapses into noise.

We did not build this because a regulator told us to. We built it because Nigerian banking infrastructure gave us no choice. And now that the world is calling for AI governance, traceability, accountability, and human oversight, we’re well on our way. Not (just) because we were visionary, but because the constraints forced good architecture.

The SQL Problem Nobody Talks About

There’s a moment in every growing organisation when data becomes central to how people do their jobs. At Moniepoint, that moment came as our finance team expanded. Everyone needed answers from data. The default solution was obvious: learn SQL. Write your own queries. Get your own numbers. And it worked, for a while.

Here’s what really unfolded. Finance professionals, experts brought in to analyse trends, shape strategy, and make key decisions, were dedicating hours to writing SQL queries, troubleshooting table joins, and determining which of three tables contained the correct revenue data. They became tactical instead of strategic. And worse: ten analysts writing their own queries against the same data, with slightly different logic, were getting slightly different answers. The same fragmentation problem as the MAU example, multiplied across the entire team.

That was when we realised the answer was not “make everyone learn SQL.” The answer was to build a layer where the data is already governed, already defined, and accessible without writing code. That’s what drove us to build a conversational analytics interface where a finance team member can ask “what is this month’s revenue by entity?” in plain English and get a trustworthy answer.

But — and this is the critical point I made on stage — our conversational analytics interface only works because the governance layer exists underneath. The canonical definitions. The data lineage. The shared vocabulary. Without governance, a conversational AI interface is just a chatbot writing bad SQL faster. The governance isn’t a feature of the AI. Governance is what makes AI possible.
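One way to picture that governance layer is as a registry of canonical metric definitions that the conversational interface can only retrieve, never invent. The metric name and SQL below are hypothetical, a sketch of the idea rather than our actual schema.

```python
# A sketch of a semantic layer: every metric resolves to one
# canonical, governed query, so every asker gets the same answer.
CANONICAL_METRICS = {
    "monthly_revenue_by_entity": (
        "SELECT entity, SUM(amount) AS revenue "
        "FROM transactions "
        "WHERE is_revenue_generating AND month = :month "
        "GROUP BY entity"
    ),
}

def resolve_metric(intent: str) -> str:
    """Return the single governed query for a parsed question intent.
    An ungoverned question fails loudly instead of producing bad SQL."""
    if intent not in CANONICAL_METRICS:
        raise KeyError(f"no canonical definition for {intent!r}")
    return CANONICAL_METRICS[intent]

sql = resolve_metric("monthly_revenue_by_entity")
```

The design choice is the failure mode: a question with no canonical definition raises an error instead of letting the AI improvise a definition of “revenue” on the fly.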

What AI Is Actually Doing (And Why It Needs Governance)

Different Layers to AI Data Governance (Image generated by Claude)

One thing I noticed during the panel: when people talk about “AI in fintech,” they tend to imagine chatbots or credit scoring models. The reality, at least in our world, is more infrastructural and less glamorous.

Our system automatically reconciles millions of daily transactions, matching internal records with bank data. It identifies, categorises, and often resolves exceptions without human help. Every automated match is fully auditable: we record the decision-making pipeline, logic, and data. This allows us to trace any incorrect match back to the failing rule or specific data, unlike relying on opaque AI outputs.

Traceability allows us to trust the automation. Without it, we’d need humans reviewing every transaction, which is impossible at scale. With it, humans review exceptions — the cases the AI flagged as uncertain — while trusting that the automated decisions are traceable and reversible. And the same principle extends downstream. Our revenue and expense assurance frameworks continuously validate whether transactions are correctly classified and whether amounts flow correctly across entities. Our conversational interface translates natural-language questions into structured queries. Each layer of AI capability sits on top of the same governance foundation. Remove the traceability, and you have a black box processing billions of naira. Add it, and you have an auditable, explainable system that a regulator, an investor, or an auditor can interrogate.
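The exception-based pattern described above can be sketched as follows. The matching key, schema, and tolerance are illustrative assumptions, not our production logic.

```python
def reconcile(internal_txns, bank_rows, tolerance=0):
    """Sketch of exception-based reconciliation: unambiguous matches
    are automated; anything uncertain is queued for human review."""
    by_ref = {}
    for row in bank_rows:
        by_ref.setdefault(row["ref"], []).append(row)

    matched, exceptions = [], []
    for txn in internal_txns:
        candidates = by_ref.get(txn["ref"], [])
        if (len(candidates) == 1
                and abs(candidates[0]["amount"] - txn["amount"]) <= tolerance):
            matched.append((txn, candidates[0]))   # auto-reconciled
        else:
            exceptions.append(txn)                 # flagged for a human

    return matched, exceptions

internal = [{"ref": "R1", "amount": 100}, {"ref": "R2", "amount": 50}]
bank = [{"ref": "R1", "amount": 100}]
matched, exceptions = reconcile(internal, bank)
# R1 matches automatically; R2 has no bank counterpart, so it goes
# to the exception queue for human review rather than a silent guess.
```

Humans spend their time only on the exceptions list, which is what makes the automation trustworthy at scale.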

The Honest Journey

The moment on the panel that felt most vulnerable was when I was asked whether founders should present a polished or an honest governance narrative.

I chose honest.

I told the room that Moniepoint’s AI capabilities began in different teams, with my team, Finance Systems, among the early adopters. We identified operational problems that could not be solved manually at our scale and built solutions. Reconciliation automation. Conversational analytics. Revenue assurance. These work. They are in production. They’re processing real money.

But they were built for one domain. The definitions, the data pipelines, the governance — they served finance. When we looked across the organisation, we saw other teams building their own versions, with their own definitions and assumptions. The AI worked in each silo. It didn’t compose across the company. So now we’re doing the harder work: building an enterprise data framework that centralises definitions, governance, and data standards across the organisation. It is not a finished story yet. It’s an active project.

I said on stage: “Most companies pitch Phase 3 while living in Phase 1. They sell a vision of total AI governance while their internal teams are still arguing over which spreadsheet is the ‘source of truth.’ At Moniepoint, we are firmly in Phase 2: building the foundation. It’s unglamorous, it’s difficult, and it’s the only reason our AI is actually worth the compute it runs on.” The investors on the panel validated this immediately. Both said they would rather hear an honest account of where a company is and where it’s going than a polished narrative that falls apart in due diligence. One of them made a point I’ll remember: a company that can articulate its gaps has a plan to close them.

The “Operator’s Audit”: 3 Questions Every AI Investor Should Ask to Understand Data and AI Governance

Operator’s Audit (Image created by Gemini)

If you are evaluating an AI company — or any company claiming “AI-driven” growth — ignore the pitch deck. Ask these three questions instead. They will tell you more about the company’s viability than any accuracy benchmark.

The “Triangulation” Test

The Ask: “Show me your Revenue (or MAU) metric across three different dashboards: Finance, Product, and the Board Report.”

  • The Goal: If the numbers match, they have Data Governance.

  • The Red Flag: If the numbers differ and the team spends ten minutes explaining “why the logic is different for Marketing,” they don’t have an AI problem — they have a Truth Problem. You cannot build reliable AI on top of a fragmented truth.

The “Semantic” Test

The Ask: “Who owns the definition of ‘Active User’ or ‘Revenue’?”

  • The Goal: You’re looking for a single source of accountability — a “Data Steward” or a centralised governance layer.

  • The Red Flag: If the answer is “It depends” or “We all kind of just know,” the company doesn’t have data governance; it has Data Opinions. AI trained on opinions is just a high-speed hallucination engine.

The “Post-Mortem” Test

The Ask: “Tell me about the last time your data was wrong. How did you catch it, and what was the ‘Chain of Custody’ for the fix?”

  • The Goal: Genuine governance requires Traceability. A mature company can point to the specific pipeline, the specific logic, and the specific person who corrected it.

  • The Red Flag: A company that says “our data is never wrong” either hasn’t deployed at scale or isn’t looking. If they can’t trace an error, they can’t trust an insight.

Governance Is the Engine, Not the Brake

Governance is the engine, not the brake (Image generated by Gemini)

If there’s one idea I want people to take from that panel (and from this article), it’s this: governance is not overhead. It’s not a compliance cost. It’s not a box you tick for regulators.

Governance is the engineering discipline that makes everything else possible. AI capabilities, investor trust, regulatory compliance, operational scale — all of it sits on top of whether your data is governed, your decisions are traceable, and your definitions are shared.

The companies that build this into their architecture from day one don’t just avoid regulatory risk. They move faster than everyone else when it’s time to deploy AI. Their teams ask questions and get consistent answers. Their models train on data that means what it says. Their audits are straightforward because the traceability already exists. Their competitors who skipped the governance step? They’re retrofitting. And retrofitting governance onto systems that were never designed for it is expensive, slow, and fragile. We learned this the hard way, through the constraints of Nigerian banking infrastructure and the operational demands of processing over 100 billion transactions. The constraints forced good architecture. The architecture enabled AI. And the AI, built on governed data, is trustworthy — not because we say so, but because anyone can trace any decision back to its source.

That’s what trust looks like. Not a pitch deck. Not a policy document. A production system where every transaction has a life story, and you can read it from beginning to end.

About the Author

Wole Olorunleke is the Vice President, Finance Systems at Moniepoint, where his team builds the systems that process and reconcile over 100 billion transactions across Nigeria and the UK. He spoke on the “Trust, Transparency & Accountability: What Investors Want from AI Founders” panel at AI Everything MEA Egypt 2026 in Cairo, alongside Yehia Houry (Flat6Labs), Abdelrahman Hassan (Enza Capital), and moderator Mike Butcher (TechCrunch).

