When Trust Becomes the Real Technology

End of Year Message 2025

Francis Michael, Chief Operating Officer, Gates AI (Gates Digital Pte Ltd)

As we close out 2025, I want to have a real conversation with you.

AI isn’t the shiny new toy anymore. It’s become something we actually depend on, and that changes everything. Because once people depend on something, they need to trust it. And here’s the thing about trust. Break it once, and you’ll spend years trying to earn it back. If you ever do.

This year, I watched the same pattern repeat itself over and over. AI kept getting faster, smarter, cheaper. More impressive demos. More capabilities. Meanwhile, the guardrails stayed pretty much where they were. Patchy, slow to catch up, and in too many cases, completely optional. That gap between what AI can do and how we’re managing it is where the danger lives.

We all saw fraud get disturbingly sophisticated. Voice cloning so good your mother couldn’t tell the difference. Deepfake videos that look completely real. Messages that seem to come from your colleague, your bank, your boss. And you know what worries me most? It’s not the technology itself. It’s how many people are absolutely certain they’d spot a fake. Most wouldn’t. I’ve seen seasoned executives, technical experts, people who should know better, all fooled.

Money laundering went to another level this year. Criminals are using AI to create synthetic identities that pass KYC checks. They’re generating fake business documents that look legitimate. They’re moving money through layered transactions so complex that traditional detection systems can’t keep up. What used to take organized crime syndicates months of planning now happens in days, sometimes hours.

Misinformation got scarier too. We’re not just talking about fake news headlines anymore. We’re talking about doctored images, manipulated video clips, manufactured evidence that spreads like wildfire while the truth is still getting its shoes on. This isn’t just a media literacy problem. When a society can’t agree on what’s real anymore, everything else starts falling apart.

But I also saw something change in a good way. Leaders started taking this seriously. Not because it sounded good in a press release, but because they had to. Security incidents stopped being theoretical. The question isn’t whether AI can be misused anymore. It’s how fast it will be exploited if we don’t lock this down.

So here’s what I need every leader to understand as we head into 2026.

If your AI system makes decisions that matter, if it moves money, controls access, influences what people believe, or touches anything critical, then you’re playing in the big leagues now. Whether you signed up for that or not.

And in the big leagues, having good intentions doesn’t cut it. That’s just the excuse you’ll wish you hadn’t said when everything goes sideways.

Here’s where Gates AI stands, and I’m not sugarcoating it. AI governance needs to be real. Not aspirational. Not a nice-sounding policy document gathering dust on a digital shelf. It needs to be measured, tested, monitored, audited, and enforced, just like cybersecurity, just like safety engineering. It needs teeth.

Look, Singapore has always prided itself on being ahead of the curve. We built our reputation on good governance, on systems that work, on being trustworthy in a region where trust can be hard to come by. That same discipline needs to apply to how we deploy AI. We cannot afford to be complacent here. The stakes are too high.

Regulators worldwide are waking up to this reality. They’re not moving in perfect lockstep, but the direction is crystal clear. They’re done accepting promises of responsible behavior. They want proof. Evidence. Accountability. And honestly, that’s exactly how it should be.

But let me be straight with you. Don’t build proper governance just because you’re scared of fines. Build it because the alternative is so much worse. It’s your reputation in ruins. Your operations grinding to a halt. Real people getting hurt. And trust, the kind of trust that took you years to build, gone in an instant. And it won’t come back just because you want it to.

So let me leave you with a challenge for 2026.

Stop treating AI like it’s just another feature you bolted on. Treat it like the system it is. Don’t just ask whether this is impressive. Ask whether you can control this when things go wrong. Build safety in from day one, not as an afterthought when you’re already in crisis mode. Stop assuming your vendors carry your accountability. They don’t. That’s on you. And don’t rely on people to catch what the system carefully hides. We can only catch what we can actually see.

At Gates AI, we’re committed to pushing for what the world actually needs. Trust you can verify, not trust you just have to take someone’s word for. That means properly assessing risks, actually testing systems, real-time monitoring, genuine incident response plans, and the ability to pull the plug fast when something goes wrong.

Because the future isn’t about putting AI everywhere.
It’s about building AI that deserves to be everywhere.

To our partners, clients, government counterparts, and our incredible team across every region, thank you. Thank you for choosing to do this the right way, even when shortcuts were tempting. Thank you for choosing responsibility over hype.

From my Gates AI family to yours, Merry Christmas to all who celebrate. And to everyone else, whatever this season means to you, I hope you find some peace, some rest, and some time with the people who matter most.

Here’s to a strong, purposeful start to 2026.
