Identity verification is not a new concept. If you’ve ever moved to a new state, you know the drill: go to the DMV and bring your birth certificate, state-issued license or passport, and a utility bill mailed to your current address. Beginning in the 1970s, in response to organized crime, lawmakers began extending these requirements to banks. These are generally referred to as Know Your Customer laws, or KYC. In recent years the same concept has been extended to age verification under anti-pornography laws in the US and EU.
Moving from law to technology, I’ve been evaluating identity verification platforms for Cardless ID. There are quite a few choices.
These are SaaS products that charge a fee for each verification, ranging from 80 cents to $2.50 depending on the scope. This makes good financial sense for a bank or other institution that can expect to earn that back from the customer over time. It makes zero sense for adult sites (or Bluesky), where only a tiny percentage of users will ever give them any money. Still, it may be a viable option for a nonprofit that has a wide enough donor base (more on that in a future post).
Searching for Self-Hosted
Given the sensitive information in driver’s licenses and passports, Cardless ID must be beyond reproach when it comes to personally identifiable information. In a perfect world, one where we can guarantee that no personal data ever leaves our sandbox, we would have an entirely self-hosted solution. So I set out to find something that would meet that need.
My search led me to a company called FaceOnLive. I am not linking to them on purpose, but you can find them easily enough. They have a very flashy website, a GitHub repo, and a Canadian business address. To test it out, I used their GitHub example to build a prototype. The app requires a lot of Firebase setup, so it took me a while to churn through it. However, the most important instruction — how to launch the actual server — was missing. Very strange! Usually it’s something like npm run dev, but that didn’t work. None of the standard commands worked, because they use an idiosyncratic setup I hadn’t seen before.
So I went to their website to get support. There is no support link, but there is a form to contact them, which I filled out. About 30 minutes later I received a WhatsApp message from someone named Zhu. It’s a little weird to be contacted that way, but maybe that’s how the young’uns do support these days. He explained over text that they could provide me with a license to install the software.
Installing facial verification tools without a security audit is like building a glass house during a hailstorm. The software could be quietly sending every verification to identity thieves! I noticed that their website does not have an About page listing any of the people involved, nor does it have a LinkedIn page.
I raised these concerns. “Before installing your software,” I said, “I want to know more about your company.” For a normal company, this is par for the course. Customers have questions, you want them to buy your product, so you bend over backwards to address their concerns. For security products in particular, it’s common to highlight that you adhere to well known standards from dull-but-important organizations like ISO, NIST, and AICPA.
Instead of reeling off how standards-compliant they are, he said I could just install it and monitor network traffic. Yeah, no. This is not an acceptable answer, and “Zhu” never bothered to tell me his full name or really anything at all about his company.
Pass.
Next I asked Gemini to create a research report of potential solutions. It came back with an impressive-looking document and listed several companies I already know about. But its number one recommendation for self-hosting was a company called KBY-AI, which I had not encountered before.
Like FaceOnLive, KBY-AI has a very flashy website, but no information about the people behind it. There is a LinkedIn icon, but clicking it results in a “not found” error. There is an X link that goes to someone with the handle JustinHong91852 and display name KBY-AI, but all of the tweets are just links to the website. No comments, no replies to other tweets, just links. His listed location is Essex. They also have a YouTube channel with demos, but there isn’t a person behind them, just short screen recordings set to music. The channel’s info section lists a phone number with an Inland Empire area code.
Pass.
This Will Only Get Worse
These “companies” probably used AI to create their products and websites, and it’s possible they have done this dozens or hundreds of times to target various sectors. They may even be covert operations by hostile nations. After 10+ years of fake news and deepfake operations targeting elections and policy debates, nations can now covertly create entire operating companies selling products that undermine our digital infrastructure. This has happened before, but the proliferation of “vibe coding” and micro-SaaS products raises the risk that neophyte coders will become unwitting pawns in the game, especially when an AI recommends the product, as Google Gemini did for me.
This experience reveals a brute fact about the current landscape: our digital infrastructure has no reliable chain of trust. I can’t verify my users without trusting a verification company, but I have only indirect means to do that. I’m just one developer trying to add age verification to a nonprofit project, but scale this problem up. Every hospital evaluating patient record systems. Every school district choosing learning platforms. Every local government contracting with IT vendors. Every bank partnering with fintech services.
If AI can generate such convincing but hollow companies this easily, it will cascade upward through every layer of our institutions. When businesses aren’t verifiable, business-to-business transactions become risky. When we can’t verify government contractors, our public infrastructure is vulnerable. When educational institutions can’t verify which companies they’re sharing student data with, we’re one data breach away from catastrophe.
We’re not facing just personal identity risk but an institutional one. In case you haven’t noticed, trust in institutions is at an all time low. Given the almost daily stories of data breaches, it seems like most of our security practices boil down to “trust me, bro.”
Verification Must Become the Norm
My proposal is that practically all electronic services of any scale should have verification baked in. Want to post on Instagram? Get verified. Feel like swiping on Tinder, leaving a Yelp review, starting a GoFundMe, hiring from Upwork, or voice chatting in Call of Duty? Get verified!
This applies doubly in the B2B realm. At every level of the supply chain, it should be possible for companies to verify that vendors are who they say they are.
(In fairness, many companies already do this, but it’s clumsy, repetitive and vulnerable to catastrophic attacks.)
Sounds exhausting, right? A little dystopian and authoritarian, even. Am I saying that everyone needs to go through the dance of taking a picture of your ID and a selfie just to post a review of the local sushi joint?
No, I am not. I am saying that you should verify once, thoroughly, with a trusted authority. Just as you don’t get a new driver’s license every time you buy a beer, you shouldn’t have to repeatedly submit sensitive documents to every service. Instead, you use cryptographic verification, which creates a secure, digital, tamper-proof seal as proof that the initial verification happened. This allows you to prove your authenticity without resharing your sensitive personal data or storing it in a central database.
How Crypto Solves This
The absolute worst part about cryptocurrency is that it contains the word “currency.” Mention crypto and people immediately think of scams. The terms “wallet” and “token” don’t help either—they reinforce the idea that this technology is fundamentally about money and speculation. (Web3 is Going Just Great provides a hilarious/scary running history of crypto scams.)
But strip away the currency baggage and you’re left with something actually useful: a way to prove you own something without having to share sensitive private information. This is what cryptographic signatures and public-key infrastructure do. You verify once – and only once – with a trusted authority such as a government agency, a bank, or in our case, with Cardless ID. After verification, they issue a tamper-proof credential. Then, whenever you need to prove that verification, you present a cryptographic proof that you hold that credential. The service you’re accessing never gets your underlying documents. They just get mathematical certainty that you are who you claim to be.
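To make the verify-once flow concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the issuer, the claim names, and the key. A real deployment would use asymmetric signatures (e.g. Ed25519) so that relying services can verify without holding the issuer’s secret; an HMAC stands in here only so the sketch runs with the standard library alone.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret. In production this would be an asymmetric
# private key held only by the issuer; HMAC is a stand-in for illustration.
ISSUER_KEY = b"issuer-demo-secret"

def issue_credential(claims: dict) -> dict:
    """Issuer checks documents once, then signs only the derived claims."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """A relying service checks the seal; it never sees the source documents."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

# The credential carries only the minimal claim, not a birthdate or ID scan.
cred = issue_credential({"over_18": True})
assert verify_credential(cred)

# Any tampering with the claims breaks the seal.
forged = {"claims": {"over_18": True, "admin": True}, "sig": cred["sig"]}
assert not verify_credential(forged)
```

The point of the sketch is data minimization: the site checking your age receives the signed claim “over_18,” never the driver’s license it was derived from.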

Infrastructure For the Next 100 Years
My ultimate goal is for Cardless ID to become obsolete. In the future, every credential will be issued on the blockchain. Every driver’s license, passport, diploma, property purchase, car registration, birth certificate, business incorporation, patent, vaccination record, background check, credit report, employment contract, and court judgment. All of it will be secured cryptographically, and proving it will be as simple as scanning a QR code.
Especially AI Slop
The same cryptographic infrastructure that verifies human identity can verify AI output. AI systems should cryptographically sign everything they create—images, videos, text, code, voice synthesis. When you receive a video clip, you could verify whether it was created by a camera or generated by an AI model. When you read a document, you could check whether a human or an AI wrote it. When you see a photo of a politician making a controversial statement, you could confirm it wasn’t a deepfake.
This works in both directions. Humans could cryptographically sign their own work to prove they created it without AI assistance. A journalist could prove their article wasn’t AI-generated. An artist could demonstrate their painting was made by human hands. A student could verify they wrote their own essay. A programmer could show they coded a solution themselves.
Nobody knows what’s real anymore.
Fake companies with AI-generated websites sell security products. Deepfake videos spread misinformation. AI-written articles masquerade as human journalism. Synthetic voices impersonate children to demand ransom. Job interviews become hack vectors.
Reality is manufactured. Trust is impossible because verification is impossible.
There is no putting the genie back in the bottle. The only solution is to make everything verifiable. Every AI model, every camera, every recording device, every human creator should cryptographically sign their output at the moment of creation. Then anyone can verify the provenance chain: Was this made by a human? Which AI created it? Has it been altered since creation? Who owns the rights to it?
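The provenance chain described above can be sketched as a hash chain: each step (capture, edit, re-encode) records the hash of the entry before it, so rewriting any earlier step invalidates everything after it. This is a simplified illustration, not a real standard such as C2PA, and the creator labels and content digests are invented for the example.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Stable digest of one provenance entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, creator: str, content_digest: str) -> None:
    """Link a new step to the hash of the previous one."""
    prev = entry_hash(chain[-1]) if chain else None
    chain.append({"creator": creator, "content": content_digest, "prev": prev})

def chain_valid(chain: list) -> bool:
    """Every entry must point at the exact hash of its predecessor."""
    return all(
        chain[i]["prev"] == entry_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

# Hypothetical history: a camera captures a frame, then an editor crops it.
original = hashlib.sha256(b"raw camera frame").hexdigest()
edited = hashlib.sha256(b"cropped frame").hexdigest()

chain = []
append_entry(chain, "camera:serial-1234", original)
append_entry(chain, "editor:crop-tool", edited)
assert chain_valid(chain)

# Rewriting history (claiming an AI made the capture) breaks every later link.
chain[0]["creator"] = "ai:image-model"
assert not chain_valid(chain)
```

In a real system each entry would also carry the creator’s cryptographic signature, so a verifier learns not just that the chain is intact but who vouched for each step.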
This is infrastructure for the next century.
The technologies to verify identity – indeed, to verify reality – exist today. We just need to continue building the infrastructure and relentlessly pound the table until every organization embraces it.
Postscript. Ultimately my attempt to find a self-hosted solution was fruitless. I ended up using Amazon’s Textract and Google’s Document AI. It turns out that these same platforms are used by most major ID verification companies, so I essentially skipped the middleman and went to the source.