An artificial intelligence-powered website that churned out thousands of fake passports and driver’s licenses has landed its alleged operator in federal court.
Yurii Nazarenko, a 27-year-old Ukrainian national, pleaded guilty to running OnlyFake, a subscription-based platform that generated more than 10,000 counterfeit identification documents for customers worldwide.
“OnlyFake’s manufacture of fraudulent IDs and other documents puts us all at risk and must be stopped,” said U.S. Attorney Jay Clayton in the DOJ press release.
Inside the AI-Powered Fake ID Operation
Federal prosecutors allege that OnlyFake enabled customers to create realistic digital replicas of U.S. driver’s licenses, passports, Social Security cards, and IDs from roughly 56 other countries.
Users could customize personal details such as name, date of birth, and identification numbers, or opt for randomized identity information.
The finished product could be rendered to resemble either a flatbed scan or a tabletop photograph — formats commonly used during digital identity verification.
Authorities allege that the primary purpose of these AI-generated documents was to bypass Know Your Customer (KYC) verification requirements at banks and cryptocurrency exchanges.
KYC controls, mandated under the USA PATRIOT Act, are designed to prevent money laundering, terrorist financing, and other financial crimes by verifying customer identities during account onboarding.
By offering counterfeit documents engineered to withstand automated document checks, OnlyFake reportedly targeted weaknesses in digital identity verification workflows used across the financial sector.
The platform accepted payment only in cryptocurrency and reportedly offered bulk discounts for orders of up to 1,000 fake documents at a time, indicating a commercial-scale operation.
Undercover FBI agents conducted multiple controlled purchases between May and June 2024, acquiring counterfeit New York state driver’s licenses, U.S. passports, and a Social Security card to confirm the site’s capabilities.
Prosecutors state that artificial intelligence enabled the automation and scalability of the operation.
The system was designed to replicate official formatting, typography, and visual security elements associated with authentic government-issued IDs. This automation allowed customers with little technical expertise to generate convincing identity documents on demand.
Investigators further allege that Nazarenko attempted to conceal financial activity by routing cryptocurrency payments through multiple wallets and deleting email records after media coverage in early 2024 drew attention to the site.
He was extradited from Romania in September 2025, has agreed to forfeit $1.2 million in proceeds, and faces a maximum sentence of 15 years in prison. Sentencing is scheduled for June 2026.
Defending Against AI-Enabled Fraud
As AI-generated documents and synthetic identities become more sophisticated, traditional KYC controls are no longer sufficient on their own.
Fraud actors are increasingly using automation, deepfake media, and large-scale account creation tactics to bypass onboarding safeguards.
Organizations must respond with layered verification, continuous monitoring, and adaptive risk controls that evolve alongside emerging threats.
- Implement multi-layer identity verification that combines biometric liveness checks, document authenticity analysis, and cross-referencing with trusted external databases.
- Deploy AI-driven detection tools to identify synthetic media, deepfake imagery, and repeated document templates used in large-scale fraud attempts.
- Use adaptive, risk-based onboarding controls that trigger enhanced due diligence for high-risk geographies, devices, or transaction patterns.
- Monitor for bulk account creation, shared device fingerprints, and anomalous onboarding behavior that may indicate coordinated fraud activity.
- Apply graduated account privileges and stepped-up verification before enabling high-value transactions or cryptocurrency withdrawals.
- Strengthen post-onboarding monitoring with behavioral analytics and transaction surveillance to detect early signs of money laundering or synthetic identity abuse.
- Require phishing-resistant multi-factor authentication and enforce least-privilege access to reduce account takeover and internal misuse risks.
- Continuously audit KYC workflows and test incident response plans through red team simulations to ensure the organization can detect and respond effectively to AI-enabled identity fraud.
These steps help strengthen identity assurance, reduce fraud exposure, and improve resilience against AI-enabled identity abuse.
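In practice, controls like these are often combined into a risk-scoring step at onboarding. The Python sketch below shows one hypothetical way to fold document, biometric, device, and geographic signals into an approve / enhanced-due-diligence / reject decision. All signal names, weights, and thresholds here are illustrative assumptions, not a reference implementation or any vendor's actual API.

```python
from dataclasses import dataclass


@dataclass
class OnboardingSignals:
    """Hypothetical signals collected during account onboarding."""
    doc_authenticity_score: float  # 0.0-1.0 from document-analysis tooling
    liveness_passed: bool          # result of a biometric liveness check
    device_reuse_count: int        # accounts already seen on this device fingerprint
    geo_risk: float                # 0.0-1.0 jurisdiction risk rating


def onboarding_decision(s: OnboardingSignals) -> str:
    """Return 'approve', 'enhanced_due_diligence', or 'reject'."""
    # Hard failures reject outright: a failed liveness check or a
    # low-confidence document is not something a risk score should average away.
    if not s.liveness_passed or s.doc_authenticity_score < 0.5:
        return "reject"

    # Weighted risk score; the weights are illustrative only.
    risk = 0.0
    risk += (1.0 - s.doc_authenticity_score) * 0.40
    # Capped device-reuse term: shared fingerprints are a bulk-signup signal.
    risk += min(s.device_reuse_count / 5.0, 1.0) * 0.35
    risk += s.geo_risk * 0.25

    # Mid-range scores route to enhanced due diligence rather than approval.
    if risk >= 0.35:
        return "enhanced_due_diligence"
    return "approve"
```

The key design choice is that ambiguous cases escalate to human review instead of being auto-approved, and that hard biometric or document failures short-circuit the score entirely, so a fraudster cannot offset a fake document with otherwise clean-looking signals.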
AI-Powered Identity Fraud Is Scaling Fast
The OnlyFake case illustrates how rapidly AI can scale identity fraud, lowering the technical barrier for criminals while increasing pressure on financial institutions and digital platforms.
As synthetic documents become more convincing and easier to generate, compliance-driven identity checks alone will not be enough to manage risk.
This shifting threat landscape is driving organizations to explore zero trust solutions that continuously verify identity and access beyond initial onboarding.
