FinCEN Asks: Are Your Customers Deepfaking It?
“Can you create a New York driver’s license for a John Brown with brown hair and brown eyes, 5’11”, 200 lbs, living at 123 State Street in New York? Also, a bank statement from XYZ Bank displaying $235,887 in the account, with recent transactions for the last month?”
FraudGPT[1] can spin that up in a flash for criminals, who can then provide synthetic identification documents, photos, and even videos to financial institutions to open accounts. Financial institutions, for their part, have increasingly highlighted this trend in suspicious activity reports, according to a November 13, 2024, press release from the Financial Crimes Enforcement Network (FinCEN).
In the release, FinCEN warns financial institutions that bad actors have opened financial accounts using deepfakes created with generative artificial intelligence (GenAI) to evade identity verification and authentication measures. The criminals then use the accounts to launder funds and to engage in other illicit activity.
In a related alert, FinCEN advises financial institutions on how to detect deepfakes, provides sample red flags, and reminds financial institutions of their reporting requirements under the Bank Secrecy Act.
Per the alert, FinCEN notes certain indicators that warrant additional scrutiny by a financial institution. At account opening, those indicators might include inconsistencies in the customer’s identification documents or customer inability to authenticate an aspect of their profile. For an active account, a financial institution might consider additional due diligence where a customer accesses an account from an IP address inconsistent with the customer’s profile, where the account has coordinated activity among multiple similar accounts, or where there are high volumes of chargebacks or rejected payments.
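The monitoring indicators above can be expressed as simple rule checks. The sketch below is illustrative only, not FinCEN guidance: the field names, thresholds, and geolocation comparison are assumptions chosen to mirror the indicators described in the alert.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Hypothetical fields for illustration only
    customer_country: str          # country on the customer's profile
    login_countries: list          # countries geolocated from recent session IPs
    chargeback_rate: float         # share of payments charged back or rejected
    linked_similar_accounts: int   # accounts sharing devices, addresses, etc.

def due_diligence_flags(act: AccountActivity,
                        chargeback_threshold: float = 0.10,
                        linkage_threshold: int = 3) -> list:
    """Return any red-flag indicators this activity trips (assumed thresholds)."""
    flags = []
    if any(c != act.customer_country for c in act.login_countries):
        flags.append("IP address inconsistent with customer profile")
    if act.chargeback_rate > chargeback_threshold:
        flags.append("high volume of chargebacks or rejected payments")
    if act.linked_similar_accounts >= linkage_threshold:
        flags.append("coordinated activity among multiple similar accounts")
    return flags

# Example: a US-profiled customer logging in from abroad with other anomalies
activity = AccountActivity("US", ["US", "RO"], 0.22, 4)
print(due_diligence_flags(activity))  # trips all three indicators
```

In practice these checks would feed a case-management queue for enhanced due diligence rather than an automatic decision; the thresholds would be tuned to the institution's own risk profile.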
FinCEN's alert enumerates additional red flags as well.
In the event of a suspected deepfake image, the financial institution can undertake reverse image searches and other open-source research to see whether the photo matches a face in a gallery of faces created by GenAI. Financial institutions might also, on their own or through third-party vendors, examine the image’s metadata and check for specific manipulations.
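One coarse metadata check a reviewer or vendor tool might run: genuine photographs usually carry camera EXIF metadata in a JPEG's APP1 segment, while many GenAI-produced images do not. The sketch below scans a JPEG's marker segments for an Exif block; it is a simplified illustration (real tooling inspects far more than presence or absence), and its absence is one signal warranting scrutiny, not proof of a deepfake.

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for an APP1 'Exif' block."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):          # SOI marker: is this a JPEG?
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                              # start of scan: headers are over
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 8] == b"Exif":
            return True                                 # APP1 segment holding EXIF data
        i += 2 + length                                 # skip to the next marker
    return False

# Minimal synthetic JPEG byte strings for illustration
exif_payload = b"Exif\x00\x00"
with_exif = (b"\xff\xd8" + b"\xff\xe1"
             + (2 + len(exif_payload)).to_bytes(2, "big")
             + exif_payload + b"\xff\xd9")
without_exif = b"\xff\xd8\xff\xd9"
print(has_exif_segment(with_exif), has_exif_segment(without_exif))  # True False
```

A production pipeline would pair a check like this with reverse image searches, GenAI-face galleries, and pixel-level manipulation detection, since metadata is trivial for a sophisticated actor to forge or strip.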
The FinCEN alert regarding abuse of GenAI ties into two of FinCEN's anti-money laundering and countering the financing of terrorism priorities: cybercrime and fraud. FinCEN issued the priorities pursuant to the Anti-Money Laundering Act of 2020 to highlight threats to the U.S. financial system and national security. The alert also aligns with the Department of the Treasury's larger effort to provide financial institutions with resources to combat challenges arising from the use of artificial intelligence generally.
[1] FraudGPT is a malicious, subscription-based artificial intelligence product that can generate deceptive content, which may be useful to criminals, e.g., in conducting cyberattacks or scams. Among other things, FraudGPT can create phishing pages and write malicious code.