Overview:
The U.S. Federal Bureau of Investigation (FBI) has issued a warning about intensifying cyber criminal attempts to weaponize artificial intelligence (AI) for financial fraud schemes.
At present, the financial and payments sector reports that AI is involved in 42.5% of all detected fraud attempts, roughly 30% of which succeed.
Earlier this year, a PYMNTS report revealed that 62% of U.S. financial institutions with over $5 billion in assets have seen an increase in financial crimes, highlighting institutions’ vulnerability to AI-powered threats.
As the sophistication and scale of financial fraud schemes increase, officials encourage both organizations and individuals to proactively mitigate risks.
How cyber criminals are weaponizing AI for financial fraud:
- To coax people into believing a fictitious narrative, cyber criminals are attaching AI-generated images to fake or synthetic social media profiles. Using these profiles, criminals reach out to victims, peddle lies, and dupe people into acting against their best interests or those of their employer.
- To directly manipulate people into clicking on links and transferring money, cyber criminals are distributing AI-enhanced phishing scams en masse. Attackers are deploying highly credible AI-generated text, photos, video, and audio that can fool even the savviest email scam detectors, including secure email gateways (SEGs).
- Attackers are also using AI-generated audio to impersonate well-known public figures or people who are close to a targeted victim. Voice cloning tools can replicate a person's linguistic patterns, tone, and speaking style with up to 95% accuracy. When successful, these financially motivated attacks can swing markets, trigger stock drops, or drain personal financial accounts.
Combatting AI-powered, financially motivated cyber scams:
For individuals:
- The FBI recommends that family members create a secret word or phrase that can help verify their identities.
For example, if a cyber criminal uses voice cloning technology to impersonate a child calling his or her parents while in trouble, the parents can ask for the code word. If the caller doesn’t know the code word, the parents can assume that the call is a hoax.
- The FBI also encourages people to search for “subtle imperfections” in suspicious images and videos. At present, AI is notoriously poor at rendering ears, hands, fingers, and toes. Does the image contain improperly rendered depictions of extremities?
In addition, people are advised to listen closely to tone and word choice in order to detect potential voice cloning.
For organizations:
- Implement email and collaboration tool protection that identifies AI-powered cyber threats by applying machine learning algorithms to email content, user behavior, and other data points.
In turn, your organization will be able to detect patterns indicative of sophisticated attacks, such as AI-generated phishing emails, even when the content appears believable at first glance.
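To make the idea concrete, the following is a minimal, purely illustrative sketch of how content and behavioral signals might be combined into a single phishing-risk score. The keyword list, weights, and function names are assumptions for illustration only; they do not represent how any commercial product actually works.

```python
import re

# Hypothetical illustration: a toy phishing-risk scorer combining
# content signals (urgency/payment language) with behavioral signals
# (unfamiliar sender, mismatched reply-to header). Weights are arbitrary.

URGENCY_TERMS = {"urgent", "immediately", "wire", "transfer", "verify", "password"}

def score_email(subject: str, body: str, sender_is_first_time: bool,
                reply_to_differs: bool) -> float:
    """Return a phishing-risk score in [0, 1] from simple heuristics."""
    text = f"{subject} {body}".lower()
    words = set(re.findall(r"[a-z]+", text))
    score = 0.0
    # Content signal: urgency and payment language common in phishing lures.
    score += 0.2 * min(len(words & URGENCY_TERMS), 3)
    # Behavioral signals: first-time sender, reply-to differs from sender.
    if sender_is_first_time:
        score += 0.2
    if reply_to_differs:
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    risky = score_email("URGENT: wire transfer needed",
                        "Please verify your password immediately.",
                        sender_is_first_time=True, reply_to_differs=True)
    benign = score_email("Lunch on Friday?", "Shall we try the new place?",
                         sender_is_first_time=False, reply_to_differs=False)
    print(risky > benign)  # True
```

Production systems replace hand-written rules like these with trained models over far richer feature sets, but the principle of fusing multiple weak signals into one risk decision is the same.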
Check Point’s Harmony Email & Collaboration tooling has all of the aforementioned capabilities and more. With 50 AI-based engines, impressive threat intelligence capabilities, and connection via API, the tool stands apart from those offered by competitors.
Learn more about leading-edge email security. Speak with a representative today and get a demo.