Personal Risk Insights
Learn How AI Is Transforming Technology and Accelerating Cyber Threats
OCTOBER 7, 2025
In 2025, cybercrime escalated to an estimated $10.5 trillion in damages globally.1 Artificial intelligence (AI) is a key driver, transforming the digital landscape with both innovation and new vulnerabilities.
Cybercriminals now use AI for faster, targeted attacks, synthetic identities, deepfakes, and personalized scams. Meanwhile, AI-powered surveillance tools raise privacy concerns by collecting sensitive data, often without consent. To counter these threats, individuals and organizations must adopt proactive strategies, including AI-driven security, ethical data practices, and strong regulatory frameworks.
AI Cyberattacks
Approximately 40% of cyberattacks are now powered by AI.2 Malicious actors use AI to extract valuable information and streamline attack execution, making attacks faster, more scalable, and harder to detect, and allowing mass exploitation with minimal human oversight. Attackers can simultaneously target thousands of systems while customizing attacks to individual victims.3
AI and Identity Theft
AI is increasingly being used by cybercriminals to facilitate identity theft. AI-generated content can mimic individuals, making impersonation easier. It’s often used in the following types of crimes:
Deepfakes. AI-generated deepfakes are powerful tools for impersonation, enabling cybercriminals to mimic real individuals. Using advanced machine learning techniques, cybercriminals produce realistic audio and video content that replicates a person’s voice, facial expressions, and mannerisms with alarming accuracy. These synthetic representations are increasingly being used to bypass security protocols — particularly biometric authentication systems that rely on voice or facial recognition.4
Deepfakes are fueling account takeovers, financial fraud, and social engineering attacks. By exploiting people’s trust in familiar faces and voices, attackers can manipulate victims into revealing sensitive information, authorizing transactions, or granting access to secure systems.5
Synthetic identities. Cybercriminals are also using AI to fabricate synthetic identities. A synthetic identity is a digital persona that combines real and fictitious information. Stolen personal data, such as names, addresses, and Social Security numbers, is combined with fabricated elements to create profiles that appear authentic. These convincing profiles can bypass fraud detection, making them difficult to trace.
Unlike stolen identities, they aren’t tied to a single person. Synthetic identities can be used to open bank accounts, apply for loans, or conduct other financial transactions, often remaining undetected until substantial damage has been done.
As AI continues to evolve, the creation of synthetic identities is becoming more sophisticated and scalable.6 The result is a convincing digital footprint that can pass initial verification checks and evade traditional fraud detection systems.
Other AI-powered crimes. AI-powered tools can help criminals execute fraudulent schemes by enabling greater automation, precision, and scale. By analyzing vast datasets, AI can identify patterns in consumer behavior, flag vulnerable targets, and craft highly personalized scams that significantly increase the likelihood of success. These capabilities allow cybercriminals to operate more efficiently and adapt their tactics in real time. AI-driven fraud now accounts for over 43% of fraudulent activities in the financial space. As AI continues to evolve, so does its potential to disrupt traditional fraud prevention strategies.7
Privacy and Surveillance
AI systems are deeply intertwined with personal data, raising significant privacy concerns. To function effectively, these technologies rely on continuous access to sensitive information — such as browsing history, geolocation, voice recordings, and social media activity. While this data enables personalization and an improved user experience, it also allows for intrusive surveillance and unauthorized data collection.
Facial recognition technologies and smart devices amplify these risks. Many of these systems track individuals in real time, often without explicit consent, capturing private conversations, movements, and daily routines. The line between convenience and intrusion becomes blurred, especially when users are unaware of how their data is being used or shared. The widespread deployment of AI in consumer electronics, public spaces, and online platforms makes privacy increasingly difficult to maintain. Individuals may find themselves exposed to profiling, behavioral manipulation, and even identity theft.
Prevention and Protection: Building AI-Resilient Defenses
To counter these threats, USI Insurance Services recommends that individuals and organizations adopt a multilayered approach to security:
- AI-powered threat detection. Use machine learning to detect anomalies in behavior, network traffic, and activity that may signal fraud or intrusion. Examples of this technology include CrowdStrike Falcon, IBM QRadar Advisor with Watson, and Darktrace Enterprise Immune System.
- Privacy-first design. Implement systems that minimize data collection, anonymize user information, and provide clear consent mechanisms.
- Deepfake detection tools. Deploy technologies that can identify manipulated media and verify the authenticity of communications. Examples of these include DuckDuckGoose, Reality Defender, and DeepTrace.
- Education and awareness. Learn how to recognize AI-enhanced scams, phishing attempts, and suspicious activity. For practical tips and practices you can easily implement to protect yourself, see our article, AI’s Impact on Cybercrime: What You Need to Know Right Now.
- Cyber insurance. Cyber insurance helps in the event of a loss by covering financial losses, system recovery costs, and legal fees.
- Regulatory compliance. Stay aligned with evolving data protection laws and emerging AI governance frameworks. To learn more about this, read our article, AI Risks: Safeguarding Your Organization From Emerging Cyber Threats.
For further information on cybersecurity, review USI’s Cyber Best Practices Checklist.
How USI Can Help
USI’s personal risk team is here to help you manage your exposure to AI-related threats. To request a cyber assessment, explore cyber insurance, or receive a comprehensive, customized risk management plan, contact us at personalriskservice@usi.com.
Sources:
1 Cybercrime Statistics 2025 | BD Emerson
2 deepstrike.io/blog/cybercrime-statistics-2025
3 80% of ransomware attacks now use artificial intelligence | MIT Sloan
4 Identity theft is being fueled by AI & cyber-attacks | Thomson Reuters
5 The Evolving Threat: How AI Scams Are Targeting Your Identity | Forbes