Cybersecurity Awareness Month: AI and Deepfakes
October 22, 2025
Fred Smith
AI and Deepfakes: The New Frontier of Cyberthreats
Introduction
As artificial intelligence (AI) becomes more sophisticated and accessible, cybercriminals are leveraging these same technologies to create unprecedented threats. The rise of AI-generated phishing emails, deepfake videos, and voice cloning represents a fundamental shift in the cybersecurity landscape that every member of our University of Maryland, Baltimore (UMB) community must understand.
The Current AI Threat Landscape
The statistics paint a concerning picture. According to recent cybersecurity research, 67.4 percent of all phishing attacks in 2024 used some form of AI, and there has been a 442 percent increase in voice phishing attacks driven by AI-generated impersonation tactics. Even more alarming, 97 percent of organizations report difficulty verifying identity in our current digital environment. AI has transformed traditional cyberthreats in several key ways.
Enhanced Phishing Campaigns
Cybercriminals now use large language model (LLM) tools such as ChatGPT to create grammatically perfect, contextually relevant phishing emails that are virtually indistinguishable from legitimate communications. These AI-generated messages can be personalized at scale, making them far more convincing than traditional phishing attempts.
Deepfake Technology
Sophisticated deepfake videos and audio can impersonate university administrators, faculty members, or students with startling accuracy. In one documented case, criminals used deepfake technology during a video conference call to steal $25 million from a multinational company by impersonating the CFO and other executives.
Voice Cloning
With just a few minutes of reference audio, attackers can create convincing voice clones for phone-based social engineering attacks. These “vishing” (voice phishing) attacks are particularly dangerous because they exploit our natural tendency to trust familiar voices.
Why These Attacks Target Universities
- Academic environments prioritize collaboration and information sharing, creating multiple entry points for attackers.
- Students, faculty, staff, and visitors create a complex ecosystem where unusual activity might go unnoticed.
- Universities house sensitive research, financial information, and personal data of thousands of students and employees.
- Academic leaders and researchers are attractive targets for impersonation attacks.
Protection Strategies
- If you receive an unusual request via email or phone, verify it through a separate communication method. Call the person directly using a phone number you already know to be genuine (not one provided in the message), or visit them in person if possible.
- While deepfakes are becoming more sophisticated, they often contain subtle flaws such as unnatural eye movements, inconsistent lighting, or slight audio delays. Trust your instincts if something feels “off.”
- For sensitive requests, establish verification procedures within your department. Create code words or ask personal questions that only the real person would know.
- AI threats evolve rapidly. Follow cybersecurity updates from the UMB IT security team and reputable sources to stay current on emerging threats.
University-Specific Guidelines
- Never share research data, especially sensitive or proprietary information, based solely on email or phone requests, even if they appear to come from authorized personnel.
- All financial requests exceeding our established thresholds must be verified through in-person or authenticated channels.
- Faculty and staff should never share student information based on unverified email or phone requests, regardless of how convincing they appear.
Reporting Suspicious Activity
If you encounter what you believe might be an AI-generated threat:
- Do not respond to or act on the suspicious communication
- Document the incident with screenshots or recordings if safe to do so
- Report immediately to CITS Security and Compliance
- Inform colleagues who might be targeted by similar attacks
Building a Culture of Healthy Skepticism
In our expanding digital world, verification is not paranoia; it's a reasonable precaution. We must adjust our thinking to include the possibility that any digital communication could be artificially generated. This doesn't mean we should abandon trust entirely, but rather that we should verify important requests through established channels.
Conclusion
The integration of AI into cybersecurity threats represents both a challenge and an opportunity. While these technologies create new vulnerabilities, awareness and proper training can effectively mitigate these risks. By understanding how AI-enhanced attacks work and implementing appropriate verification procedures, our University community can maintain both security and the collaborative spirit that defines academic excellence.
Remember: When in doubt, verify. A few extra minutes spent confirming the authenticity of an unusual request could save thousands of dollars and protect sensitive data that affects our entire UMB community.