Phishing has been the number one attack vector for over a decade. But until now, phishing had a ceiling. Attackers had to write emails manually, or use templates that email filters could fingerprint and block. Spelling mistakes, awkward phrasing, and reused subject lines made many phishing attempts detectable — if not by software, then by trained employees.
That ceiling is gone. Generative AI — specifically large language models (LLMs) — is powering a new class of phishing attacks that are polymorphic, context-aware, and generated on the fly. No two payloads are the same. No signature matches. And the attacks are getting harder to distinguish from legitimate business communication.
This isn't theoretical. It's happening now, and it changes what cybersecurity has to look like for every business.
What Polymorphic AI Phishing Actually Is
Traditional phishing campaigns work like spam — blast the same email to thousands of addresses and hope a few people click. Security vendors have gotten very good at detecting these. They maintain databases of known phishing templates, flag emails with suspicious patterns, and quarantine messages that match known signatures.
Polymorphic phishing breaks that model entirely. Instead of sending one template to many people, the attacker uses an LLM to generate a unique email for every single recipient. The subject line is different. The body text is different. The tone, vocabulary, and even the pretext for clicking the link — all different. There's no signature to match because every email is a one-off.
Here's what makes it worse: the LLM can be instructed to mimic a specific writing style. It can be fed context about the target's company, industry, and role. A phishing email to an accountant at a Calgary construction firm will read completely differently from one sent to a paralegal at a law firm — because the AI is generating each one with context about who the recipient is and what would be most convincing to them.
How Attackers Are Using LLMs Right Now
Generating Unique Email Content at Scale
The simplest use case: an attacker feeds an LLM a prompt like "Write a professional email from an IT department asking the recipient to verify their Microsoft 365 credentials due to a security update." They run this prompt thousands of times with slight variations. Every email is grammatically perfect, professionally written, and unique. Filters that rely on matching known phishing text see nothing familiar.
Real-Time Phishing Page Generation
This is the JavaScript angle that's genuinely alarming. Attackers are embedding small scripts on compromised websites or inside malicious browser extensions. When a victim visits the page, the script makes an API call to an LLM endpoint (sometimes a commercial API accessed with a stolen key, sometimes an open-source model hosted on rented infrastructure) and generates a phishing page on the fly.
The generated page can pull context from the DOM. If the victim is logged into Microsoft 365, the phishing page mimics a Microsoft login. If they're on a banking site, it mimics the bank. The HTML, CSS, and even the copy on the page are generated in real time for that specific victim. It's a bespoke attack surface that exists only for a few seconds and never appears the same way twice.
Why this matters for your business: URL blocklists and safe-browsing databases are useless against a phishing page that's generated dynamically and never existed before. The page has no history, no reputation score, and no known signature. By the time it could be reported, it's already gone.
Voice and Video Deepfakes Combined with Phishing
The next evolution is multimodal. Attackers are combining AI-generated emails with AI-generated voice calls. An employee receives a phishing email that appears to come from the CFO. Ten minutes later, they receive a phone call from what sounds like the CFO's voice, confirming the instructions in the email. The voice is synthesized from publicly available recordings — earnings calls, conference talks, even voicemail greetings.
A finance team member in Airdrie or Calgary who has been trained to verify wire transfer requests by phone — which is the right instinct — may now believe the phone call is the verification. The attacker has closed the loop.
Automated Reconnaissance and Targeting
LLMs are also being used for the research phase. An attacker can feed an LLM a company's website, LinkedIn profiles of its employees, and recent press releases, then ask it to generate a targeted phishing campaign tailored to that company. The LLM identifies who handles invoices, who reports to whom, which vendors the company uses, and what language would be most convincing for each target.
This used to require hours of manual research per target. Now it takes seconds.
Why Traditional Email Security Isn't Enough Anymore
Most email security platforms — even good ones — rely on some combination of signature matching, sender reputation, URL analysis, and attachment scanning. These layers still catch a lot of attacks. But they were designed for a world where phishing emails were templated and reusable.
AI-generated phishing defeats signature matching entirely (every email is unique), makes sender reputation harder to assess (attackers use compromised legitimate accounts), and bypasses URL analysis when the phishing page is generated client-side after the email is delivered.
This doesn't mean email security is useless — it means it's necessary but insufficient. You need additional layers that don't depend on recognizing the attack itself.
What Actually Stops AI-Powered Phishing
The defenses that work against polymorphic AI phishing are the ones that don't try to identify the phishing content at all. Instead, they make the attack irrelevant even if the employee clicks.
MFA on Everything, No Exceptions
If an attacker steals credentials through an AI-generated phishing page, multi-factor authentication stops them from using those credentials. This is the single most effective control against credential theft. Not just some users, not just admin accounts — every user, every application, every time. Microsoft 365 conditional access policies make this straightforward to enforce.
Conditional Access and Zero Trust
Even with MFA, you should be evaluating the context of every login. Is this user logging in from a new device? A new country? At an unusual time? Conditional access policies can require additional verification or block the login entirely when the risk signals are high. This catches attackers who have somehow obtained both the password and the MFA token — because the login context still looks wrong.
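To make the idea concrete, here is a deliberately simplified sketch of risk-based login evaluation. This is not Microsoft's actual policy engine — the signals, weights, and thresholds are illustrative assumptions — but it shows the shape of the logic: each anomalous context signal raises the risk score, and the response escalates from allow, to step-up verification, to block.

```python
# Illustrative sketch of conditional-access-style risk evaluation.
# Signals, weights, and thresholds are assumptions, not a real policy.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool
    country: str   # ISO country code of the login's source IP
    hour: int      # 0-23, local time for the user

HOME_COUNTRIES = {"CA"}        # assumption: the org operates in Canada
BUSINESS_HOURS = range(6, 20)  # assumption: 06:00-20:00 is "usual"

def evaluate(ctx: LoginContext) -> str:
    """Return 'allow', 'require_mfa', or 'block' from context signals."""
    risk = 0
    if not ctx.known_device:
        risk += 1   # new or unmanaged device
    if ctx.country not in HOME_COUNTRIES:
        risk += 2   # login from an unexpected country weighs more
    if ctx.hour not in BUSINESS_HOURS:
        risk += 1   # unusual time of day
    if risk >= 3:
        return "block"        # too many anomalies: deny outright
    if risk >= 1:
        return "require_mfa"  # step-up verification
    return "allow"

# New device, foreign country, 3 a.m.: blocked even with valid credentials
print(evaluate(LoginContext(known_device=False, country="RU", hour=3)))
```

The point of the sketch is the last line: even an attacker holding a valid password and a phished MFA code still presents the wrong context, and the login fails anyway.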
Email Authentication: SPF, DKIM, DMARC
These three protocols prevent attackers from sending emails that appear to come from your domain. If your SPF, DKIM, and DMARC records are properly configured, an attacker can't spoof your CEO's email address to send a phishing email to your finance team. This doesn't stop all phishing — attackers can still use lookalike domains — but it eliminates the most convincing type of internal impersonation.
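For reference, all three protections are ordinary DNS TXT records. A minimal sketch for a hypothetical domain, example.com, using Microsoft 365 as the mail provider (the DKIM selector name, public key, and reporting address are placeholders that your mail provider and DMARC tooling supply):

```
example.com.                       TXT  "v=spf1 include:spf.protection.outlook.com -all"
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key-from-your-provider>"
_dmarc.example.com.                TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The p=reject tag in the DMARC record is what actually tells receiving servers to discard spoofed mail. Many organizations publish DMARC but stall at p=none, which only generates reports and blocks nothing.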
Anti-Phishing Training That Reflects the New Reality
Employee training still matters, but it has to evolve. The old advice — "look for spelling mistakes and suspicious sender addresses" — is outdated when AI-generated emails are grammatically perfect and sent from compromised legitimate accounts. Training now needs to focus on behavioral verification: never trust a wire transfer instruction received by email alone, always verify through a separate channel, and treat any request for credentials with suspicion regardless of how professional it looks.
Endpoint Detection and Response
If a phishing attack succeeds in delivering malware — through a malicious attachment or a compromised download — endpoint detection and response (EDR) is your last line of defense. EDR monitors every process, file operation, and network connection on the device. If the malware tries to move laterally, exfiltrate data, or encrypt files, EDR detects the behavior and isolates the device before the damage spreads.
DNS Filtering and Web Isolation
DNS-level filtering blocks connections to known malicious domains before the browser even loads the page. Browser isolation takes it further — rendering web content in a sandboxed environment so that even if the page is malicious, it can't interact with the user's actual system. These controls are especially effective against the JavaScript-based phishing page generation described above.
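At its core, DNS filtering is a lookup against a deny list before any resolution happens. A minimal sketch (the domains and blocklist entries are made up for illustration) showing the one detail that matters in practice — a blocked domain should also block all of its subdomains:

```python
# Minimal sketch of DNS-level filtering. Blocklist entries are
# hypothetical; a real filter would use a vendor-maintained feed.
BLOCKLIST = {"evil-login-page.example", "fake-bank.example"}

def is_allowed(domain: str) -> bool:
    """Deny a domain if it, or any parent domain, is blocklisted."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain itself, then each parent suffix in turn.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return False
    return True

print(is_allowed("portal.fake-bank.example"))  # False: parent is blocked
print(is_allowed("microsoft.com"))             # True
```

This is also why the control pairs well with browser isolation: the blocklist stops known-bad infrastructure, and isolation contains whatever the blocklist has never seen.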
What This Means for Alberta Businesses
You don't need to be a Fortune 500 company to be targeted by AI-powered phishing. In fact, small and mid-sized businesses are more attractive targets precisely because they're less likely to have the layered security controls that make these attacks fail. A 30-person Calgary accounting firm or a 50-person Airdrie construction company handles enough money and sensitive data to make the attack worthwhile — and they're less likely to have conditional access, EDR, and DMARC properly configured.
The gap between the sophistication of the attacks and the sophistication of most SMB defenses is widening. AI is making attacks cheaper and more effective. The only answer is to raise your baseline security posture so that even a successful phishing email doesn't lead to a successful breach.
Quick check: Is MFA enforced for every user in your organization? Is DMARC set to "reject" on your domain? Do you have EDR on every endpoint? If the answer to any of these is no, your business has gaps that AI-powered phishing is specifically designed to exploit. Download our IT Security Checklist to run a full self-audit.
AI as a Defensive Tool, Not Just a Threat
The same technology being used offensively is also being deployed on the defensive side. Modern email security platforms are starting to use LLMs to analyze the intent and context of incoming emails — not just matching patterns, but understanding whether the content of a message is consistent with the sender's normal communication style and the business context.
AI-powered security tools can flag an email that asks for a wire transfer if the sender has never previously requested one, even if the email itself is perfectly written and comes from a legitimate (compromised) account. This behavioral analysis is the future of email security, and it's one of the reasons businesses should be paying attention to AI capabilities in their security stack — not just their productivity tools.
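Stripped to its essence, that behavioral check is a comparison against the sender's own history. A toy sketch of the idea (the keyword list and history structure are simplifications; real products model communication patterns far more richly):

```python
# Toy sketch of behavioral email analysis: flag a wire-transfer request
# from a sender whose prior messages never contained one. The keyword
# list and history format are illustrative assumptions.
WIRE_KEYWORDS = ("wire transfer", "wire the funds", "payment instructions")

def is_wire_request(body: str) -> bool:
    body = body.lower()
    return any(keyword in body for keyword in WIRE_KEYWORDS)

def flag_anomaly(sender: str, body: str,
                 history: dict[str, list[str]]) -> bool:
    """True if this message asks for a wire transfer and none of the
    sender's prior messages (history[sender]) ever did."""
    if not is_wire_request(body):
        return False
    prior = history.get(sender, [])
    return not any(is_wire_request(msg) for msg in prior)

history = {"cfo@example.com": ["Q3 numbers attached.", "Board deck review?"]}
print(flag_anomaly("cfo@example.com",
                   "Please wire the funds today.", history))  # True
```

Notice that the check never inspects how well-written the email is. A perfectly crafted message from a genuinely compromised account still trips it, because the request itself is out of character.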
The Bottom Line
AI-powered phishing isn't coming — it's here. The old playbook of signature-based detection and "check the sender address" training isn't enough when every attack is unique and professionally crafted. The businesses that will be protected are the ones with layered security that makes credential theft useless (MFA and conditional access), domain spoofing impossible (DMARC), and endpoint compromise detectable (EDR).
This is the new baseline. Not a premium add-on. Not something to think about next quarter. The baseline.
Find out where your security gaps are. Free assessment.
We review your email security, identity controls, endpoint protection, and backup configuration — and give you a clear picture of what's working and what's exposed. No sales pitch. No obligation.
Book a Free IT Assessment
Learn more about our cybersecurity services · Managed IT in Calgary