Hacker-City
Technology|March 25, 2026|2 min read

Bubble AI app builder abused to steal Microsoft account credentials

Threat actors are using the legitimate no-code platform Bubble to create phishing apps that bypass email security solutions by hosting malicious content on trusted domains, targeting Microsoft 365 credentials.

Tags: phishing, artificial intelligence, Microsoft, Bubble, security, malware, credentials, no-code platform, threat actors, Kaspersky


Threat actors are evading phishing detection in campaigns targeting Microsoft accounts by abusing the no-code app-building platform Bubble to generate and host malicious web apps.

Because the web app is hosted on a legitimate platform, email security solutions do not flag the link as a potential threat, allowing users to access the page.

Security researchers at Kaspersky say that threat actors are using the new method to redirect users to the actual phishing page, which often mimics a Microsoft login portal and is sometimes hidden behind a Cloudflare check.

Any credentials entered on these fake web pages are siphoned to the phishing actor, who may then use them to access email, calendar, and other sensitive data associated with Microsoft 365 accounts.

Bubble is a no-code, AI-powered platform where users describe the app they want to build, and the platform automatically generates the frontend and backend logic.

The resulting apps are hosted on Bubble's infrastructure under *.bubble.io, which is a trusted domain unlikely to trigger security warnings from email security solutions.
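Because the hosting domain itself is legitimate, defenders can still reduce risk by treating links to free app-hosting subdomains as higher-scrutiny at the mail gateway or SOC. Below is a minimal, illustrative sketch of that idea in Python; the watchlist contents and the helper name `risky_links` are assumptions for demonstration, not Kaspersky's or any vendor's method:

```python
import re
from urllib.parse import urlparse

# Watchlist of legitimate app-hosting domains whose subdomains are
# frequently abused for phishing landing pages. *.bubble.io is named
# in the article; extend the set with other no-code hosts as needed.
APP_HOSTING_DOMAINS = {"bubble.io"}

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def risky_links(email_body: str) -> list[str]:
    """Return links whose host is (a subdomain of) a watched app-hosting domain."""
    hits = []
    for url in URL_RE.findall(email_body):
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in APP_HOSTING_DOMAINS):
            hits.append(url)
    return hits
```

A filter like this would not block anything on its own, but it can route messages containing such links into sandbox detonation or banner warnings rather than letting them pass purely on domain reputation.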

Phishing actors take advantage of this by creating Bubble apps that consist of large, complex JavaScript bundles and Shadow DOM-heavy structures, which are not flagged as redirection scripts or classified as malicious by static and automated analysis tools.

"The code generated by this no-code platform is a massive jumble of JavaScript and isolated Shadow DOM (Document Object Model) structures," explains Kaspersky.

"Even for an expert, it's difficult to grasp what's happening at first glance; you really have to dig through it to understand how it all works and what the purpose is."

"Automated web-code analysis algorithms are even more likely to get tripped up, frequently reaching the verdict that this is just a functional, useful site."

The researchers warn that the tactic of abusing AI-powered app builders for evasion in phishing campaigns is very likely to be adopted by phishing-as-a-service (PhaaS) platforms and integrated into phishing kits that are widely used by lower-tier cybercriminals.

These platforms already provide session cookie theft, adversary-in-the-middle (AiTM) layers that bypass two-factor authentication (2FA), geo-fencing, anti-analysis tricks, and AI-generated email content, so the abuse of legitimate platforms will only increase the stealth of these attacks.

BleepingComputer has contacted Bubble for comment on Kaspersky's findings and any plans to strengthen anti-abuse protections, but had not received a response by publication time.
