A news website with apparent links to OpenAI is using AI agents that pose as flesh-and-blood reporters to get quotes from human experts — and many of its articles discuss the AI industry, pushing pro-AI arguments and attacking the tech’s critics.
At least, that’s according to a provocative new investigative piece from The Midas Project’s Model Republic. The links to OpenAI are circumstantial yet eyebrow-raising; we reached out to the Sam Altman-helmed firm to ask about them, but didn’t hear back by press time.
The site, which has the peculiar name of The Wire by Acutus, was launched on December 29, 2025, and doesn’t appear to have any human contributors. An analysis using the AI detector Pangram found that 97 percent of its articles are fully or partially AI-generated, and Model Republic found that the site’s publicly accessible code bears clear fingerprints of AI involvement, including fields for providing “background information for the AI to use when generating questions and writing the story” and “suggested questions for the AI interviewer to ask.”
Details in its RSS feed also describe an automated editorial review process carried out by the site’s AI, with only one of the five steps conducted by a human. The median time it takes for this entire “review” process to complete is 44 seconds, per the reporting. One field called “aiOriginalText” shows the AI model’s original wording next to a suggested edit.
We’ve seen plenty of AI-generated content mills before. But Acutus also appears to be using AI agents to get comments from and interviews with human subject matter experts, which is far more unusual. For instance, Model Republic obtained an email received by Nathan Calvin, vice president and general counsel of the advocacy group Encode. The email claimed to be from an Acutus reporter named Michael Chen, inviting Calvin to answer a “Written Q&A” for a story about an AI bill in Tennessee. Web searches turned up nothing about a reporter with that name, and the email was sent from the generic address “reporter@acutuswire.com,” despite the publication claiming it has numerous contributors. The site’s client-side code also revealed fields referring to an “AI interviewer” and “reporter agent.”
Even more strangely, Model Republic’s reporting also unearthed eyebrow-raising links between Acutus and OpenAI, one of the most prominent AI companies in the world. Though the publication remains obscure, its articles have been repeatedly boosted on social media by Patrick Hynes, the president of Novus Public Affairs, a Republican public relations firm. (Out of just four X posts linking to Acutus on the entire social media platform, two are from Hynes.) Novus does work for Targeted Victory, whose CEO Zac Moffatt also co-founded the $125 million super PAC Leading The Future, which is funded by OpenAI president Greg Brockman.
While not a smoking gun, the implications are striking. Model Republic infers, based on the apparent connections, that “OpenAI’s super PAC may be using Acutus to push its political agenda under the guise of independent journalism.”
Part of that playbook is smearing AI critics. One Acutus piece blasts AI safety advocate and journalist John Sherman for a comment he made about burning data centers on his podcast, going as far as to contact each of the organizations listed as clients for Sherman’s consulting firm about the comments and “whether they intended to continue working with his firm.”
And even if the connections to OpenAI prove to be unsubstantiated, AI agents posing as real reporters for a website that pushes industry-friendly talking points is alarming on its own. The use of AI in the newsroom, even for supposedly limited applications like brainstorming ideas or reviewing prose, remains controversial, so Acutus represents a major escalation.
The purported links also come amid OpenAI openly making inroads into news media. Last month, it bought the tech talk show TPBN, which is widely listened to in Silicon Valley circles, in a move that could allow it to manage its faltering public image. To be fair, though, it’s only following the playbook of other tech monoliths, as when Jeff Bezos acquired The Washington Post, Palantir launched its own faux-academic publication, and Marc Benioff bought Time magazine.
Do you have any information about The Wire by Acutus? Email us: tips@futurism.com. We can keep you anonymous.
A crypto founder had his laptop compromised when he joined what appeared to be a Microsoft Teams call with Pierre Kaklamanos, a Cardano Foundation contact he had spoken with before.
When “Pierre” reached out about Atrium and sent a Teams invite, nothing looked out of place. On the call, the face and voice matched what he remembered, and two other apparent foundation members were present.
When the call lagged and dropped him, a prompt told him his Teams software was out of date and needed reinstalling through Terminal. He ran the command, then shut the laptop off because the battery was dying, which limited the damage in retrospect.
He describes himself as “quite technically savvy,” which is part of the point: the attack worked precisely because the context felt legitimate.
Social engineers have always relied on familiarity, and executing that at scale once required either a compromised account or weeks of text-based rapport-building.
The video call was the authentication layer, the thing victims learned to trust, and replicating it is now within reach.
Fake update
Microsoft documented campaigns in February and March 2026 in which malicious files masqueraded as workplace apps, such as msteams.exe and zoomworkspace.clientsetup.exe, with phishing lures that mimicked legitimate Teams and Zoom meeting workflows.
In a separate warning, Microsoft described “ClickFix”-style prompts targeting macOS users, instructing them to paste commands into Terminal and targeting browser passwords, crypto wallets, cloud credentials, and developer keys.
The fake Teams update fits both patterns simultaneously.
Mandiant said it could not independently verify which AI model, if any, generated the video, but confirmed the group used fake meetings and AI tools during social engineering.
On Apr. 24, the real Pierre Kaklamanos posted on X saying his Telegram had been hacked and that someone was impersonating him, along with “a few other people in the industry this week.”
He told followers to avoid clicking links or booking meetings through the account and to verify contact through LinkedIn direct messages.
By then, the founder had already messaged the account suggesting they switch to Google Meet. Whoever controlled Pierre’s Telegram account replied that he had gotten busy and asked to reschedule, with the attacker still managing the persona once the call ended.
That exchange turns the incident from an isolated embarrassment into a live campaign signal that the method is active, the account compromise is the entry point, and the relationship history is the weapon.
| Stage | What the victim saw | Why it looked legitimate | What the attacker was likely trying to achieve |
| --- | --- | --- | --- |
| Initial outreach | “Pierre” reached out about Atrium and suggested a call | The victim had spoken with Pierre before, including on video | Reopen an existing trust relationship instead of starting from a cold approach |
| Meeting setup | A Microsoft Teams invite for the next day | Teams is a normal business workflow and the topic was plausible | Move the target into a controlled environment that felt routine |
| Live call | Familiar face, familiar voice, plus two other apparent Cardano Foundation members | The social context matched the victim’s memory of prior interactions | Lower suspicion and make the call itself feel like verification |
| Call disruption | Lagging, instability, then getting kicked out | Technical glitches are common in video calls | Create frustration and set up the fake “fix” as a normal troubleshooting step |
| Fake update prompt | A message saying Teams was out of date and needed reinstalling through Terminal | Software update prompts are familiar, and the user rarely used Teams | Get the victim to execute a malicious command directly |
| Command execution | The victim ran the command, then shut down the laptop because the battery was dying | The workflow still felt like a routine app fix at that moment | Launch the infection chain and gain access to credentials or device data |
| Post-call follow-up | The victim suggested switching to Google Meet; the attacker said he got busy and asked to reschedule | The persona continued behaving like a real contact after the failed attempt | Keep the relationship alive for another attempt and avoid immediate suspicion |
Why generative media changes the threat surface
The founder said he now believes the call may have involved AI-generated or manipulated video. Forensic confirmation of the tools is lacking; the connection to OpenAI here runs through the company’s own safety documentation.
OpenAI launched its 4o image generation model on Mar. 25, describing it as capable of “precise, accurate, photorealistic outputs,” and released the ChatGPT Images 2.0 System Card on Apr. 21.
The firm stated that the model’s “heightened realism” could, absent safeguards, enable more convincing deepfakes of real people, places, or events. One of the leading AI labs has now put on record that its own image model raises the ceiling on what a convincing fake can look like.
The World Economic Forum said in January 2026 that generative AI lowers the barrier to phishing while raising its credibility, through realistic deepfake audio and video that can evade both detection systems and human scrutiny.
INTERPOL declared financial fraud one of the world’s most severe and rapidly evolving transnational crimes in March 2026, identifying deepfake videos, audio, and chatbots as tools that make impersonation of trusted people easier to carry out at scale.
Chainalysis data shows crypto scam revenue reached $17 billion in 2025, with impersonation scams up 1,400% and AI-enabled scams generating 4.5 times the revenue of traditional ones.
Crypto attracts this class of attack because it combines high-value targets, fast settlement rails, and an informal communications culture in which Telegram introductions and ad hoc video calls between founders are routine.
Mandiant documented that the group behind the crypto Zoom intrusion targeted software firms, developers, venture firms, and executives across payments, brokerage, staking, and wallet infrastructure.
Mandiant noted that the victim’s data could be used to seed future social engineering, with each compromise generating material for the next.
Two paths forward
Zoom announced on Apr. 17 a partnership to add real-time human verification to meetings, a “Verified Human” badge, and a “Deep Face Waiting Room,” treating participant authenticity as a product problem.
In the bull case, that buildout reaches critical mass quickly enough that attackers must defeat multiple independent trust layers to complete a conversion, and the economics of impersonation campaigns deteriorate.
In the bear case, the timeline compresses before defenses do. Gartner warned that AI agents may halve the time required to exploit account takeovers by 2027, narrowing the window for human hesitation or security team intervention.
Deloitte estimated that generative AI-enabled fraud losses in the US alone could climb from roughly $12 billion in 2023 to $40 billion by 2027.
| Scenario | What changes | What stays vulnerable | Implication for crypto firms |
| --- | --- | --- | --- |
| Bull case | Verification tools spread quickly: human-verification badges, liveness checks, stronger internal trust rails, and more formal approval workflows | Informal founder-to-founder chats, legacy messaging habits, and ad hoc scheduling still create openings | Attackers face more friction and lower conversion rates because they must defeat several trust layers instead of one |
| Bear case | AI-generated impersonation improves faster than defenses are adopted; fake meetings and fake troubleshooting become standard playbooks | Public-facing executives, Telegram-based outreach, video-first verification habits, and staff under time pressure | Relationship hijacking becomes routine, and each compromise creates material for the next scam |
| What success looks like | Sensitive requests get verified across separate channels, with known numbers, shared passphrases, hardware keys, or pre-agreed internal systems | Social pressure, urgency, and trust in familiar faces and voices cannot be fully removed | Firms reduce the chance that one spoofed call can lead directly to compromise |
| What failure looks like | Teams rely on the call itself as proof of identity, even as deepfake and impersonation tools improve | Video remains persuasive even when it is no longer reliable as authentication | Crypto organizations become easier to target because executives are both high-value victims and reusable lure assets |
Every public-facing crypto executive becomes both a target and a lure asset, a source of voice recordings, video clips, and relationship graphs that attackers can deploy against the next victim.
Zoom is building liveness checks into meetings, Microsoft is documenting attack chains that impersonate its own software, and the FBI has warned that malicious actors are already using AI-generated voice and text to impersonate trusted contacts, advising against assuming a message is authentic because it appears to come from a known person.
Verification now requires independent rails, such as a known phone number, a hardware key, a shared passphrase established before any meeting, or a pre-agreed internal channel that no attacker has accessed.
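One of the independent rails described above, a shared passphrase established before any meeting, can be sketched as a simple challenge-response check over a pre-agreed secret. This is an illustrative pattern using Python's standard library, not a prescribed product or protocol; the secret name and message strings are hypothetical:

```python
import hashlib
import hmac
import secrets


def make_challenge() -> str:
    # A fresh random nonce for each verification attempt, so a
    # recorded response from an earlier call cannot be replayed.
    return secrets.token_hex(16)


def respond(shared_secret: bytes, challenge: str) -> str:
    # The contact proves knowledge of the pre-agreed secret without
    # ever sending the secret itself over the (possibly spoofed) channel.
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()


def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    expected = respond(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

The key property is that a deepfaked face and voice cannot answer the challenge: only someone holding the secret agreed out-of-band (in person, or over a known phone number) can produce a valid response.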
Just hours before attending another event at the White House, from which he had to be evacuated after multiple gunshots were heard, US President Donald Trump delivered a 45-minute keynote speech at his own meme coin gathering at Mar-a-Lago.
According to attendees cited by several journalists, he spoke about several major hot topics, including the war in Iran, Joe Biden, and the CLARITY Act.
To Sign ‘Immediately’ But…
Introduced by the House Committees on Financial Services and Agriculture in June last year, the Digital Asset Market Clarity Act of 2025 (or simply, the CLARITY Act) passed the House months later and moved to the Senate Banking Committee, where it has faced multiple delays as the parties involved continue to dispute certain provisions, especially those related to stablecoins.
Some of the key features include splitting jurisdiction between the CFTC and the SEC, with the former regulating digital commodities and the latter overseeing investment contract assets (tokens sold via securities offerings). The bill also aims to shield DeFi by regulating centralized intermediaries rather than software developers or decentralized protocols.
Arguably, the most divisive feature was the regulation of stablecoins and potential yields, with some industry experts calling it a ‘horrible’ bill, while Coinbase was blamed for undermining it.
Nevertheless, US President Donald Trump remains optimistic that it will be passed soon and, while speaking at the Mar-a-Lago event, reportedly said he would “sign it immediately” once it lands on his desk. He has been adamant in the past that this bill has to pass as soon as possible, and even lashed out at some of the parties that were allegedly blocking it.
The Catch
In case the catch isn’t obvious by now: even though the POTUS wants the bill passed and has pledged to sign it immediately, it still has a long way to go. It has been roughly nine months since the House did its job, and the reports in that timeframe have been promising, but to no avail so far.
Deadlines have slipped, interested parties have spoken against each other, and industry experts have weighed in on the potential impact once (or if) it passes. With the midterms approaching and a Democratic victory widely expected, uncertainty is likely to increase if there is no official resolution by then.
Have you been to Kenya? Are you familiar with how Kenyans organize in the streets and online? When Kenyans want something, they demonstrate, threaten to abandon platforms, and push hashtags to trend globally. This time around, that energy was directed at Binance after it froze several crypto accounts. And Binance has responded.
According to reports, the leading crypto exchange is set to address Kenyans next week. The exchange has already confirmed that it will host a live X Spaces session next week in partnership with the AML Association of Kenya.
Binance prepares to face Kenyans next week
According to a statement by comedian Eddie Butita, Binance will go live on X Spaces with the AML Association of Kenya to clarify the facts and address compliance concerns.
Next week, Binance will go live on X Spaces with the AML Association of Kenya to clarify the facts and address concerns around compliance. More details to follow. @binance @BinanceAfrica pic.twitter.com/i8VcGU8JEI
So how did we get here? As earlier reported by Cryptopolitan, Kenyan crypto traders voiced frustration after months of restricted access to funds on Binance. The exchange’s compliance with directives from law enforcement agencies sparked discussions about customer rights, regulation, and overreach.
According to affected traders, their Binance accounts have been frozen for more than two months at the behest of the DCI (Kenya’s Directorate of Criminal Investigations), with no charges laid, no court order issued, and no timetable for resolving the matter.
“No complainant identified. No formal charges. No timeline given,” one trader posted on X. “Funds remain inaccessible. Meanwhile, real life doesn’t pause. Bills are piling up. Debt is growing.”
The public mood has soured significantly, with a boycott gaining steam under the hashtag #BinanceUnmasked.
These actions coincide with developments in the country’s legal context, such as the 2025 Virtual Assets Service Provider Act, as well as changes to the Proceeds of Crime and Anti-Money Laundering Act, which classify cryptocurrency platforms as reporting entities.
Binance argues that it works with local law enforcement agencies and that such measures are consistent with existing regulations.
What this means for Kenya’s crypto ecosystem
Kenya ranks among the most dynamic and active countries in Africa for crypto activity. Millions of people use platforms such as Binance for transactions, remittances, and savings. The current tensions highlight the growing pains of rapid adoption and the need to meet stricter oversight.
The comments under Eddie Butita’s X post are negative at this point. Binance’s partnership with the Kenyan authorities has damaged its reputation among traders.
The National Treasury also mentioned that the submissions of all the interested parties regarding the Draft VASP Regulations 2026 have been received. This will set the ball rolling for the completion of the entire process.
It is pertinent to mention here that the Draft VASP Regulations are meant to bring into effect the provisions of the Virtual Asset Service Providers Act passed in the year 2025.
Some of the major recommendations in this regard include imposing strict capital requirements, which could be as high as Ksh 500 million for stablecoin issuers; stringent AML/CFT and consumer protection guidelines; asset isolation; and restrictions on market manipulation. Supervision of the entities is to be carried out through collaboration between the CBK and the CMA.
Meet the chain where hackers cash out; that’s how analysts are summing up THORChain. Fresh data tying the protocol to exploit proceeds has dragged the debate back into the light. In a post, one analyst pointed out that multiple high-profile exploits have routed funds through THORChain. Even as those funds flowed out, the protocol continued to generate fees.
The list of exploit proceeds routed through THORChain includes the FTX exploiter ($124 million), the Bybit hacker ($1.2 billion+), and the Balancer exploiter ($120 million). It also includes the recent KelpDAO attack ($175 million in just 36 hours).
THORChain bags millions in fees
Data shows that THORChain reportedly generated around $910,000 in fees just from the KelpDAO incident. This exceeded its previous month’s total of $709,000. Meanwhile, the protocol has maintained a stance of neutrality, even as hundreds of millions in illicit funds pass through its rails.
According to data from Arkham Intelligence, the attacker split the stolen funds across three wallets, each holding around 25,000 ETH (approximately $57–59 million). Only one of those wallets has actively begun laundering; its balance dropped from 25,000 ETH to around 3,800 ETH.
A good portion of those funds has already been bridged into Bitcoin using THORChain. On-chain data shows that nearly 99% of the funds in that wallet have moved. This adds to a surge in protocol usage. Swap volume on THORChain reportedly hit $540 million in 24 hours. It helped the protocol generate about $660,000 in fees during that period.
Lookonchain reported that the KelpDAO hacker had swapped all 75,701 ETH (approximately $175 million) through THORChain. Meanwhile, Mantle has proposed providing 30,000 ETH (approximately $70 million) to Aave as a loan, while Lido announced a one-time donation of 2,500 stETH (approximately $5.8 million).
The approach from attackers looks pretty straightforward. THORChain allows cross-chain swaps without intermediaries or know-your-customer checks, letting stolen assets move quickly between ecosystems, often from Ethereum to Bitcoin, where tracing becomes more fragmented due to the UTXO model. Ether’s price has dropped by almost 3% over the last 24 hours, with ETH trading at $2,310 at press time.
The laundering activity picked up pace after intervention from the Arbitrum Security Council. It froze 30,766 ETH (approx $71 million) linked to the exploit. This move managed to restrict access to a portion of the funds, which required governance votes for any recovery.
THORChain defends neutrality
The freeze may also have shifted the attacker’s strategy. The exploiter began moving funds more aggressively soon after it. This highlights an ongoing tension in DeFi between intervention and decentralization: protocol-level actions can limit damage, but they may also push attackers toward faster and more complex laundering routes.
This pattern is not new, as attackers often allow wallets to remain dormant for months before reactivating them. The delay in moving funds allows them to outlast initial tracking efforts from investigators.
THORChain, in a post, stated that it was modeled after Bitcoin. This lets it be permissionless and censorship-resistant. It mentioned that there’s no single person or entity in control of the protocol, and there’s no admin key. It added that there’s no 2-of-3 multisig and there are 95 nodes spread globally that control the network.
THORChain was modelled after Bitcoin, to be permissionless and censorship resistant.
There’s no single person or entity in control of the protocol. There’s no admin key. There’s no 2-of-3 multisig. Currently, there’s 95 nodes spread globally that control the network. For the… pic.twitter.com/Za2Obrh9dO
The protocol stated that Bitcoin is neutral because the code is neutral, and the nodes enforce it. Similarly, THORChain is neutral because the code is neutral, and the nodes enforce it.
The protocol has been in headlines due to its large-scale exploits and fund links. This dates back to the February 2025 hack of Bybit. Attackers linked to the Lazarus Group stole roughly $1.5 billion in assets. This includes over 400,000 ETH. A major portion of those funds was laundered through THORChain.
It is estimated that over 70% of the stolen assets flowed through the protocol, pushing its daily volumes above $700 million at the time. The massive laundering activity generated an estimated $3 million to $5.5 million in transaction fees for the protocol. The FBI identified the attackers as the North Korean Lazarus Group.
FROZEN — Tether’s $344M USDT Lockdown | Crypto Coin Show
Sanctions Enforcement · Stablecoins · Iran
Frozen $344 Million in USDT Locked on Tron
In one of the largest single compliance actions in crypto history, Tether moved to freeze $344 million worth of USDT across two Tron blockchain wallets at the request of U.S. authorities — wallets now linked by U.S. officials to the Iranian regime.
Crypto Coin Show Editorial Desk|April 24, 2026|Exclusive Analysis
$344M: Total USDT frozen
2 wallets: Blacklisted on Tron
$4.4B+: Total Tether freezes to date
340+: Global agency partners
A Landmark Freeze — and an Iran Connection
On Thursday, April 23, 2026, Tether — the issuer of the world’s largest stablecoin by volume — announced it had frozen $344 million in USDT across two blockchain addresses on the Tron network. The action was carried out in coordination with the U.S. Office of Foreign Assets Control (OFAC) and multiple federal law enforcement agencies, following intelligence that the wallets were tied to illicit financial activity.
Within 24 hours, the story grew considerably larger. U.S. officials told CNN on Friday that the frozen funds carried material links to the Iranian regime, including transaction trails running through Iranian exchanges and intermediary wallets connected to accounts associated with Iran’s Central Bank. Treasury Secretary Scott Bessent confirmed the sanctions action, framing it as part of a broader Trump administration campaign to cut off Tehran’s financial lifelines as nuclear diplomacy stalls.
USDT is not a safe haven for illicit activity. When credible links to sanctioned entities or criminal networks are identified, we act immediately and decisively.
— Paolo Ardoino, CEO, Tether · April 23, 2026
📊 Key Figures
Total USDT frozen: $344M
Wallet 1 (TNiq9…): ~$213M
Wallet 2 (TTiDL…): ~$131M
Network: Tron (TRC-20)
Coordination: OFAC + FBI
Alleged nexus: Iran / CBoI
Action date: Apr 23, 2026
🌐 Tether Compliance Scale
Total assets frozen: $4.4B
U.S.-linked cases: $2.1B
This action: $344M
Global agency partners: 340+
Countries: 65
Cases supported: 2,300+
The Two Wallets
Blockchain security firm PeckShield flagged the two addresses after they appeared on Tether’s blacklist on April 23, before any official explanation was given. Together, the wallets held slightly more than $344 million in USDT at the time of the freeze.
🔒 Locked: TNiq9AXBp9EjUqhDhrwrfvAA8U3GUQZH81 (~$213M USDT, Tron network)
🔒 Locked: TTiDLWE6fZK8okMJv6ijg42yrH6W2pjSr9 (~$131M USDT, Tron network)
According to Chainalysis, the two Tron addresses were regularly active years ago — moving tens of millions of dollars in single transfers, often to private wallets. U.S. officials noted the behavior mirrored patterns seen in other known IRGC-linked addresses. The wallets were blacklisted at the smart contract level, meaning no further movement of the funds is possible until cleared by authorities.
⚠ Iran’s Crypto Strategy
According to the U.S. Treasury Department, Iran’s central bank has increasingly leaned into digital assets — particularly stablecoins on the Tron network — to mask cross-border transactions and support trade flows under sanctions pressure. Blockchain analytics firms TRM Labs and Chainalysis estimate that Iran-related crypto flows reached billions of dollars in 2025 alone.
🔍 Context: Tron & Iran
The Tron blockchain has become a preferred rail for sanctions-evasion activity due to its low fees and high USDT liquidity. U.S. authorities have increasingly focused enforcement actions on Tron-based USDT wallets linked to Iranian exchanges, IRGC-associated entities, and intermediary networks routing funds through complicit third-country actors.
How Tether Can Freeze Funds
Unlike decentralized tokens, USDT is a centralized stablecoin — meaning Tether retains the technical ability to freeze or blacklist any wallet at the smart contract level. The company describes this as a feature, not a flaw: public blockchains create a visible transaction trail that investigators can follow in near-real time, something traditional cash networks cannot provide.
When OFAC or a law enforcement partner flags an address, Tether’s compliance team can restrict the wallet within hours — preventing any further transfer of funds. The frozen USDT remains in the address but is effectively inert, unable to be spent, sent, or swapped, until legal proceedings determine its fate.
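The mechanism described above can be sketched as a token ledger with an issuer-controlled blacklist. This is a simplified illustration of the general pattern behind centralized stablecoins, not Tether's actual contract code; the class and wallet names are hypothetical:

```python
class CentralizedStablecoin:
    """Toy ledger illustrating issuer-level blacklist controls."""

    def __init__(self, issuer: str):
        self.issuer = issuer
        self.balances: dict[str, int] = {}
        self.blacklist: set[str] = set()

    def mint(self, caller: str, to: str, amount: int) -> None:
        if caller != self.issuer:
            raise PermissionError("only the issuer can mint")
        self.balances[to] = self.balances.get(to, 0) + amount

    def add_to_blacklist(self, caller: str, addr: str) -> None:
        # Compliance action: once flagged, the address can no longer
        # send or receive tokens, but its balance stays on the ledger.
        if caller != self.issuer:
            raise PermissionError("only the issuer can blacklist")
        self.blacklist.add(addr)

    def transfer(self, sender: str, to: str, amount: int) -> None:
        # Frozen funds remain visible on-chain but are effectively inert.
        if sender in self.blacklist or to in self.blacklist:
            raise PermissionError("address is blacklisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
```

The design choice to check the blacklist inside every transfer is what makes a freeze total: the tokens are not seized or moved, they simply can no longer be spent, sent, or swapped until the address is cleared.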
A Growing Compliance Empire
This action does not exist in isolation. Tether has been systematically expanding its compliance infrastructure over the past several years, and Thursday’s move is a statement of that ambition. The company now reports collaborating with more than 340 law enforcement agencies across 65 countries, having assisted in more than 2,300 investigations globally — over 1,200 of which involve U.S. authorities.
Cumulatively, Tether has now frozen more than $4.4 billion in USDT to date, including $2.1 billion specifically tied to U.S. law enforcement cases. The $344 million freeze on April 23 ranks as one of the single largest compliance actions the company has ever executed.
A Pattern of Major Freezes
November 2023
~$225M frozen — Wallets linked to a Southeast Asia human-trafficking and “pig butchering” scam ring. One of the first major cooperative actions with U.S. DOJ.
January 2026
~$182M frozen — Five Tron wallets restricted in another coordinated action with OFAC. Linked to sanctions evasion networks.
April 2026 (Current)
$344M frozen — Two Tron wallets blacklisted at the request of U.S. authorities. Linked within 24 hours to the Iranian regime and Central Bank of Iran intermediaries. Largest single action to date.
The Stablecoin Compliance Debate
The freeze arrives amid a broader, heated debate about what stablecoin issuers owe the public — and regulators — when it comes to stopping illicit financial flows. The controversy was reignited earlier this month when the Drift Protocol was exploited for $285 million. Critics argued that Circle, the issuer of the competing USDC stablecoin, moved too slowly to freeze funds connected to the exploit.
Circle pushed back, with Chief Strategy Officer Dante Disparte stating that the company only freezes funds when the law explicitly requires it or when court orders mandate action — not through unilateral judgment. Tether has taken the opposite stance, positioning itself as a proactive partner to law enforcement even before formal legal orders arrive.
The way to get at Iran at this point — because Iran is truly sanctioned out — is to go with the third-country actors enabling them.
— Daniel Tannebaum, Atlantic Council · Senior Fellow
⚖️ Circle vs. Tether
Tether freeze philosophy: Proactive
Circle stance: Court order only
Drift Protocol fallout: Circle sued
Drift adopted: USDT (Tether)
The fallout from Drift was swift: the protocol announced it would dump USDC in favor of USDT, citing Tether’s more assertive compliance posture. A class-action lawsuit against Circle followed. The episode cemented Tether’s narrative as the enforcement-friendly stablecoin — and its April 23 action is a deliberate reinforcement of that brand.
Geopolitical Dimensions
The Iran link elevates this story beyond a routine compliance action. Treasury Secretary Scott Bessent confirmed the sanctions in a statement framing it as part of the Trump administration’s escalating economic campaign against Tehran — describing Washington’s intent to “follow the money” as diplomatic efforts around the conflict stall.
Iran has spent years developing techniques to route funds through third-country actors, shell companies, and now increasingly through decentralized blockchain infrastructure. Earlier in 2026, both Tether and Circle were involved in blacklisting a hot wallet belonging to Iranian exchange Wallex, while U.S. authorities sanctioned additional platforms accused of routing IRGC funds through USDT on the Tron network.
Some analysts caution against overstating the impact. Experts note that Iran has decades of experience adapting to economic pressure, and that the more consequential choke point may be the third-country jurisdictions — particularly China — that continue to enable Iranian trade flows. Still, the ability to surgically freeze $344 million in a matter of hours marks a significant expansion of the U.S. sanctions toolkit into the digital asset space.
What Comes Next
Tether has confirmed it is expanding further into the U.S. domestic market. The company recently launched USAT — a new stablecoin token built for compliance with emerging federal stablecoin regulation — in partnership with federally regulated crypto bank Anchorage Digital. The initiative is led by former White House crypto advisor Bo Hines.
Regulators and lawmakers are watching closely. With stablecoin legislation advancing on Capitol Hill, the question of whether issuers like Tether should be required — rather than just permitted — to freeze funds linked to sanctions is becoming a central policy debate. For now, Tether is volunteering. And with $344 million locked on Tron, Washington appears to appreciate the help.
ApeCoin (APE) surged more than 80% on Friday to roughly $0.18, breaking out of a tight range around $0.10, as traders drew optimism from Yuga Labs' confirmation of Michael Figge as chief executive earlier this week.
On-chain analytics firm Lookonchain flagged a newly created wallet that rotated out of ether and into a 5x leveraged long on 9.19 million APE, sitting on a $713,000 unrealized gain at the time of reporting.
Leadership Shift Reignites ApeCoin Price Rally
Yuga Labs, the company behind the Bored Ape Yacht Club (BAYC) and the Otherside metaverse, promoted longtime chief product officer Figge to the top role around April 16. Co-founder Greg Solano moved to chairman of the board.
Some news to share:
After serving as CEO the past couple years, I’m moving into the role of Chairman of the Board, and @mfigge will become Yuga’s next CEO.
Figge is the absolute best person for the job. There’s no one I trust more to lead Yuga through this next chapter.
Figge has been with Yuga since 2021 and joined from a background in film, animation, and digital art. His appointment coincides with fresh ecosystem plans, including a proposed Yuga Grails over-the-counter desk for high-end NFT liquidity.
The @mfigge new @BoredApeYC playbook is one of the best I have seen in the history of the entire space. Now @apecoin up 80 % today ! Proud to see one of my best, most trusted friends literally crushing it 💪.
Data from Lookonchain shows the anonymous trader sold 75 ether worth $174,000 on Hyperliquid before opening the 5x APE position valued at $1.03 million.
The wallet has no prior transaction history, which fueled speculation about informed trading ahead of the move.
“We spotted this insider before $APE surged 80%! He is now up $713K,” Lookonchain reported.
APE had traded near $0.10 for months before Friday's breakout, leaving it still roughly 99% below its 2022 peak. In its latest surge, the altcoin topped out at $0.1965, up 70% in the last hour.
Top prediction market platforms, including Kalshi and Polymarket, are rushing to offer highly leveraged crypto derivatives at the exact moment state and federal authorities are clashing in court over whether the industry’s core products constitute illegal betting or legitimate financial instruments.
Over the past year, these companies have gained national prominence by facilitating wagers on discrete, real-world occurrences, ranging from political races to macroeconomic data releases.
Now, by preparing to list perpetual futures, which are complex contracts that never expire and allow traders to multiply their market exposure using borrowed funds, these platforms are blurring the line between niche forecasting hubs and full-service digital asset exchanges.
Against this backdrop, the shift drastically expands their potential customer base, but it also amplifies their legal risks.
Historically, platforms like Kalshi operated on a cyclical, event-driven basis, with traffic and trading volume spiking around major catalysts such as a presidential debate or a championship sporting event and then plummeting once the outcome was settled.
In this kind of market, a user purchased a binary “Yes” or “No” share, and the contract expired upon the event’s resolution.
Perpetual futures fundamentally alter that business model. Because these derivatives lack an expiration date, participants can maintain their market positions indefinitely, provided they meet ongoing margin requirements.
The instruments frequently allow users to leverage their bets up to 50 times their initial capital, attracting aggressive speculators seeking rapid returns from minute price fluctuations.
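The mechanics behind that risk are simple arithmetic. Here is a minimal sketch, using hypothetical numbers and a simplified model that ignores fees, funding payments, and exchange-specific margin tiers, of how leverage multiplies exposure and compresses the liquidation buffer:

```javascript
// Simplified perpetual-futures position math (illustrative only:
// real exchanges add fees, funding payments, and tiered margin).
function openLong(margin, leverage, entryPrice, maintenanceRate = 0.005) {
  const notional = margin * leverage; // total market exposure
  const size = notional / entryPrice; // units of the asset held
  // Liquidation occurs when equity falls to the maintenance margin:
  //   margin + size * (price - entryPrice) = maintenanceRate * notional
  const liquidationPrice =
    entryPrice + (maintenanceRate * notional - margin) / size;
  return { notional, size, liquidationPrice };
}

// $10,000 of margin at 50x controls $500,000 of exposure;
// liquidation sits at 98.5, a ~1.5% adverse move from entry.
const pos = openLong(10_000, 50, 100);
```

At 50x, a move of well under two percent wipes out the entire margin, which is exactly why these products attract aggressive speculators chasing minute price fluctuations.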
By rolling out these derivatives, Polymarket and Kalshi are abandoning their siloed event-contract operations to compete directly with centralized exchanges and retail brokerages. The underlying strategy for both platforms is to convert occasional political bettors into daily, high-frequency traders.
While Kalshi has explicitly stated its intention to enter the perpetuals arena, Polymarket’s exact roadmap remains guarded, including which specific assets it will cover and whether it will restrict access for US customers.
Why prediction markets are moving into perpetual futures
Why perps, why now?
The motivation to embrace this new feature comes down to basic market structure.
Traditional crypto spot trading, which is the simple buying and holding of digital assets, has decelerated from the frenzied peaks of previous market cycles, logging $18.6 trillion in volume last year.
Meanwhile, perpetual futures generated more than three times that amount. Data from CryptoQuant show that the global trading volume for crypto perpetual futures hit $61.7 trillion last year.
That volume disparity dictates corporate strategy. Platforms recognize that to maintain engagement during periods of low volatility, they must offer instruments that allow users to short the market, hedge portfolios, and employ leverage.
While prediction markets currently command significant capital, with all-time notional volume surpassing $150 billion, the episodic nature of event contracts cannot match the continuous, around-the-clock fee generation of a highly active derivatives order book.
Moreover, the broader financial technology sector is experiencing a rapid collapse of operational boundaries, with centralized platforms like Robinhood, Coinbase, and Gemini all embracing event-based offerings.
Mo Shaikh, co-founder of the Aptos blockchain network, noted that financial applications have historically trended toward consolidation, citing the expansions of legacy platforms like PayPal. However, he warned that forcing disparate user bases into a single application rarely succeeds seamlessly.
“The trader, the bettor, the long-term investor, the payments user, they show up for different reasons,” Shaikh said, adding that true value lies in controlling the underlying infrastructure. “Clearing, liquidity, identity, settlement, data, those layers can unify even if the frontends remain fragmented.”
Meanwhile, the shift among prediction market players is partially defensive.
Offshore decentralized exchange Hyperliquid, a dominant force in perpetual futures, recently encroached on the prediction sector by revealing plans to list its own event contracts.
As a result, the market is split on who holds the strategic advantage in the ensuing turf war.
Jiani Chen, a growth officer with the Solana Foundation, noted the technical disparities, arguing that decentralized derivatives exchanges have a much easier time adding prediction markets to their backend than prediction platforms do spinning up complex futures trading engines.
However, Kyle Samani, chairman of Forward Industries, dismissed the technical hurdles, arguing that customer acquisition is the true bottleneck for digital asset platforms. He said:
“It’s way harder to bootstrap liquidity and acquire normie users for prediction markets. Kalshi perps are going to crush.”
The legal fight is still about who gets to call it gambling
Legal battle over prediction markets
The aggressive product expansion coincides with an existential legal threat as state regulators are launching coordinated efforts to classify the prediction platforms as unlicensed casinos, rejecting the premise that event contracts are sophisticated financial tools.
On April 21, New York Attorney General Letitia James filed sweeping lawsuits against digital asset firms Coinbase and Gemini, demanding $3.4 billion in combined penalties and restitution.
James alleged the companies bypass state taxes and consumer protection laws by offering prediction markets to retail users, including minors.
State officials pointed to research by the National Institutes of Health linking early exposure to mobile betting with heightened risks of anxiety and financial distress, while noting American Psychological Association data showing severe mental health risks associated with gambling disorders.
James said:
“Gambling by another name is still gambling, and it is not exempt from regulation under our state laws and Constitution.”
The industry firmly rejects the gambling label, countering that the contracts are vital instruments for hedging geopolitical and economic risks.
The judiciary is already untangling the overlapping claims. A federal appeals court in Philadelphia ruled against New Jersey gaming regulators earlier this year, determining the CFTC held sole regulatory authority over Kalshi’s election and sports-related contracts.
This sequence of litigation reflects a deeply fractured regulatory perimeter that companies must navigate as they deploy new derivative products.
A bigger market, and a bigger regulatory target
The move into perpetual futures would further position prediction markets as part of mainstream financial infrastructure rather than a niche corner of online speculation.
That shift is already drawing attention from traditional finance. The Intercontinental Exchange, parent of the New York Stock Exchange, recently invested $2 billion in Polymarket, a sign that major market operators see commercial value in platforms built around event-driven pricing.
Supporters of the model argue that prediction markets are proving useful as both forecasting tools and trading venues.
In high-liquidity markets, Brier scores, a standard measure of probabilistic accuracy, have fallen as low as 0.0247 shortly before resolution, suggesting pricing errors narrow sharply as capital and participation deepen. Industry estimates also show that about 10% of proprietary trading firms are already active in event contracts, using them in part to hedge macro and policy risk.
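For reference, the Brier score cited above is just the mean squared error between forecast probabilities and realized outcomes. A minimal sketch:

```javascript
// Brier score: mean squared difference between forecast probabilities
// and outcomes (1 = event happened, 0 = it didn't). Lower is better;
// 0 is a perfect forecast, and always guessing 50% scores 0.25.
function brierScore(forecasts, outcomes) {
  let sum = 0;
  for (let i = 0; i < forecasts.length; i++) {
    sum += (forecasts[i] - outcomes[i]) ** 2;
  }
  return sum / forecasts.length;
}

brierScore([0.5, 0.5], [1, 0]); // 0.25: uninformative coin-flip forecast
brierScore([0.99, 0.02], [1, 0]); // ~0.00025: sharp, near-certain pricing
```

A score of 0.0247 near resolution, as cited above, means market prices were tracking outcomes far more tightly than an uninformed forecast would.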
That combination of data value and trading activity helps explain why platforms are racing to broaden their product mix.
Rob Hadick, managing partner at Dragonfly, framed the commercial logic bluntly:
“Owning your customer will be the only way to have longevity in this new world of broad financialization.”
However, not everyone sees perpetual futures as the natural next step.
Alex Momot, chief executive and co-founder of Peanut Trade, told CryptoSlate that the current push looks more like a response to tightening legal pressure than a durable product strategy.
He noted that regulators and some jurisdictions are moving against prediction markets, and as a result, these operators appear to be shifting closer to the crypto-exchange model, where the rules are clearer, and the risk of being classified as gambling is lower.
Momot argued that strategy may offer only limited relief. In his view, the deeper problem is liquidity. Without more depth, many of the sector’s most promising use cases, including hedging and insurance against real-world event risk, remain too small to scale.
He said the stronger long-term path may lie in index-style products, market aggregation, and pooled liquidity across events, structures that could make prediction markets look more like traditional derivatives or synthetic exposures.
That viewpoint points to a broader tension now shaping the industry. One camp sees perpetual futures as the fastest way to capture more trading volume and keep users active between headline-driven events. Another sees them as a tactical detour from the harder task of building deeper, more resilient liquidity.
Either way, the legal risk is rising. Dyma Budorin, founder and chief executive of CORE3, said the merging of prediction and derivatives markets is likely to draw closer scrutiny from regulators already struggling to define the sector.
He said:
“What we’re really seeing is a convergence toward perp-like behavior without the corresponding risk controls. If this trend continues, regulators won’t treat prediction markets as harmless forecasting tools, they’ll treat them as derivatives platforms operating outside the rules. And historically, that doesn’t end quietly.”
The New York litigation has already ensured that the fight over jurisdiction will remain central to the industry’s future. That battle could eventually reach the U.S. Supreme Court or force Congress to step in with a clearer statutory framework.
Until then, prediction-market operators appear willing to keep expanding through the uncertainty, betting that the commercial upside of perpetual futures is worth the legal exposure.
Immigration and Customs Enforcement bureaucrats are reportedly planning to use specialty facial recognition glasses to collect data on Americans in real time, independent journalist Ken Klippenstein revealed.
Financial statements viewed by Klippenstein point to the development of a facial recognition platform modeled after commercially available AI smart glasses, like Meta's widely panned "pervert glasses." ICE's in-house model, it seems, will allow agents to monitor video and reference vast federal databases of biometric information on subjects regardless of whether they've been arrested or even charged with a crime.
“The project will deliver innovative hardware, such as operational prototypes of smart glasses, to equip agents with real-time access to information and biometric identification capabilities in the field,” read an ICE budget document leaked to Klippenstein.
Perhaps most alarmingly, Department of Homeland Security insiders told the investigative journalist that the technology involved isn’t limited to immigration enforcement.
“It might be portrayed as seeking to identify illegal aliens on the streets,” one anonymous DHS attorney told Klippenstein, “but the reality is that a push in this direction affects all Americans, particularly protestors.”
That reveal comes just a few months after an incident in Maine in which an ICE agent admitted to scanning protestors' faces with his phone. "We have a nice little database, and now you're considered domestic terrorists," the agent told a couple who were out documenting the immigration agents in their community.
In October, 404 Media reported that ICE agents were scanning people's faces in order to check whether they were citizens. These targets for surveillance are often chosen at random — we now know that many of ICE's arrests over the past year have been circumstantial, a far cry from the targeted enforcement of known criminals the Trump administration promised.
Taken together, these reports suggest that what began as surveillance infrastructure marketed for catching illegal immigrants is now coming for residents as well. ICE's smart glasses represent the next iteration of a creeping panopticon that, once in place, will be nearly impossible to uproot — as the history of US immigration enforcement and domestic surveillance has shown us time and again.
The Lovable Hack: Vibe Coding’s Security Reckoning
Security Breach
Lovable Left the Door Open. For 48 Days.
A $6.6 billion AI vibe-coding platform left tens of thousands of developer projects wide open — source code, database credentials, customer data, AI chat histories — readable by anyone with a free account. It sat unfixed for 48 days.
Disclosed · April 20, 2026
Attack Type · Anyone with a free account could read other users’ projects
Root Cause · Platform never checked if you owned what you were accessing
Status · Partially patched (new projects only)
18K+ · Users Exposed (Single App)
170 · Apps with Critical Flaws
303 · Vulnerable Endpoints
48 · Days Left Unfixed
$6.6B · Lovable Valuation
Lovable built its $6.6 billion valuation on a simple promise: anyone can build a real app just by describing it. Founders. Teachers. Solo creators. No engineers required. It worked. Then a security researcher made a free account, typed five API calls, and walked into someone else’s source code, their database, their customer records, and every conversation they’d ever had with the AI. The door had been open for 48 days.
What unfolded around Lovable between March and April 2026 isn’t just the story of one platform’s security failure. It’s a stress test of the entire vibe-coding movement — and the results should make anyone who has ever shipped an AI-generated app stop and check their permissions.
How a Free Account Unlocked Everyone’s Data
The Lovable vulnerability has two distinct layers. The first was discovered in early 2025 and assigned CVE-2025-48757. The second — arguably worse — was disclosed publicly on April 20, 2026 by researcher @weezerOSINT. Both stem from the same root cause: Lovable’s AI generates code that looks functional, ships smoothly, and is completely exposed underneath.
Strike One — The Database Was Left Wide Open on Hundreds of Apps
Most Lovable apps are powered by Supabase, a backend-as-a-service platform built on PostgreSQL. Supabase has a security feature called Row Level Security (RLS) — policies that control which users can read or write which rows of data. If RLS is not configured, the database is essentially public to anyone who knows the API key.
The problem: Lovable’s AI was consistently generating apps without properly configuring RLS. In many cases, the Supabase anon_key — a public-facing API key — was embedded directly in the client-side code. With that key and no RLS, anyone could query the database directly and retrieve everything.
The Core Flaw — Lovable-generated auth logic
// What Lovable's AI generated (WRONG)
function checkAccess(user) {
  // Logic is INVERTED — blocks logged-in users, allows anonymous
  if (user.isAuthenticated) {
    return false; // ← blocks real users
  }
  return true; // ← lets attackers in
}

// What it should have been
function checkAccess(user) {
  return user.isAuthenticated; // allow only logged-in users
}
Security engineer Matt Palmer discovered this in March 2025 while testing a LinkedIn profile generator built on Lovable. He removed an authorization header, resent the request, and received the platform’s entire user database. He reported it privately. Lovable’s response was to delete their own acknowledgment tweets and deny the issue.
A subsequent scan of 1,645 apps from Lovable’s public Discover page found 303 vulnerable endpoints across 170 projects — roughly 10% of all apps analyzed. Exposed data included Google Maps API tokens, Gemini API keys, eBay authentication tokens, full user databases, payment records, Stripe credentials, and subscription details.
⚠️ What attackers could do
With no credentials beyond a browser: view all user records, delete accounts, change credit balances, send bulk emails to 18,000+ users, access course grades and submissions, retrieve payment data — all without logging in.
Strike Two — The Platform Itself Was Broken for 48 Days
While the RLS issue affected individual apps built on Lovable, the second vulnerability struck the Lovable platform itself. On April 20, 2026, researcher @weezerOSINT published findings that exposed a critical Broken Object Level Authorization (BOLA) flaw in Lovable’s own API.
BOLA is ranked #1 on the OWASP API Security Top 10. It occurs when an API confirms you’re logged in, but never checks whether you actually own the resource you’re requesting. In Lovable’s case, the /projects/{id}/* endpoints verified Firebase authentication tokens — but skipped ownership checks entirely.
Translation: create a free Lovable account, make five API calls, read anyone else’s project.
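In code terms, the gap between authentication and authorization looks roughly like this. This is a hypothetical handler sketch to illustrate the BOLA pattern, not Lovable's actual implementation:

```javascript
// Hypothetical project store keyed by project id.
const projects = new Map([
  ["p1", { owner: "alice", source: "/* app code */" }],
]);

// BOLA-vulnerable: verifies the caller is logged in, but never checks
// whether they own the object they asked for.
function getProjectVulnerable(authedUserId, projectId) {
  if (!authedUserId) throw new Error("401 Unauthorized");
  return projects.get(projectId); // any account can fetch any project
}

// Fixed: authentication AND an object-level ownership check.
function getProjectFixed(authedUserId, projectId) {
  if (!authedUserId) throw new Error("401 Unauthorized");
  const project = projects.get(projectId);
  if (!project || project.owner !== authedUserId) {
    throw new Error("404 Not Found"); // avoid confirming existence
  }
  return project;
}
```

The fix is one conditional. What made Lovable's version of the bug so severe is that it sat on the platform's own `/projects/{id}/*` endpoints, so a single missing check exposed every project at once.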
📁 Source Code · Full project source code for every project created before November 2025, including hardcoded API keys, database credentials, and business logic.
🗄️ Database Credentials · Supabase credentials embedded in source code, providing direct database access — customer records, payment data, PII, API tokens for third-party services.
💬 AI Chat Histories · Every conversation a developer ever had with Lovable's AI — including pasted error logs, business logic discussions, and credentials shared mid-session.
👤 Customer Data · Real names, job titles, LinkedIn profiles, and Stripe customer IDs from end users of apps built on the platform — not just developers, but their users too.
The researcher demonstrated the severity by accessing the admin panel of Connected Women in AI, a Danish nonprofit with over 3,700 developer edits in 2026 — an actively maintained project. From its source code, they extracted Supabase credentials and queried the live database, pulling real employee records from Accenture Denmark and Copenhagen Business School.
“This is not hacking. This is five API calls from a free account.”
— @weezerOSINT, April 20, 2026
From Discovery to Public Disclosure — A Year of Silence
March 20, 2025
Matt Palmer identifies the RLS vulnerability in Linkable, a Lovable-built LinkedIn profile generator. Unauthenticated database access confirmed.
March 21, 2025
Palmer reports the vulnerability privately. Lovable acknowledges receipt — then deletes their acknowledgment tweets and denies the issue publicly.
April 14, 2025
A Palantir engineer independently discovers and publicly tweets the same vulnerability, demonstrating live extraction of debt amounts, home addresses, and API keys.
April 24, 2025
Lovable ships “Lovable 2.0” with a new security scanner. The scanner flags the presence of RLS — but not whether it actually works. Researchers call it a false sense of security.
May 29, 2025
After a 45-day disclosure window with no meaningful fix, CVE-2025-48757 is formally published. 170 apps confirmed vulnerable across 303 endpoints.
February 27, 2026
The Register reports on a Lovable-hosted EdTech app with 16 vulnerabilities, 6 critical, exposing 18,000+ records from teachers and students at major universities.
March 3, 2026
@weezerOSINT files a BOLA vulnerability report on HackerOne. Lovable triages it, ships ownership checks for new projects, and quietly leaves all pre-existing projects exposed.
March–April 2026
Researcher files a second report documenting additional affected endpoints. Lovable marks it as a duplicate and closes it. Pre-November 2025 projects remain wide open.
April 20, 2026 — Statement 1
@weezerOSINT goes public. Lovable responds: “We did not suffer a data breach.” Attributes the issue to unclear documentation and calls code visibility “intentional behavior, consistent and by design.”
April 20, 2026 — Statement 2
Under public pressure, Lovable issues a second statement apologizing for the first. Admits the February regression was their own coding mistake, that HackerOne partners incorrectly closed the reports, and that the fix was only deployed after public disclosure — 48 days after it was first reported.
AI Writes the Code. Nobody Checks the Locks.
The Lovable story isn’t unique. It’s the canary in the coal mine for an entire generation of AI-built software. Veracode found that 45% of AI-generated code contains security vulnerabilities. The DORA report found a 7.2% decrease in delivery stability for every 25% increase in AI code usage.
The underlying problem is architectural. Most AI app builders — Lovable included — generate a frontend that talks directly to a hosted backend like Supabase or Firebase using an API key baked into the client. That architecture can be made secure, but it requires explicit configuration of access policies. The AI doesn’t do that by default. And the developers who chose AI tools specifically because they’re not security experts are exactly the people least likely to catch it.
💡 The Vibe Hacking Problem
Researcher Taimur Khan coined the term "vibe hacking" to describe how less technically minded attackers can exploit AI-generated code. Because AI defaults to functionality over security, and because vibe-coded apps all share similar patterns, a single technique works across hundreds of apps simultaneously. You don't need to be a hacker. You need to know how Lovable works.
Former Facebook CSO Alex Stamos put the client-direct-to-database pattern bluntly: “You can do it correctly. The odds of doing it correctly are extremely low.”
Lovable’s initial response to the April 20 disclosure was to place responsibility on users — their CISO had previously told The Register: “It is at the discretion of the user to implement our security recommendations.” The security community was blunt in return: “Telling developers to review security before publishing doesn’t work when those developers chose AI tools because they’re not security experts. That’s the whole point.”
After significant public pressure following the April 20 disclosure, Lovable issued two statements on X. The first, posted hours after the story broke, attempted to contain the damage:
First Statement
Lovable (@Lovable)
April 20, 2026 · X / Twitter
“We were made aware of concerns regarding the visibility of chat messages and code on Lovable projects with public visibility settings. To be clear: We did not suffer a data breach. Our documentation of what ‘public’ implies was unclear, and that’s a failure on us.
Specifically for public projects, chat messages used to be visible — this is now no longer possible. When it comes to code of public projects: That is intentional behavior. We have experimented with different UX for how the build history is surfaced on public projects, but the core behavior has been consistent and by design. Importantly, for enterprise customers, being able to set visibility to public for new projects has been disabled since May 25, 2025.”
Three things stand out about this statement. First, the explicit denial — “We did not suffer a data breach” — was a direct pushback against the researcher’s framing and every headline that followed. Second, it attributed the problem entirely to documentation confusion, placing implicit responsibility on users who misunderstood what “public” meant. Third, it described code visibility as “intentional behavior” and “consistent and by design” — language that would be significantly walked back within hours.
The community response was swift and negative. Security researchers pointed out that calling a 48-day unfixed disclosure a documentation problem — when the underlying cause was a Lovable-introduced regression — was at best incomplete and at worst misleading. Under sustained pressure, Lovable issued a second statement the same day:
Updated Statement
Lovable (@Lovable)
April 20, 2026 · X / Twitter
“We’re sorry our initial statement didn’t properly address our mistake. Here’s what a public project on Lovable means, and how we got to where we are today: In the early days, people didn’t know what Lovable was capable of. So we wanted to make it easy to explore what others were building, as a way to spark ideas and lower the barrier to getting started. Like scrolling GitHub or Dribbble: you browse projects to see what’s possible, then go build your own.
When you create a project on GitHub, you can make it private or public. Lovable worked the same. Users had a ‘Public’ or ‘Private’ option right in the chatbox. A public project meant the entire project was public, both chat and code. ‘Just like a public project on GitHub,’ we thought. Over time, we realized this was confusing. Many users thought ‘public’ just meant others could see their published app, not the chat of an unpublished project. That’s reasonable.
On the free tier, users originally couldn’t create private projects. They had to upgrade to a paid plan to do so. In May 2025, we changed this: users on the free tier could choose to make their projects private. For enterprise customers, the public visibility setting was disabled altogether. And in December 2025, we switched to private by default across all tiers. We also retroactively patched our API so public project chats couldn’t be accessed, no matter what.
Unfortunately, in February, while unifying permissions in our backend, we accidentally re-enabled access to chats on public projects. This was reported through our vulnerability disclosure program (via HackerOne). Unfortunately, the reports were closed without escalation because our HackerOne partners thought that seeing public projects’ chats was the intended behaviour. Upon learning this, we immediately reverted the change to make all public projects’ chats private again. We appreciate the researchers who uncovered this. We understand that pointing to documentation issues alone was not enough here. We’ll do better.”
The contrast between the two statements tells you more than either one alone. The first said “not a data breach” and blamed documentation. The second apologized for the first, admitted the regression was their own mistake, and acknowledged their disclosure system failed. What changed between them was public pressure — not new information. Lovable knew about the HackerOne reports when they wrote the first statement.
What neither statement addresses: the 170+ apps with misconfigured RLS that remain exposed. The free-tier users who couldn’t make their projects private before May 2025 and whose data was structurally public for months without meaningful notice. And the question of whether a platform designed to default to public-by-design for cost reasons should be hosting apps that handle sensitive user data at all.
◆ ◆ ◆
If You’ve Ever Built or Used a Lovable App, Do This Today
If you have ever built anything on Lovable, or if you’ve used any app built on Lovable — especially one created before November 2025 — take these steps immediately:
Rotate all database credentials. Any Supabase API keys, database URLs, or service role keys embedded in a pre-November 2025 Lovable project should be considered compromised. Rotate them now, before anything else.
Audit your AI chat history. Lovable stores every conversation. If you ever pasted a database URL, an API key, a password, or any business logic mid-session, assume that conversation has been read.
Check your RLS configuration. Use Symbiotic Security’s open-source Vibe-Scanner — 62 detection rules against your Supabase RLS setup, no sign-up required.
Set projects to private. If you’re on a paid Lovable tier, set all sensitive projects to private immediately. Free-tier users cannot do this — which is a separate problem.
Rotate credentials regardless of Lovable’s fix. Lovable says they reverted the February regression that re-exposed chat histories. That’s positive — but it doesn’t undo 48 days of exposure. Any credentials, keys, or sensitive data in your chat history during that window should be treated as compromised and rotated at the source.
Move your backend. If your Lovable app handles real user data, consider decoupling your backend entirely. Use Lovable only for the frontend and manage your backend, auth, and data access through a framework where you control the security posture.
🔒 For developers using any AI coding tool
This is not a Lovable-only problem. Cursor, Bolt, Replit, and every other AI coding assistant can generate code with the same class of vulnerabilities. Before shipping any AI-generated app that handles real user data: manually verify RLS policies, never hardcode secrets in client-side code, and treat every AI-generated auth function as guilty until proven innocent.
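One cheap way to act on that last point is to write a regression test for every auth guard before shipping anything. A two-assertion check like this, sketched against the corrected checkAccess shown earlier in this piece, would have flagged the inverted logic immediately:

```javascript
// Corrected guard from the example earlier in this piece.
function checkAccess(user) {
  return user.isAuthenticated; // allow only logged-in users
}

// Minimal regression test: both directions of the guard are asserted,
// so an inverted implementation fails loudly instead of shipping.
function authGuardLooksSane(guard) {
  return (
    guard({ isAuthenticated: true }) === true && // real users get in
    guard({ isAuthenticated: false }) === false // anonymous users don't
  );
}

authGuardLooksSane(checkAccess); // true
```

Testing only the happy path is how the inverted version slipped through: it "worked" in the sense that it returned a boolean, and nobody asserted which boolean.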
The People Hurt Most Are the Ones Who Never Heard of Lovable
Lovable’s $6.6 billion valuation was built on democratizing software development. That mission is real and valuable. Millions of people who couldn’t previously build software now can. But democratizing development without democratizing security awareness creates a new class of risk — not for the platforms, but for the end users of apps built on them.
The 18,000 users exposed in that EdTech app didn’t choose Lovable. They signed up for a course platform. The professionals whose records were pulled from Connected Women in AI didn’t know their data sat on a platform with an unfixed BOLA bug for 48 days. The accountability chain breaks down exactly where the users are least able to defend themselves.
The fix isn’t complicated technically. Secure-by-default configurations. Mandatory RLS. Ownership checks on every endpoint. Automatic secret rotation. Post-mortems that are public, not deleted. These are solved problems. The question is whether AI coding platforms will implement them before regulators force their hand.
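To make the point that these are solved problems concrete: owner-only row access on a Supabase table is a few lines of Postgres DDL. The sketch below emits that DDL for a given table; the table and column names are placeholders, and `auth.uid()` is Supabase's built-in helper returning the current user's id inside Postgres:

```typescript
// Sketch: emit the Postgres DDL for a least-privilege, owner-only RLS
// setup on a Supabase table. Table/column names are placeholders.
function ownerOnlyRlsSql(table: string, ownerColumn = "user_id"): string {
  return [
    `alter table ${table} enable row level security;`,
    // Enabling RLS with no policy denies everything; this policy then
    // allows each authenticated user to read only their own rows.
    `create policy "${table}_owner_select" on ${table}`,
    `  for select using (auth.uid() = ${ownerColumn});`,
  ].join("\n");
}

console.log(ownerOnlyRlsSql("profiles"));
```

A secure-by-default platform would run the equivalent of this on every new table and make the developer opt out, rather than the reverse.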
“Security is bolted on after the damage is done. Ship secure-by-default configurations. Defaults should follow least-privilege principles so new projects start from a safe baseline.”
— Symbiotic Security, March 2026
The liability dimension of this breach is equally significant — and legally novel. The BOLA vulnerability exposed not just user data but something courts have not yet fully grappled with: the complete AI conversation histories developers shared with the platform, including privileged business logic, credentials, and in some cases what may constitute attorney work product.
Legal Expert
Dave Rodman
Founder & Managing Partner, The Rodman Law Group
“Existing case law establishes that there is no privacy in what a user communicates to a chatbot, which is consistent with the broader principle that consumers generally lack a reasonable expectation of privacy in digital services where providers can access user inputs. More notably, entering attorney work product into a chatbot may result in a waiver of attorney-client privilege unless — perhaps — it is done at the direction of counsel or within a controlled, private AI system.
This situation represents a straightforward extension of that principle: information that was already ‘public’ in a legal sense has now become public in a practical sense. In this context, Lovable may have a pretty significant lawsuit on their hands.”
Rodman’s framing cuts to something most technical post-mortems miss: the legal exposure here isn’t only about user records or API keys. Every developer who pasted sensitive business discussions, client information, or strategic context into Lovable’s AI chat window — trusting it was private — may now be staring at a privilege waiver problem on top of a data breach. Lovable has since issued a statement acknowledging the regression was their own mistake. But acknowledgment and remediation are not the same thing, and the window during which that data was accessible was 48 days long.
Every app that promises to ship in minutes carries a hidden cost. This week, we found out what it is.