
A Clause Too Far: Why WeTransfer’s Terms of Service Update Sparked Outrage - And What It Means for Trust in AI

When WeTransfer quietly expanded its terms of service to allow AI model training on user-uploaded files, public backlash was swift and fierce. Within days, the company backtracked. But the damage was done. The incident reveals a growing tension between AI development and user trust, and it highlights why privacy-first platforms like Wire must set a different standard.

Earlier this month, WeTransfer, one of the most widely used file-sharing platforms in the creative industry, quietly rolled out an update to its terms of service. Hidden in the legal fine print was a clause that granted WeTransfer extensive rights to user-uploaded content: not only the right to host and display files, but also to reproduce, modify, commercialize, and even use them to train machine learning models.

For most users, the update flew under the radar, until it didn’t. As soon as the wording began to circulate on social media, outrage erupted. Designers, authors, and filmmakers voiced concern that the work they shared on WeTransfer (often sensitive, proprietary, or unreleased) could now be repurposed without consent or compensation. Some feared their intellectual property could be used to power AI systems. Others pointed out that the clause did not even require the uploader to be the rightful owner of the files, potentially exposing third parties to liability.

The backlash was swift, amplified by legal experts who described the move as overly aggressive and advised clients to stop using the platform. Within a matter of days, the pressure mounted enough for WeTransfer to respond publicly and reverse course.

From Overreach to Retraction: WeTransfer Walks It Back

On July 16, WeTransfer updated its terms once again, stripping out the most controversial elements and issuing a public statement to reassure users. The company clarified that it does not use customer data to train AI, nor does it sell or share content with third parties. The earlier clause, it explained, was meant to reflect the possible future use of AI tools to detect harmful content, not to commercialize user files or use them in generative AI models.

The revised clause is now significantly narrower. Users grant WeTransfer a simple, royalty-free license to use files for operating and improving the service, “in accordance with our Privacy & Cookie Policy.” There is no longer any mention of machine learning or sublicensing rights.

Despite this reversal, many users remain skeptical. The perception lingers that WeTransfer had “tested the waters” with its initial language, and only walked it back once public trust began to erode.

Why the Backlash Was Inevitable and What It Signals

At first glance, this might look like a routine legal misstep. But WeTransfer’s case speaks to a much broader tension in today’s digital ecosystem, especially in Europe, where data protection, digital sovereignty, and ethical AI are top of mind.

The timing could hardly have been worse: AI is dominating headlines, trust in Big Tech is thin, and creators of all kinds are increasingly wary of how their content might be used to feed machine learning models. The mere suggestion that a file-sharing platform might be quietly claiming expansive rights to user data, even if not yet exercised in practice, was enough to trigger alarm.

The clause struck a particularly raw nerve because it blurred the line between service operation and data exploitation. WeTransfer wasn’t just requesting the minimal permissions needed to host or transmit files. The company claimed a perpetual, global, sub-licensable license that would allow it to develop, market, and improve new technologies—including AI-driven tools—without notifying users or compensating rights holders. For many in the creative and professional sectors, that felt like a betrayal of trust.

Why It Matters: Trust, Consent, and the AI Gold Rush

This isn’t the first time a SaaS platform has tested the waters on AI usage rights—and quickly retreated. Adobe, Zoom, Dropbox, Slack, and others have all revised or clarified terms in the face of public pressure. The pattern is clear: vague AI language + user data = reputational blowback.

In WeTransfer’s case, the backlash cut especially deep for three reasons:

  1. It was expansive. The clause extended far beyond what’s needed to operate a file-sharing service.
  2. It was vague. It left open the possibility of commercializing AI models trained on user content.
  3. It came without meaningful consent. There was no opt-out, no data segmentation, no clarification of AI model scope or storage. 

This erosion of user control strikes at the heart of today’s debates around data sovereignty, intellectual property, and responsible AI development.

The Wire Perspective: Why Privacy-First Design Still Matters

At Wire, we’ve taken a different approach from day one. As a secure collaboration platform trusted by European governments, NGOs, and global enterprises, we believe privacy must be structurally guaranteed, not left to trust, promises, or terms buried in legalese.

Here’s how we’re different:

  • No access by design: All communication and file transfers are end-to-end encrypted. Not even Wire can see the content (see the sketch after this list).
  • No training, no tracking: We do not and will not train AI models on customer data. Period.
  • No ambiguity: Our terms are clear, narrow, and reflect what’s technically possible within our architecture.
  • Sovereign-first infrastructure: Wire is developed and hosted in Europe. We are not subject to extraterritorial data access laws like the U.S. CLOUD Act. 
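
To make “no access by design” concrete, here is a minimal sketch of the end-to-end encryption principle, written in Python with the PyNaCl library. It is illustrative only, not Wire’s production protocol: Wire’s actual stack (Proteus, and increasingly the MLS standard) adds forward secrecy, device verification, and group messaging that this toy example omits. What it does show is the structural point: the server only ever relays ciphertext, so confidentiality is a property of the architecture rather than a promise buried in the terms.

  # Minimal end-to-end encryption sketch (pip install pynacl).
  # Illustrative only; this is not Wire's production protocol.
  from nacl.public import PrivateKey, Box

  # Each party generates a key pair on their own device;
  # private keys never leave that device.
  alice_key = PrivateKey.generate()
  bob_key = PrivateKey.generate()

  # Alice encrypts directly to Bob's public key before anything is uploaded.
  ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"unreleased-designs.zip")

  # A relaying server stores and forwards `ciphertext` but holds no key
  # material, so it cannot read the content, let alone train models on it.
  plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
  assert plaintext == b"unreleased-designs.zip"

Only Bob’s private key, paired with Alice’s public key, can open the message. A provider built this way cannot quietly add an AI-training clause that matters, because it never possesses readable data in the first place.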

In a world increasingly shaped by AI, we believe platforms need to make a fundamental choice: optimize for data extraction or optimize for trust. We choose trust. 


Key Takeaways for Leaders and Teams Using SaaS Platforms

  1. Read the terms. AI clauses are becoming more common and more ambiguous. Legal reviews should be part of your procurement process.
  2. Map your risk. If your teams are using tools like WeTransfer, Zoom, or Slack to share sensitive or IP-rich content, re-evaluate your exposure.
  3. Push for clarity. Vendors should be able to explain in plain language what rights they claim, how data is used, and what’s off-limits.
  4. Champion alternatives. The market is responding. Privacy-first, European solutions like Wire, Pydio, Nextcloud, and others offer a way forward for organizations that cannot afford ambiguity. 

WeTransfer’s terms of service update wasn’t just a footnote; it was a warning signal. The AI gold rush is colliding with long-standing user expectations around privacy, consent, and control. If even creative-friendly brands like WeTransfer risk overstepping, what does that say about platforms with weaker governance? In 2025, trust is no longer a marketing asset. It’s a core infrastructure decision. And it must be earned, architected, and protected.

Wire

As a leader in secure communication, we empower businesses and government agencies with expert-driven content that helps protect what matters. Stay ahead with industry trends, compliance updates, and best practices for secure digital exchanges.
