When WeTransfer quietly expanded its terms of service to allow AI model training on user-uploaded files, the public backlash was swift and fierce. Within days, the company backtracked. But the damage was done. The incident reveals the growing tension between AI development and user trust, and it highlights why privacy-first platforms like Wire must set a different standard.
Earlier this month, WeTransfer, one of the most widely used file-sharing platforms in the creative industry, quietly rolled out an update to its terms of service. Hidden in the legal fine print was a clause that granted WeTransfer extensive rights to user-uploaded content: not only the right to host or display files, but to reproduce, modify, commercialize, and even use them to train machine learning models.
For most users, this update flew under the radar until it didn’t. As soon as the wording began to circulate on social media, outrage exploded. Designers, authors, and filmmakers voiced concern that the work they shared on WeTransfer (often sensitive, proprietary, or unreleased material) could now be repurposed without consent or compensation. Some feared their intellectual property could be used to power AI systems. Others pointed out that the clause did not even require the uploader to be the rightful owner of the files, potentially exposing third parties to liability.
The backlash was swift, amplified by legal experts who described the move as overly aggressive and advised clients to stop using the platform. Within a matter of days, the pressure mounted enough for WeTransfer to respond publicly and reverse course.
On July 16, WeTransfer updated its terms once again, stripping out the most controversial elements and issuing a public statement to reassure users. The company clarified that it does not use customer data to train AI, nor does it sell or share content with third parties. The earlier clause, it explained, was meant to reflect the possible future use of AI tools to detect harmful content, not to commercialize user files or use them in generative AI models.
The revised clause is now significantly narrower. Users grant WeTransfer a simple, royalty-free license to use files for operating and improving the service, “in accordance with our Privacy & Cookie Policy.” There is no longer any mention of machine learning or sublicensing rights.
Despite this reversal, many users remain skeptical. The perception lingers that WeTransfer had “tested the waters” with its initial language, and only walked it back once public trust began to erode.
At first glance, this might look like a routine legal misstep. But WeTransfer’s case speaks to a much broader tension in today’s digital ecosystem, especially in Europe, where data protection, digital sovereignty, and ethical AI are top of mind.
The timing could hardly have been worse: AI is dominating headlines, trust in Big Tech is thin, and creators of all kinds are increasingly wary of how their content might be used to feed machine learning models. The mere suggestion that a file-sharing platform might be quietly claiming expansive rights to user data, even if never exercised in practice, was enough to trigger alarm.
The clause struck a particularly raw nerve because it blurred the line between service operation and data exploitation. WeTransfer wasn’t just requesting the minimal permissions needed to host or transmit files. The company claimed a perpetual, global, sub-licensable license that would allow it to develop, market, and improve new technologies, including AI-driven tools, without notifying users or compensating rights holders. For many in the creative and professional sectors, that felt like a betrayal of trust.
This isn’t the first time a SaaS platform has tested the waters on AI usage rights, only to retreat quickly. Adobe, Zoom, Dropbox, Slack, and others have all revised or clarified terms in the face of public pressure. The pattern is clear: vague AI language + user data = reputational blowback.
In WeTransfer’s case, three factors made the backlash especially intense:

1. Its core audience is creative professionals, whose uploads are often sensitive, proprietary, or unreleased work.
2. The clause did not require uploaders to be the rightful owners of the files, potentially exposing third parties to liability.
3. The license claimed was perpetual, global, and sub-licensable, with no obligation to notify users or compensate rights holders.

This erosion of user control strikes at the heart of today’s debates around data sovereignty, intellectual property, and responsible AI development.
At Wire, we’ve taken a different approach from day one. As a secure collaboration platform trusted by European governments, NGOs, and global enterprises, we believe privacy must be structurally guaranteed, not left to trust, promises, or terms buried in legalese.
Here’s how we’re different:

1. End-to-end encryption by default: messages, calls, and shared files are encrypted on the sender’s device, so our servers only ever handle ciphertext (see the sketch below). Content we cannot read is content we cannot license, sell, or feed into AI models.
2. No AI training on customer content: our terms request only the permissions needed to operate the service, not broad rights to reproduce, modify, or commercialize what users share.
3. European data protection by design: Wire is built and operated in Europe under the GDPR, with transparency about where and how data is processed.
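To make “structurally guaranteed” concrete, here is a minimal sketch of the principle behind end-to-end encryption. It is deliberately simplified and is not Wire’s actual protocol; the function names and the use of Python’s cryptography library are illustrative assumptions. What it shows is the structural point: a provider that only ever stores ciphertext cannot repurpose user content, no matter what its terms of service say.

```python
# Illustrative sketch only: real end-to-end encryption uses authenticated
# key exchange between devices, not a shared symmetric key created like this.
from cryptography.fernet import Fernet

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt on the sender's device; only ciphertext leaves it."""
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    """Only someone holding the key can recover the original content."""
    return Fernet(key).decrypt(ciphertext)

key = Fernet.generate_key()          # in practice, negotiated between devices
draft = b"unreleased design files"   # stand-in for a user's upload
blob = encrypt_for_upload(draft, key)

# The service stores and relays `blob` but cannot read it. No clause in
# any terms of service can grant usable rights over content the provider
# cannot decrypt.
assert decrypt_after_download(blob, key) == draft
```

The design choice matters: under this model, privacy is a property of the system’s architecture rather than of its legal documents, which is exactly the distinction the WeTransfer episode exposed.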
In a world increasingly shaped by AI, we believe platforms need to make a fundamental choice: optimize for data extraction or optimize for trust. We choose trust.