In recent days, the access Musk has granted to a few external reporters has led to the publication of what he and his cheerleaders are framing as an exposé of the platform’s prior approach to content moderation.

So far these “Twitter Files” releases, as he has branded them, have been a damp squib in terms of newsworthy revelations — unless the notion that a company with a large volume of user generated content A) employs trust and safety staff who discuss how to implement policies; B) does so in fast-moving situations where all the facts around pieces of content may not yet be established; and C) has moderation systems in place that can be applied to reduce the visibility of potentially harmful content (as an alternative to taking it down) counts as a particularly wild newsflash.

But these heavily amplified data dumps could yet create some hard news for Twitter — if Musk’s tactic of opening up its systems to external reporters boomerangs back in the form of regulatory sanctions.

Ireland’s Data Protection Commission (DPC), which is (at least for now) Twitter’s lead data protection regulator in the European Union, is seeking more details from Twitter about the outsider data access issue. “The DPC has been in contact with Twitter this morning. We are engaging with Twitter on the matter to establish further details,” a spokeswoman told TechCrunch.

Earlier today, Bloomberg also reported on concerns across the pond about outsiders accessing Twitter user data — citing tweets by Facebook’s former CISO, Alex Stamos, who posited publicly that a Twitter thread posted yesterday by one of the reporters given access by Musk “should be enough for the FTC to open an investigation of the consent decree”.

Twitter’s FTC consent decree dates back to 2011 and relates to allegations that the company misrepresented the “security and privacy” of user data over several years. The social media firm was already fined $150M back in May for breaching the order.
But future penalties could be a lot more severe if the FTC deems it to be flagrantly breaching the terms of the agreement. And the signs are foreboding, given the FTC already put Twitter on notice last month — warning that “no CEO or company is above the law”.

Another consideration here is the European Union’s General Data Protection Regulation (GDPR), which contains a legal requirement that personal data is adequately protected. This is known as the security — or “integrity and confidentiality” — principle of the GDPR, which states that personal data shall be:
processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).
Handing user data (and/or systems access that could expose user data) over to non-staff to sift through might therefore raise questions over whether Twitter is in full compliance with the GDPR’s security principle.

There is a further question to consider here, too: what legal basis is Twitter relying upon to hand over (non-public) user data to outsiders, if indeed that’s what’s happening? On the face of it, Twitter users would hardly have knowingly consented to such extraordinary processing under its standard T&Cs. And it’s not clear what other legal bases could reasonably apply here.

(Twitter’s terms variously invoke contractual necessity, legitimate interests, consent, or legal obligation as regards processing users’ direct messages or other non-public comms, depending on the processing scenario — but which, if any, of those bases would fit, if it is indeed handing this kind of non-public user data to non-employees who are neither Twitter service providers nor entities like law enforcement, is debatable.)

Asked for her views on this, Lilian Edwards, a professor of Law, Innovation and Society at Newcastle Law School, told us that how the GDPR applies here isn’t cut and dried — but she suggested Twitter disclosing data to unforeseen third parties (“who might share it willy-nilly”) could be a breach of the security principle.

“If you’ve consented [to Twitter’s expansive terms], have you authorized these uses — so no security breach? I think there has to be an element of egregiousness here,” she argued. “How much you didn’t expect this and how open to security and privacy threats it leaves you — e.g.
if it includes personal info like passwords or phone numbers?”

“It’s tricky,” she added, citing guidance put out by the UK’s data protection authority, which notes that security measures required under the GDPR “should seek to ensure that the data: can be accessed, altered, disclosed or deleted only by those you have authorized to do so (and that those people only act within the scope of the authority you give them)”.

“Well Musk has authorized them right, but should he? Are they security risks? I think a reasonable DPA would look at that quite sternly.”

At the time of writing, it is not clear exactly which data, or how much systems access, Twitter is providing to its chosen outsider reporters — so it’s not clear whether any non-public user data has been handed over or not.

One of the reporters given access by Twitter, journalist Bari Weiss, claimed in a tweet thread (which references four other writers associated with the publication she founded who will be reporting on the data) that: “The authors have broad and expanding access to Twitter’s files. The only condition we agreed to was that the material would first be published on Twitter.”

Another of these writers, Abigail Shrier, further claimed: “Our team was given extensive, unfiltered access to Twitter’s internal communication and systems.” Still, both tweets lack specific detail on the kind of data they’re able to access.

Twitter has also — via an employee — denied it is providing the reporters with live access to non-public user data, in response to alarm over the level of access being granted. The company’s new trust & safety lead, Ella Irwin, tweeted in the last few hours to claim that screenshots of an internal system view of accounts being shared online, seemingly showing details of the internal access provided to the outsiders by Twitter, did not depict live access to its systems.
Rather, she said she had herself provided these screenshots of this internal tool view to the reporters — “for security purposes”. Irwin’s tweet also claimed that this screenshot-sharing methodology was chosen to “ensure no PII [personally identifiable information] was exposed”.

“We did not give this access to reporters and no, reporters were not accessing user DMs,” she added in response to a Twitter user who had raised security concerns about the reporters’ access to its systems (and potentially to DMs).

Irwin only joined Twitter in June as a product lead for trust & safety — but was elevated to head of trust & safety last month (via The Information) to replace the former head, Yoel Roth, who resigned after just two weeks working under Musk, over concerns about “dictatorial edict” by Musk taking over from a good-faith application of policy.

Setting aside the question of why Twitter’s new head of trust & safety is spending her time screenshotting internal data to share with non-staff whose purpose is to publish reports incorporating such information, her choice of nomenclature here is notable: “PII” is not a term you will find anywhere in the GDPR.

It’s a term preferred by US entities keen to whittle the idea of ‘user privacy’ down to its barest minimum (i.e. actual name, email address etc), rather than recognizing that people’s privacy can be compromised in many more ways than via direct exposure of PII. This is important because the relevant legal terminology in the GDPR is “personal data” — which is far broader than PII, encompassing a variety of data that might not be considered PII (such as IP address, advertiser IDs, location etc).

So if Irwin’s primary concern is to avoid exposing “PII”, she either does not understand — or is not prioritizing — the security of personal data as the EU’s GDPR understands it. That should make European Union regulators concerned.
While Ireland’s DPC is currently the lead data supervisor for Twitter, questions have been raised about the status of the company’s claim to “main establishment” in Ireland under the GDPR since Musk took over at the end of October — and set about slashing headcount and driving scores more staff to leave of their own volition, including a trio of senior security, privacy and compliance executives who resigned simultaneously a month ago.

As we’ve reported before, unilateral US-based decision making by Musk risks Twitter crashing out of the GDPR’s one-stop-shop (OSS) mechanism, as the OSS requires decision making that affects EU users’ data to involve Twitter’s Irish entity. And if the company loses its claim to main establishment status in Ireland, its regulatory risk would immediately crank up, as data supervisors across the EU — not just the DPC — would be able to open their own enquiries if they felt local users’ data was at risk.

With Musk now opening Twitter’s systems up to unexpected outsiders, he’s putting on a very public spectacle that raises big questions about security and privacy risks — which, failing robust oversight by the DPC, could make other EU data protection authorities increasingly concerned about the integrity of Twitter’s Irish oversight, too. (And the GDPR does allow for emergency interventions by non-lead DPAs if they see a pressing risk to local users’ data, so Twitter could face dialled-up scrutiny elsewhere in the EU even while still ostensibly inside the OSS — as TikTok recently has in Italy.)

Since Musk took over the company, Twitter has shuttered its communications function — so it was not possible to put questions to a press office about the level of data access being provided by Twitter to outsider reporters, or the legal basis it’s relying upon for sharing this information. But we’re happy to include a statement from Twitter if it wants to send one.
Ireland’s privacy watchdog engaging with Twitter over data access to reporters, by Natasha Lomas, originally published on TechCrunch