What The Twitter Whistleblower Teaches Us About Data Protection
This week, Peiter "Mudge" Zatko, Twitter's former security chief and now known as the Twitter whistleblower, testified before the US Senate. He claimed that Twitter prioritizes profits over security and that its outdated and risky security practices have allowed foreign agents to infiltrate its workforce. In this edition of The Privacy Whisperer, I will show that: (1) his allegations are not only security-related but also serious from the perspective of privacy and data protection law; (2) data protection law should be preventive and regulate technical standards; and (3) non-compliance with those technical standards should lead to significant fines.
Like everyone else on the internet, I was struck by Mr. Zatko's allegations - for example, the claim that, when warned that an additional foreign agent had infiltrated Twitter's workforce, a Twitter executive answered, "well, since we already have one, what is the problem if we have more? Let's keep growing the office."
Mr. Zatko said that Twitter had received warnings of possible spies in its ranks but had neither the motivation nor the resources to act. He also said Twitter failed to safeguard users' personal information, as sensitive parts of its operation were easily accessible to thousands of employees, including the foreign spies (e.g., from China and India) who had potentially infiltrated its payroll. He added that "any employee could take over the accounts of any senator in this room." Links about the whole story are all over the internet, and you can also watch Mr. Zatko testify and draw your own conclusions.
To summarize, before moving to the privacy and data protection aspects of this story, these were the main allegations Mr. Zatko brought against Twitter:
Providing false information to governments regarding its security structure;
Lack of background checks when hiring new employees (to avoid foreign interference);
Inaction when warned of the existence of foreign interference;
Weak security measures to prevent internal abuses (e.g., hacking, bribery);
Excessive internal access to users’ accounts - according to specialists, only a small group of employees should have this level of access;
Lack of internal control over user data;
Outdated security structures;
Infrastructure drift;
Lack of a staging environment to test updates.
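The allegations of excessive internal access and weak internal controls over user data boil down to a failure of least-privilege access control: every role should hold only the permissions it explicitly needs, and every access attempt should be checked and logged. The sketch below is a rough illustration of that idea - all role and permission names are hypothetical, not Twitter's actual architecture:

```python
# Minimal sketch of least-privilege access control (illustrative only).
# Instead of letting any employee read any account, each role is granted
# an explicit set of permissions, and every access attempt is checked
# and logged so it can later be audited.

import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical roles and permissions - not Twitter's real ones.
ROLE_PERMISSIONS = {
    "support_agent": {"read_public_profile"},
    "trust_and_safety": {"read_public_profile", "read_private_account"},
    "site_reliability": {"read_system_metrics"},
}

def can_access(role: str, permission: str) -> bool:
    """Return True only if the role explicitly holds the permission.

    Unknown roles get an empty permission set, so access is denied
    by default rather than granted by default.
    """
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    logging.info("role=%s permission=%s allowed=%s", role, permission, allowed)
    return allowed

# A support agent cannot touch private account data:
assert can_access("support_agent", "read_private_account") is False
# Only the narrowly scoped role can:
assert can_access("trust_and_safety", "read_private_account") is True
```

The design choice worth noting is the deny-by-default lookup: a role that is not in the table gets no permissions at all, which is the opposite of the "thousands of employees can see everything" situation Mr. Zatko described.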
When reading this story from multiple sources and listening to Mr. Zatko speak, what caught my attention was the seriousness of the privacy and data protection violations he was raising. Some of them looked like straightforward General Data Protection Regulation (GDPR) violations that should immediately trigger an investigation and possibly fines. There was also a case of broken privacy promises that could be further investigated by the Federal Trade Commission (FTC) in the United States. However, I have not seen articles focusing on these privacy and data protection violations, which, in my view, deserve to be examined by data protection authorities. Every allegation was framed as security-related. Why?
Perhaps because Mr. Zatko is a security specialist, everything he said was framed as security-related by some media outlets and then repeated by other outlets without further investigation. Or perhaps because talking about security and foreign influence triggers more negative emotions, which generate more clicks than talking about privacy violations. Or both.
Despite the lack of media attention, I will now point out the possible privacy and data protection violations (GDPR & FTC) that I could extract from this story. Feel free to comment below if you find other issues not listed here:
Infringement of the principles of integrity and confidentiality (GDPR, Art. 5.1(f)). Reminder: organizations that violate this article may be subject to administrative fines of up to 20 million euros or up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
Not ensuring ongoing confidentiality, integrity, availability, and resilience of processing systems and services (GDPR, Art. 32.1(b)).
Not having a process for regularly testing, assessing, and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing (GDPR, Art. 32.1(d)).
Failing to provide adequate security measures, taking into account the particular risks (i.e., accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data transmitted, stored, or otherwise processed) (GDPR, Art. 32.2).
In its privacy policy, Twitter states: "we use information we collect to provide for the safety and security of our users, our products, services, and your account." The whistleblower's allegations might be seen as a broken privacy promise, and if confirmed, the FTC could act upon it.
Besides raising awareness of privacy and data protection, I would like to stress the lack of accountability of tech giants such as Twitter and argue that something needs to change in the way the law regulates and enforces privacy.
Twitter has a full privacy department, one or more Data Protection Officers (DPOs), a privacy policy, and so on. It probably produced a data protection impact assessment (DPIA). Yet, Twitter has multiple security and privacy weaknesses, and a bombshell testimony by a whistleblower was needed for the public to become aware of these serious weaknesses - and perhaps for regulators to do something. Why?
Data protection lawmakers must work together with privacy and security specialists to establish minimum standards to be observed by organizations. Mandating "appropriate technical and organizational measures to ensure a level of security appropriate to the risk" is not enough. Mandating that design should be accessible, clear, and visually informative is not enough. Mandating "data protection by design and by default" is not enough. These requirements are subjective, vague, and say very little in practice.
How should organizations promote security and privacy? What are the minimum standards? What are the best practices? What are unacceptable practices? What triggers fines for non-compliance, and how large are those fines? What are the accountability requirements to be considered compliant?
Companies will not invest money in preventive measures if they think that: a) the measures will not yield more profit; and b) they will not lose significant money by failing to comply. So enforcement and monetary fines are extremely important.
This would create uniform mandatory market standards that would raise the level of protection for all users. Instead of waiting for Zuckerberg to say sorry for his next privacy misstep, why not be preemptive and mandate in advance what should be done?
Technologies change, products change. I know. These minimum standards do not need to be written into the GDPR as amendments. They can form a body of "soft law" that is constantly updated by security and privacy specialists. But the standards need to be enforced by data protection authorities, and non-compliance must lead to fines.
This week, the Information Commissioner's Office (ICO) of the United Kingdom issued a draft Guide on Privacy Enhancing Technologies (PETs). The draft mentions eight different types of PETs: homomorphic encryption (HE), secure multiparty computation (SMPC), private set intersection (PSI), federated learning, trusted execution environments, zero-knowledge proofs, differential privacy, and synthetic data. This is a great start. Why not select a group of experts and ask them to design minimum standards for different groups of organizations with different risk profiles, and then enforce these standards?
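To make one of these PETs concrete: differential privacy works by adding calibrated random noise to query results, so a published statistic reveals only a bounded amount about any single individual. The sketch below shows the idea for a simple counting query; the parameter choices (epsilon, sensitivity) are my own illustrative assumptions, not values from the ICO draft:

```python
# Minimal sketch of differential privacy for a counting query,
# using the Laplace mechanism: noise drawn from Laplace(0, sensitivity/epsilon).

import random

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Return a differentially private version of a counting query.

    For a counting query, adding or removing one person changes the
    result by at most 1, so sensitivity = 1. A smaller epsilon means
    more noise and therefore stronger privacy.
    """
    scale = sensitivity / epsilon
    # random.expovariate(1/scale) draws an exponential with mean `scale`;
    # the difference of two independent such draws is Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Example: publish a noisy count of accounts matching some query.
noisy = dp_count(10_000, epsilon=0.5)
```

The point regulators could standardize is exactly these knobs: which epsilon values are acceptable for which data categories, rather than leaving "appropriate measures" undefined.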
In past articles of this newsletter, I discussed similar measures regarding UX design and ways to fight dark patterns in privacy. Instead of broadly recommending Privacy-by-Design - which does not say much despite being a catchy phrase - why not be specific: what measures? What are the guidelines? What should the design prioritize? What are the best practices? What are forbidden practices? What are the criteria? I created a framework called Privacy-Enhancing Design, where I propose standards to help designers implement privacy - you can read it here.
Privacy and security experts should get together with lawmakers to help make data protection laws more efficient and preventive. Companies should be the ones responsible, upfront, for implementing the best technical measures available, and non-compliance should lead to significant fines.
What are your thoughts? How do you think that regulation can be more efficient and preventive? I would love to hear your opinion in the comments below.
See you next week. All the best, Luiza Jarovsky