Deepfakes got worse: AI-based Photoshop
Plus: we must regulate algorithmic externalities | The Privacy Whisperer #53
👋 Hi, Luiza Jarovsky here. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.
Today's newsletter is sponsored by Piiano:
Looking for a secure and reliable way to store sensitive data in your backend? Whether it's financial records, payment details, or personally identifiable information, Piiano has got you covered. It offers a secure, cloud-native data storage solution designed to safeguard your customers' personal data (e.g., PII, PCI, SSN, KYC, etc.) and preserve their privacy. Accelerate your GDPR, CCPA, SOC 2, or PCI-DSS compliance. Forget about encryption and managing keys: get your data protected in days, not months, with Piiano's data protection APIs for developers. It's really simple.
🔥 The largest GDPR fine ever is out. What does it mean in practice?
Tomorrow, May 25th, the GDPR turns five years old. In an interesting coincidence, two days ago the largest fine in the history of the GDPR was issued: the Irish Data Protection Authority imposed a 1.2 billion euro fine on Meta (Facebook) over Meta's transfers of personal data to the U.S. on the basis of standard contractual clauses (SCCs).

On the fine, Andrea Jelinek, Chair of the European Data Protection Board (EDPB), said: “The EDPB found that Meta IE’s infringement is very serious since it concerns transfers that are systematic, repetitive, and continuous. Facebook has millions of users in Europe, so the volume of personal data transferred is massive. The unprecedented fine is a strong signal to organizations that serious infringements have far-reaching consequences.”

On the decision, Max Schrems, the privacy activist and founder of 'noyb’ who was behind the original complaint in 2013, said in noyb's article about the decision: "We are happy to see this decision after ten years of litigation. The fine could have been much higher, given that the maximum fine is more than 4 billion and Meta has knowingly broken the law to make a profit for ten years. Unless US surveillance laws get fixed, Meta will have to fundamentally restructure its systems."

In their official response, Meta's representatives said: “This decision is flawed, unjustified and sets a dangerous precedent for the countless other companies transferring data between the EU and US. It also raises serious questions about a regulatory process that enables the EDPB to overrule a lead regulator in this way, disregarding the findings of its multi-year inquiry without giving the company in question a right to be heard.”

At this point, the privacy community is watching what Meta will do next. You can read some of the different opinions and points of view on the case, such as Anupam Chander's, Whitney Merrill's, Johnny Ryan's, and Sam Schechner's.
🔥 New report on generative AI's harms
The Electronic Privacy Information Center (EPIC) has just launched its new report, “Generating Harms - Generative AI's Impact & Paths Forward.” It is an interesting and well-researched document that builds a much-needed bridge between the fields of privacy & AI from the perspective of harm. The report is organized by risks and harms: each section first introduces the main category of potential risk emerging from AI, along with the relevant background information, and then explains a list of potential harms that can occur in the described context. The analyses and commentary on harms draw both on Prof. Danielle Citron and Prof. Daniel Solove's typology of privacy harms, which they developed in their paper (I will discuss privacy harms with Prof. Solove on June 6th, join us live), and on Dr. Joy Buolamwini's taxonomy of algorithmic harms (make sure to check out Dr. Buolamwini's groundbreaking Gender Shades project).

I recommend reading the whole EPIC report, and I want to bring special attention to two of its sections. First, the section on privacy and consent (pages 12-15), as it discusses deepfakes and possible clashes between free speech and privacy. Second, the section “Profits Over Privacy: Increased Opaque Data Collection” (pages 24-29), as it covers some of the core data protection issues involved in generative AI, such as those related to data scraping to train the models, personal information in interactions with AI-based tools, and personal information eventually output by AI systems.
🔥 Deepfakes got worse: AI-based Photoshop
Adobe has recently announced that it will integrate Adobe Firefly, its generative AI tool, into its widely popular Photoshop program. One of the consequences of this integration is that