Please regulate us, but not really
Plus: For how long will the AI hype last?
🔥 The GDPR needs bigger teeth
On May 25th, the GDPR turned five. Despite growing fines - see the chart I prepared with the 10 largest GDPR fines in history - enforcement is still a major issue, and this might be undermining global privacy advocacy efforts.

The latest major GDPR fine was last week's 1.2 billion euro fine against Meta over its transfers of personal data to the U.S. on the basis of standard contractual clauses (SCCs) - which I commented on last week. It was the biggest fine in the history of the GDPR; privacy advocates argue it could have been much higher, while Meta calls it a "flawed and unjustified decision." We still have to watch the legal repercussions of this case and what Meta's next steps will be. In any case, in 2022, Meta's total revenue was 108 billion euros, so the 1.2 billion euro fine is low even by GDPR standards, which establish that fines for less severe infringements can reach 2% of the total worldwide annual turnover of the preceding financial year and fines for more severe infringements can reach 4% (GDPR, Art. 83).

On the topic, Max Schrems and his team at noyb published an overview of the more than 800 GDPR cases they have filed in the last 5 years. Of these, 86% are still waiting for a decision. You can read more about their complaints here. While filing these hundreds of complaints, noyb's team identified more than 60 procedural issues that hinder effective GDPR enforcement, and they listed them on a dedicated website (it is worth reading). These are serious issues, and they make clear that the procedural side of GDPR enforcement is lagging behind and undermining the GDPR's effectiveness.
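To put the 1.2 billion euro figure in perspective, here is a quick back-of-the-envelope sketch of the Art. 83 percentage caps, assuming the approximate 2022 revenue figure cited above (the variable names are illustrative):

```python
# Rough sketch of the GDPR fine caps under Art. 83, using Meta's
# reported 2022 total revenue of roughly 108 billion euros.
annual_turnover_eur = 108e9  # preceding financial year's worldwide turnover

# Less severe infringements: up to 2% of worldwide annual turnover
cap_less_severe = 0.02 * annual_turnover_eur
# More severe infringements: up to 4%
cap_more_severe = 0.04 * annual_turnover_eur

actual_fine = 1.2e9  # the record fine against Meta

print(f"2% cap: about {cap_less_severe / 1e9:.2f} billion euros")
print(f"4% cap: about {cap_more_severe / 1e9:.2f} billion euros")
print(f"Actual fine as a share of the 4% cap: {actual_fine / cap_more_severe:.0%}")
```

On these assumptions, the 2% cap would be about 2.16 billion euros and the 4% cap about 4.32 billion euros, so the record fine sits below even the cap for less severe infringements.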
🔥 What happens when lawyers use ChatGPT at work
Like anyone else with an internet connection, lawyers around the world are curious about ChatGPT and want to understand how the AI-based chatbot can make them more efficient in their daily legal tasks. Many of the emerging websites that teach people "how to prompt" have a section for lawyers, with suggested prompts for researching legal cases, drafting legal documents, and so on.

I personally would not recommend using ChatGPT for any legal task: there are privacy, security, intellectual property, and other legal issues that are prohibitive in the context of a lawyer's work. There will also be AI hallucinations creating fake content, and the work of checking and correcting those hallucinations would probably cancel out any efficiency gains from the chatbot. In any case, my opinion looks unpopular, as there seem to be lawyers everywhere using ChatGPT to do their work (judging by the number of websites and videos teaching lawyers how to prompt).

For now, at least one of these ChatGPT-enthusiast lawyers has gotten himself into trouble. In what quickly became a Twitter meme, a lawyer used ChatGPT to research legal cases for a brief (exactly as the "how to prompt" websites suggest), and at least six of the cases he cited were fake - a typical case of AI hallucination, in which the chatbot makes up information. The judge in the case wrote: “The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases.” The lawyer apologized and is now facing a sanctions hearing.
🔥 Please regulate us, but not really
On May 16th, Sam Altman, OpenAI's CEO, testified before the U.S. Congress. After acknowledging that AI could potentially go wrong, he said: “We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.” Despite this