Adversarial prompting: revealing privacy risks
Plus: generative AI & social media - complex challenges ahead
👋 Hi, Luiza Jarovsky here. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.
Today's newsletter is sponsored by Guardsquare:
Developers are being called on to reevaluate their mobile app security architecture and implement security best practices throughout their dev lifecycle. What does that look like in practice? In their latest piece, Guardsquare covers the three pillars of mobile application security - testing, protection, and threat monitoring - and provides implementation tips that won't derail developers' workflows. For tips on how to level up your security efforts, check out "Defense in Depth: A Layered Approach to Mobile App Security."
🔥 Generative AI & social media: complex challenges ahead
Today I read an excellent essay by Sayash Kapoor and Arvind Narayanan - part of the project “Algorithmic Amplification and Society” - analyzing possible malicious uses of generative AI in the context of social media, as well as positive uses and recommendations for platforms, civil society, and other stakeholders. I agree with most of their analysis, including the way they map the risks and potential positive uses, the distinction between new capabilities and cost reductions, and their recommendations, which include changes to platforms’ policies, considerations on the impact on civil society, and public education.

I disagree, however, with their failure to acknowledge the cumulative effect of changed economic incentives coupled with the cultural normalization and banalization of the “fake” or “AI-based,” which leads to a much greater potential for harm. When everything we see on social media can be fake or AI-generated, when we cannot tell whether an image is real or AI-based, when there is a generative AI app for every possible creative task - and all of this is normalized, unregulated, and even culturally expected - the incentives for AI-based manipulation are very high, and they materially change the risk profile of generative AI in the context of social media.

Kapoor and Narayanan are right that, for many years already, it has been possible to create a fake image of an explosion at the Pentagon or of the pope in a white coat (as in the recent viral examples of fake AI-based content). But the broad availability of generative AI tools changes everything in terms of both economic incentives and cultural norms: any bored teenager anywhere in the world can do it with a single prompt, without liability, causing harm to people and to civil society.

Kapoor & Narayanan write: “Still, the cost of distributing misinformation by getting it into people’s social media feeds remains far higher than the cost of creating it—and generative AI does nothing to reduce this cost. In other words, the bottleneck for successful disinformation operations is not the cost of creating it.” I disagree: the broad availability, the low cost, the absolute lack of regulation or mandatory standards for content verification (for now), and the cultural absorption of “AI-based” as the new hype make generative AI a great “opportunity” for bad actors. The incentives have changed, and the potential for harm with them. And yes, from my point of view, it is as bad as it sounds.
🔥 New report: generative AI and privacy
Last month, the Congressional Research Service (in the US) published its new report, “Generative Artificial Intelligence and Data Privacy: A Primer,” a very helpful document for privacy professionals who want to better understand the privacy implications of generative AI from a US perspective. It uses simple and technically accessible language, answers basic questions such as “What happens to data shared with generative AI models?” and offers policy considerations for Congress. It is eight pages long and a very good introduction to the topic - recommended.
🔥 OpenAI's lobbying & world tour: tech PR 3.0
Sam Altman, OpenAI's CEO, has spent the last month touring the world, speaking to crowds about AI, ChatGPT, and regulation, and taking questions from the audience. I was at one of these events and shared my impressions in this now widely shared Twitter thread with more than 580,000 views. Right after the event, I wrote: “when mentioning regulation, Sam focused on the fact that it would be bad to slow down innovation. My personal opinion is that they are heavily lobbying against the AI Act as it is, as it does not fit their agenda.” It looks like that was exactly what was happening.

According to TIME, “behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the EU’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests.” You can read TIME's detailed analysis of OpenAI's lobbying efforts, including the document “OpenAI White Paper on the European Union’s Artificial Intelligence Act,” which OpenAI sent to EU Commission and Council officials in September 2022. It looks like they got what they wanted.

This whole situation highlights how tech PR and lobbying 3.0 work in 2023: you have a technology that poses various types of risk, and you are a successful first mover in the field; you publicly announce how terrible it could be if this technology got out of control or into the wrong hands; you publicly say how important regulation is; behind the scenes, you organize meetings with officials to make sure regulation has the lowest possible impact on your product, and you threaten to leave if it does not go the way you planned. Large tech players are probably taking note - and I hope lawmakers and regulators are ready to fight for strong laws and the protection of fundamental rights.
🔥 Adversarial prompting: revealing privacy risks
Adversarial prompts, according to Robi Sen, are “carefully crafted inputs used to mislead or exploit the vulnerabilities of AI systems, particularly machine learning models, without them detecting anything unusual.” In the context of AI-based chatbots,